I need to stitch a few images using OpenCV in C++, so I wrote the following code:
#include <opencv2/opencv.hpp>
#include <opencv2/stitching.hpp>
#include <cstdio>
#include <vector>
int main()
{
    std::vector<cv::Mat> vImg;
    cv::Mat rImg;

    vImg.push_back(cv::imread("./stitching_img/S1.png"));
    vImg.push_back(cv::imread("./stitching_img/S2.png"));
    vImg.push_back(cv::imread("./stitching_img/S3.png"));

    cv::Stitcher stitcher = cv::Stitcher::createDefault();

    int64 AAtime = 0, BBtime = 0;
    AAtime = cv::getTickCount();
    cv::Stitcher::Status status = stitcher.stitch(vImg, rImg);
    BBtime = cv::getTickCount();
    printf("%.2lf sec \n", (BBtime - AAtime) / cv::getTickFrequency());

    if (cv::Stitcher::OK == status)
        cv::imshow("Stitching Result", rImg);
    else
        std::printf("Stitching fail.");

    cv::waitKey(0);
    return 0;
}
Unfortunately, it always says "Stitching fail" on the following files -- http://imgur.com/a/32ZNS while it works on these files -- http://imgur.com/a/ve5sY
What am I doing wrong? How can I fix it?
Thanks in advance.
cv::Stitcher works by finding common features in the separate images and using those to figure out where the images fit together. In the samples where the stitching works you can find a lot of overlap: the blue roof, the features of the buildings across the road, and so on.
In the set where it fails for you there is no overlap, so the algorithm can't figure out how to fit the images together. It looks like you could 'stitch' these images by simply placing them next to each other. For that you can use hconcat, as described in this answer: https://stackoverflow.com/a/20079134/1737727
There is a very simple way of displaying two images side by side: the hconcat function provided by OpenCV.
Mat image1, image2;
hconcat(image1, image2, image1); // syntax: hconcat(source1, source2, destination);
This function can also be used to copy a set of columns from one image into another:
Mat image;
Mat columns=image.colRange(20,30);
hconcat(image,columns,image);
vconcat is a similar function for stitching images vertically.
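If simple side-by-side placement really is all you need for your failing set, a minimal sketch for the three images from your code could look like this (paths taken from your question; it assumes all three images have the same height and type, and performs no geometric alignment):
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // Load the three source images (same paths as in the question).
    std::vector<cv::Mat> vImg;
    vImg.push_back(cv::imread("./stitching_img/S1.png"));
    vImg.push_back(cv::imread("./stitching_img/S2.png"));
    vImg.push_back(cv::imread("./stitching_img/S3.png"));

    // hconcat needs matching row counts and types; resize the images to a
    // common height first if the screenshots differ.
    cv::Mat result;
    cv::hconcat(vImg, result); // concatenate all images left to right

    cv::imshow("Side by side", result);
    cv::waitKey(0);
    return 0;
}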
I have a grayscale image which is just the R channel of a photo. Now I'm trying to write that R channel into a new RGB image. Ideally, the new image would look just like the old one, but red.
What happens instead is that in the new image, the old image appears three times, squished next to each other.
Here you can see the gray scale image and the output image.
Here is my code; I think it's pretty straightforward:
Mat img_in = imread("in.png", CV_LOAD_IMAGE_GRAYSCALE);
Mat img_out = Mat::zeros(img_in.size(), CV_8UC3);

for (int i = 0; i < img_in.rows; i++)
{
    for (int j = 0; j < img_in.cols; j++)
    {
        img_out.at<Vec3b>(i,j)[2] = img_in.at<Vec3b>(i,j)[2];
    }
}

imwrite("test_img_in.png", img_in);
imwrite("test_img_out.png", img_out);
At first I thought it was some kind of index mix-up, but I've tried a lot of combinations, and it always repeats the output image three times horizontally, never vertically.
Now my thought is that it comes from some OpenCV specifics, like the CV_8UC3 type (I've tried others too), which I chose because I think it supports RGB images. Unfortunately, I don't know too much about OpenCV itself, which is why I'm seeking help here.
PS: This is part of a bigger program that is supposed to generate a color image from three grayscale channel images, but I'm currently stuck on combining the aligned grayscale images because of this. The code I posted is isolated from the rest of the program and behaves like this on its own.
My OpenCV version is 2.4.11.
The problem is here:
img_out.at<Vec3b>(i,j)[2] = img_in.at<Vec3b>(i,j)[2];
As you said, the input image is grayscale, so each pixel is a single byte; reading it through at<Vec3b> steps three bytes per column, which is why the output shows three horizontally squeezed copies of the image. Just use:
img_out.at<Vec3b>(i,j)[2] = img_in.at<unsigned char>(i,j);
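For completeness, a minimal sketch of the corrected loop, using the file names from your question (it assumes the usual using namespace cv and the OpenCV 2.4-style imread flag):
// img_in is single-channel (CV_8UC1), so read it one byte at a time
Mat img_in  = imread("in.png", CV_LOAD_IMAGE_GRAYSCALE);
Mat img_out = Mat::zeros(img_in.size(), CV_8UC3);

for (int i = 0; i < img_in.rows; i++)
    for (int j = 0; j < img_in.cols; j++)
        img_out.at<Vec3b>(i, j)[2] = img_in.at<unsigned char>(i, j); // write into the red channel

imwrite("test_img_out.png", img_out);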
You will get the same result by loading your image as a 3-channel image and subtracting Scalar(255,255,0):
#include <opencv2/opencv.hpp>
using namespace cv;

int main(int argc, char **argv)
{
    Mat src = imread(argv[1]);
    imshow("src", src);

    src -= Scalar(255,255,0);
    imshow("Red channel", src);

    waitKey();
    return 0;
}
According to this post OCR: Difference between two frames, I now know how to find pixel differences between two images with OpenCV.
I would like to improve this solution and use it with high-resolution images (from a video) with rich content. The example above is not applicable to big images because the process is too slow (too many differences are found; the findContours method fills the array with 250k elements, which takes a huge amount of time to process).
My application uses an RLE decoder to decode the compressed frames of the video. Once a frame is decoded, I would like to compare the current frame with the previous one in order to store the differences between the two frames in a Mat, for example.
The goal of all of this is to be able to analyze the differing pixels and check whether there are any Latin characters. This lets me reduce the number of pixels to analyze and save precious time.
If anyone has other ideas for performing such operations, please feel free to propose them.
Thank you for your help.
EDIT 1:
Example of two high-resolution images of a computer screen. For the moment, these are the perfect example of what I'm trying to analyze. As you can see, the only difference between the two big images is a new "Challenge" window, and I would like to analyze just that new window for characters.
EDIT 2:
I'm trying to tune the algorithm depending on the data being analyzed. Typically, on the following pictures I only get the green lines as differences and no text at all (which is the most interesting part). I'm trying to understand better how this works.
1st image:
2nd image:
3rd image:
As you can see, I only get those green lines and never the text (at best I get just ONE letter when decreasing the contours[i].size() threshold).
In addition to the post you mentioned, you need to:
When you binarize the mask, use a threshold higher than 0 to remove small differences.
Remove some noise. You can find all connected components and remove the smaller ones.
Find the area of the bigger connected components. You can use convexHull and fillConvexPoly to get the mask of the different objects on screen.
Copy the second image to a new image, with the given mask.
The result will look like:
Code:
#include <opencv2/opencv.hpp>
#include <vector>

using namespace std;
using namespace cv;

int main()
{
    Mat3b img1 = imread("path_to_image_1");
    Mat3b img2 = imread("path_to_image_2");

    Mat3b diff;
    absdiff(img1, img2, diff);

    // Split each channel
    vector<Mat1b> masks;
    split(diff, masks);

    // Create a black mask
    Mat1b mask(diff.rows, diff.cols, uchar(0));

    // OR with each channel of the N channels mask
    for (int i = 0; i < masks.size(); ++i)
    {
        mask |= masks[i];
    }

    // Binarize mask
    mask = mask > 100;

    // Results images
    vector<Mat3b> difference_images;

    // Remove small blobs
    //Mat kernel = getStructuringElement(MORPH_RECT, Size(5,5));
    //morphologyEx(mask, mask, MORPH_OPEN, kernel);

    // Find connected components
    vector<vector<Point>> contours;
    findContours(mask.clone(), contours, CV_RETR_EXTERNAL, CHAIN_APPROX_NONE);

    for (int i = 0; i < contours.size(); ++i)
    {
        if (contours[i].size() > 1000)
        {
            Mat1b mm(mask.rows, mask.cols, uchar(0));

            vector<Point> hull;
            convexHull(contours[i], hull);
            fillConvexPoly(mm, hull, Scalar(255));

            Mat3b difference_img(img2.rows, img2.cols, Vec3b(0,0,0));
            img2.copyTo(difference_img, mm);

            difference_images.push_back(difference_img.clone());
        }
    }

    return 0;
}
Thought I'd try my hand at a little (auto)correlation/convolution today in openCV and make my own 2D filter kernel.
Following openCV's 2D Filter Tutorial, I discovered that making your own kernels for openCV's filter2D might not be that hard. However, I'm getting unhandled exceptions when I try to use one.
Code with comments relating to the issue here:
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
#include <iostream>
using namespace cv;
using namespace std;
int main(int argc, char** argv) {
//Loading the source image
Mat src;
src = imread( "1.png" );
//Output image of the same size and the same number of channels as src.
Mat dst;
//Mat dst = src.clone(); //didn't help...
//desired depth of the destination image
//negative so dst will be the same as src.depth()
int ddepth = -1;
//the convolution kernel, a single-channel floating point matrix:
Mat kernel = imread( "kernel.png" );
kernel.convertTo(kernel, CV_32F); //<<not working
//normalize(kernel, kernel, 1.0, 0.0, 4, -1, noArray()); //doesn't help
//cout << kernel.size() << endl; // ... gives 11, 11
//however, the example from tutorial that does work:
//kernel = Mat::ones( 11, 11, CV_32F )/ (float)(11*11);
//default value (-1,-1) here means that the anchor is at the kernel center.
Point anchor = Point(-1,-1);
//value added to the filtered pixels before storing them in dst.
double delta = 0;
//alright, let's do this...
filter2D(src, dst, ddepth , kernel, anchor, delta, BORDER_DEFAULT );
imshow("Source", src); //<< unhandled exception here
imshow("Kernel", kernel);
imshow("Destination", dst);
waitKey(1000000);
return 0;
}
As you can see, using the tutorial's kernel works fine, but my image crashes the program. I've tried changing the bit depth, normalizing, checking the size, and commenting out lots of blocks to see where it fails, but I haven't cracked it yet.
The image is '1.png':
And the kernel I want 'kernel.png':
I'm trying to see if I can get a hotspot in dst at the point where the eye catchlight is (the kernel I've chosen is the catchlight). I know there are other ways to do this, but I'm interested to see how effective convolving the catchlight over itself is. (autocorrelation I think that's called?)
Direct questions:
why the crash?
is the crash indicating a fundamental conceptual mistake?
or (hopefully) is it just some (silly) fault in the code?
Thanks in advance for any help :)
You should post the assertion error; that would help someone answer you rather than leave them guessing why it crashes. Anyway, I have posted below the likely error and the solution for the filter2D convolution.
Error 1:
OpenCV Error: Assertion failed (src.channels() == 1 && func != 0) in cv::countNonZero, file C:\builds\2_4_PackSlave-win32-vc12-shared\opencv\modules\core\src\stat.cpp, line 549
Solution: Your input image and the kernel should both be grayscale. You can use the flag 0 in imread (e.g. cv::imread("kernel.png", 0) to read the image as grayscale). If you want to apply different kernels to different channels, split the image into separate color planes using split() and process them individually.
I don't see anything other than the above error that could crash. Kernel dimensions should be odd numbers, and your kernel image is 11x11, which is fine. If it still crashes, kindly provide more information so we can help you out.
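A minimal sketch of that fix, using the file names from the question (normalizing the kernel is an extra assumption so the output stays in a sensible brightness range; it is not needed to avoid the crash):
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
using namespace cv;

int main()
{
    // Load both the source and the kernel as single-channel grayscale (flag 0).
    Mat src = imread("1.png", 0);
    Mat kernel = imread("kernel.png", 0);

    // Convert the kernel to float and normalize it so the filtered image
    // keeps roughly the original brightness.
    kernel.convertTo(kernel, CV_32F);
    kernel = kernel / sum(kernel)[0];

    Mat dst;
    filter2D(src, dst, -1, kernel, Point(-1, -1), 0, BORDER_DEFAULT);

    imshow("Source", src);
    imshow("Destination", dst);
    waitKey(0);
    return 0;
}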
As you may know, many things changed in OpenCV 3. In previous versions of OpenCV I used to do it this way:
Mat trainData(classes * samples, ImageSize, CV_32FC1);
Mat trainClasses(classes * samples, 1, CV_32FC1);
KNNLearning(&trainData, &trainClasses); //learning function
KNearest knearest(trainData, trainClasses); //creating
//loading input image
Mat input = imread("input.jpg");
//digital recognition
learningTest(input, knearest);//test
I also found an example of how to do it, but I still get errors in the create function:
Ptr<KNearest> knearestKdt = KNearest::create(ml::KNearest::Params(10, true, INT_MAX, ml::KNearest::KDTREE));
knearestKdt->train(trainData, ml::ROW_SAMPLE, trainLabels);
knearestKdt->findNearest(testData, 4, bestLabels);
Can you please provide me with information, how to rewrite the actual code of KNearest to openCV 3 properly?
The API has changed once again since #aperture-laboratories' answer. I hope they keep the documentation up to date when they release new features or changes in the future.
A working example is as follows:
using namespace cv::ml;

//Be sure to change number_of_... to fit your data!
Mat matTrainFeatures(0, number_of_train_elements, CV_32F);
Mat matSample(0, number_of_sample_elements, CV_32F);

Mat matTrainLabels(0, number_of_train_elements, CV_32F);
Mat matSampleLabels(0, number_of_sample_elements, CV_32F);

Mat matResults(0, 0, CV_32F);

//etcetera code for loading data into Mat variables suppressed

Ptr<TrainData> trainingData;
Ptr<KNearest> kclassifier = KNearest::create();

trainingData = TrainData::create(matTrainFeatures,
                                 SampleTypes::ROW_SAMPLE, matTrainLabels);

kclassifier->setIsClassifier(true);
kclassifier->setAlgorithmType(KNearest::Types::BRUTE_FORCE);
kclassifier->setDefaultK(1);

kclassifier->train(trainingData);
kclassifier->findNearest(matSample, kclassifier->getDefaultK(), matResults);

//Just checking the settings
cout << "Training data: " << endl
     << "getNSamples\t" << trainingData->getNSamples() << endl
     << "getSamples\n" << trainingData->getSamples() << endl
     << endl;

cout << "Classifier :" << endl
     << "kclassifier->getDefaultK(): " << kclassifier->getDefaultK() << endl
     << "kclassifier->getIsClassifier() : " << kclassifier->getIsClassifier() << endl
     << "kclassifier->getAlgorithmType(): " << kclassifier->getAlgorithmType() << endl
     << endl;

//confirming sample order
cout << "matSample: " << endl
     << matSample << endl
     << endl;

//displaying the results
cout << "matResults: " << endl
     << matResults << endl
     << endl;

//etcetera ending for main function
KNearest::Params params;
params.defaultK=5;
params.isclassifier=true;
//////// Train and find with knearest
Ptr<TrainData> knn;
knn= TrainData::create(AmatOfFeatures,ROW_SAMPLE,AmatOfLabels);
Ptr<KNearest> knn1;
knn1=StatModel::train<KNearest>(knn,params);
knn1->findNearest(AmatOfFeaturesToTest,4,ResultMatOfNearestNeighbours);
/////////////////
The names of these functions will help you find them in the documentation.
However, the documentation might be a little confusing until it is fully updated, so the best way to do exactly what you want is to build a small toy example and work by trial and error.
This is a working example, pasted straight out of my own code, which is proven to work. Hope that helps.
I'm new to OpenCV and am working on a video analysis project. Basically, I want to split my webcam feed into two sides (left and right), and I have already figured out how to do this. However, I also want to analyze each side for red and green colors and print out the number of pixels that are red/green. I must have gone through every possible blog to figure this out, but alas it still doesn't work.
The following code runs, but instead of detecting red as the code might suggest, it seems to pick up white (all light sources and white walls). I have spent hours combing through the code but still cannot find the solution. Please help! Also note that this is being run on OSX 10.8, via Xcode. Thanks!
#include <iostream>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/core/core.hpp"

using namespace cv;
using namespace std;

int main( int argc, char** argv )
{
    VideoCapture cap(0); //capture the video from webcam

    if ( !cap.isOpened() ) // if not success, exit program
    {
        cout << "Cannot open the web cam" << endl;
        return -1;
    }

    namedWindow("HSVLeftRed", CV_WINDOW_AUTOSIZE);
    namedWindow("HSVLeftGreen", CV_WINDOW_AUTOSIZE);

    while (true) {
        Mat image;
        cap.read(image);

        Mat HSV;
        Mat threshold;

        //Left Cropping
        Mat leftimg = image(Rect(0, 0, 640, 720));

        //Left Red Detection
        cvtColor(leftimg, HSV, CV_BGR2HSV);
        inRange(HSV, Scalar(0,0,150), Scalar(0,0,255), threshold);
        imshow("HSVLeftRed", threshold);

        //Left Green Detection
        cvtColor(leftimg, HSV, CV_BGR2HSV);
        inRange(HSV, Scalar(still need to find proper min values), Scalar(still need to find proper max values), threshold);
        imshow("HSVLeftGreen", threshold);
    }
    return 0;
}
You're cropping a 640x720 area, which might not exactly match your actual frame size. Tip: check your capture resolution with capture.get(CAP_PROP_FRAME_WIDTH) and capture.get(CAP_PROP_FRAME_HEIGHT). You might also want to rename Mat threshold --> Mat thresholded. This is just some ranting :)
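A minimal sketch of that tip, reusing the variable names from your code (the CAP_PROP_* constants are the OpenCV 3 names; in 2.4 they are CV_CAP_PROP_*):
// Derive the crop rectangles from the real frame size instead of hard-coding 640x720.
int W = (int)cap.get(CAP_PROP_FRAME_WIDTH);
int H = (int)cap.get(CAP_PROP_FRAME_HEIGHT);
Mat leftimg  = image(Rect(0,     0, W / 2,     H));
Mat rightimg = image(Rect(W / 2, 0, W - W / 2, H));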
What I suspect is the actual issue is the threshold you use for HSV. According to the cvtColor documentation, section on RGB to HSV conversion,
On output 0 <= V <= 1.
so you should use a float representing your V threshold, i.e. 150 -> 150/255 ~= 0.58 etc.
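A minimal sketch of that idea, assuming you first convert the frame to float so the documented 0..1 range for V applies (variable names are taken from the question; the 150/255 value is only the example from this answer, not a tuned threshold):
// Convert to float so that after cvtColor H lies in [0,360] and S,V in [0,1].
Mat leftf, HSVf, bright;
leftimg.convertTo(leftf, CV_32FC3, 1.0 / 255.0);
cvtColor(leftf, HSVf, CV_BGR2HSV);
inRange(HSVf, Scalar(0, 0, 150.0 / 255.0), Scalar(360, 1, 1), bright); // keep pixels with V above ~0.58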