HSV and inRange method doesn't work properly - C++

I have a problem using the inRange() method in OpenCV.
I converted the frame to HSV, but when I use inRange() it doesn't filter the correct color.
Can anyone help me?
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

using namespace std;
using namespace cv;

int main(int argc, char** argv)
{
    while (true) {
        Mat input = imread("/home/xenups/Desktop/szpAl.png");
        Mat hsv;
        Mat output;
        cvtColor(input, hsv, CV_BGR2HSV);
        inRange(hsv, Scalar(244, 194, 194), Scalar(255, 0, 0), output);
        imshow("ss", input);
        imshow("redOnly", output);
        waitKey(2);
    }
}
I used a different scalar color, Scalar(244, 194, 194) and Scalar(255, 0, 0), from this site, and I still have the problem.

Your Scalar values are wrong: HSV images in OpenCV have maximum values of (179, 255, 255), and each channel of the lower bound must be below the corresponding channel of the upper bound. Use a colour picker to determine the values you need.
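For example, here is a minimal sketch of isolating red in HSV (the exact bounds are an assumption and will need tuning for your image). Red sits at both ends of OpenCV's 0-179 hue axis, so two masks are combined:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;

int main()
{
    Mat input = imread("/home/xenups/Desktop/szpAl.png");
    Mat hsv, lowerRed, upperRed, output;
    cvtColor(input, hsv, CV_BGR2HSV);

    // Red wraps around hue 0, so build a mask for each end of the hue axis.
    // These bounds are a starting point, not values measured for this image.
    inRange(hsv, Scalar(0, 100, 100), Scalar(10, 255, 255), lowerRed);
    inRange(hsv, Scalar(170, 100, 100), Scalar(179, 255, 255), upperRed);
    bitwise_or(lowerRed, upperRed, output);

    imshow("redOnly", output);
    waitKey(0);
}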

Related

Isolate image shadow with OpenCV C++

I have seen some algorithms for removing a shadow from an image using OpenCV with C++. I have looked around but haven't found a way to not just erase the shadow, but store it in a new image of its own.
What I am doing with this code is converting the original image (which I obtained from the Internet) to the HSV color space, changing the value to V=180, which somehow removes the shadow, and then converting the image back to the BGR color space. I am clueless about how to 'extract' the removed shadow and save it to a different image...
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc.hpp>

using namespace cv;
using namespace std;

int main()
{
    Mat srcImg;
    Mat hsvImg;
    Mat bgrImg;

    srcImg = imread("pcb-2008.jpg");

    cvtColor(srcImg, hsvImg, CV_BGR2HSV);
    imwrite("1.hsv.jpg", hsvImg);

    Mat channel[3];
    split(hsvImg, channel);
    channel[2] = Mat(hsvImg.rows, hsvImg.cols, CV_8UC1, 180);
    merge(channel, 3, hsvImg);
    imwrite("2.hsvNoShadow.jpg", hsvImg);

    cvtColor(hsvImg, bgrImg, CV_HSV2BGR);
    imwrite("3.backToBgr.jpg", bgrImg);

    return 0;
}
Sample image of a PCB
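One possible way to keep the shadow, sketched below (this is an assumption on my part, not a tested answer): if setting V=180 removes the shadow, then the shadow is whatever darkening that step erased, i.e. the amount by which the original V channel falls below 180.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc.hpp>

using namespace cv;

int main()
{
    Mat srcImg = imread("pcb-2008.jpg");

    Mat hsvImg;
    cvtColor(srcImg, hsvImg, CV_BGR2HSV);

    Mat channel[3];
    split(hsvImg, channel);

    // How far each pixel's brightness falls below the flat value of 180.
    // subtract() saturates at 0 for 8-bit data, so pixels brighter than
    // 180 simply come out black, leaving only the darkened (shadow) regions.
    Mat shadow;
    subtract(Scalar(180), channel[2], shadow);

    imwrite("4.shadowOnly.jpg", shadow);
    return 0;
}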

Rotate area to align the major axis horizontally with opencv

Can someone here help me with this? I'm trying to rotate a segmented region of an image to align the major axis horizontally.
I have a segmented region in the center of the image, following the steps used here: Move area of an image to the center using OpenCV
I read this OPENCV: PCA application error in image_proc, but it does not help me solve my problem.
I have this
I want this
Slightly differently from what Miki suggested, I used findNonZero, minAreaRect, and warpAffine.
You can use either 270 or 90 in getRotationMatrix2D to align the major axis with the horizontal.
#include "stdafx.h"
#include <opencv/cxcore.h>
#include <opencv2\core\mat.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include <opencv/cxcore.h>
#include <opencv/highgui.h>
#include <opencv/cv.h>
#include <opencv2/opencv.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/videoio/videoio.hpp>
using namespace cv;
using namespace std;
int main() {
//getting the image
Mat image = imread("C:/this/is/a/path/to/an/image.png");
//create new image that looks exactly like old image
Mat rot_image = image.clone();
rot_image = Scalar(0);
//showing the image
namedWindow("Image", CV_WINDOW_NORMAL| CV_WINDOW_KEEPRATIO | CV_GUI_EXPANDED);
namedWindow("Rotated Image", CV_WINDOW_NORMAL| CV_WINDOW_KEEPRATIO | CV_GUI_EXPANDED);
imshow("Image", image);
waitKey(0);
imshow("Rotated Image", rot_image);
waitKey(0);
//convert image
Mat img_bw;
inRange(image, Scalar(1,1,1), Scalar(255,255,255), img_bw);
imshow("Rotated Image", img_bw);
waitKey(0);
//find coordinates
Mat nonZeroCoordinates;
findNonZero(img_bw, nonZeroCoordinates);
RotatedRect rect = minAreaRect(nonZeroCoordinates);
rect.center = Point(image.cols/2, image.rows/2);
//get the Rotation Matrix
Mat M = getRotationMatrix2D(rect.center, 270, 1.0);
// perform the affine transformation
warpAffine(image, rot_image, M, image.size(), INTER_CUBIC);
//displaying the image
imshow("Rotated Image", rot_image);
waitKey(0);
//saving the new image
imwrite("C:/this/is/a/path/to/a/rotatedImage.png", rot_image);
}
That code turns this:
to this:
You can take the rect.center line out if you're sure your object is already going to be in the center.
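As a possible refinement (my assumption, not part of the original answer): minAreaRect already reports the region's angle, so the rotation-matrix line could use rect.angle instead of a hard-coded 270 or 90, for example:
// Hypothetical drop-in replacement for the getRotationMatrix2D line above.
// minAreaRect reports angles in [-90, 0), measured against one side of the
// box, so add 90 when the reported "width" is the shorter side.
double angle = rect.angle;
if (rect.size.width < rect.size.height)
    angle += 90.0;   // make the longer side end up horizontal
Mat M = getRotationMatrix2D(rect.center, angle, 1.0);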

OpenCV SIFT key points extraction issue

I tried to extract SIFT key points. It works fine for a sample image I downloaded (height 400 px, width 247 px, horizontal and vertical resolution 300 dpi). The image below shows the extracted points.
Then I tried to apply the same code to an image that was taken and edited by me (height 443 px, width 541 px, horizontal and vertical resolution 72 dpi).
To create the above image I rotated the original image, then removed its background and resized it using Photoshop, but for that image my code doesn't extract features like it does in the first image.
See the result:
It extracts just a few points. I expect a result like the one in the first case.
In the second case, when I use the original image without any edits, the program finds points as in the first case.
Here is the simple code I used:
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <opencv2/nonfree/nonfree.hpp>

using namespace cv;
using namespace std;  // needed for vector<KeyPoint>

int main() {
    Mat src, descriptors, dest;
    vector<KeyPoint> keypoints;

    src = imread(". . .");
    cvtColor(src, src, CV_BGR2GRAY);

    SIFT sift;
    sift(src, src, keypoints, descriptors, false);

    drawKeypoints(src, keypoints, dest);
    imshow("Sift", dest);
    cvWaitKey(0);
    return 0;
}
What am I doing wrong here? What do I need to do to get a result like the first case for my own image after resizing?
Thank you!
Try setting the nfeatures parameter (other parameters may also need adjustment) in the SIFT constructor.
Here is the constructor definition from the reference:
SIFT::SIFT(int nfeatures=0, int nOctaveLayers=3, double contrastThreshold=0.04, double edgeThreshold=10, double sigma=1.6)
Your code will be:
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <opencv2/nonfree/nonfree.hpp>

using namespace cv;
using namespace std;

int main() {
    Mat src, descriptors, dest;
    vector<KeyPoint> keypoints;

    src = imread("D:\\ImagesForTest\\leaf.jpg");
    cvtColor(src, src, CV_BGR2GRAY);

    SIFT sift(2000, 3, 0.004);
    sift(src, src, keypoints, descriptors, false);

    drawKeypoints(src, keypoints, dest);
    imshow("Sift", dest);
    cvWaitKey(0);
    return 0;
}
The result:
Dense sampling example:
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include "opencv2/nonfree/nonfree.hpp"

int main(int argc, char* argv[])
{
    cv::initModule_nonfree();
    cv::namedWindow("result");

    cv::Mat bgr_img = cv::imread("D:\\ImagesForTest\\lena.jpg");
    if (bgr_img.empty())
    {
        exit(EXIT_FAILURE);
    }

    cv::Mat gray_img;
    cv::cvtColor(bgr_img, gray_img, cv::COLOR_BGR2GRAY);
    cv::normalize(gray_img, gray_img, 0, 255, cv::NORM_MINMAX);

    cv::DenseFeatureDetector detector(12.0f, 1, 0.1f, 10);
    std::vector<cv::KeyPoint> keypoints;
    detector.detect(gray_img, keypoints);

    std::vector<cv::KeyPoint>::iterator itk;
    for (itk = keypoints.begin(); itk != keypoints.end(); ++itk)
    {
        std::cout << itk->pt << std::endl;
        cv::circle(bgr_img, itk->pt, itk->size, cv::Scalar(0, 255, 255), 1, CV_AA);
        cv::circle(bgr_img, itk->pt, 1, cv::Scalar(0, 255, 0), -1);
    }

    cv::Ptr<cv::DescriptorExtractor> descriptorExtractor = cv::DescriptorExtractor::create("SURF");
    cv::Mat descriptors;
    descriptorExtractor->compute(gray_img, keypoints, descriptors);

    // SIFT returns large negative values when it goes off the edge of the image.
    descriptors.setTo(0, descriptors < 0);

    imshow("result", bgr_img);
    cv::waitKey();
    return 0;
}
The result:

Output produces 4 images side by side for a single image provided in gradient calculation

The following code calculates the normalized gradient at all pixels of an image. But when using imshow on the calculated gradient, instead of showing the gradient for the provided image, it shows the gradient of the provided image 4 times (side by side).
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <opencv2/core/core.hpp>
#include<iostream>
#include<math.h>
using namespace cv;
using namespace std;
Mat mat2gray(const Mat& src)
{
Mat dst;
normalize(src, dst, 0.0, 1.0, NORM_MINMAX);
return dst;
}
Mat setImage(Mat srcImage){
//GaussianBlur(srcImage,srcImage,Size(3,3),0.5,0.5);
Mat avgImage = Mat::zeros(srcImage.rows,srcImage.cols,CV_32F);
Mat gradient = Mat::zeros(srcImage.rows,srcImage.cols,CV_32F);
Mat norMagnitude = Mat::zeros(srcImage.rows,srcImage.cols,CV_32F);
Mat orientation = Mat::zeros(srcImage.rows,srcImage.cols,CV_32F);
//Mat_<uchar> srcImagetemp = srcImage;
float dx,dy;
for(int i=0;i<srcImage.rows-1;i++){
for(int j=0;j<srcImage.cols-1;j++){
dx=srcImage.at<float>(i,j+1)-srcImage.at<float>(i,j);
dy=srcImage.at<float>(i+1,j)-srcImage.at<float>(i,j);
gradient.at<float>(i,j)=sqrt(dx*dx+dy*dy);
orientation.at<float>(i,j)=atan2(dy,dx);
//cout<<gradient.at<float>(i,j)<<endl;
}
}
GaussianBlur(gradient,avgImage,Size(7,7),3,3);
for(int i=0;i<srcImage.rows;i++){
for(int j=0;j<srcImage.cols;j++){
norMagnitude.at<float>(i,j)=gradient.at<float>(i,j)/max(avgImage.at<float>(i,j),float(4));
//cout<<norMagnitude.at<float>(i,j)<<endl;
}
}
imshow("b",(gradient));
waitKey();
return norMagnitude;
}
int main(int argc,char **argv){
Mat image=imread(argv[1]);
cvtColor( image,image, CV_BGR2GRAY );
Mat newImage=setImage(image);
imshow("a",(newImage));
waitKey();
}
Your incoming source image is of type CV_8UC1, and yet you read it as floats:
dx = srcImage.at<float>(i, j + 1) - srcImage.at<float>(i, j);
dy = srcImage.at<float>(i + 1, j) - srcImage.at<float>(i, j);
If running under the debugger, this should have thrown an assertion, which would have highlighted the problem.
Try changing those lines to use unsigned char as follows:
dx = (float)(srcImage.at<unsigned char>(i, j + 1) - srcImage.at<unsigned char>(i, j));
dy = (float)(srcImage.at<unsigned char>(i + 1, j) - srcImage.at<unsigned char>(i, j));
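Alternatively (a suggestion of mine, not part of the original answer), you could convert the input to floats once at the top of setImage, which leaves the rest of the function untouched:
Mat setImage(Mat srcImage) {
    // Convert the 8-bit input to CV_32F up front so every at<float>() below is valid.
    srcImage.convertTo(srcImage, CV_32F);
    // ... rest of the function unchanged ...
}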

Motion detection with OpenCV c++

I'm trying to play with my webcam and OpenCV.
I'm following this tutorial: http://mateuszstankiewicz.eu/?p=189.
But the only result I get is one red border, and I don't understand why. Could anyone help me make it right and fix this?
Here is my code:
#include "mvt_detection.h"
Mvt_detection::Mvt_detection()
{
}
Mvt_detection::~Mvt_detection()
{
}
cv::Mat Mvt_detection::start(cv::Mat frame)
{
cv::Mat back;
cv::Mat fore;
cv::BackgroundSubtractorMOG2 bg(5,3,true) ;
cv::namedWindow("Background");
std::vector<std::vector<cv::Point> > contours;
bg.operator ()(frame,fore);
bg.getBackgroundImage(back);
cv::erode(fore,fore,cv::Mat());
cv::dilate(fore,fore,cv::Mat());
cv::findContours(fore,contours,CV_RETR_EXTERNAL,CV_CHAIN_APPROX_NONE);
cv::drawContours(frame,contours,-1,cv::Scalar(0,0,255),2);
return frame;
}
Here is a screenshot of what our cam returns:
I tried two other videos, from there and there, and the issue is the same.
Thanks for the help :).
As @Lenjyco said, we fixed the problem.
@Micka had a good idea:
Firstly, the BackgroundSubtractorMOG2 has to be instantiated only ONCE.
We instantiate it in the constructor and play with the history and threshold:
Mvt_detection::Mvt_detection()
{
    bg = new cv::BackgroundSubtractorMOG2(10, 16, false);
}
10: the number of frames the background model looks back at for comparison.
16: the threshold level (blur).
This way, we are now able to detect motion.
Thank you!
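For reference, a minimal sketch of what the header could look like with bg as a field (the exact class layout is my assumption; the point is the one-time construction):
// mvt_detection.h (hypothetical layout)
#include <opencv2/opencv.hpp>

class Mvt_detection
{
public:
    Mvt_detection();    // creates bg once, so the model keeps its history
    ~Mvt_detection();   // should delete bg
    cv::Mat start(cv::Mat frame);

private:
    // Created once in the constructor, so the background model
    // accumulates history across successive calls to start().
    cv::BackgroundSubtractorMOG2* bg;
};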
I have used the following code, which is similar to yours, and it works well. I am also taking input from my webcam. In your code I didn't find any imshow() or waitKey() calls; try using them. My code is the following:
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/video/background_segm.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdio.h>
#include <iostream>
#include <vector>
using namespace std;
using namespace cv;
int main()
{
VideoCapture cap;
bool update_bg_model = true;
cap.open(0);
cv::BackgroundSubtractorMOG2 bg;//(100, 3, 0.3, 5);
bg.set ("nmixtures", 3);
std::vector < std::vector < cv::Point > >contours;
cv::namedWindow ("Frame");
cv::namedWindow ("Background");
Mat frame, fgmask, fgimg, backgroundImage;
for(;;)
{
cap >> frame;
bg.operator()(frame, fgimg);
bg.getBackgroundImage (backgroundImage);
cv::erode (fgimg, fgimg, cv::Mat ());
cv::dilate (fgimg, fgimg, cv::Mat ());
cv::findContours (fgimg, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
cv::drawContours (frame, contours, -1, cv::Scalar (0, 0, 255), 2);
cv::imshow ("Frame", frame);
cv::imshow ("Background", backgroundImage);
char k = (char)waitKey(30);
if( k == 27 ) break;
}
return 0;
}
Problem fixed: making BackgroundSubtractorMOG2 a field of my object and initialising it in the constructor makes it work well.