On an image frame, I use
void ellipse(Mat& img, Point center, Size axes, double angle, double startAngle, double endAngle, const Scalar& color, int thickness=1, int lineType=8, int shift=0)
to draw an ellipse and I want to set the ellipse color to green [ RGB value : (165, 206, 94) ].
So I set the parameter const Scalar& color to
cv::Scalar(94.0, 206.0, 165.0, 0.0); // BGR order, assuming values range from 0.0 to 255.0
cv::Scalar(94.0/255.0, 206.0/255.0, 165.0/255.0, 0.0); // assuming values range from 0.0 to 1.0
I also tried the CV_RGB alternative:
CV_RGB(165.0, 206.0, 94.0); // RGB order, assuming values range from 0.0 to 255.0
CV_RGB(165.0/255.0, 206.0/255.0, 94.0/255.0); // assuming values range from 0.0 to 1.0
But the color displayed is white [ RGB value (255, 255, 255) ], not the desired green.
What did I miss here? Any suggestions, please? Thank you.
EDIT:
Let me put the whole related code here. Following OpenCV iOS - Video Processing, this is the CvVideoCamera configuration in - (void)viewDidLoad;:
self.videoCamera = [[CvVideoCamera alloc] initWithParentView:imgView];
[self.videoCamera setDelegate:self];
self.videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionFront;
self.videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPreset352x288;
self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
self.videoCamera.defaultFPS = 30;
self.videoCamera.grayscaleMode = NO;
[self.videoCamera adjustLayoutToInterfaceOrientation:UIInterfaceOrientationPortrait];
Then, after [self.videoCamera start]; is called, each captured frame arrives as (Mat&)image and can be processed in the CvVideoCameraDelegate method - (void)processImage:(Mat&)image;. Here is the code that draws the ellipse:
- (void)processImage:(Mat&)image {
NSLog(#"image.type(): %d", image.type()); // got 24
// image.convertTo(image, CV_8UC3); // try to convert image type, but with or without this line result the same
NSLog(#"image.type(): %d", image.type()); // also 24
cv::Scalar colorScalar = cv::Scalar( 94, 206, 165 );
cv::Point center( image.size().width*0.5, image.size().height*0.5 );
cv::Size size( 100, 100 );
cv::ellipse( image, center, size, 0, 0, 360, colorScalar, 4, 8, 0 );
}
In the end, the ellipse is still white, not the desired green.
Setting the alpha to 255 fixes this problem:
Scalar(94, 206, 165, 255)
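Applied to the processImage: method from the question, it is a one-line change; the rest of the drawing code stays the same:
cv::Scalar colorScalar = cv::Scalar( 94, 206, 165, 255 ); // B, G, R, plus alpha = 255 for the 4-channel (CV_8UC4) camera frame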
As mrgloom correctly points out in the comment, it might be because of the type of your image [ the Mat object you draw on, i.e. Mat& img in the ellipse() function ].
cv::Scalar(94, 206, 165) is the desired green color for 8UC3 images. Setting these values on a 32FC3 image will result in white.
You can use
src.convertTo(src, CV_8UC3);
where CV_8UC3 means an 8-bit unsigned char, 3-channel image representation.
You can find more information in the OpenCV docs.
After that your ellipse should be green; if it doesn't help, post the whole code.
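If your frame really is a floating-point image, a minimal alternative sketch (assuming the float image stores values in the 0.0-1.0 range) is to keep the image as it is and scale the colour down instead:
cv::Scalar green32f( 94/255.0, 206/255.0, 165/255.0 ); // the same green, expressed for a 0.0-1.0 float image
cv::ellipse( image, center, size, 0, 0, 360, green32f, 4, 8, 0 );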
I was having a similar problem and managed to fix it by first converting the image to BGR. In your case the processImage function would look like this:
-(void)processImage:(Mat&)image
{
cvtColor(image, image, CV_RGBA2BGR);
cv::Scalar colorScalar = cv::Scalar( 94, 206, 165 );
cv::Point center( image.size().width*0.5, image.size().height*0.5 );
cv::Size size( 100, 100 );
cv::ellipse( image, center, size, 0, 0, 360, colorScalar, 4, 8, 0 );
}
The only line I added to your code is:
cvtColor(image, image, CV_RGBA2BGR);
If you also log the channel, depth and type information in the above function, as follows:
NSLog(@"Before conversion");
NSLog(@"channels %d", image.channels());
NSLog(@"depth %d", image.depth());
NSLog(@"type %d", image.type());
NSLog(@"element size %lu", image.elemSize());
cvtColor(image, image, CV_RGBA2BGR);
NSLog(@"After conversion");
NSLog(@"channels %d", image.channels());
NSLog(@"depth %d", image.depth());
NSLog(@"type %d", image.type());
NSLog(@"element size %lu", image.elemSize());
you will see before conversion:
channels 4
depth 0
type 24
element size 4
which I think is CV_8UC4 and after conversion it becomes:
channels 3
depth 0
type 16
element size 3
which is CV_8UC3.
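If you want to verify those numeric codes yourself, this small stand-alone C++ check (not part of the original project, just a sketch) prints the constants:
#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    std::printf("CV_8UC4 = %d\n", CV_8UC4); // prints 24: 8-bit unsigned, 4 channels
    std::printf("CV_8UC3 = %d\n", CV_8UC3); // prints 16: 8-bit unsigned, 3 channels
    return 0;
}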
I guess one of the reasons it does not work without cvtColor is that the OpenCV drawing functions don't support alpha transparency when the target image is 4-channel, as mentioned in the OpenCV documentation. So by converting with CV_RGBA2BGR we take out the alpha channel. Having said that, I did not manage to get it working with:
cvtColor(image, image, CV_RGBA2RGB);
With this, the red and blue colors end up swapped in the image. So although the conversion seems to work, I am not sure that is the actual reason.
I am trying to segment the color green in the HSV color space. I have this image of a tree and I would like only the upper part of the tree to remain.
This is the image I am starting from, and the mask I obtain is just an entirely black image.
This is my current code:
Mat input = imread(image_location);
imshow("input img",input); waitKey(0);
//convert image to HSV
Mat input_hsv;
cvtColor(input,input_hsv,COLOR_BGR2HSV);
vector<Mat>channels;
split(input_hsv, channels);
Mat H = channels[0];
Mat S = channels[1];
Mat V = channels[2];
Mat mask2;
inRange(input_hsv, Scalar(70, 0, 0), Scalar(143, 255, 255), mask2);
imshow("mask2", mask2);waitKey(0);
Normally the green color in HSV ranges from roughly 70 to 140 degrees.
But it doesn't seem to work at all. Could somebody help?
You are working with 8U images. Thus the H component, which normally spans [0, 360) degrees, is halved so it fits into 8 bits.
See the docs, for 8-bit images: V ← 255·V, S ← 255·S, H ← H/2 (to fit into 0 to 255).
So the original H green range [70,140] should be halved to [35,70].
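With the rest of your code unchanged, the inRange call then becomes something like the sketch below; you may still want to raise the S and V minimums to suppress dark or washed-out background pixels:
inRange(input_hsv, Scalar(35, 0, 0), Scalar(70, 255, 255), mask2);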
How can I crop a non-rectangular region from an image?
Imagine I have four points and I want to crop the region they enclose; the shape is not necessarily a rectangle or a triangle!
For example, I have the following image:
and I want to crop this region from it:
How can I do this?
regards..
The procedure for cropping an arbitrary quadrilateral (or any polygon, for that matter) out of an image can be summed up as:
Generate a "mask". The mask is white where you want to keep the image, and black where you don't
Compute the "bitwise_and" between your input image and the mask
So, let's assume you have an image. Throughout this I'll use an image size of 30x30 for simplicity; you can change this to suit your use case.
cv::Mat source_image = cv::imread("filename.txt");
And you have four points you want to use as the corners:
cv::Point corners[1][4];
corners[0][0] = Point( 10, 10 );
corners[0][1] = Point( 20, 20 );
corners[0][2] = Point( 30, 10 );
corners[0][3] = Point( 20, 10 );
const Point* corner_list[1] = { corners[0] };
You can use the function cv::fillPoly to draw this shape on a mask:
int num_points = 4;
int num_polygons = 1;
int line_type = 8;
cv::Mat mask(30,30,CV_8UC3, cv::Scalar(0,0,0));
cv::fillPoly( mask, corner_list, &num_points, num_polygons, cv::Scalar( 255, 255, 255 ), line_type);
Then simply compute the bitwise_and of the image and mask:
cv::Mat result;
cv::bitwise_and(source_image, mask, result);
result now contains the cropped image. If you want the area outside the polygon to end up white instead of black, you could instead do:
cv::Mat result_white(30,30,CV_8UC3, cv::Scalar(255,255,255));
cv::bitwise_and(source_image, mask, result_white, mask);
In this case we use bitwise_and's mask parameter to only do the bitwise_and inside the mask. See this tutorial for more information and links to all the functions I mentioned.
You may use cv::Mat::copyTo() like this:
cv::Mat img = cv::imread("image.jpeg");
// note mask may be single channel, even if img is multichannel
cv::Mat mask = cv::Mat::zeros(img.rows, img.cols, CV_8UC1);
// fill mask with nonzero values, e.g. as Tim suggests
// cv::fillPoly(...)
cv::Mat result(img.size(), img.type(), cv::Scalar(255, 255, 255));
img.copyTo(result, mask);
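Putting the two answers together, a minimal end-to-end sketch could look like this (the corner coordinates are assumed, and the vector-of-points overload of cv::fillPoly used here is available in recent OpenCV versions):
cv::Mat img = cv::imread("image.jpeg");
cv::Mat mask = cv::Mat::zeros(img.rows, img.cols, CV_8UC1);
std::vector<std::vector<cv::Point>> polygon{ { {10, 10}, {20, 20}, {30, 10}, {20, 10} } };
cv::fillPoly(mask, polygon, cv::Scalar(255));   // white inside the quadrilateral
cv::Mat result(img.size(), img.type(), cv::Scalar(255, 255, 255));
img.copyTo(result, mask);                       // copy only the masked pixels onto the white canvas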
I have the following image:
I would like to detect the red rectangle using cv::inRange method and HSV color space.
int H_MIN = 0;
int H_MAX = 10;
int S_MIN = 70;
int S_MAX = 255;
int V_MIN = 50;
int V_MAX = 255;
cv::cvtColor( input, imageHSV, cv::COLOR_BGR2HSV );
cv::inRange( imageHSV, cv::Scalar( H_MIN, S_MIN, V_MIN ), cv::Scalar( H_MAX, S_MAX, V_MAX ), imgThreshold0 );
I already created dynamic trackbars in order to change the values for HSV, but I can't get the desired result.
Any suggestions for the best values (and maybe filters) to use?
In OpenCV's HSV space, the red hue wraps around 180. So you need H values both in [0, 10] and in [170, 180].
Try this:
#include <opencv2/opencv.hpp>
using namespace cv;
int main()
{
Mat3b bgr = imread("path_to_image");
Mat3b hsv;
cvtColor(bgr, hsv, COLOR_BGR2HSV);
Mat1b mask1, mask2;
inRange(hsv, Scalar(0, 70, 50), Scalar(10, 255, 255), mask1);
inRange(hsv, Scalar(170, 70, 50), Scalar(180, 255, 255), mask2);
Mat1b mask = mask1 | mask2;
imshow("Mask", mask);
waitKey();
return 0;
}
Your previous result:
Result adding range [170, 180]:
Another interesting approach, which only needs to check a single range, is:
invert the BGR image
convert to HSV
look for cyan color
This idea has been proposed by fmw42 and kindly pointed out by Mark Setchell. Thank you very much for that.
#include <opencv2/opencv.hpp>
using namespace cv;
int main()
{
Mat3b bgr = imread("path_to_image");
Mat3b bgr_inv = ~bgr;
Mat3b hsv_inv;
cvtColor(bgr_inv, hsv_inv, COLOR_BGR2HSV);
Mat1b mask;
inRange(hsv_inv, Scalar(90 - 10, 70, 50), Scalar(90 + 10, 255, 255), mask); // Cyan is 90
imshow("Mask", mask);
waitKey();
return 0;
}
When working with dominant colors such as red, blue, green and yellow, analyzing the two color channels of the LAB color space keeps things simple. All you need to do is apply a suitable threshold on either of the two color channels.
1. Detecting Red color
Background :
The LAB color space represents:
the brightness value in the image in the primary channel (L-channel)
while colors are expressed in the two remaining channels:
the color variations between red and green are expressed in the secondary channel (A-channel)
the color variations between yellow and blue are expressed in the third channel (B-channel)
Code :
import cv2
img = cv2.imread('red.png')
# convert to LAB color space
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
# Perform Otsu threshold on the A-channel
th = cv2.threshold(lab[:,:,1], 127, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
Result:
I have placed the LAB-converted image and the thresholded image beside each other.
2. Detecting Blue color
Now let's see how to detect the blue color.
Sample image:
Since I am working with blue color:
Analyze the B-channel (since it expresses blue color better)
Perform inverse threshold to make the blue region appear white
(Note how the code below changes compared to the one above.)
Code :
import cv2
img = cv2.imread('blue.jpg')
# convert to LAB color space
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
# Perform Otsu threshold on the B-channel
th = cv2.threshold(lab[:,:,2], 127, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
Result:
Again, stacking the LAB and final image:
Conclusion :
Similar processing can be performed on green and yellow colors
Moreover, segmenting a range of one of these dominant colors is also much simpler.
I have just started learning OpenCV. I have defined a vector like:
vector<Point2f> cornersB;
and after that I have done some calculations such as goodFeaturesToTrack, cornerSubPix and calcOpticalFlowPyrLK using cornersB.
Now I want to show cornersB to see the points; my code is:
pointmat = Mat(cornersB);
imshow("Window", pointmat);
But I get an error saying bad number of channels (Source image must have 1, 3 or 4 channels) in cvConvertImage.
Can anyone show me how to display the points of cornersB in an image?
I just want to see the points (points in white and the background in black).
The simplest way is to use cv::drawKeypoints:
drawKeypoints( InputArray image, const std::vector<KeyPoint>& keypoints, InputOutputArray outImage, const Scalar& color=Scalar::all(-1), int flags=DrawMatchesFlags::DEFAULT );
In your case, let's define a black image as image:
cv::Mat image(512, 512, CV_8U);
image.setTo(0);
Then convert cornersB to a std::vector<cv::KeyPoint> kp_cornerB and define the color as white with CV_RGB(255, 255, 255):
std::vector<cv::KeyPoint> kp_cornerB ;
// TODO convert cornersB to kp_cornerB
cv::Mat pointmat;
cv::drawKeypoints(image, kp_cornerB, pointmat, CV_RGB(255, 255, 255));
imshow("Window", pointmat);
The conversion can be done with a for loop on the vector:
for(vector<Point2f>::const_iterator it = cornersB.begin();
it != cornersB.end(); it++) {
cv::KeyPoint kp(*it, 8);
kp_cornerB.push_back(kp);
}
Here, the value '8' is the 'size' of the keypoint.
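Alternatively, if all you need is white dots on a black background, a minimal sketch that skips the KeyPoint conversion altogether is to draw each point with cv::circle:
cv::Mat pointmat(512, 512, CV_8UC3, cv::Scalar(0, 0, 0));  // black canvas
for (const cv::Point2f& p : cornersB)
    cv::circle(pointmat, p, 2, CV_RGB(255, 255, 255), -1); // small filled white dot
cv::imshow("Window", pointmat);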
I'm new to OpenCV and I'm trying out some sample code.
In one example, Mat gr(row1, col1, CV_8UC1, Scalar(0));
int x = gr.at<uchar> (row,col);
And in another one,
Mat grHistrogram(301,260,CV_8UC1,Scalar(0,0,0));
line(grHistrogram,pt1,pt2,Scalar(255,255,255),1,8,0);
Now my question is: if I use Scalar(0) instead of Scalar(0,0,0) in the second example, the code doesn't work.
1. Why is this happening, since both create a Mat image structure?
2. What is the purpose of const cv::Scalar &_s?
I searched the documentation from the OpenCV site (opencv.pdf, opencv2refman.pdf) and O'Reilly's OpenCV book, but couldn't find a clear answer.
I think I'm using the Mat(int _rows, int _cols, int _type, const cv::Scalar &_s) constructor.
First, you need the following information to create the image:
Width: 301 pixels
Height: 260 pixels
Each pixel value (intensity) is 0 ~ 255: an 8-bit unsigned integer
Supports all RGB colors: 3 channels
Initial color: black = (B, G, R) = (0, 0, 0)
You can create the Image using cv::Mat:
Mat grHistogram(260, 301, CV_8UC3, Scalar(0, 0, 0));
8U means an 8-bit unsigned integer, C3 means 3 channels for RGB color, and Scalar(0, 0, 0) is the initial value for each pixel. Similarly,
line(grHistrogram,pt1,pt2,Scalar(255,255,255),1,8,0);
draws a line on grHistrogram from point pt1 to point pt2. The line color is white (255, 255, 255), with 1-pixel thickness, an 8-connected line type, and 0 shift.
Sometimes you don't need an RGB color image, but a simple grayscale image; that is, one channel instead of three. The type can then be changed to CV_8UC1 and you only need to specify the intensity for one channel, Scalar(0) for example.
Back to your problem,
Why is this happening, since both create a Mat image structure?
Because you need to specify the type of the Mat. Is it a color image CV_8UC3 or a grayscale image CV_8UC1? They are different. Your program may not work as you think if you use Scalar(255) on a CV_8UC3 image.
What is the purpose of const cv::Scalar &_s?
cv::Scalar is used to specify the intensity value for each pixel. For example, Scalar(255, 0, 0) is blue and Scalar(0, 0, 0) is black if the type is CV_8UC3, while Scalar(0) is black for a CV_8UC1 grayscale image. Avoid mixing them up.
You can create a single-channel or a multi-channel image.
Creating a single-channel image: Mat img(500, 1000, CV_8UC1, Scalar(70));
Creating a multi-channel image: Mat img1(500, 1000, CV_8UC3, Scalar(10, 100, 150));
You can see more examples and details on the following page:
https://progtpoint.blogspot.com/2017/01/tutorial-3-create-image.html