I am using OpenCV with Eclipse.
I need to detect human skin, so I convert the image to HSV and then use the inRange function to obtain a Mat in which the skin appears in white.
The problem is that now I need to find which pixels are white so I can modify those pixels in the original frame (I am changing the skin color in the video camera feed), but I can't access the Mat returned by inRange:
cvtColor(frame, frame, CV_BGR2HSV);
Mat n;
inRange(frame, Scalar(0, 10, 60), Scalar(20, 150, 255), n);
for(int i = 0; i < frame.rows; i++)
{
    for(int j = 0; j < frame.cols; j++)
    {
        n.at(&i);
        //n(i,j);
    }
}
That is the problematic code. When I get to the inner loop, the build fails with a lot of errors referring to the template.
Does anyone know how I can access this matrix? Is there another way to achieve my objective? Maybe I am overcomplicating the problem.
Thanks for your time.
This has nothing to do with inRange or such; it's just your Mat access code that is broken.
// the mask returned by inRange() is single-channel 8-bit, so read it as uchar:
uchar maskPixel = n.at<uchar>(i,j);
// the HSV frame itself is 3-channel, so access its pixels as Vec3b:
Vec3b & hsvPixel = frame.at<Vec3b>(i,j);
// hsvPixel[0] = h;
// hsvPixel[1] = s;
// hsvPixel[2] = v;
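Putting the mask and the frame together, a minimal sketch of the skin-recoloring loop the question describes could look like this (the recolorSkin function and its newHue parameter are illustrative assumptions, not from the original post):
#include <opencv2/opencv.hpp>
using namespace cv;

// Shift the hue of every pixel that the inRange() mask marked as skin.
// hsvFrame is the HSV-converted frame; mask is the Mat produced by inRange().
void recolorSkin(Mat &hsvFrame, const Mat &mask, uchar newHue)
{
    for (int i = 0; i < hsvFrame.rows; i++)
        for (int j = 0; j < hsvFrame.cols; j++)
            if (mask.at<uchar>(i, j) == 255)          // white = detected skin
                hsvFrame.at<Vec3b>(i, j)[0] = newHue; // change hue only
}
After the loop, convert back with cvtColor(hsvFrame, frame, CV_HSV2BGR) for display.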
I have a project where we are trying to make an autonomous vehicle using a lidar and a stereo camera. To do this we build two maps with cartographer and merge them together. However, the data from the stereo camera is not very accurate, so we have to manipulate the map made by cartographer. To make the camera map we detect lines, read the distance, and turn this into a laser scan which is then sent to cartographer. Ideally we would be able to convert the map into just the lines. This is what the camera map looks like: Camera map
What I would like to do first is fill out the holes in the map to make it easier to find lines and such later. This is where I am struggling. I have written code to convert from nav_msgs::OccupancyGrid to cv::Mat and back in addition to merging the maps. I have looked over this code and I don't think this is where the problem is. I have tried different suggestions online but have not gotten close to a solution. This is my code:
cv::Mat fill_cam_mat(cv::Mat mat) {
    int thresh = 50;

    // detect edges
    cv::Mat canny_output;
    cv::Canny( mat, canny_output, thresh, thresh*2 );
    //std::vector<cv::Vec4i> hierarchy;

    // flood-fill the background from the corner, invert it, and OR it with
    // the edge image so that enclosed holes become filled
    cv::Mat mat_floodfill = canny_output.clone();
    cv::floodFill(mat_floodfill, cv::Point(0,0), cv::Scalar(255));
    cv::Mat mat_floodfill_inv;
    cv::bitwise_not(mat_floodfill, mat_floodfill_inv);
    cv::Mat mat_out = (canny_output | mat_floodfill_inv);
    return mat_out;
}
And my result is as follows when merged with the lidar map:
Final map
I have also tried:
cv::Mat fill_cam_mat(cv::Mat mat) {
    int mat_height = mat.rows;
    int mat_width = mat.cols;
    int thresh = 50;
    cv::Mat canny_output;
    cv::Canny( mat, canny_output, thresh, thresh*2 );
    cv::Mat non_zero;
    cv::findNonZero(canny_output, non_zero);
    std::vector<std::vector<cv::Point>> hull(non_zero.total());
    for(unsigned int i = 0, n = non_zero.total(); i < n; ++i) {
        cv::convexHull(non_zero, hull[i], false);
    }
    cv::Mat fill_contours_result(mat_height, mat_width, CV_8UC3, cv::Scalar(0));
    cv::fillPoly(fill_contours_result, hull, 255);
    return fill_contours_result;
}
This gives the same result. I have also tried using cv::findContours to specify the hull, but that worked even worse.
I am new to OpenCV and I don't understand what is wrong with my output. I would really appreciate any help with the code, or any better suggestions on how to solve the problem. Is it even necessary to fill the holes in order to get useful information from the map?
Thank you in advance!
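For reference, a standard lightweight way to close small gaps in a binary occupancy image is a morphological close (dilate then erode). This is only a sketch under assumed parameters; the 5x5 kernel size in particular is a guess that would need tuning against the real map:
#include <opencv2/opencv.hpp>

// Close small holes in an 8-bit single-channel map image.
cv::Mat close_map_holes(const cv::Mat &mat) {
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
    cv::Mat closed;
    cv::morphologyEx(mat, closed, cv::MORPH_CLOSE, kernel);
    return closed;
}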
How can I grow bright pixels into the surrounding grey region?
Input:
image
Output: image
My answer is somewhat less helpful than my usual efforts, but it is hard to get up enthusiasm for questions with so little effort...
You can solve your issue by using OpenCV findContours() - documentation here. You will need to be sure to use the retrieval mode CV_RETR_TREE.
You then need to write a loop, iterating through all the contours found. In the loop, you need to look for a contour that:
a) is coloured white, and
b) has a parent contour that is coloured grey.
There is a decent explanation of how the hierarchy works here.
Mat im = imread("ask.png", 0);
Mat mat = im == 255; // binary mask of the white pixels, used to locate the seeds
std::vector<std::vector<Point>> contours;
std::vector<Vec4i> hierarchy;
findContours( mat, contours, hierarchy, RETR_TREE, CHAIN_APPROX_SIMPLE);
for( size_t i = 0; i < contours.size(); i++ )
{
    // seed at a point on each white blob and flood-fill the greyscale image;
    // the fixed range [seed-128, seed+255] admits grey (128) and white (255)
    // but not black, so each white blob grows over its grey region
    floodFill(im, contours[i].at(0), 255, 0, Scalar(128), Scalar(255), FLOODFILL_FIXED_RANGE);
}
mat = im == 255; // output image
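To make the fixed-range condition concrete, here is a tiny self-contained demonstration (illustrative, not part of the original answer) on a hypothetical 1x4 strip:
// strip: black(0), grey(128), white(255), black(0)
Mat strip = (Mat_<uchar>(1, 4) << 0, 128, 255, 0);
// seed on the white pixel; fixed range accepts values in [255-128, 255+255]
floodFill(strip, Point(2, 0), 255, 0, Scalar(128), Scalar(255), FLOODFILL_FIXED_RANGE);
// the grey pixel (128 >= 127) is connected to the seed and is repainted to
// 255; both black pixels fall outside the range and stay untouched,
// so strip becomes: 0, 255, 255, 0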
I want to plot circles on an image, where each previous circle is deleted from the image before the next circle is drawn.
I have the following configuration:
I have several pictures (let's say 10).
For each picture I test several pixels for some condition (let's say 50 pixels).
For each pixel I'm testing (or working on), I want to draw a circle at that pixel for visualization purposes (for me to visualize that pixel).
To summarize, I have 2 for loops: one looping over the 10 images and the other looping over the 50 pixels.
I have done the following (see code below). The circles are drawn correctly, but when the next circle is drawn, the previous circle is still visible (at the end, all circles are drawn on the same image). What I want instead is, after a circle has been drawn, to somehow close the picture (or window), reopen a new one, and plot the next circle on it, and so on.
for(int imgID = 0; imgID < numbImgs; imgID++)
{
    cv::Mat colorImg = imgVector[imgID];
    for(int pixelID = 0; pixelID < numPixelsToBeTested; pixelID++)
    {
        some_pixel = ... //some pixel
        x = some_pixel(0); y = some_pixel(1);
        cv::Mat colorImg2 = colorImg; //redefine the image for each pixel
        cv::circle(colorImg2, cv::Point(x,y), 5, cv::Scalar(0,0,255), 1, cv::LINE_8, 0);
        // creating a new window each time
        cv::namedWindow("Display", CV_WINDOW_AUTOSIZE );
        cv::imshow("Display", colorImg2);
        cv::waitKey(0);
        cv::destroyWindow("Display");
    }
}
What is wrong in my code?
Thanks guys
cv::circle() draws directly on the input image, so what you need to do is create a clone of the original image, draw circles on the clone, and re-clone it from the original at each iteration.
It is also a good idea to break your program into smaller methods, making the code more readable and easier to understand. The following code may give you a starting point.
void visualizePoints(cv::Mat mat) {
    cv::Mat debugMat = mat.clone();
    // Dummy set of points, to be replaced with the 50 points you are using.
    std::vector<cv::Point> points = {cv::Point(30, 30), cv::Point(30, 100), cv::Point(100, 30), cv::Point(100, 100)};
    for (cv::Point p : points) {
        cv::circle(debugMat, p, 5, cv::Scalar(0, 0, 255), 1, cv::LINE_8, 0);
        cv::imshow("Display", debugMat);
        cv::waitKey(800);
        debugMat = mat.clone(); // discard the previous circle
    }
}
int main() {
    std::vector<std::string> imagePaths = {"path/img1.png", "path/img2.png", "path/img3.png"};
    cv::namedWindow("Display", CV_WINDOW_AUTOSIZE );
    for (std::string path : imagePaths) {
        cv::Mat img = cv::imread(path);
        visualizePoints(img);
    }
}
I create a Bird-View-Image with the warpPerspective()-function like this:
warpPerspective(frame, result, H, result.size(), CV_WARP_INVERSE_MAP, BORDER_TRANSPARENT);
The result looks very good and also the border is transparent:
Bird-View-Image
Now I want to put this image on top of another image "out". I try doing this with the function warpAffine like this:
warpAffine(result, out, M, out.size(), CV_INTER_LINEAR, BORDER_TRANSPARENT);
I also converted "out" to a four channel image with alpha channel according to a question which was already asked on stackoverflow:
Convert Image
This is the code: cvtColor(out, out, CV_BGR2BGRA);
I expected to see the chessboard but not the gray background. But in fact, my result looks like this:
Result Image
What am I doing wrong? Do I forget something to do? Is there another way to solve my problem? Any help is appreciated :)
Thanks!
Best regards
DamBedEi
I hope there is a better way, but here is something you could do:
Do warpAffine normally (without the transparency)
Find the contour that encloses the warped image
Use this contour to create a mask (white inside the warped image, black at the borders)
Use this mask to copy the warped image into the other image
Sample code:
// load images
cv::Mat image2 = cv::imread("lena.png");
cv::Mat image = cv::imread("IKnowOpencv.jpg");
cv::resize(image, image, image2.size());
// perform warp perspective
std::vector<cv::Point2f> prev;
prev.push_back(cv::Point2f(-30,-60));
prev.push_back(cv::Point2f(image.cols+50,-50));
prev.push_back(cv::Point2f(image.cols+100,image.rows+50));
prev.push_back(cv::Point2f(-50,image.rows+50 ));
std::vector<cv::Point2f> post;
post.push_back(cv::Point2f(0,0));
post.push_back(cv::Point2f(image.cols-1,0));
post.push_back(cv::Point2f(image.cols-1,image.rows-1));
post.push_back(cv::Point2f(0,image.rows-1));
cv::Mat homography = cv::findHomography(prev, post);
cv::Mat imageWarped;
cv::warpPerspective(image, imageWarped, homography, image.size());
// find external contour and create mask
std::vector<std::vector<cv::Point> > contours;
cv::Mat imageWarpedCloned = imageWarped.clone(); // clone the image because findContours will modify it
cv::cvtColor(imageWarpedCloned, imageWarpedCloned, CV_BGR2GRAY); //only if the image is BGR
cv::findContours (imageWarpedCloned, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
// create mask
cv::Mat mask = cv::Mat::zeros(image.size(), CV_8U);
cv::drawContours(mask, contours, 0, cv::Scalar(255), -1);
// copy warped image into image2 using the mask
cv::erode(mask, mask, cv::Mat()); // to avoid artefacts at the borders
imageWarped.copyTo(image2, mask); // copy the image using the mask
//show images
cv::imshow("imageWarpedCloned", imageWarpedCloned);
cv::imshow("warped", imageWarped);
cv::imshow("image2", image2);
cv::waitKey();
One of the easiest ways to approach this (not necessarily the most efficient) is to warp the image twice, but set the OpenCV constant boundary value to different values each time (i.e. zero the first time and 255 the second time). These constant values should be chosen towards the minimum and maximum values in the image.
Then it is easy to find a binary mask where the two warp values are close to equal.
More importantly, you can also create a transparency effect through simple algebra like the following:
new_image = np.float32((warp_const_255 - warp_const_0) *
                       preferred_bkg_img) / 255.0 + np.float32(warp_const_0)
The main reason I prefer this method is that OpenCV seems to interpolate smoothly down (or up) to the constant value at the image edges. A fully binary mask will pick up these dark or light fringe areas as artifacts. The above method acts more like true transparency and blends properly with the preferred background.
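Translated into the C++ API used elsewhere on this page, the double-warp idea might look like the following sketch (src, M, dsize and background are assumed to already exist; the variable names are illustrative):
// warp twice with different constant border values
cv::Mat warp0, warp255;
cv::warpAffine(src, warp0, M, dsize, CV_INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar::all(0));
cv::warpAffine(src, warp255, M, dsize, CV_INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar::all(255));

// where the two results agree the pixel came from the source image; their
// difference (0..255) measures how much of the pixel came from the border
cv::Mat borderAmount;
cv::absdiff(warp255, warp0, borderAmount);

// new_image = warp0 + (borderAmount / 255) * background, matching the formula above
cv::Mat warpF, bgF, amountF, blended;
warp0.convertTo(warpF, CV_32F);
background.convertTo(bgF, CV_32F);
borderAmount.convertTo(amountF, CV_32F, 1.0 / 255.0);
blended = warpF + bgF.mul(amountF);
blended.convertTo(blended, CV_8U);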
Here's a small test program that warps with transparent "border", then copies the warped image to a solid background.
int main()
{
    cv::Mat input = cv::imread("../inputData/Lenna.png");
    cv::Mat transparentInput, transparentWarped;
    cv::cvtColor(input, transparentInput, CV_BGR2BGRA);
    //transparentInput = input.clone();

    // create sample transformation mat
    cv::Mat M = cv::Mat::eye(2,3, CV_64FC1);
    // as a sample, just scale down and translate a little:
    M.at<double>(0,0) = 0.3;
    M.at<double>(0,2) = 100;
    M.at<double>(1,1) = 0.3;
    M.at<double>(1,2) = 100;

    // warp to same size with transparent border:
    cv::warpAffine(transparentInput, transparentWarped, M, transparentInput.size(), CV_INTER_LINEAR, cv::BORDER_TRANSPARENT);

    // NOW: merge image with background, here I use the original image as background:
    cv::Mat background = input;
    // create output buffer with same size as input
    cv::Mat outputImage = input.clone();
    for(int j=0; j<transparentWarped.rows; ++j)
        for(int i=0; i<transparentWarped.cols; ++i)
        {
            cv::Scalar pixWarped = transparentWarped.at<cv::Vec4b>(j,i);
            cv::Scalar pixBackground = background.at<cv::Vec3b>(j,i);
            float transparency = pixWarped[3] / 255.0f; // pixel value: 0 (0.0f) = fully transparent, 255 (1.0f) = fully solid
            outputImage.at<cv::Vec3b>(j,i)[0] = transparency * pixWarped[0] + (1.0f-transparency)*pixBackground[0];
            outputImage.at<cv::Vec3b>(j,i)[1] = transparency * pixWarped[1] + (1.0f-transparency)*pixBackground[1];
            outputImage.at<cv::Vec3b>(j,i)[2] = transparency * pixWarped[2] + (1.0f-transparency)*pixBackground[2];
        }

    cv::imshow("warped", outputImage);
    cv::imshow("input", input);
    cv::imwrite("../outputData/TransparentWarped.png", outputImage);
    cv::waitKey(0);
    return 0;
}
I use this as input:
and get this output:
which looks like ALPHA channel isn't set to ZERO by warpAffine but to something like 205...
But in general this is the way I would do it (unoptimized)
I've got a very strange problem here. I'm developing with Visual Studio 10 and OpenCV.
In the following code segment I'm creating a 1-channel Mat and writing into two different Mats.
The first window, "test1", shows a black picture. That's correct.
The "test2" window still shows a black picture. Still correct.
But then the last window, "test3", shows the same picture as stored in bwHSVred after the inRange command.
Why would bwHSVblue change during this inRange operation?
Does anybody know why? It doesn't make any sense to me at all.
frame = imread(pathtopicture);
cvtColor(frame, calHSV, COLOR_BGR2HSV);
inRange(calHSV, Scalar(255, 255, 255), Scalar(0, 0, 0), bwAll);
bwHSVred = bwAll;
bwHSVblue = bwAll;
imshow("test1", bwHSVblue);
//load red
//set the x_MIN,x_MAX values to Hmin=0,Smin=119,Vmin=108,Hmax=218,Smax=234,Vmax=168
setHSVval(redCube);
updateTrackbars();
currentColor = RED;
imshow("test2", bwHSVblue);
inRange(calHSV, Scalar(H_MIN, S_MIN, V_MIN), Scalar(H_MAX, S_MAX, V_MAX), bwHSVred);
imshow("test3", bwHSVblue);
Definition of the Mat objects in the .h-file
private:
Mat calHSV;
Mat bwAll;
Mat bwHSVred;
Mat bwHSVblue;
The problem is the pointer-like assignment semantics of cv::Mat.
All three matrices are the same instance:
bwHSVred = bwAll;
bwHSVblue = bwAll;
These assignments copy only the Mat header, meaning that all three now point to the same underlying pixel data.
If you want to make copies of a matrix, you should use clone or copyTo as explained in the docs:
Mat F = A.clone();
Mat G;
A.copyTo(G);
Mat a = b; does a shallow copy only (both point to the same data); that's why bwHSVred == bwHSVblue == bwAll.
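A tiny self-contained demonstration of the difference (illustrative, not from either answer):
Mat a = Mat::zeros(2, 2, CV_8U);
Mat b = a;                // shallow copy: b shares a's pixel data
b.at<uchar>(0, 0) = 255;  // this write is visible through a as well
Mat c = a.clone();        // deep copy: c owns its own buffer
c.at<uchar>(1, 1) = 255;  // a is unaffected by this write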