Extract image pixels of triangle error - C++

I'm new to image processing and development. I need to extract the pixels inside a triangle in the image. To do that I used the following code, but unfortunately I obtain unwanted black pixels. To get rid of that problem I tried to remove the background (0-valued) pixels by adding an alpha channel (transparent background), but it gives the following error. Any help is appreciated.
My code:
Mat img = cv::imread("/home/fabio/code/lena.jpg", cv::IMREAD_GRAYSCALE);
Mat alpha(img.size(), CV_8UC1, Scalar(0));
//triangle definition (example points)
vector<Point> points;
points.push_back(Point(200, 70));
points.push_back(Point(60, 150));
points.push_back(Point(500, 500));
//apply triangle to mask
fillConvexPoly(alpha, points, Scalar(255));
cv::Mat finalImage = cv::Mat::zeros(img.size(), img.type());
img.copyTo(finalImage, alpha);
imshow("image", finalImage);
Mat dst;
Mat rgb[1];
split(finalImage, rgb);
Mat rgba[2] = { finalImage, alpha };
merge(rgba, 2, dst); // dst now has 2 channels
imshow("dst", dst);  // the error below is thrown here: imshow cannot display a 2-channel image
Error: OpenCV Error: Bad number of channels (Source image must have 1, 3 or 4 channels) in cvConvertImage, file C:\builds\2_4_PackSlave-win64-vc12-shared\opencv\modules\highgui\src\utils.cpp, line 611

use this instead of your last block:
cv::Mat dst;
std::vector<cv::Mat> channels;
cv::split(finalImage, channels);
if (channels.size() == 0)
{
    std::cout << "unexpected error" << std::endl;
    return 1;
}
// fill up to reach 3 channels
while (channels.size() < 3)
{
    channels.push_back(channels[0]);
}
// add alpha channel
channels.push_back(alpha);
cv::merge(channels, dst);
I didn't test it, but this should be what you want.
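One more caveat, assuming the goal is a transparent background: imshow ignores the alpha channel, so the merged 4-channel image will not look transparent in the preview window. To check the transparency, write dst to a format that stores alpha, such as PNG:
cv::imwrite("dst.png", dst); // PNG keeps the 4th (alpha) channel; imshow does not show it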

Related

C++ and OpenCV 4.5.3 - (-215: Assertion failed)

Problem: Watershed algorithm
I started an app project for image processing, using OpenCV 4.5.3 and Swift (with C++). I've been fighting with the watershed algorithm for a really long time, and I have no clue what I did wrong.
Error:
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: OpenCV(4.5.3)
/Volumes/build-storage/build/master_iOS-mac/opencv/modules/imgproc/src/segmentation.cpp:161:
error: (-215:Assertion failed) src.type() == CV_8UC3 && dst.type() == CV_32SC1 in function 'watershed'
In the definition of OpenCV's watershed we can find:
#param image Input 8-bit 3-channel image.
#param markers Input/output 32-bit single-channel image (map) of markers. It should have the same size as image .
Code
+(UIImage *) watershed:(UIImage *)src{
cv::Mat img, mask;
UIImageToMat(src, img);
// Change the background from white to black, since that will help later to extract
// better results during the use of Distance Transform
cv::inRange(img, cv::Scalar(255,255,255), cv::Scalar(255,255,255), mask);
img.setTo(cv::Scalar(0,0,0), mask);
// Create a kernel that we will use to sharpen our image
// an approximation of second derivative, a quite strong kernel
cv::Mat kernel = (cv::Mat_<float>(3,3) <<
1, 1, 1,
1, -8, 1,
1, 1, 1);
// do the Laplacian filtering as is
// well, we need to convert everything to something deeper than CV_8U
// because the kernel has some negative values,
// and we can expect in general to have a Laplacian image with negative values
// BUT an 8-bit unsigned int (the one we are working with) can hold only values from 0 to 255,
// so possible negative numbers would be truncated
cv::Mat lapl;
cv::filter2D(img, lapl, CV_32F, kernel);
cv::Mat sharp;
img.convertTo(sharp, CV_32F);
cv::Mat result = sharp - lapl;
// convert back to 8bits gray scale
result.convertTo(result, CV_8UC3);
lapl.convertTo(lapl, CV_8UC3);
cv::Mat bw;
cv::cvtColor(result, bw, cv::COLOR_BGR2GRAY);
cv::threshold(bw, bw, 40, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
// Perform the distance transform algorithm
cv::Mat dist;
cv::distanceTransform(bw, dist, cv::DIST_L2, cv::DIST_MASK_3);
// Normalize the distance image for range = {0.0, 1.0}
// so we can visualize and threshold it
cv::normalize(dist, dist, 0, 1.0, cv::NORM_MINMAX);
// Threshold to obtain the peaks
// This will be the markers for the foreground objects
cv::threshold(dist, dist, 0.4, 1.0, cv::THRESH_BINARY);
// Dilate a bit the dist image
cv::Mat kernel1 = cv::Mat::ones(3, 3, CV_8U);
dilate(dist, dist, kernel1);
// Create the CV_8U version of the distance image
// It is needed for findContours()
cv::Mat dist_8u;
dist.convertTo(dist_8u, CV_8U);
// Find total markers
std::vector<std::vector<cv::Point> > contours;
findContours(dist_8u, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
// Create the marker image for the watershed algorithm
cv::Mat markers = cv::Mat::zeros(dist.size(), CV_32S);
// Draw the foreground markers
for (size_t i = 0; i < contours.size(); i++)
{
drawContours(markers, contours, static_cast<int>(i), cv::Scalar(static_cast<int>(i)+1), -1);
}
// Draw the background marker
circle(markers, cv::Point(5,5), 3, cv::Scalar(255), -1);
cv::Mat markers8u;
markers.convertTo(markers8u, CV_8U, 10);
// Perform the watershed algorithm
watershed(result, markers);
return MatToUIImage(result);
}
You can clearly see that the variables have the proper types, as in the description of the function:
result.convertTo(result, CV_8UC3);
cv::Mat markers = cv::Mat::zeros(dist.size(), CV_32S);
convertTo cannot add channels, nor can it reduce an image to one with fewer channels.
The key in this case is to use:
cvtColor(src, src, COLOR_BGRA2BGR); // change 4 channels to 3
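Applied to the code above, the conversion goes right after UIImageToMat (assuming, as is typical for a UIImage, that the converted Mat has 4 channels):
UIImageToMat(src, img);
// UIImage usually converts to a 4-channel (RGBA/BGRA) Mat, but watershed()
// requires an 8-bit 3-channel image, so drop the alpha channel up front
if (img.channels() == 4) {
    cv::cvtColor(img, img, cv::COLOR_BGRA2BGR);
}
CV_Assert(img.type() == CV_8UC3); // now matches the requirement quoted above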

OpenCV error: assertion failed in warpPerspective

I'm trying to make an AR app using ArUco and OpenCV (I'm a newbie). It detects an ArUco marker and puts an image on it. I have tried to use the warpPerspective() function, however something is wrong: it returns OpenCV error: assertion failed (M0.type() == CV_32F || M0.type() == CV_64F) in warpPerspective. Please give me a way to solve it.
int main() {
cv::VideoCapture inputVideo;
inputVideo.open("gal.mp4");
cv::Ptr<cv::aruco::Dictionary> dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
cv::Mat sq = imread("zhuz.jpg", CV_LOAD_IMAGE_UNCHANGED);
while (inputVideo.grab()) {
vector<Point2f> sqPoints;
vector<Point2f> p;
sqPoints.push_back(Point2f(0, 0));
sqPoints.push_back(Point2f(sq.cols, 0));
sqPoints.push_back(Point2f(sq.cols, sq.rows));
sqPoints.push_back(Point2f(0, sq.rows));
cv::Mat image, warp_matrix;
inputVideo.retrieve(image);
Mat cpy_img(image.rows, image.cols, image.type());
Mat neg_img(image.rows, image.cols, image.type());
Mat gray;
Mat blank(sq.rows, sq.cols, sq.type());
std::vector<int> ids;
std::vector<std::vector<cv::Point2f>> corners;
cv::aruco::detectMarkers(image, dictionary, corners, ids);
if (ids.size() > 0) {
p.push_back(corners[0][0]);
p.push_back(corners[0][1]);
p.push_back(corners[0][2]);
p.push_back(corners[0][3]);
Mat wrap_matrix = getPerspectiveTransform(sqPoints, p);
blank = Scalar(0);
neg_img = Scalar(0); // image is black when pixel values are zero
cpy_img = Scalar(0);
bitwise_not(blank, blank);
warpPerspective(sq, neg_img, warp_matrix, Size(neg_img.cols, neg_img.rows)); // Transform overlay Image to the position - [ITEM1]
warpPerspective(blank, cpy_img, warp_matrix, Size(cpy_img.cols, neg_img.rows)); // Transform a blank overlay image to position
bitwise_not(cpy_img, cpy_img); // Invert the copy paper image from white to black
bitwise_and(cpy_img, image, cpy_img); // Create a "hole" in the Image to create a "clipping" mask - [ITEM2]
bitwise_or(cpy_img, neg_img, image); // Finally merge both items [ITEM1 & ITEM2]
}
cv::imshow("out", image);
}
}
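A likely cause, reading the code above: the result of getPerspectiveTransform is stored in wrap_matrix, but warpPerspective is called with warp_matrix, which is declared earlier in the loop and never assigned. An empty Mat has no CV_32F/CV_64F type, which is exactly what the assertion complains about. Using one name consistently should fix it:
// getPerspectiveTransform returns a 3x3 CV_64F matrix, which satisfies
// the (M0.type() == CV_32F || M0.type() == CV_64F) assertion
Mat warp_matrix = getPerspectiveTransform(sqPoints, p);
warpPerspective(sq, neg_img, warp_matrix, Size(neg_img.cols, neg_img.rows));
warpPerspective(blank, cpy_img, warp_matrix, Size(cpy_img.cols, cpy_img.rows));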

Segmentation of foreground from background

I'm currently working on a project that uses a Lacatan Banana, and I would like to know how to further separate the foreground from the background:
I already got a segmented image of it using erosion, dilation, and thresholding only. The problem is that it is still not properly segmented.
Here is my code:
cv::Mat imggray, imgthresh, fg, bgt, bg;
cv::cvtColor(src, imggray, CV_BGR2GRAY); //Grayscaling the image from RGB color space
cv::threshold(imggray, imgthresh, 0, 255, CV_THRESH_BINARY_INV | CV_THRESH_OTSU); //Create an inverted binary image from the grayscaled image
cv::erode(imgthresh, fg, cv::Mat(), cv::Point(-1, -1), 1); //erosion of the binary image and setting it as the foreground
cv::dilate(imgthresh, bgt, cv::Mat(), cv::Point(-1, -1), 4); //dilation of the binary image to reduce the background region
cv::threshold(bgt, bg, 1, 128, CV_THRESH_BINARY); //we get the background by setting the threshold to 1
cv::Mat markers = cv::Mat::zeros(src.size(), CV_32SC1); //initializing the markers with a size same as the source image and setting its data type as 32-bit Single channel
cv::add(fg, bg, markers); //setting the foreground and background as markers
cv::Mat mask = cv::Mat::zeros(markers.size(), CV_8UC1);
markers.convertTo(mask, CV_8UC1); //converting the 32-bit single channel marker to a 8-bit single channel
cv::Mat mthresh;
cv::threshold(mask, mthresh, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU); //threshold further the mask to reduce the noise
// cv::erode(mthresh,mthresh,cv::Mat(), cv::Point(-1,-1),2);
cv::Mat result;
cv::bitwise_and(src, src, result, mthresh); //use the mask to extract the banana from the background
for (int x = 0; x < result.rows; x++) { //changing the black background to white
for (int y = 0; y < result.cols; y++) {
if (result.at<Vec3b>(x, y) == Vec3b(0, 0, 0)){
result.at<Vec3b>(x, y)[0] = 255;
result.at<Vec3b>(x, y)[1] = 255;
result.at<Vec3b>(x, y)[2] = 255;
}
}
}
This is my result:
As the background is close to gray, try using the Hue and Saturation channels instead of the grayscale image. You can get them easily:
cv::Mat hsv;
cv::cvtColor(src, hsv, CV_BGR2HSV);
std::vector<cv::Mat> channels;
cv::split(hsv, channels);
cv::Mat hue = channels[0];
cv::Mat saturation = channels[1];
// If you want to combine those channels, use this code.
cv::Mat hs = cv::Mat::zeros(src.size(), CV_8U);
for (int r = 0; r < src.rows; r++) {
    for (int c = 0; c < src.cols; c++) {
        int hp = hue.at<uchar>(r, c);
        int sp = saturation.at<uchar>(r, c);
        hs.at<uchar>(r, c) = static_cast<uchar>((hp + sp) >> 1);
    }
}
adaptiveThreshold() should work better than a plain level-cut threshold(), because it does not consider absolute color levels, but rather the change in color in a small area around the point being checked.
Try replacing your thresholding with the adaptive one.
Use a top-hat instead of just erosion/dilation; it will take care of the background variations at the same time.
Then in your case a simple thresholding should be good enough to get an accurate segmentation. Otherwise, you can couple it with a watershed.
(I will share some images ASAP.)
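A minimal sketch of the two suggestions above (top-hat to flatten background variations, then an adaptive threshold), assuming the grayscale image is called imggray as in the question's code; the kernel size, block size, and offset are guesses to tune:
// top-hat = src - opening(src): keeps small bright structures, removes slow background variation
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(15, 15));
cv::Mat tophat;
cv::morphologyEx(imggray, tophat, cv::MORPH_TOPHAT, kernel);
// adaptive threshold: compares each pixel to its local neighborhood mean instead of a global level
cv::Mat bin;
cv::adaptiveThreshold(tophat, bin, 255, cv::ADAPTIVE_THRESH_MEAN_C,
                      cv::THRESH_BINARY, 31, -5);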
Thanks guys, I tried to apply your advice and was able to come up with this.
However, as you can see, there are still bits of the background. Any ideas how to "clean" these further? I tried thresholding further but it would still leave the bits. The code I came up with is below, and I apologize in advance if the variables and coding style are somewhat confusing; I didn't have the time to properly sort them.
#include <stdio.h>
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui.hpp>
using namespace cv;
using namespace std;
const Scalar COLOR_MAX(65, 255, 255);
const Scalar COLOR_MIN(15, 45, 45);
int main(int argc, char** argv){
Mat src,hsv_img,mask,gray_img,initial_thresh;
Mat second_thresh,add_res,and_thresh,xor_thresh;
Mat result_thresh,rr_thresh,final_thresh;
// Load source Image
src = imread("sample11.jpg");
imshow("Original Image", src);
cvtColor(src,hsv_img,CV_BGR2HSV);
imshow("HSV Image",hsv_img);
//imwrite("HSV Image.jpg", hsv_img);
inRange(hsv_img,COLOR_MIN,COLOR_MAX, mask);
imshow("Mask Image",mask);
cvtColor(src,gray_img,CV_BGR2GRAY);
adaptiveThreshold(gray_img, initial_thresh, 255,ADAPTIVE_THRESH_GAUSSIAN_C,CV_THRESH_BINARY_INV,257,2);
imshow("AdaptiveThresh Image", initial_thresh);
add(mask,initial_thresh,add_res);
erode(add_res, add_res, Mat(), Point(-1, -1), 1);
dilate(add_res, add_res, Mat(), Point(-1, -1), 5);
imshow("Bitwise Res",add_res);
threshold(gray_img,second_thresh,170,255,CV_THRESH_BINARY_INV | CV_THRESH_OTSU);
imshow("TreshImge", second_thresh);
bitwise_and(add_res,second_thresh,and_thresh);
imshow("andthresh",and_thresh);
bitwise_xor(add_res, second_thresh, xor_thresh);
imshow("xorthresh",xor_thresh);
bitwise_or(and_thresh,xor_thresh,result_thresh);
imshow("Result image", result_thresh);
bitwise_and(add_res,result_thresh,final_thresh);
imshow("Final Thresh",final_thresh);
erode(final_thresh, final_thresh, Mat(), Point(-1,-1),5);
bitwise_and(src,src,rr_thresh,final_thresh);
imshow("Segmented Image", rr_thresh);
imwrite("Segmented Image.jpg", rr_thresh);
waitKey(0);
return 1;
}
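One further idea for the leftover background bits (untested, and assuming the banana is the single largest blob in final_thresh): keep only the largest contour of the mask before the final bitwise_and, something like:
// keep only the largest connected blob in the mask, assuming it is the banana
vector<vector<Point> > contours;
findContours(final_thresh.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
int biggest = -1;
double biggestArea = 0;
for (int i = 0; i < (int)contours.size(); i++) {
    double area = contourArea(contours[i]);
    if (area > biggestArea) { biggestArea = area; biggest = i; }
}
Mat clean_mask = Mat::zeros(final_thresh.size(), CV_8UC1);
if (biggest >= 0)
    drawContours(clean_mask, contours, biggest, Scalar(255), -1); // -1 = filled
bitwise_and(src, src, rr_thresh, clean_mask);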

Normalizing color channels of an image by intensity values, OpenCV

I have split an image into 3 separate color channels - one blue, one green, and one red. I would like to normalize each of these channels by the image's intensity, where intensity = (red + blue + green)/3. To be clear, I am trying to make an image that is composed of one of the three color channels, divided by the image's intensity, where the intensity is described by the equation above.
I am new to OpenCV (I have worked through the tutorials that come with the documentation, but that is it) and I do not think I am doing this correctly; when the images are displayed, all the pixels appear to be black. Any advice about how to go about this normalization would be extremely helpful.
Thanks!
Here is my attempt:
int main(int argc, char** argv){
Mat sourceImage, I;
const char* redWindow = "Red Color Channel";
const char* greenWindow = "Green Color Channel";
const char* blueWindow = "Blue Color Channel";
if(argc != 2)
{
cout << "Incorrect number of arguments" << endl;
}
/* Load the image */
sourceImage = imread(argv[1], 1);
if(!sourceImage.data)
{
cout << "Image failed to load" << endl;
}
/* First, we have to allocate the new channels */
Mat r(sourceImage.rows, sourceImage.cols, CV_8UC1);
Mat b(sourceImage.rows, sourceImage.cols, CV_8UC1);
Mat g(sourceImage.rows, sourceImage.cols, CV_8UC1);
/* Now we put these into a matrix */
Mat out[] = {b, g, r};
/* Split the image into the three color channels */
split(sourceImage, out);
/* I = (r + b + g)/3 */
add(b, g, I);
add(I, r, I);
I = I/3;
Mat red = r/I;
Mat blue = b/I;
Mat green = g/I;
/* Create the windows */
namedWindow(blueWindow, 0);
namedWindow(greenWindow, 0);
namedWindow(redWindow, 0);
/* Show the images */
imshow(blueWindow, blue);
imshow(greenWindow, green);
imshow(redWindow, red);
waitKey(0);
return 0;
}
Once you divide by the intensity, the pixel values will be in the range [0, 1], except since they are integers they will be 0 or 1. For a display image, white is 255 and black is 0, so this is why everything appears black to you.
You need to use floating point to get an accurate result, and you need to scale the result by 255 to see it.
Doing that results in this (which I am not sure is particularly useful):
(Image source: BSDS500)
And here is the code that generated it:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp> // needed for imread, imshow, waitKey
#include <vector>
int main(int argc, char** argv)
{
// READ RGB color image and convert it to Lab
cv::Mat bgr_image = cv::imread("208001.jpg"); // BSDS500 mushroom
cv::imshow("original image", bgr_image);
cv::Mat bgr_image_f;
bgr_image.convertTo(bgr_image_f, CV_32FC3);
// Extract the color planes and calculate I = (r + g + b) / 3
std::vector<cv::Mat> planes(3);
cv::split(bgr_image_f, planes);
cv::Mat intensity_f((planes[0] + planes[1] + planes[2]) / 3.0f);
cv::Mat intensity;
intensity_f.convertTo(intensity, CV_8UC1);
cv::imshow("intensity", intensity);
//void divide(InputArray src1, InputArray src2, OutputArray dst, double scale=1, int dtype=-1)
cv::Mat b_normalized_f;
cv::divide(planes[0], intensity_f, b_normalized_f);
cv::Mat b_normalized;
b_normalized_f.convertTo(b_normalized, CV_8UC1, 255.0);
cv::imshow("b_normalized", b_normalized);
cv::Mat g_normalized_f;
cv::divide(planes[1], intensity_f, g_normalized_f);
cv::Mat g_normalized;
g_normalized_f.convertTo(g_normalized, CV_8UC1, 255.0);
cv::imshow("g_normalized", g_normalized);
cv::Mat r_normalized_f;
cv::divide(planes[2], intensity_f, r_normalized_f);
cv::Mat r_normalized;
r_normalized_f.convertTo(r_normalized, CV_8UC1, 255.0);
cv::imshow("r_normalized", r_normalized);
cv::waitKey();
}

Getting a value in a binary image

I'm trying to get a set of values in a binary image in order to invert it, but I'm having trouble indexing the matrix. The first lines of my code are:
std::string path = "img/lena.jpg";
//Our color image
cv::Mat imageMat = cv::imread(path, CV_LOAD_IMAGE_GRAYSCALE);
if(imageMat.empty())
{
std::cerr << "ERROR: Could not read image " << argv[1] << std::endl;
return 1;
}
//Grayscale matrix
cv::Mat grayscaleMat (imageMat.size(), CV_8U);
//Convert BGR to Gray
cv::cvtColor( imageMat, grayscaleMat, CV_BGR2GRAY );
//Binary image
cv::Mat binaryMat(grayscaleMat.size(), grayscaleMat.type());
//Apply thresholding
cv::threshold(grayscaleMat, binaryMat, 100, 255, cv::THRESH_BINARY);
Now I need to work with the values in binaryMat, but I don't know how to get at them...
1: With OpenCV's C++ API you don't need to allocate output/result Mats; just leave them empty.
//Convert BGR to Gray
cv::Mat grayscaleMat;
cv::cvtColor( imageMat, grayscaleMat, CV_BGR2GRAY );
//Apply thresholding
cv::Mat binaryMat;
cv::threshold(grayscaleMat, binaryMat, 100, 255, cv::THRESH_BINARY);
2: Now access the pixels:
uchar p = binaryMat.at<uchar>(y, x); // note: (row, col) order!
binaryMat.at<uchar>(5, 5) = 17;
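And since the goal was to invert the binary image, a per-pixel loop isn't even necessary; a standard call does it in one line (shown with an arithmetic alternative):
cv::Mat inverted;
cv::bitwise_not(binaryMat, inverted); // flips every bit: 0 <-> 255
// or equivalently, via saturated matrix arithmetic:
cv::Mat inverted2 = cv::Scalar(255) - binaryMat;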