How to draw rectangle with mask image in opencv? - c++

Mask image
Result image
Expected image
Is there any function to draw a rectangle with a mask image using OpenCV? I have attached the expected image. Please help me. Thanks in advance.

I think your question isn't quite clear, but if the first image is your original image (the circle), the second one (the rectangle) is your binary mask image, and you want to apply that mask to the original image, then you can apply the mask as follows:
inputMat.copyTo(outputMat, maskMat);
Src.: https://stackoverflow.com/a/18161322/4767895
If you haven't already created a binary mask, do it this way:
Create a mask with the same size as your original image (all zeros), and draw a filled rectangle (set to one) of the desired size in it.
cv::Mat mask = cv::Mat::zeros(Rows, Cols, CV_8U); // all 0
mask(Rect(StartX,StartY,Width,Height)) = 1; //make rectangle 1
Src.: https://stackoverflow.com/a/18136171/4767895
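Putting the two snippets together, a minimal sketch (the image path and rectangle coordinates are placeholders, not taken from the question) could look like this:

#include <opencv2/opencv.hpp>

int main()
{
    // Load your original image (placeholder path)
    cv::Mat input = cv::imread("input.png");

    // Mask: same size as the input, single channel, all zeros
    cv::Mat mask = cv::Mat::zeros(input.rows, input.cols, CV_8U);

    // Set a rectangular region to non-zero (255 is the conventional "on" value)
    mask(cv::Rect(50, 50, 200, 100)) = 255; // placeholder coordinates

    // Copy only the pixels under the rectangle into the output
    cv::Mat output;
    input.copyTo(output, mask);

    cv::imshow("masked", output);
    cv::waitKey(0);
    return 0;
}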
Feel free to respond if I misunderstood your question.

Try using the bitwise operations provided in OpenCV.
Please refer to this code (source). I have added all the bitwise operations in case you need any of them.
#include <opencv2/opencv.hpp> // Mat, drawing, bitwise ops, highgui
using namespace cv;

int main()
{
Mat drawing1 = Mat::zeros( Size(400,200), CV_8UC1 );
Mat drawing2 = Mat::zeros( Size(400,200), CV_8UC1 );
drawing1(Range(0,drawing1.rows),Range(0,drawing1.cols/2))=255; imshow("drawing1",drawing1);
drawing2(Range(100,150),Range(150,350))=255; imshow("drawing2",drawing2);
Mat res;
bitwise_and(drawing1,drawing2,res); imshow("AND",res);
bitwise_or(drawing1,drawing2,res); imshow("OR",res);
bitwise_xor(drawing1,drawing2,res); imshow("XOR",res);
bitwise_not(drawing1,res); imshow("NOT",res);
waitKey(0);
return(0);
}

Related

OCR: Difference between two frames

I am trying to find an easy solution to implement the OCR algorithm from OpenCV. I am very new to image processing!
I am playing a video that is decoded with a specific codec using the RLE algorithm.
What I would like to do is, for each decoded frame, compare it with the previous one and store the pixels that have changed between the two frames.
Most of the existing solutions give a difference between the two frames, but I would like to keep only the pixels that have changed, store them in a table, and then be able to analyze every group of changed pixels instead of analyzing the whole image each time.
I planned to use the "blob detection" algorithm, but I'm stuck before even being able to implement it.
Today, I'm trying this:
char *prevFrame;
char *curFrame;
QVector<long> DiffPixel;
//for each frame
DiffPixel.push_back(curFrame-prevFrame);
I really want to have the "only the changed pixels" result. Could anyone give me some tips or correct me if I'm going about this the wrong way?
EDIT:
New question: what if there are multiple areas of changed pixels? Will it be possible to have one table per block of changed pixels, or will there be only one single table? Take the example below:
The ideal result would be two Mat matrices: the first containing the first orange square and the second containing the second orange square. This avoids having to "scan" almost the entire frame, which is what would happen if the result were stored in a single matrix with nearly the same resolution as the full frame.
The main goal here is to minimize the area (aka the resolution) to analyze to find text.
After loading your images:
img1
img2
you can apply XOR operation to get the differences. The result has the same number of channels of the input images:
XOR
You can then create a binary mask OR-ing all channels:
mask
Then you can copy the values of img2 that correspond to non-zero elements in the mask to a white image:
diff
UPDATE
If you have multiple areas where pixel changed, like this:
You'll find a difference mask (after binarization all non-zero pixels are set to 255) like:
You can then extract connected components and draw each connected component on a new black-initialized mask:
Then, as before, you can copy the values of img2 that correspond to non-zero elements in each mask to a white image.
The complete code for reference. Note that this is the code for the updated version of the answer. You can find the original code in the revision history.
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;
using namespace std;
int main()
{
// Load the images
Mat img1 = imread("path_to_img1");
Mat img2 = imread("path_to_img2");
imshow("Img1", img1);
imshow("Img2", img2);
// Apply XOR operation, results in a N = img1.channels() image
Mat maskNch = (img1 ^ img2);
imshow("XOR", maskNch);
// Create a binary mask
// Split each channel
vector<Mat1b> masks;
split(maskNch, masks);
// Create a black mask
Mat1b mask(maskNch.rows, maskNch.cols, uchar(0));
// OR with each channel of the N channels mask
for (int i = 0; i < masks.size(); ++i)
{
mask |= masks[i];
}
// Binarize mask
mask = mask > 0;
imshow("Mask", mask);
// Find connected components
vector<vector<Point>> contours;
findContours(mask.clone(), contours, RETR_LIST, CHAIN_APPROX_SIMPLE);
for (int i = 0; i < contours.size(); ++i)
{
// Create a black mask
Mat1b mask_i(mask.rows, mask.cols, uchar(0));
// Draw the i-th connected component
drawContours(mask_i, contours, i, Scalar(255), CV_FILLED);
// Create a black image
Mat diff_i(img2.rows, img2.cols, img2.type());
diff_i.setTo(255);
// Copy into diff only different pixels
img2.copyTo(diff_i, mask_i);
imshow("Mask " + to_string(i), mask_i);
imshow("Diff " + to_string(i), diff_i);
}
waitKey();
return 0;
}

OpenCV keep background transparent during warpAffine

I create a bird's-eye view image with the warpPerspective() function like this:
warpPerspective(frame, result, H, result.size(), CV_WARP_INVERSE_MAP, BORDER_TRANSPARENT);
The result looks very good and also the border is transparent:
Bird-View-Image
Now I want to put this image on top of another image "out". I try doing this with the function warpAffine like this:
warpAffine(result, out, M, out.size(), CV_INTER_LINEAR, BORDER_TRANSPARENT);
I also converted "out" to a four-channel image with an alpha channel, according to a question already asked on Stack Overflow:
Convert Image
This is the code: cvtColor(out, out, CV_BGR2BGRA);
I expected to see the chessboard but not the gray background. But in fact, my result looks like this:
Result Image
What am I doing wrong? Do I forget something to do? Is there another way to solve my problem? Any help is appreciated :)
Thanks!
Best regards
DamBedEi
I hope there is a better way, but here is something you could do:
Do warpAffine normally (without the transparency handling)
Find the contour that encloses the warped image
Use this contour to create a mask (white values inside the warped image, black at the borders)
Use this mask to copy the warped image into the other image
Sample code:
// load images
cv::Mat image2 = cv::imread("lena.png");
cv::Mat image = cv::imread("IKnowOpencv.jpg");
cv::resize(image, image, image2.size());
// perform warp perspective
std::vector<cv::Point2f> prev;
prev.push_back(cv::Point2f(-30,-60));
prev.push_back(cv::Point2f(image.cols+50,-50));
prev.push_back(cv::Point2f(image.cols+100,image.rows+50));
prev.push_back(cv::Point2f(-50,image.rows+50 ));
std::vector<cv::Point2f> post;
post.push_back(cv::Point2f(0,0));
post.push_back(cv::Point2f(image.cols-1,0));
post.push_back(cv::Point2f(image.cols-1,image.rows-1));
post.push_back(cv::Point2f(0,image.rows-1));
cv::Mat homography = cv::findHomography(prev, post);
cv::Mat imageWarped;
cv::warpPerspective(image, imageWarped, homography, image.size());
// find external contour and create mask
std::vector<std::vector<cv::Point> > contours;
cv::Mat imageWarpedCloned = imageWarped.clone(); // clone the image because findContours will modify it
cv::cvtColor(imageWarpedCloned, imageWarpedCloned, CV_BGR2GRAY); //only if the image is BGR
cv::findContours (imageWarpedCloned, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
// create mask
cv::Mat mask = cv::Mat::zeros(image.size(), CV_8U);
cv::drawContours(mask, contours, 0, cv::Scalar(255), -1);
// copy warped image into image2 using the mask
cv::erode(mask, mask, cv::Mat()); // erode to avoid border artifacts
imageWarped.copyTo(image2, mask); // copy the image using the mask
//show images
cv::imshow("imageWarpedCloned", imageWarpedCloned);
cv::imshow("warped", imageWarped);
cv::imshow("image2", image2);
cv::waitKey();
One of the easiest ways to approach this (not necessarily the most efficient) is to warp the image twice, but set the OpenCV constant boundary value to different values each time (i.e. zero the first time and 255 the second time). These constant values should be chosen towards the minimum and maximum values in the image.
Then it is easy to find a binary mask where the two warp values are close to equal.
More importantly, you can also create a transparency effect through simple algebra like the following:
new_image = np.float32((warp_const_255 - warp_const_0) *
preferred_bkg_img) / 255.0 + np.float32(warp_const_0)
The main reason I prefer this method is that openCV seems to interpolate smoothly down (or up) to the constant value at the image edges. A fully binary mask will pick up these dark or light fringe areas as artifacts. The above method acts more like true transparency and blends properly with the preferred background.
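The numpy lines above are the blending step; a hedged C++ sketch of the double-warp masking idea itself (the affine transform and the tolerance value are made up for illustration) could look like this:

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src = cv::imread("input.png");   // placeholder path
    cv::Mat M = cv::Mat::eye(2, 3, CV_64F);  // placeholder affine transform
    M.at<double>(0, 2) = 40;                 // shift right, as an example
    M.at<double>(1, 2) = 25;                 // shift down

    // Warp twice with different constant border values (towards the image min/max)
    cv::Mat warp0, warp255;
    cv::warpAffine(src, warp0,   M, src.size(), cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar::all(0));
    cv::warpAffine(src, warp255, M, src.size(), cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar::all(255));

    // Where the two results (nearly) agree, the pixel comes from the image;
    // where they differ, it was filled by the border constant.
    cv::Mat diff, mask;
    cv::absdiff(warp0, warp255, diff);
    cv::cvtColor(diff, diff, cv::COLOR_BGR2GRAY);
    mask = diff < 10; // tolerance for the interpolated fringe at the edges

    cv::imshow("mask of valid warped pixels", mask);
    cv::waitKey(0);
    return 0;
}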
Here's a small test program that warps with transparent "border", then copies the warped image to a solid background.
int main()
{
cv::Mat input = cv::imread("../inputData/Lenna.png");
cv::Mat transparentInput, transparentWarped;
cv::cvtColor(input, transparentInput, CV_BGR2BGRA);
//transparentInput = input.clone();
// create sample transformation mat
cv::Mat M = cv::Mat::eye(2,3, CV_64FC1);
// as a sample, just scale down and translate a little:
M.at<double>(0,0) = 0.3;
M.at<double>(0,2) = 100;
M.at<double>(1,1) = 0.3;
M.at<double>(1,2) = 100;
// warp to same size with transparent border:
cv::warpAffine(transparentInput, transparentWarped, M, transparentInput.size(), CV_INTER_LINEAR, cv::BORDER_TRANSPARENT);
// NOW: merge image with background, here I use the original image as background:
cv::Mat background = input;
// create output buffer with same size as input
cv::Mat outputImage = input.clone();
for(int j=0; j<transparentWarped.rows; ++j)
for(int i=0; i<transparentWarped.cols; ++i)
{
cv::Scalar pixWarped = transparentWarped.at<cv::Vec4b>(j,i);
cv::Scalar pixBackground = background.at<cv::Vec3b>(j,i);
float transparency = pixWarped[3] / 255.0f; // pixel value: 0 (0.0f) = fully transparent, 255 (1.0f) = fully solid
outputImage.at<cv::Vec3b>(j,i)[0] = transparency * pixWarped[0] + (1.0f-transparency)*pixBackground[0];
outputImage.at<cv::Vec3b>(j,i)[1] = transparency * pixWarped[1] + (1.0f-transparency)*pixBackground[1];
outputImage.at<cv::Vec3b>(j,i)[2] = transparency * pixWarped[2] + (1.0f-transparency)*pixBackground[2];
}
cv::imshow("warped", outputImage);
cv::imshow("input", input);
cv::imwrite("../outputData/TransparentWarped.png", outputImage);
cv::waitKey(0);
return 0;
}
I use this as input:
and get this output:
which suggests the alpha channel isn't set to ZERO by warpAffine but to something like 205...
But in general this is the way I would do it (unoptimized).
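If the per-pixel loop ever becomes a bottleneck, the same blend can be written with whole-matrix operations. A sketch under the assumption that transparentWarped (CV_8UC4) and background (CV_8UC3) are the Mats from the code above:

// Split the BGRA warp result into channels; ch[3] is the alpha plane
std::vector<cv::Mat> ch;
cv::split(transparentWarped, ch);

// Normalize alpha to [0,1] as a float image
cv::Mat alpha;
ch[3].convertTo(alpha, CV_32F, 1.0 / 255.0);

// Rebuild the BGR part of the warp and convert everything to float
cv::Mat warpedBGR, bgF;
cv::merge(std::vector<cv::Mat>(ch.begin(), ch.begin() + 3), warpedBGR);
warpedBGR.convertTo(warpedBGR, CV_32FC3);
background.convertTo(bgF, CV_32FC3);

// Replicate alpha to 3 channels so it can weight a BGR image element-wise
cv::Mat alpha3;
cv::merge(std::vector<cv::Mat>{alpha, alpha, alpha}, alpha3);

// out = alpha * warped + (1 - alpha) * background
cv::Mat oneMinusAlpha3;
cv::subtract(cv::Scalar::all(1.0), alpha3, oneMinusAlpha3);
cv::Mat outF = warpedBGR.mul(alpha3) + bgF.mul(oneMinusAlpha3);

cv::Mat blended;
outF.convertTo(blended, CV_8UC3);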

GrabCut reading mask from PNG file in OpenCV (C++)

The implementation of this functionality seems pretty straightforward in Python, as shown here: http://docs.opencv.org/trunk/doc/py_tutorials/py_imgproc/py_grabcut/py_grabcut.html
Yet, when I tried to do exactly the same in C++, I got a bad-arguments error (for the grabCut function). How do I put the mask image in the right format?
I am a newbie at this, so I'd be very thankful if someone could help me understand better. Thank you!
Here's what I have so far:
Mat image;
image= imread(file);
Mat mask;
mask.setTo( GC_BGD );
mask = imread("messi5.png");
Mat image2 = image.clone();
// define bounding rectangle
cv::Rect rectangle(startX, startY, width, height);
cv::Mat result; // segmentation result (4 possible values)
cv::Mat bgModel,fgModel; // the models (internally used)
//// GrabCut segmentation that works, but with a rectangle, not with the mask I need
//cv::grabCut(image, // input image
// result, // segmentation result
// rectangle,// rectangle containing foreground
// bgModel,fgModel, // models
// 1, // number of iterations
// cv::GC_INIT_WITH_RECT); // use rectangle
grabCut( image, mask, rectangle, bgModel, fgModel, 1, GC_INIT_WITH_MASK);
cv::compare(mask,cv::GC_PR_FGD,mask,cv::CMP_EQ);
cv::Mat foreground(image.size(),CV_8UC3,cv::Scalar(255,255,255));
image.copyTo(foreground,mask); // bg pixels not copied
namedWindow( "Display window", WINDOW_AUTOSIZE );
imshow( "Display window", foreground );
waitKey(0);
return 0;
}
It looks like you have misunderstood the guide; here is the relevant part, repeated from the link in the question:
# newmask is the mask image I manually labelled
newmask = cv2.imread('newmask.png',0)
# whereever it is marked white (sure foreground), change mask=1
# whereever it is marked black (sure background), change mask=0
mask[newmask == 0] = 0
mask[newmask == 255] = 1
mask, bgdModel, fgdModel = cv2.grabCut(img,mask,None,bgdModel,fgdModel,5,cv2.GC_INIT_WITH_MASK)
mask = np.where((mask==2)|(mask==0),0,1).astype('uint8')
img = img*mask[:,:,np.newaxis]
plt.imshow(img),plt.colorbar(),plt.show()
This is not what you have done, I'm afraid. For a start, you seem to have set the mask to the RGB image:
mask = imread("messi5.png");
whereas it should be set to the mask image:
mask = imread("newmask.png",CV_LOAD_IMAGE_GRAYSCALE);
EDIT from comments:
To build the mask from a pure red overlay painted over the image (an actual mask image would be better):
cv::Mat maskTmp = imread("messi5.png");
std::vector<cv::Mat> channels(3);
split(maskTmp, channels);
cv::Mat maskRed = channels[2];
Now threshold the red channel to get your binary mask.
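For example (the threshold value is a guess; for a pure red overlay you may additionally want to require low green and blue values):

cv::Mat binaryMask;
cv::threshold(maskRed, binaryMask, 200, 255, cv::THRESH_BINARY); // 200 is a placeholder threshold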

copying ipl image pixel by pixel

The problem is solved. I used cvGet2D; below is the sample code:
CvScalar s;
s=cvGet2D(src_Image,pixel[i].x,pixel[i].y);
cvSet2D(dst_Image,pixel[i].x,pixel[i].y,s);
where src_Image and dst_Image are the source and destination images respectively, and pixel[i] is the selected pixel I wanted to draw in the destination image. I have included the real output image below.
Original question: I have a source IplImage, and I want to copy part of it to a new destination image pixel by pixel. Can anybody tell me how to do it? I use C/C++ with OpenCV. For example, if the image below is the source image,
The real output image
EDIT:
I can see the comments suggesting cvGet2D. I think, if you just want to show "points", it is best to show them with a small neighbourhood so it is visible where they are. For that you can draw white filled circles centred at (x,y) on a mask, and then do the copyTo.
using namespace cv;
Mat m(input_iplimage);
Mat mask=Mat::zeros(m.size(), CV_8UC1);
p1 = Point(x,y);
r = 3;
circle(mask,p1,r, 1); // draws the circle around your point.
floodFill(mask, p1, 1); // fills the circle.
//p2, p3, ...
Mat output = Mat::zeros(m.size(),m.type()); // output starts with a black background.
m.copyTo(output, mask); // copies the selected parts of m to output
OLD post:
Create a mask and copy those pixels:
#include<opencv2/opencv.hpp>
using namespace cv;
Mat m(input_iplimage);
Mat mask=Mat::zeros(m.size(), CV_8UC1); // mask: set to 1 for every pixel you want to copy
Rect roi=Rect(x,y,width,height); // create a rectangle
mask(roi) = 1; // set the first rectangular area to 1
roi = Rect(x2,y2,w2,h2);
mask(roi)=1; // set the second rectangular area for copying...
Mat output = 100*Mat::ones(m.size(),m.type()); // output with a gray background.
m.copyTo(output, mask); // copy selected areas of m to output
Alternatively you can copy Rect-by-Rect:
Mat m(input_iplimage);
Mat output = 100*Mat::ones(m.size(),m.type()); // output with a gray background.
Rect roi=Rect(x,y,width,height);
Mat m_temp, out_temp;
m_temp=m(roi);
out_temp = output(roi);
m_temp.copyTo(out_temp);
roi=Rect(x2,y2,w2,h2);
m_temp=m(roi);
out_temp = output(roi);
m_temp.copyTo(out_temp);
The answer to your question only requires having a look at the OpenCV documentation or searching with your favourite search engine.
Here you have an answer for IplImage data and for the newer Mat data.
For producing an output like the one in your images, I'd do it by setting ROIs; it's more efficient.

How to Apply Mask to Image in OpenCV?

I want to apply a binary mask to a color image.
Please provide a basic code example with proper explanation of how the code works.
Also, is there some option to apply a mask permanently so all functions operate only within the mask?
While @perrejba's answer is correct, it uses the legacy C-style functions. As the question is tagged C++, you may want to use a method instead:
inputMat.copyTo(outputMat, maskMat);
All objects are of type cv::Mat.
Please be aware that the masking is binary: any non-zero value in the mask is interpreted as 'do copy', even if the mask is a greyscale image.
Also be aware that the .copyTo() function does not clear the output before copying.
If you want to permanently alter the original image, you have to do an additional copy/clone/assignment. The copyTo() function is not defined for overlapping input/output images, so you can't use the same image as both input and output.
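For the "apply a mask permanently" part of the question, a minimal sketch (same names as above, with maskMat assumed to be 8-bit): instead of an overlapping copyTo, you can zero out everything outside the mask in place with setTo.

cv::Mat invMask;
cv::bitwise_not(maskMat, invMask);           // non-zero where maskMat is zero
inputMat.setTo(cv::Scalar::all(0), invMask); // blank out everything outside the mask, in place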
You don't apply a binary mask to an image. You (optionally) use a binary mask in a processing function call to tell the function which pixels of the image you want to process. If I'm completely misinterpreting your question, you should add more detail to clarify.
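For example, many OpenCV functions take an optional mask argument so that they only consider the masked pixels; a small hedged illustration (grayImage and mask are placeholder names, mask is 8-bit single channel):

// Statistics computed only over pixels where mask != 0
cv::Scalar avg = cv::mean(grayImage, mask);

double minVal, maxVal;
cv::minMaxLoc(grayImage, &minVal, &maxVal, nullptr, nullptr, mask); // single-channel input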
Well, this question appears at the top of search results, so I believe we need a code example here. Here's the Python code:
import cv2
def apply_mask(frame, mask):
    """Apply the binary mask to frame and return the masked image."""
    return cv2.bitwise_and(frame, frame, mask=mask)
Mask and frame must be the same size; pixels are kept as-is where the mask is non-zero and set to zero where the mask pixel is 0.
And for C++ it's a little bit different:
cv::Mat inFrame; // Original (non-empty) image
cv::Mat mask; // Original (non-empty) mask
// ...
cv::Mat outFrame; // Result output
inFrame.copyTo(outFrame, mask);
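If you want the literal C++ counterpart of the Python bitwise_and call above, that also works; a sketch (zero-initializing the output so the unmasked pixels are well defined):

cv::Mat maskedFrame = cv::Mat::zeros(inFrame.size(), inFrame.type());
cv::bitwise_and(inFrame, inFrame, maskedFrame, mask); // pixels outside the mask stay 0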
You can use the mask to copy only the region of interest of an original image to a destination one:
cvCopy(origImage,destImage,mask);
where mask should be an 8-bit single channel array.
See more at the OpenCV docs
Here is some code to apply a binary mask to a video frame sequence acquired from a webcam.
Comment and uncomment the "bitwise_not(Mon_mask,Mon_mask);" line and see the effect.
Best,
Ahmed.
#include "cv.h" // include it to used Main OpenCV functions.
#include "highgui.h" //include it to use GUI functions.
using namespace cv;
using namespace std;
int main(int argc, char** argv)
{
int c;
int radius=100;
CvPoint2D32f center;
//IplImage* color_img;
Mat image, image0,image1;
IplImage *tmp;
CvCapture* cv_cap = cvCaptureFromCAM(0);
while(1) {
tmp = cvQueryFrame(cv_cap); // get frame
// IplImage to Mat
Mat imgMat(tmp);
image =tmp;
center.x = tmp->width/2;
center.y = tmp->height/2;
Mat Mon_mask(image.size(), CV_8UC1, Scalar(0,0,0));
circle(Mon_mask, center, radius, Scalar(255,255,255), -1, 8, 0 ); //-1 means filled
bitwise_not(Mon_mask,Mon_mask); // commented out or not = RP or AMD (inverts the mask)
if(tmp != 0)
imshow("Glaucom", image); // show frame
c = cvWaitKey(10); // wait 10 ms or for key stroke
if(c == 27)
break; // if ESC, break and quit
}
/* clean up */
cvReleaseCapture( &cv_cap );
cvDestroyWindow("Glaucom");
}
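Note that the code above only builds Mon_mask; it never applies it to the frame before display. If the intent is to keep only the pixels selected by the mask, one extra copy inside the loop (a guess at the intended behaviour) would do it:

// ... inside the while loop, after building Mon_mask:
Mat masked = Mat::zeros(image.size(), image.type());
image.copyTo(masked, Mon_mask); // keep only the pixels selected by the mask
imshow("Glaucom", masked);      // show the masked frame instead of the raw one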
Use copy with a mask.
Code sample:
Mat img1 = imread(path); // Load your image
Mat mask(img1.size(), CV_8UC1); // Create a single-channel mask
mask.setTo(0);
Point center(img1.cols / 2, img1.rows / 2);
const int radius = img1.cols / 5; // Circle radius
circle(mask, center, radius, 255, FILLED); // Draw a filled circle at the image center
Mat img2(img1.size(), img1.type()); // Output image
img2.setTo(0); // Clear data
img1.copyTo(img2, mask); // Only values where mask > 0 will be copied