I would like to crop out an image from an existing image. I've taken an image and applied a monochrome effect to it with a threshold of 98% using ImageMagick (is this doable in OpenCV?).
The resulting image is this:
Now from this Image I would like to crop out another image so that the final image looks like this:
Question
How can I do this in OpenCV? Note: the only reason I want to crop the image is so that I can use this answer to get to the text part. If there is no need to crop out a new image and I can instead just concentrate on the black part of the image to begin with, that would be great.
If the text at the top and at the bottom are the regions you want to crop out, and they are always at the same location, the solution is easy: just set an ROI that ignores those areas:
#include <opencv2/opencv.hpp>
#include <iostream>

int main(int argc, char* argv[])
{
    cv::Mat img = cv::imread(argv[1]);
    if (img.empty())
    {
        std::cout << "!!! imread() failed to open target image" << std::endl;
        return -1;
    }

    /* Set the Region of Interest: skip a fixed border on every side */
    int offset_x = 129;
    int offset_y = 129;

    cv::Rect roi;
    roi.x = offset_x;
    roi.y = offset_y;
    roi.width = img.size().width - (offset_x * 2);
    roi.height = img.size().height - (offset_y * 2);

    /* Crop the original image to the defined ROI */
    cv::Mat crop = img(roi);
    cv::imshow("crop", crop);
    cv::waitKey(0);

    cv::imwrite("noises_cropped.png", crop);
    return 0;
}
Output image:
If the black rectangle, which is your area of interest, is not at a fixed location, then you might want to check out another approach: the rectangle detection technique:
On the output above, the area you are interested in will be the 2nd largest rectangle in the image.
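As a rough illustration of that idea, here is a minimal sketch (the input filename and the 0.02 approximation factor are assumptions, not from the original post): it finds contours, keeps the convex quadrilaterals, and sorts them by area so the 2nd largest can be picked.

#include <opencv2/opencv.hpp>
#include <vector>
#include <algorithm>

// Comparator: larger contour area first
static bool byAreaDesc(const std::vector<cv::Point>& a, const std::vector<cv::Point>& b)
{
    return cv::contourArea(a) > cv::contourArea(b);
}

int main()
{
    // Hypothetical input: the thresholded monochrome image from the question
    cv::Mat bw = cv::imread("monochrome.png", 0);
    if (bw.empty()) return -1;

    std::vector<std::vector<cv::Point> > contours;
    cv::Mat work = bw.clone(); // clone because findContours modifies its input
    cv::findContours(work, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);

    // Keep only convex quadrilaterals (rectangle-like contours)
    std::vector<std::vector<cv::Point> > quads;
    for (size_t i = 0; i < contours.size(); i++)
    {
        std::vector<cv::Point> approx;
        cv::approxPolyDP(contours[i], approx, cv::arcLength(contours[i], true) * 0.02, true);
        if (approx.size() == 4 && cv::isContourConvex(approx))
            quads.push_back(approx);
    }
    if (quads.size() < 2) return -1;

    // Sort by area, descending: quads[1] is then the 2nd largest rectangle
    std::sort(quads.begin(), quads.end(), byAreaDesc);

    cv::Mat crop = bw(cv::boundingRect(quads[1]));
    cv::imwrite("rect_cropped.png", crop);
    return 0;
}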
On a side note, if you plan to isolate the text later, a simple cv::erode() could remove all the noise in that image so you are left with the white box & text. Another technique to remove noise is cv::medianBlur(). You can also explore cv::morphologyEx() to do that trick:
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(7, 7), cv::Point(3, 3));
cv::morphologyEx(src, src, cv::MORPH_OPEN, kernel); // MORPH_OPEN is the operation; MORPH_ELLIPSE is only the kernel shape
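For reference, the other two options just mentioned could look like this; a rough sketch, assuming src is the thresholded binary image:

cv::erode(src, src, cv::Mat()); // shrinks white regions, removing small white specks
cv::medianBlur(src, src, 5);    // a 5x5 median filter removes salt-and-pepper noise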
A proper solution might even be a combination of these 3. I've demonstrated a little bit of that on Extract hand bones from X-ray image.
A simple solution: scan lines from top down, bottom up, left to right, and right to left. Terminate when the number of dark pixels in the line exceeds 50% of the total number of pixels in the line. This gives you the xmin, xmax, ymin and ymax coordinates that bound your cropping rectangle.
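A minimal sketch of that scan, assuming bw is an 8-bit grayscale image and "dark" means an intensity below 128 (both are assumptions, not from the original post); you would then crop with img(findDarkBounds(img)):

#include <opencv2/opencv.hpp>

// Scan rows/columns inward; stop when more than 50% of the line is dark.
static cv::Rect findDarkBounds(const cv::Mat& bw)
{
    cv::Mat dark = bw < 128; // 255 where dark, 0 elsewhere

    int ymin = 0, ymax = dark.rows - 1, xmin = 0, xmax = dark.cols - 1;
    while (ymin < ymax && cv::countNonZero(dark.row(ymin)) * 2 <= dark.cols) ymin++;
    while (ymax > ymin && cv::countNonZero(dark.row(ymax)) * 2 <= dark.cols) ymax--;
    while (xmin < xmax && cv::countNonZero(dark.col(xmin)) * 2 <= dark.rows) xmin++;
    while (xmax > xmin && cv::countNonZero(dark.col(xmax)) * 2 <= dark.rows) xmax--;

    return cv::Rect(xmin, ymin, xmax - xmin + 1, ymax - ymin + 1);
}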
Related
I need help with my project. I read a color image (the source image) from disk, and my task is to apply a blur to this image only where the Canny function detects edges. The edge detection works without problems, as you can see in the attached images (top left corner image - Edge Image).
I applied the 4 steps from the related questions
this and this.
Steps 1-3 are probably correct, as you can see in the attached image: the first image shows the detected edges, the second shows the previous image dilated, and the third shows the second image blurred with the source image copied into it. But in the last step I want to copy this image into the final image (the source image) so that the detected edges end up blurred. When I use the copyTo function from the OpenCV library, the result does not have the blurred edges that the Canny function detects, as you can see in the picture Result (right bottom corner image). Can you help me figure out what I am doing wrong?
#include <cstdlib>
#include <iostream>
#include <QCoreApplication>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;

Mat src, src_gray;
Mat detected_edges;
Mat blurred;

int edgeThresh = 1;
int lowThreshold;
int const max_lowThreshold = 100;
int ratio = 3;
int kernel_size = 3;

const char* window_name  = "Edge Image";
const char* window_name2 = "Dilated";
const char* window_name3 = "Blurred";
const char* window_name4 = "Result";

void CannyThreshold(int, void*)
{
    // reduce noise
    blur(src_gray, detected_edges, Size(3,3));

    // Canny edge detection
    Canny(detected_edges, detected_edges, lowThreshold, lowThreshold*ratio, kernel_size);

    // show the detected edges of the source image
    imshow(window_name, detected_edges);

    // the 4 steps from Stack Overflow
    dilate(detected_edges, blurred, Mat());   // 1
    imshow(window_name2, blurred);

    src.copyTo(blurred, blurred);             // 2
    blur(blurred, blurred, Size(10,10));      // 3
    imshow(window_name3, blurred);

    // here may be the problem: copying the image from step 3 into the
    // source image with the detected_edges mask
    blurred.copyTo(src, detected_edges);      // 4
    imshow(window_name4, src);                // final image
}

int main(int argc, char *argv[])
{
    // read the image
    src = cv::imread("/home/ja/FCS02/FCS02_3/imageReading/drevo.png");
    if (!src.data)
        return -1;

    // convert to gray
    cvtColor(src, src_gray, CV_BGR2GRAY);

    // windows for showing the image of each step
    namedWindow(window_name,  CV_WINDOW_NORMAL);
    namedWindow(window_name2, CV_WINDOW_NORMAL);
    namedWindow(window_name3, CV_WINDOW_NORMAL);
    namedWindow(window_name4, CV_WINDOW_NORMAL);

    // trackbar
    createTrackbar("Min Threshold:", window_name, &lowThreshold, max_lowThreshold, CannyThreshold);

    // edge detection
    CannyThreshold(0, 0);

    cv::waitKey(300000);
    return EXIT_SUCCESS;
}
Source Image where I want to blur only edges
Results of my code
This image shows what I want
Big thanks to everybody for your help and advice.
When you copy the blurred edges back into your original image, you are using the wrong mask. detected_edges contains the output of the Canny detector (only some sparse pixels). The non-zero pixels of the mask indicate which pixels of the source image can be copied to the destination. The image blurred contains only the blurred edges, and the rest of the pixels are black, so I think you can directly use it as a mask for the copy.
blurred.copyTo(src, blurred); //4
Keep in mind that the mask needs to be of type CV_8U. It seems that in your example this is the case. If not, you can use the following code to create an image that is black except where the pixels in blurred are non-zero.
blurred.copyTo(src, (blurred != 0)); //4
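To illustrate the mask semantics of copyTo, here is a small self-contained demo; the images and the circular mask are made up for the example:

#include <opencv2/opencv.hpp>

int main()
{
    // Made-up images: a solid red source and a black destination
    cv::Mat src(100, 100, CV_8UC3, cv::Scalar(0, 0, 255));
    cv::Mat dst(100, 100, CV_8UC3, cv::Scalar::all(0));

    // CV_8U mask: the non-zero pixels (a filled disc here) mark what gets copied
    cv::Mat mask = cv::Mat::zeros(100, 100, CV_8U);
    cv::circle(mask, cv::Point(50, 50), 20, cv::Scalar(255), -1);

    src.copyTo(dst, mask); // only pixels where mask != 0 are written

    cv::imshow("dst", dst); // shows a red disc on a black background
    cv::waitKey(0);
    return 0;
}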
I create a Bird-View-Image with the warpPerspective() function like this:
warpPerspective(frame, result, H, result.size(), CV_WARP_INVERSE_MAP, BORDER_TRANSPARENT);
The result looks very good and also the border is transparent:
Bird-View-Image
Now I want to put this image on top of another image "out". I try doing this with the function warpAffine like this:
warpAffine(result, out, M, out.size(), CV_INTER_LINEAR, BORDER_TRANSPARENT);
I also converted "out" to a four-channel image with an alpha channel, according to a question which was already asked on Stack Overflow:
Convert Image
This is the code: cvtColor(out, out, CV_BGR2BGRA);
I expected to see the chessboard but not the gray background. But in fact, my result looks like this:
Result Image
What am I doing wrong? Do I forget something to do? Is there another way to solve my problem? Any help is appreciated :)
Thanks!
Best regards
DamBedEi
I hope there is a better way, but here is something you could do:
Do warpAffine normally (without the transparency)
Find the contour that encloses the warped image
Use this contour to create a mask (white values inside the warped image, black at the borders)
Use this mask to copy the warped image into the other image
Sample code:
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // load images
    cv::Mat image2 = cv::imread("lena.png");
    cv::Mat image = cv::imread("IKnowOpencv.jpg");
    cv::resize(image, image, image2.size());

    // perform warp perspective
    std::vector<cv::Point2f> prev;
    prev.push_back(cv::Point2f(-30, -60));
    prev.push_back(cv::Point2f(image.cols + 50, -50));
    prev.push_back(cv::Point2f(image.cols + 100, image.rows + 50));
    prev.push_back(cv::Point2f(-50, image.rows + 50));
    std::vector<cv::Point2f> post;
    post.push_back(cv::Point2f(0, 0));
    post.push_back(cv::Point2f(image.cols - 1, 0));
    post.push_back(cv::Point2f(image.cols - 1, image.rows - 1));
    post.push_back(cv::Point2f(0, image.rows - 1));
    cv::Mat homography = cv::findHomography(prev, post);

    cv::Mat imageWarped;
    cv::warpPerspective(image, imageWarped, homography, image.size());

    // find the external contour and create a mask
    std::vector<std::vector<cv::Point> > contours;
    cv::Mat imageWarpedCloned = imageWarped.clone(); // clone the image because findContours will modify it
    cv::cvtColor(imageWarpedCloned, imageWarpedCloned, CV_BGR2GRAY); // only if the image is BGR
    cv::findContours(imageWarpedCloned, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);

    // create the mask
    cv::Mat mask = cv::Mat::zeros(image.size(), CV_8U);
    cv::drawContours(mask, contours, 0, cv::Scalar(255), -1);

    // copy the warped image into image2 using the mask
    cv::erode(mask, mask, cv::Mat()); // to avoid artifacts at the border
    imageWarped.copyTo(image2, mask); // copy the image using the mask

    // show images
    cv::imshow("imageWarpedCloned", imageWarpedCloned);
    cv::imshow("warped", imageWarped);
    cv::imshow("image2", image2);
    cv::waitKey();
    return 0;
}
One of the easiest ways to approach this (not necessarily the most efficient) is to warp the image twice, but set the OpenCV constant boundary value to different values each time (i.e. zero the first time and 255 the second time). These constant values should be chosen towards the minimum and maximum values in the image.
Then it is easy to find a binary mask where the two warp values are close to equal.
More importantly, you can also create a transparency effect through simple algebra like the following:
new_image = np.float32((warp_const_255 - warp_const_0) * preferred_bkg_img) / 255.0 + np.float32(warp_const_0)
The main reason I prefer this method is that OpenCV seems to interpolate smoothly down (or up) to the constant value at the image edges. A fully binary mask will pick up these dark or light fringe areas as artifacts. The above method acts more like true transparency and blends properly with the preferred background.
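A rough C++ translation of the same idea; image, H and background are placeholder names for the input, the homography, and a same-sized background image:

// Warp twice with different constant border values (0 and 255)
cv::Mat warp0, warp255;
cv::warpPerspective(image, warp0, H, background.size(), cv::INTER_LINEAR,
                    cv::BORDER_CONSTANT, cv::Scalar::all(0));
cv::warpPerspective(image, warp255, H, background.size(), cv::INTER_LINEAR,
                    cv::BORDER_CONSTANT, cv::Scalar::all(255));

// (warp255 - warp0) is 0 inside the warped content and rises smoothly to 255
// in the border region, so it acts as a per-pixel background weight
cv::Mat w0, w255, bg, blended;
warp0.convertTo(w0, CV_32FC3);
warp255.convertTo(w255, CV_32FC3);
background.convertTo(bg, CV_32FC3);

blended = (w255 - w0).mul(bg, 1.0 / 255.0) + w0;
blended.convertTo(blended, CV_8UC3);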
Here's a small test program that warps with transparent "border", then copies the warped image to a solid background.
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat input = cv::imread("../inputData/Lenna.png");
    cv::Mat transparentInput, transparentWarped;
    cv::cvtColor(input, transparentInput, CV_BGR2BGRA);
    //transparentInput = input.clone();

    // create a sample transformation mat
    cv::Mat M = cv::Mat::eye(2, 3, CV_64FC1);
    // as a sample, just scale down and translate a little:
    M.at<double>(0,0) = 0.3;
    M.at<double>(0,2) = 100;
    M.at<double>(1,1) = 0.3;
    M.at<double>(1,2) = 100;

    // warp to the same size with a transparent border:
    cv::warpAffine(transparentInput, transparentWarped, M, transparentInput.size(), CV_INTER_LINEAR, cv::BORDER_TRANSPARENT);

    // NOW: merge the image with a background; here I use the original image as the background:
    cv::Mat background = input;

    // create an output buffer with the same size as the input
    cv::Mat outputImage = input.clone();

    for (int j = 0; j < transparentWarped.rows; ++j)
    {
        for (int i = 0; i < transparentWarped.cols; ++i)
        {
            cv::Scalar pixWarped = transparentWarped.at<cv::Vec4b>(j,i);
            cv::Scalar pixBackground = background.at<cv::Vec3b>(j,i);
            float transparency = pixWarped[3] / 255.0f; // pixel value: 0 (0.0f) = fully transparent, 255 (1.0f) = fully solid

            outputImage.at<cv::Vec3b>(j,i)[0] = transparency * pixWarped[0] + (1.0f - transparency) * pixBackground[0];
            outputImage.at<cv::Vec3b>(j,i)[1] = transparency * pixWarped[1] + (1.0f - transparency) * pixBackground[1];
            outputImage.at<cv::Vec3b>(j,i)[2] = transparency * pixWarped[2] + (1.0f - transparency) * pixBackground[2];
        }
    }

    cv::imshow("warped", outputImage);
    cv::imshow("input", input);
    cv::imwrite("../outputData/TransparentWarped.png", outputImage);
    cv::waitKey(0);
    return 0;
}
I use this as input:
and get this output:
which looks like the ALPHA channel isn't set to ZERO by warpAffine but to something like 205...
But in general, this is the way I would do it (unoptimized).
I have two almost identical images, with the difference that the shapes in the second image are a little different: most of the time smaller, but they can also be larger. Also, the shape count in one image can range from ~10 to >100, and the shapes can get relatively close to each other.
It would look something like this (note: both images would not be transparent):
The black triangle is image 1, the grey triangle is image 2.
Now I want to add a predefined margin (3 px here, on both sides of the contour) to the edges of image 1 and test whether the edges of the second image are within "the same" range as in the first image. If not, display that visually:
Top left: Small difference between the two images (visualized by a red outline)
Bottom right: "Same" edge -> No difference
How can I best accomplish this?
I'm using OpenCV with C++
In case the shapes are at the same positions in both images and you just need the markers on an image without additional information, this simple trick could do it.
#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    Mat img1 = imread("D:/1.png");
    Mat img2 = imread("D:/2.png");

    Mat diff;
    absdiff(img1, img2, diff);
    cv::threshold(diff, diff, 128, 255, THRESH_BINARY);

    Mat markers;
    int minRadiusDiff = 2;
    erode(diff, markers, Mat(), cv::Point(-1, -1), minRadiusDiff / 2);

    imwrite("D:/out.png", markers);
    return 0;
}
Here are some example images:
The triangle becomes much bigger, the wobbly thing becomes much smaller, and the quad only shrinks slightly.
So we would like to have the triangle and the wobble marked, but not the quad.
And that is exactly our result.
I'm working with OpenCV 2.4.0 and C++.
I'm trying to do an exercise that says I should load an RGB image, convert it to grayscale and save the new image. The next step is to turn the grayscale image into a binary image and store that image. This much I have working.
My problem is in counting the number of black pixels in the binary image.
So far I've searched the web and looked in the book. The method I've found that seems the most useful is:
int TotalNumberOfPixels = width * height;
int ZeroPixels = TotalNumberOfPixels - cvCountNonZero(cv_image);
But I don't know how to store these values and use them in cvCountNonZero(). When I pass the image I want counted to this function, I get an error.
int main()
{
    Mat rgbImage, grayImage, resizedImage, bwImage, result;

    rgbImage = imread("C:/MeBGR.jpg");
    cvtColor(rgbImage, grayImage, CV_RGB2GRAY);
    resize(grayImage, resizedImage, Size(grayImage.cols/3, grayImage.rows/4),
           0, 0, INTER_LINEAR);
    imwrite("C:/Jakob/Gray_Image.jpg", resizedImage);

    bwImage = imread("C:/Jakob/Gray_Image.jpg");
    threshold(bwImage, bwImage, 120, 255, CV_THRESH_BINARY);
    imwrite("C:/Jakob/Binary_Image.jpg", bwImage);

    imshow("Original", rgbImage);
    imshow("Resized", resizedImage);
    imshow("Resized Binary", bwImage);

    waitKey(0);
    return 0;
}
So far this code is very basic but it does what it's supposed to for now. Some adjustments will be made later to clean it up :)
You can use countNonZero to count the number of pixels that are not black (>0) in an image. If you want to count the number of black (==0) pixels, subtract the number of non-black pixels from the total number of pixels in the image (width * height).
This code should work:
int TotalNumberOfPixels = bwImage.rows * bwImage.cols;
int ZeroPixels = TotalNumberOfPixels - countNonZero(bwImage);
cout << "The number of pixels that are zero is " << ZeroPixels << endl;
The problem is solved. I used cvGet2D; below is the sample code:
CvScalar s;
s = cvGet2D(src_Image, pixel[i].x, pixel[i].y);
cvSet2D(dst_Image, pixel[i].x, pixel[i].y, s);
Where src_Image and dst_Image are the source and destination images respectively, and pixel[i] is the selected pixel I wanted to draw in the dst image. I have included the real output image below.
I have a source IplImage and I want to copy part of it to a new destination image pixel by pixel. Can anybody tell me how I can do it? I use C/C++ in OpenCV. For example, if the below image is the source image,
The real output image
EDIT:
I can see the comments suggesting cvGet2D. I think, if you just want to show "points", it is best to show them with a small neighbourhood so they can be seen where they are. For that, you can draw white filled circles with origins at (x,y) on a mask, and then do the copyTo.
#include <opencv2/opencv.hpp>
using namespace cv;

Mat m(input_iplimage);
Mat mask = Mat::zeros(m.size(), CV_8UC1);

Point p1 = Point(x, y); // your point of interest
int r = 3;
circle(mask, p1, r, 1);  // draws the circle around your point
floodFill(mask, p1, 1);  // fills the circle
//p2, p3, ...

Mat output = Mat::zeros(m.size(), m.type()); // output starts with a black background
m.copyTo(output, mask);                      // copies the selected parts of m to output
OLD post:
Create a mask and copy those pixels:
#include <opencv2/opencv.hpp>
using namespace cv;

Mat m(input_iplimage);
Mat mask = Mat::zeros(m.size(), CV_8UC1); // set the mask to 1 for every pixel you wanna copy

Rect roi = Rect(x, y, width, height); // create a rectangle
mask(roi) = 1;                        // set it to 1
roi = Rect(x2, y2, w2, h2);
mask(roi) = 1;                        // set the second rectangular area for copying...

Mat output = 100 * Mat::ones(m.size(), m.type()); // output with a gray background
m.copyTo(output, mask);                           // copy the selected areas of m to output
Alternatively you can copy Rect-by-Rect:
Mat m(input_iplimage);
Mat output = 100 * Mat::ones(m.size(), m.type()); // output with a gray background

Rect roi = Rect(x, y, width, height);
Mat m_temp = m(roi);
Mat out_temp = output(roi);
m_temp.copyTo(out_temp);

roi = Rect(x2, y2, w2, h2); // reuse m_temp and out_temp rather than redeclaring them
m_temp = m(roi);
out_temp = output(roi);
m_temp.copyTo(out_temp);
The answer to your question only requires a look at the OpenCV documentation, or a quick search in your favourite search engine.
Here you have an answer for IplImages and one for the newer Mat data.
To get an output like the one in your images, I'd do it by setting ROIs; it's more efficient.
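A minimal sketch of that ROI-based copy; the file names and rectangle coordinates are placeholders, and both ROI rectangles must have the same size:

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src = cv::imread("source.png");      // hypothetical input
    cv::Mat dst = cv::imread("destination.png"); // hypothetical destination
    if (src.empty() || dst.empty()) return -1;

    // Copy a 100x80 region of src at (10,20) into dst at (50,60)
    cv::Rect srcRoi(10, 20, 100, 80);
    cv::Rect dstRoi(50, 60, 100, 80);

    cv::Mat dstView = dst(dstRoi); // a view sharing dst's pixels
    src(srcRoi).copyTo(dstView);   // writes through the view into dst

    cv::imwrite("result.png", dst);
    return 0;
}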