I have tried this code, which I got from here. It's for displaying multiple images in a single window in C++, and I have included the OpenCV 3.0 library in the program as well. Below is the code. I am trying to load 2 images, but only the first one (1.jpg) appears; however, when I set image2 to cv::imread("1.jpg"), two copies of 1.jpg appear. I am really new to this and I don't understand where I am going wrong. I hope someone can help me. Thank you.
int main(int argc, char *argv[])
{
    // read the two images
    cv::Mat image1 = cv::imread("1.jpg");
    cv::Mat image2 = cv::imread("2.jpg");

    int dstWidth = image1.cols;
    int dstHeight = image1.rows * 2;
    cv::Mat dst = cv::Mat(dstHeight, dstWidth, CV_8UC3, cv::Scalar(0, 0, 0));

    cv::Rect roi(cv::Rect(0, 0, image1.cols, image1.rows));
    cv::Mat targetROI = dst(roi);
    image1.copyTo(targetROI);
    targetROI = dst(cv::Rect(0, image1.rows, image1.cols, image1.rows));
    image2.copyTo(targetROI);

    // create image window named "OpenCV Window"
    cv::namedWindow("OpenCV Window");
    // show the image on the window
    cv::imshow("OpenCV Window", dst);
    // wait for a key for 5000 ms
    cv::waitKey(5000);

    return 0;
}
This is the result of the program above
Your code works OK for me if the images have the same size. Otherwise, the call to
image2.copyTo(targetROI);
will copy image2 into a newly created image, not into dst as you would expect.
If you want to make it work in general, you should:
1) set dstWidth and dstHeight like this:
int dstWidth = max(image1.cols, image2.cols);
int dstHeight = image1.rows + image2.rows;
2) set the second ROI with the size of the second image:
targetROI = dst(cv::Rect(0, image1.rows, image2.cols, image2.rows));
// ^ ^
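Putting the two fixes together, a minimal sketch of the corrected stacking code (assuming image1 and image2 have already been loaded with cv::imread, as in the question):

int dstWidth  = std::max(image1.cols, image2.cols);
int dstHeight = image1.rows + image2.rows;
cv::Mat dst(dstHeight, dstWidth, CV_8UC3, cv::Scalar(0, 0, 0));

// each ROI must have the size of the image that is copied into it
image1.copyTo(dst(cv::Rect(0, 0, image1.cols, image1.rows)));
image2.copyTo(dst(cv::Rect(0, image1.rows, image2.cols, image2.rows)));

cv::imshow("OpenCV Window", dst);
cv::waitKey(5000);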
From the comments: to show 4 images arranged in a 2x2 grid, you need a little more work:
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    // read the four images
    cv::Mat image1 = cv::imread("path_to_image1");
    cv::Mat image2 = cv::imread("path_to_image2");
    cv::Mat image3 = cv::imread("path_to_image3");
    cv::Mat image4 = cv::imread("path_to_image4");

    //////////////////////
    // image1 image2
    // image3 image4
    //////////////////////

    int max13cols = max(image1.cols, image3.cols);
    int max24cols = max(image2.cols, image4.cols);
    int dstWidth = max13cols + max24cols;

    int max12rows = max(image1.rows, image2.rows);
    int max34rows = max(image3.rows, image4.rows);
    int dstHeight = max12rows + max34rows;

    cv::Mat dst = cv::Mat(dstHeight, dstWidth, CV_8UC3, cv::Scalar(0, 0, 0));

    cv::Rect roi(cv::Rect(0, 0, image1.cols, image1.rows));
    image1.copyTo(dst(roi));

    roi = cv::Rect(max13cols, 0, image2.cols, image2.rows);
    image2.copyTo(dst(roi));

    roi = cv::Rect(0, max12rows, image3.cols, image3.rows);
    image3.copyTo(dst(roi));

    roi = cv::Rect(max13cols, max12rows, image4.cols, image4.rows);
    image4.copyTo(dst(roi));

    cv::imshow("OpenCV Window", dst);
    cv::waitKey(0);

    return 0;
}
Related
Good day! I'm using the imwrite command to save the image below after cropping it in OpenCV (C++), but it seems to include the black portion surrounding it when writing. All I want to save is the cropped part. Please help.
Here's my code
Mat mask, draft, res;
int nPixels;
char c = 0;

while (true && c != 'q') {
    imshow("SAMPLE", img);
    if (!roi.isSet())
        roi.set("SAMPLE");
    if (roi.isSet()) {
        roi.createMask(img.size());
        mask = roi.getMask();
        res = mask & img.clone();
        imwrite("masked.png", res);
        imshow("draft", res);
    }
    c = waitKey(1);
}
Here is an example of how to crop an image and save the cropped image (see the comment from api55). Maybe that helps you.
cv::Mat img = cv::imread("Path/To/Image/image.png", cv::IMREAD_GRAYSCALE);
if (img.empty())
    return -1;

cv::Rect roi(0, 0, 100, 100); // define roi here as x0, y0, width, height
cv::Mat croppedImg(img, roi);
cv::imwrite("Path/To/Save/Location/croppedImage.png", croppedImg);
If I have a mask like
And I have an image (the same size as the mask) like
I want to highlight the mask in the image. If I were in another language, I would just
As you can see, the result image has a transparent red overlay showing the mask. I hope to implement this in OpenCV, so I wrote this code:
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main() {
    Mat srcImg = imread("image.jpg");
    Mat mask = imread("mask.jpg", IMREAD_GRAYSCALE) > 200;

    for (int i = 0; i < srcImg.rows; i++)
        for (int j = 0; j < srcImg.cols; j++)
            if (mask.at<uchar>(i, j) == 255)
                circle(srcImg, Point(j, i), 3, Scalar(0, 0, 128, 128));

    imshow("image", srcImg);
    waitKey();
    return 0;
}
But as you can see, although I use an alpha value in the Scalar, the result is not a transparent red.
Maybe this is because srcImg only has 3 channels. I have two questions about this:
How can I highlight the mask with a transparent red (even though the image only has 3 channels)?
Do I have to draw circles pixel by pixel to do this?
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>

using namespace cv;

int main(int argc, char** argv)
{
    Mat srcImg = imread("image.png");
    Mat mask = imread("mask.png", IMREAD_GRAYSCALE) > 200;

    Mat red;
    cvtColor(mask, red, COLOR_GRAY2BGR);
    red = (red - Scalar(0, 0, 255)) / 2;

    srcImg = srcImg - red;

    imshow("image", srcImg);
    waitKey();
    return 0;
}
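An alternative that is not in the answers here, but uses only standard OpenCV calls, is to blend a solid red layer into the masked area with cv::addWeighted; a rough sketch, reusing the srcImg and mask names from the code above:

// Sketch: transparent-looking red overlay by blending inside the mask only.
Mat overlay = srcImg.clone();
overlay.setTo(Scalar(0, 0, 255), mask);                 // paint the masked area solid red
Mat blended;
addWeighted(srcImg, 0.5, overlay, 0.5, 0.0, blended);   // 50/50 blend of original and red layer
blended.copyTo(srcImg, mask);                           // copy the blend back only where the mask is set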
I've written this in Python, but you can easily port it to C++. Assuming that your source and mask images are CV_8UC3 images:
src = cv2.imread("source.png", -1)
mask = cv2.imread("mask.png", -1)
# convert mask to gray and then threshold it to convert it to binary
gray = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
ret, binary = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY)
# find contours of two major blobs present in the mask
im2,contours,hierarchy = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
# draw the found contours on to source image
for contour in contours:
    cv2.drawContours(src, [contour], -1, (255, 0, 0), thickness=1)

# split source to B, G, R channels
b, g, r = cv2.split(src)

# add a constant to the R channel to highlight the selected area in red
r = cv2.add(r, 30, dst=r, mask=binary, dtype=cv2.CV_8U)
# merge the channels back together
cv2.merge((b,g,r), src)
I need to get the contour of a hand image. Usually I process the image in 4 steps:
convert the raw RGB image from 3 channels to a 1-channel gray image:
cvtColor(sourceGrayImage, sourceGrayImage, COLOR_BGR2GRAY);
use a Gaussian blur to filter the gray image:
GaussianBlur(sourceGrayImage, sourceGrayImage, Size(3,3), 0);
binarize the gray image: I split the image into pieces by height (normally 6), then run a threshold on each piece:
// we split the source picture into binaryImageSectionCount (here it's 8) pieces by height,
// then we run a threshold on every piece,
// and at last we combine them again into binaryImage
const int binaryImageSectionCount = 8;

void GetBinaryImage(Mat &grayImage, Mat &binaryImage)
{
    // get every partial gray image's height
    int partImageHeight = grayImage.rows / binaryImageSectionCount;
    for (int i = 0; i < binaryImageSectionCount; i++)
    {
        Mat partialGrayImage;
        Mat partialBinaryImage;
        Rect partialRect;
        if (i != binaryImageSectionCount - 1)
        {
            // if it's not the last piece, the Rect's height should be partImageHeight
            partialRect = Rect(0, i * partImageHeight, grayImage.cols, partImageHeight);
        }
        else
        {
            // if it's the last piece, the Rect's height should be (grayImage.rows - i * partImageHeight)
            partialRect = Rect(0, i * partImageHeight, grayImage.cols, grayImage.rows - i * partImageHeight);
        }

        Mat partialResource = grayImage(partialRect);
        partialResource.copyTo(partialGrayImage);
        threshold(partialGrayImage, partialBinaryImage, 0, 255, THRESH_OTSU);

        // combine the partial binary images into one piece
        partialBinaryImage.copyTo(binaryImage(partialRect));

        //stringstream resultStrm;
        //resultStrm << "partial_" << (i + 1);
        //string string = resultStrm.str();
        //imshow(string, partialBinaryImage);
        //waitKey(0);
    }
    imshow("result binary image.", binaryImage);
    waitKey(0);
    return;
}
use findContours to get the biggest-area contour:
vector<vector<Point> > contours;
findContours(binaryImage, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
Normally this works well, but for some low-quality gray images it doesn't, like the one below:
The complete code is here:
#include <opencv2/imgproc/imgproc.hpp>
#include<opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace std;
using namespace cv;
// we split the source picture into binaryImageSectionCount (here it's 8) pieces by height,
// then we run a threshold on every piece,
// and at last we combine them again into binaryImage
const int binaryImageSectionCount = 8;

void GetBinaryImage(Mat &grayImage, Mat &binaryImage)
{
    // get every partial gray image's height
    int partImageHeight = grayImage.rows / binaryImageSectionCount;
    for (int i = 0; i < binaryImageSectionCount; i++)
    {
        Mat partialGrayImage;
        Mat partialBinaryImage;
        Rect partialRect;
        if (i != binaryImageSectionCount - 1)
        {
            // if it's not the last piece, the Rect's height should be partImageHeight
            partialRect = Rect(0, i * partImageHeight, grayImage.cols, partImageHeight);
        }
        else
        {
            // if it's the last piece, the Rect's height should be (grayImage.rows - i * partImageHeight)
            partialRect = Rect(0, i * partImageHeight, grayImage.cols, grayImage.rows - i * partImageHeight);
        }

        Mat partialResource = grayImage(partialRect);
        partialResource.copyTo(partialGrayImage);
        threshold(partialGrayImage, partialBinaryImage, 0, 255, THRESH_OTSU);

        // combine the partial binary images into one piece
        partialBinaryImage.copyTo(binaryImage(partialRect));

        //stringstream resultStrm;
        //resultStrm << "partial_" << (i + 1);
        //string string = resultStrm.str();
        //imshow(string, partialBinaryImage);
        //waitKey(0);
    }
    imshow("result binary image.", binaryImage);
    waitKey(0);
    return;
}
int main(int argc, _TCHAR* argv[])
{
    // get image path
    string imgPath("C:\\Users\\Alfred\\Desktop\\gray.bmp");

    // read image
    Mat src = imread(imgPath);
    imshow("Source", src);
    //medianBlur(src, src, 7);
    cvtColor(src, src, COLOR_BGR2GRAY);
    imshow("gray", src);

    // do filter
    GaussianBlur(src, src, Size(3,3), 0);

    // binary image
    Mat threshold_output(src.rows, src.cols, CV_8UC1, Scalar(0, 0, 0));
    GetBinaryImage(src, threshold_output);
    imshow("binaryImage", threshold_output);

    // get biggest contour
    vector<vector<Point> > contours;
    findContours(threshold_output, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    int biggestContourIndex = 0;
    double maxContourArea = -1.0;
    for (int i = 0; i < contours.size(); i++)
    {
        if (contourArea(contours[i]) > maxContourArea)
        {
            maxContourArea = contourArea(contours[i]);
            biggestContourIndex = i;
        }
    }

    // show biggest contour
    Mat biggestContour(threshold_output.rows, threshold_output.cols, CV_8UC1, Scalar(0, 0, 0));
    drawContours(biggestContour, contours, biggestContourIndex, cv::Scalar(255, 255, 255), 2, 8, vector<Vec4i>(), 0, Point());
    imshow("maxContour", biggestContour);
    waitKey(0);
}
Could anybody please help me get a better hand contour result? Thanks!
I have the code snippet in Python; you can follow the same approach in C++:
img = cv2.imread(x, 1)
cv2.imshow("img",img)
imgray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
cv2.imshow("gray",imgray)
#Code for histogram equalization
equ = cv2.equalizeHist(imgray)
cv2.imshow('equ', equ)
#Code for contrast limited adaptive histogram equalization
#clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8,8))
#cl2 = clahe.apply(imgray)
#cv2.imshow('clahe2', cl2)
This is the result I obtained:
If your image is really bad, you could try the commented-out code above, which uses contrast limited adaptive histogram equalization (CLAHE).
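Since the rest of this thread is in C++, a rough sketch of the same preprocessing ported to C++ might look like the following (the CLAHE parameters mirror the commented-out Python values; the file name is just a placeholder):

// Sketch: histogram equalization and CLAHE in C++ (OpenCV 3.x), assuming "gray.bmp" as input.
cv::Mat img = cv::imread("gray.bmp", cv::IMREAD_GRAYSCALE);

// plain histogram equalization
cv::Mat equ;
cv::equalizeHist(img, equ);

// contrast limited adaptive histogram equalization
cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(3.0, cv::Size(8, 8));
cv::Mat cl;
clahe->apply(img, cl);

cv::imshow("equ", equ);
cv::imshow("clahe", cl);
cv::waitKey(0);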
I'm new to OpenCV and I searched on the internet for an example of how to merge two images, but I didn't find anything good to help me. Can someone help me with some pointers or a small piece of code to understand? Thanks in advance.
From the comments to the question, you said:
I don't want to blend half of the first picture with the other half of the second. I just want to print both images, one near the other one.
So, starting from these images:
You want this result?
Note that if both images have the same height, you won't see the black background.
Code:
#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    // Load images
    Mat3b img1 = imread("path_to_image_1");
    Mat3b img2 = imread("path_to_image_2");

    // Get dimensions of the final image
    int rows = max(img1.rows, img2.rows);
    int cols = img1.cols + img2.cols;

    // Create a black image
    Mat3b res(rows, cols, Vec3b(0, 0, 0));

    // Copy images into the correct positions
    img1.copyTo(res(Rect(0, 0, img1.cols, img1.rows)));
    img2.copyTo(res(Rect(img1.cols, 0, img2.cols, img2.rows)));

    // Show result
    imshow("Img 1", img1);
    imshow("Img 2", img2);
    imshow("Result", res);
    waitKey();

    return 0;
}
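As a side note (not part of the answer above): when the two images already have the same height and type, cv::hconcat does the same job in one call, e.g.:

// Sketch: only valid if img1 and img2 have the same number of rows and the same type.
Mat res2;
hconcat(img1, img2, res2);
imshow("Result (hconcat)", res2);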
I have concatenated two images vertically in C# (OpenCvSharp); the images can be different sizes:
private Mat VerticalConcat(Mat image1, Mat image2)
{
    var smallImage = image1.Cols < image2.Cols ? image1 : image2;
    var bigImage = image1.Cols > image2.Cols ? image1 : image2;

    // pad the narrower image with a black block so both have the same width, then stack them vertically
    Mat combine = Mat.Zeros(new OpenCvSharp.CPlusPlus.Size(Math.Abs(image2.Cols - image1.Cols), smallImage.Height), image2.Type());
    Cv2.HConcat(smallImage, combine, combine);
    Cv2.VConcat(bigImage, combine, combine);
    return combine;
}
I'm using OpenCV to extract a subimage of a scanned document and would like to use tesseract to perform OCR over this subimage.
I found out that I can use two methods for text recognition in tesseract, but so far I wasn't able to find a working solution.
A.) How can I convert a cv::Mat into a PIX*?
(PIX* is a datatype of leptonica)
Based on vasile's code below, this is essentially my current code:
cv::Mat image = cv::imread("c:/image.png");
cv::Mat subImage = image(cv::Rect(50, 200, 300, 100));

int depth;
if (subImage.depth() == CV_8U)
    depth = 8;
// other cases not considered yet

PIX* pix = pixCreateHeader(subImage.size().width, subImage.size().height, depth);
pix->data = (l_uint32*) subImage.data;

tesseract::TessBaseAPI tess;
STRING text;
if (tess.ProcessPage(pix, 0, 0, &text))
{
    std::cout << text.string();
}
While it doesn't crash or anything, the OCR result is still wrong. It should recognize one word of my sample image, but instead it returns some non-readable characters.
The method PIX_HEADER doesn't exist, so I used pixCreateHeader, but it doesn't take the number of channels as an argument. So how can I set the number of channels?
B.) How can I use cv::Mat for TesseractRect() ?
Tesseract offers another method for text recognition with this signature:
char * TessBaseAPI::TesseractRect (
const UINT8 * imagedata,
int bytes_per_pixel,
int bytes_per_line,
int left,
int top,
int width,
int height
)
Currently I am using the following code, but it also returns non-readable characters (although different ones than the code above).
char* cr = tess.TesseractRect(
    subImage.data,
    subImage.channels(),
    subImage.channels() * subImage.size().width,
    0,
    0,
    subImage.size().width,
    subImage.size().height);
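The snippet below skips leptonica and TesseractRect entirely and hands the Mat buffer directly to Tesseract's SetImage, which takes the same per-pixel and per-line byte counts: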
tesseract::TessBaseAPI tess;
cv::Mat sub = image(cv::Rect(50, 200, 300, 100));
tess.SetImage((uchar*)sub.data, sub.size().width, sub.size().height, sub.channels(), sub.step1());
tess.Recognize(0);
const char* out = tess.GetUTF8Text();
For anybody using the JavaCPP presets of OpenCV/Tesseract, here is what works:
Mat img = imread("file.jpg");
Mat gray = new Mat();
cvtColor(img, gray, CV_BGR2GRAY);
// api is a Tesseract client which is initialised
api.SetImage(gray.data().asBuffer(), gray.size().width(), gray.size().height(), gray.channels(), gray.size1());
cv::Mat image = cv::imread(argv[1]);
cv::Mat gray;
cv::cvtColor(image, gray, CV_BGR2GRAY);

PIX *pixS = pixCreate(gray.size().width, gray.size().height, 8);
for (int i = 0; i < gray.rows; i++)
    for (int j = 0; j < gray.cols; j++)
        pixSetPixel(pixS, j, i, (l_uint32) gray.at<uchar>(i, j));
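To actually run OCR on the converted PIX, a possible follow-up sketch (assuming tess is an initialised tesseract::TessBaseAPI, as in the question) would be:

// Sketch: hand the PIX to Tesseract and print the recognised text.
tess.SetImage(pixS);
char* out = tess.GetUTF8Text();
std::cout << out;
delete[] out;        // GetUTF8Text() returns a buffer the caller must free
pixDestroy(&pixS);   // release the leptonica image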
First, make a deep copy of your subImage, so that it will be stored in a continuous memory block:
cv::Mat subImage = image(cv::Rect(50, 200, 300, 100)).clone();
Then, init a PIX header (I don't know how) with the correct parameters.
// ???? Put your own constructor here.
PIX* pix = new PIX_HEADER(width, height, channels, depth);
OR, create it manually:
PIX pix;
pix.width = subImage.width;
...
Then set the pix data pointer to the subImage data pointer
pix.data = subImage.data;
Finally, make sure your subImage object does not go out of scope before you finish your work with pix.