Friends, I am trying to detect the coin first, since I already know its area is 1.011 cm2, and then measure the leaves in the image.
I am using findContours, but I am not always able to isolate the coin first. I have also tried HoughCircles, but it does not work in my case. Would anyone have any ideas?
OpenCV 4.5.0, C++
My code:
// variables for image segmentation
cv::Mat imagem_original, imagem_gray, imagem_binaria, imagem_inRange, imagem_threshold, dst, src;
vector<Vec3f> circles;
cv::Scalar min_color = Scalar(50, 50, 50);
cv::Scalar max_color = Scalar(90, 120, 180);

imagem_original = load_image("IMG_1845.jpg");
//imshow("Imagem Original", imagem_original);

cv::cvtColor(imagem_original, imagem_gray, COLOR_BGR2GRAY);
//imshow("imagem_gray", imagem_gray);

//cv::inRange(imagem_gray, min_color, max_color, imagem_inRange);
cv::threshold(imagem_gray, imagem_threshold, 0, 255, THRESH_BINARY_INV | THRESH_OTSU);
imshow("Threshold", imagem_threshold);

// find outer contours in the image; these should be the circles!
cv::Mat conts = imagem_threshold.clone();
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(conts, contours, hierarchy, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE, cv::Point(0, 0));

int total_IAF = 0;
cout << "\n\n";
cout << contours.size() << "\n\n";

for (int i = 0; i < contours.size(); i++) {
    double area = contourArea(contours[i]);   // contourArea returns double
    if (area <= 10) {
        cv::drawContours(imagem_original, contours, i, Scalar(0, 0, 255));
    }
    else {
        cout << area << "\n";
        cv::drawContours(imagem_original, contours, i, Scalar(255, 0, 0));
    }
    if (area > 5000) {
        total_IAF += contourArea(contours[i]);
    }
}

imshow(" ORIGINAL ", imagem_original);

double iAF_cm2 = total_IAF / 4658.0;   // avoid integer division
cout << "\n\n TOTAL AREA IAF: " << total_IAF;
cout << "\n IAF em cm2: " << iAF_cm2 << " cm2\n\n";
If your setup has a constant white-ish/gray-ish background and green leaves, I'd use the HSV color space: detect all objects using the S channel (the green leaves and the golden part of the coin have significantly more saturation than the background), then distinguish between the coin and the leaves using the H channel (the green leaves have hue values around 45). What remains is to determine the image areas of all contours and use the coin's image area as the reference to convert the other object areas, given the coin's known area of 1.011 cm2.
That's the saturation channel of the given image:
The saturation channel thresholded at 64:
That's the hue channel of the image:
Here's some code executing the above idea:
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

int main()
{
    // Read image
    cv::Mat img = cv::imread("Wcj1R.jpg", cv::IMREAD_COLOR);

    // Convert image to HSV color space, and split H, S, V channels
    cv::Mat img_hsv;
    cv::cvtColor(img, img_hsv, cv::COLOR_BGR2HSV);
    std::vector<cv::Mat> hsv;
    cv::split(img_hsv, hsv);

    // Binary threshold S channel at fixed threshold
    cv::Mat img_thr;
    cv::threshold(hsv[1], img_thr, 64, 255, cv::THRESH_BINARY);

    // Find most outer contours only
    std::vector<std::vector<cv::Point>> cnts;
    std::vector<cv::Vec4i> hier;
    cv::findContours(img_thr.clone(), cnts, hier, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);

    // Iterate found contours
    std::vector<cv::Point> cnt_centers;
    std::vector<double> cnt_areas;
    double ref_area = -1;
    for (int i = 0; i < cnts.size(); i++)
    {
        // Current contour
        std::vector<cv::Point> cnt = cnts[i];

        // If contour is too small, discard
        if (cnt.size() < 100)
            continue;

        // Calculate and store center (just for visualization) and area of contour
        cv::Moments m = cv::moments(cnt);
        cnt_centers.push_back(cv::Point(m.m10 / m.m00 - 30, m.m01 / m.m00));
        cnt_areas.push_back(cv::contourArea(cnt));

        // Check H channel, whether the contour's image parts are mostly green
        cv::Mat mask = hsv[0].clone().setTo(cv::Scalar(0));
        cv::drawContours(mask, cnts, i, cv::Scalar(255), cv::FILLED);
        double h_mean = cv::mean(hsv[0], mask)[0];

        // If it's not mostly green, that's the coin, thus the reference area
        if (h_mean < 40 || h_mean > 50)
            ref_area = cv::contourArea(cnt);
    }

    // Iterate all contours again
    for (int i = 0; i < cnt_centers.size(); i++)
    {
        // Calculate actual object area
        double area = cnt_areas[i] / ref_area * 1.011;

        // Put area on image w.r.t. the contour's center
        cv::putText(img, std::to_string(area), cnt_centers[i], cv::FONT_HERSHEY_COMPLEX_SMALL, 1, cv::Scalar(255, 255, 255));
    }

    return 0;
}
And, that'd be the output:
Your code finds all contours in an image and shows them, so I'm confused about the meaning of "detect the coin first".
If you want to draw the contour of the coin first, sort the contours vector by size. The coin is the smallest object, so it would be the first element of the vector after sorting. (Of course, some unwanted contours should be removed before sorting.) A minimal sketch is shown below.
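For illustration, a minimal sketch of that sorting step, assuming contours was filled by cv::findContours as in your code and that tiny noise contours were already filtered out:
#include <algorithm>
#include <vector>
#include <opencv2/opencv.hpp>

// Sort contours by area, smallest first; the coin should then be contours.front().
void sortContoursByArea(std::vector<std::vector<cv::Point>> &contours)
{
    std::sort(contours.begin(), contours.end(),
              [](const std::vector<cv::Point> &a, const std::vector<cv::Point> &b) {
                  return cv::contourArea(a) < cv::contourArea(b);
              });
}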
Related
I currently have a source image of: Source image
And I have made a bounding rect for each as an ROI: Rects
My goal is to find the number of red pixels inside each rect. However, I do not know how to proceed.
I have used the area (30*30) and countNonZero to find the number of pixels for each circle, by manually cropping each one and saving it as a separate image. However, I want to implement it on the whole image, where I could just iterate through the bounding rects.
Edit: If it will help, this is the code that I use to get the bounding rects.
for (int i = 0; i < contours.size(); i++)
{
    approxPolyDP(Mat(contours[i]), contours_poly[i], 0.1, true);

    //Get the width and heights of the bounding rectangles
    int w = boundingRect(Mat(contours[i])).width;
    int h = boundingRect(Mat(contours[i])).height;

    //Apply aspect ratio for filtering rects (optional)
    double ar = (double)w / h;

    //Rect/contour filter (optional)
    if (hierarchy[i][3] == -1) //No parent
        if ((w >= 28 && w <= 32) && (h >= 28 && h <= 32) && ar < 1.1 && ar > 0.9) {
            //Apply bounding Rects/Circles
            boundRect[i] = boundingRect(Mat(contours_poly[i]));
            minEnclosingCircle((Mat)contours_poly[i], center[i], radius[i]);

            //Add to a new vector of filtered contours
            filtered_contours.push_back(contours_poly[i]);
            std::cout << i << " w: " << w << " h: " << h << std::endl;
        }
}
Maybe this helps you.
// Load image.
cv::Mat circles = cv::imread("circles.jpg", cv::IMREAD_GRAYSCALE);

// Use simple threshold to get rid of compression artifacts.
cv::Mat circlesThr;
cv::threshold(circles, circlesThr, 128, 255, cv::THRESH_BINARY_INV);

// Find contours in binary image (cv::RETR_EXTERNAL -> only most outer contours).
std::vector<std::vector<cv::Point>> contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(circlesThr, contours, hierarchy, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);

// Iterate all contours...
for (std::vector<cv::Point>& contour : contours)
{
    // Determine bounding rectangle of contour.
    cv::Rect rect = cv::boundingRect(contour);

    // Count non-zero pixels within bounding rect.
    std::string count = std::to_string(cv::countNonZero(circlesThr(rect)));

    // Output text to image.
    cv::putText(circlesThr, count, cv::Point(rect.x - 5, rect.y - 5), cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(255));
}

// Save output image.
cv::imwrite("output.jpg", circlesThr);
Results in:
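Since the question asks specifically about red pixels, a rough variant would be to build a red mask with cv::inRange in HSV first and then count inside each bounding rect. The HSV thresholds below are assumptions you would need to tune for your image:
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>

// Sketch: count red pixels inside each contour's bounding rect.
// The two inRange calls cover the low and high red hue bands; tune the values.
void countRedPerRect(const cv::Mat &bgr, const std::vector<std::vector<cv::Point>> &contours)
{
    cv::Mat hsv, lowerRed, upperRed, redMask;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(0, 100, 100), cv::Scalar(10, 255, 255), lowerRed);
    cv::inRange(hsv, cv::Scalar(160, 100, 100), cv::Scalar(179, 255, 255), upperRed);
    redMask = lowerRed | upperRed;

    for (const std::vector<cv::Point> &contour : contours)
    {
        cv::Rect rect = cv::boundingRect(contour);
        int redPixels = cv::countNonZero(redMask(rect));
        std::cout << "Red pixels at (" << rect.x << ", " << rect.y << "): " << redPixels << std::endl;
    }
}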
I need to get the contour from a hand image. Usually I process the image in 4 steps:
convert the raw RGB image from 3 channels to a 1-channel gray image:
cvtColor(sourceGrayImage, sourceGrayImage, COLOR_BGR2GRAY);
use Gaussian blur to filter the gray image:
GaussianBlur(sourceGrayImage, sourceGrayImage, Size(3,3), 0);
binarize the gray image: I split the image into pieces by height (8 in the code below), then threshold each piece:
// We split the source picture into binaryImageSectionCount (here 8) pieces by height,
// threshold every piece,
// and finally combine them again into binaryImage.
const int binaryImageSectionCount = 8;

void GetBinaryImage(Mat &grayImage, Mat &binaryImage)
{
    // get every partial gray image's height
    int partImageHeight = grayImage.rows / binaryImageSectionCount;
    for (int i = 0; i < binaryImageSectionCount; i++)
    {
        Mat partialGrayImage;
        Mat partialBinaryImage;
        Rect partialRect;
        if (i != binaryImageSectionCount - 1)
        {
            // if it's not the last piece, the Rect's height should be partImageHeight
            partialRect = Rect(0, i * partImageHeight, grayImage.cols, partImageHeight);
        }
        else
        {
            // if it's the last piece, the Rect's height should be (grayImage.rows - i * partImageHeight)
            partialRect = Rect(0, i * partImageHeight, grayImage.cols, grayImage.rows - i * partImageHeight);
        }
        Mat partialResource = grayImage(partialRect);
        partialResource.copyTo(partialGrayImage);
        threshold(partialGrayImage, partialBinaryImage, 0, 255, THRESH_OTSU);

        // combine the partial binary images back into binaryImage
        partialBinaryImage.copyTo(binaryImage(partialRect));

        //stringstream resultStrm;
        //resultStrm << "partial_" << (i + 1);
        //string windowName = resultStrm.str();
        //imshow(windowName, partialBinaryImage);
        //waitKey(0);
    }
    imshow("result binary image.", binaryImage);
    waitKey(0);
    return;
}
use findContours to get the contour with the biggest area:
vector<vector<Point> > contours;
findContours(binaryImage, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
Normally it works well, but for some low-quality gray images it doesn't work, like below:
the complete code is here:
#include <opencv2/imgproc/imgproc.hpp>
#include<opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace std;
using namespace cv;
// We split the source picture into binaryImageSectionCount (here 8) pieces by height,
// threshold every piece,
// and finally combine them again into binaryImage.
const int binaryImageSectionCount = 8;

void GetBinaryImage(Mat &grayImage, Mat &binaryImage)
{
    // get every partial gray image's height
    int partImageHeight = grayImage.rows / binaryImageSectionCount;
    for (int i = 0; i < binaryImageSectionCount; i++)
    {
        Mat partialGrayImage;
        Mat partialBinaryImage;
        Rect partialRect;
        if (i != binaryImageSectionCount - 1)
        {
            // if it's not the last piece, the Rect's height should be partImageHeight
            partialRect = Rect(0, i * partImageHeight, grayImage.cols, partImageHeight);
        }
        else
        {
            // if it's the last piece, the Rect's height should be (grayImage.rows - i * partImageHeight)
            partialRect = Rect(0, i * partImageHeight, grayImage.cols, grayImage.rows - i * partImageHeight);
        }
        Mat partialResource = grayImage(partialRect);
        partialResource.copyTo(partialGrayImage);
        threshold(partialGrayImage, partialBinaryImage, 0, 255, THRESH_OTSU);

        // combine the partial binary images back into binaryImage
        partialBinaryImage.copyTo(binaryImage(partialRect));

        //stringstream resultStrm;
        //resultStrm << "partial_" << (i + 1);
        //string windowName = resultStrm.str();
        //imshow(windowName, partialBinaryImage);
        //waitKey(0);
    }
    imshow("result binary image.", binaryImage);
    waitKey(0);
    return;
}
int main(int argc, _TCHAR* argv[])
{
    // get image path
    string imgPath("C:\\Users\\Alfred\\Desktop\\gray.bmp");

    // read image
    Mat src = imread(imgPath);
    imshow("Source", src);
    //medianBlur(src, src, 7);
    cvtColor(src, src, COLOR_BGR2GRAY);
    imshow("gray", src);

    // do filter
    GaussianBlur(src, src, Size(3,3), 0);

    // binary image
    Mat threshold_output(src.rows, src.cols, CV_8UC1, Scalar(0, 0, 0));
    GetBinaryImage(src, threshold_output);
    imshow("binaryImage", threshold_output);

    // get biggest contour
    vector<vector<Point> > contours;
    findContours(threshold_output, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    int biggestContourIndex = 0;
    double maxContourArea = -1000;   // contourArea returns double
    for (int i = 0; i < contours.size(); i++)
    {
        if (contourArea(contours[i]) > maxContourArea)
        {
            maxContourArea = contourArea(contours[i]);
            biggestContourIndex = i;
        }
    }

    // show biggest contour
    Mat biggestContour(threshold_output.rows, threshold_output.cols, CV_8UC1, Scalar(0, 0, 0));
    drawContours(biggestContour, contours, biggestContourIndex, cv::Scalar(255, 255, 255), 2, 8, vector<Vec4i>(), 0, Point());
    imshow("maxContour", biggestContour);
    waitKey(0);
}
Could anybody please help me to get a better hand contour result?
Thanks!
I have a code snippet in Python; you can follow the same approach in C++:
import cv2

img = cv2.imread(x, 1)  # x is the image path
cv2.imshow("img",img)
imgray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
cv2.imshow("gray",imgray)
#Code for histogram equalization
equ = cv2.equalizeHist(imgray)
cv2.imshow('equ', equ)
#Code for contrast limited adaptive histogram equalization
#clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8,8))
#cl2 = clahe.apply(imgray)
#cv2.imshow('clahe2', cl2)
This is the result I obtained:
If your image is really bad, you could try the commented-out code involving contrast limited adaptive histogram equalization (CLAHE). A rough C++ sketch of the same idea follows below.
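For completeness, a rough C++ sketch of the same idea; the clip limit and tile size mirror the example values from the Python snippet above and are just starting points:
#include <opencv2/opencv.hpp>

// Sketch: equalize a single-channel gray image before thresholding,
// either globally (equalizeHist) or adaptively (CLAHE).
cv::Mat equalizeBeforeThreshold(const cv::Mat &gray)
{
    // Global histogram equalization
    cv::Mat equ;
    cv::equalizeHist(gray, equ);

    // Contrast limited adaptive histogram equalization (CLAHE)
    cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(3.0, cv::Size(8, 8));
    cv::Mat cl;
    clahe->apply(gray, cl);

    return cl; // or equ, whichever works better for your images
}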
I am processing video images and I would like to detect if the video contains any pixels of a certain range of red. Is this possible?
Here is the code I am adapting from a tutorial:
#ifdef __cplusplus
- (void)processImage:(Mat&)image;
{
    cv::Mat orig_image = image.clone();
    cv::medianBlur(image, image, 3);

    cv::Mat hsv_image;
    cv::cvtColor(image, hsv_image, cv::COLOR_BGR2HSV);

    cv::Mat lower_red_hue_range;
    cv::Mat upper_red_hue_range;
    cv::inRange(hsv_image, cv::Scalar(0, 100, 100), cv::Scalar(10, 255, 255), lower_red_hue_range);
    cv::inRange(hsv_image, cv::Scalar(160, 100, 100), cv::Scalar(179, 255, 255), upper_red_hue_range);

    // Interpret values here
}
Interpreting values
I would like to detect whether the results from the inRange operations are empty or not. In other words, I want to know if there are any pixels in the original image whose colour falls within the given lower and upper red ranges. How can I interpret the results?
First you need to OR the lower and upper mask:
Mat mask = lower_red_hue_range | upper_red_hue_range;
Then you can countNonZero to see if there are non zero pixels (i.e. you found something).
int number_of_non_zero_pixels = countNonZero(mask);
It could be better to first apply morphological erosion or opening to remove small (probably noisy) blobs:
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
morphologyEx(mask, mask, MORPH_OPEN, kernel); // or MORPH_ERODE
or find connected components (findContours, connectedComponentsWithStats) and prune / search according to some criteria:
vector<vector<Point>> contours;
findContours(mask.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);

double threshold_on_area = 100.0;
for (int i = 0; i < contours.size(); ++i)
{
    double area = contourArea(contours[i]);
    if (area < threshold_on_area)
    {
        // don't consider this contour
        continue;
    }
    else
    {
        // do something (e.g. draw a bounding box around the contour)
        Rect box = boundingRect(contours[i]);
        rectangle(hsv_image, box, Scalar(0, 255, 255));
    }
}
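As an alternative to contours, the connectedComponentsWithStats route mentioned above gives you blob areas and bounding boxes directly; a rough sketch (the area threshold is just an example value):
#include <opencv2/opencv.hpp>

// Sketch: prune small connected components by area and draw boxes around the rest.
void pruneByComponentArea(const cv::Mat &mask, cv::Mat &drawOn)
{
    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(mask, labels, stats, centroids);

    double threshold_on_area = 100.0;
    for (int label = 1; label < n; ++label) // label 0 is the background
    {
        if (stats.at<int>(label, cv::CC_STAT_AREA) < threshold_on_area)
            continue; // too small, probably noise

        cv::Rect box(stats.at<int>(label, cv::CC_STAT_LEFT),
                     stats.at<int>(label, cv::CC_STAT_TOP),
                     stats.at<int>(label, cv::CC_STAT_WIDTH),
                     stats.at<int>(label, cv::CC_STAT_HEIGHT));
        cv::rectangle(drawOn, box, cv::Scalar(0, 255, 255));
    }
}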
I have obtained a labeling with the connectedComponents function of OpenCV (C++), which looks like the picture below:
This is the output of the ccLabels variable, which is a cv::Mat of the same size as the original image.
So what I need to do is:
1. Count the occurrences of each number, and select the ones that occur more than N times, which are the "big" ones.
2. Segment the areas of the "big" components, and then count the number of 4's and 0's inside each area.
My ultimate aim is to count the number of holes in the image, so I aim to infer the number of holes from (number of 0's / number of 4's). This is probably not the prettiest way, but the images are very uniform in terms of size and illumination, so it will meet my needs.
But I'm new to OpenCV and I don't have much idea how to accomplish this task.
Here is what I've done so far:
cv::Mat1b outImg;
cv::threshold(grayImg, outImg, 150, 255, 0); // Thresholded -binary- image
cv::Mat ccLabels;
cv::connectedComponents(outImg, ccLabels); // Each non-zero pixel is labeled with its connected component ID

// write the labels to file:
std::ofstream myfile;
myfile.open("ccLabels.txt");
cv::Size s = ccLabels.size();
myfile << "Size: " << s.height << " , " << s.width << "\n";
for (int r1 = 0; r1 < s.height; r1++) {
    for (int c1 = 0; c1 < s.width; c1++) {   // iterate over columns (width), not height
        myfile << ccLabels.at<int>(r1, c1);
    }
    myfile << "\n";
}
myfile.close();
Since I know how to iterate inside the matrix, counting the numbers should be OK, but first I have to separate (eliminate / ignore) the "background" pixels, which are the 0's outside the connected components. Then counting should be easy.
How can I segment these "big" components? Maybe by obtaining a mask and only considering pixels where mask(x, y) = 1?
Thanks for any help !
Edit
This is the thresholded image:
And this is what I get after Canny edge detection :
This is the actual image (thresholded) :
Here is a simple procedure to find the number on the dice, starting from your thresholded image:
find external contours
for each contour
optionally discard small blobs
draw the filled mask
use AND and XOR to isolate internal holes
find contours, again
count contours
Result:
Number: 5
Number: 2
Image:
Code:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

using namespace std;
using namespace cv;

int main(void)
{
    // Grayscale image
    Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);

    // Minimum area of the contour
    double minContourArea = 10;

    // Prepare output
    Mat3b result;
    cvtColor(img, result, COLOR_GRAY2BGR);

    // Find contours
    vector<vector<Point>> contours;
    findContours(img.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);

    for (int i = 0; i < contours.size(); ++i)
    {
        // Check area
        if (contourArea(contours[i]) < minContourArea) continue;

        // Black mask
        Mat1b mask(img.rows, img.cols, uchar(0));

        // Draw filled contour
        drawContours(mask, contours, i, Scalar(255), CV_FILLED);

        // Keep only the holes inside this contour (AND with the image, then XOR with the mask)
        mask = (mask & img) ^ mask;

        vector<vector<Point>> cntrs;
        findContours(mask, cntrs, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
        cout << "Number: " << cntrs.size() << endl;

        // Just for showing results
        drawContours(result, cntrs, -1, Scalar(0, 0, 255), CV_FILLED);
    }

    imshow("Result", result);
    waitKey();

    return 0;
}
The easier way is the findContours method. Find the inner contours and calculate their areas (since the inner contours will be holes), then process this information accordingly. A minimal sketch of this idea is shown below.
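For illustration, a minimal sketch of that inner-contour idea: with RETR_CCOMP, contours that have a parent in the hierarchy are hole boundaries, so counting them (or summing their areas) gives you the holes. The function name and the binary input are assumptions:
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>

// Sketch: count holes as inner contours (entries with a valid parent in the hierarchy).
int countHoles(const cv::Mat &binary)
{
    std::vector<std::vector<cv::Point>> contours;
    std::vector<cv::Vec4i> hierarchy;
    cv::findContours(binary.clone(), contours, hierarchy,
                     cv::RETR_CCOMP, cv::CHAIN_APPROX_SIMPLE);

    int holes = 0;
    for (size_t i = 0; i < contours.size(); ++i)
    {
        if (hierarchy[i][3] != -1) // has a parent -> inner contour, i.e. a hole
        {
            ++holes;
            std::cout << "hole area: " << cv::contourArea(contours[i]) << std::endl;
        }
    }
    return holes;
}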
To solve your first problem, suppose you have a set of values in a vector called values. Count the occurrences of each number that has appeared:
int m = 0;
for (int n = 0; n < 256; n++)
{
    int c = 0;
    for (int q = 0; q < values.size(); q++)
    {
        if (n == values[q])
        {
            c++;
            m++;
        }
    }
    cout << n << " = " << c << endl;
}
cout << "Total number of elements " << m << endl;
To solve your second problem, find the largest contour in the image using findContours, draw a bounding rectangle around it, and then crop it (a sketch is below). Again use the above code to count the pixel values "4" and "0". You can find a related link here: https://stackoverflow.com/a/32998275/3853072
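In case it helps, a rough sketch of that second step (largest contour, bounding rectangle, crop); the function name and the binary input are assumptions:
#include <vector>
#include <opencv2/opencv.hpp>

// Sketch: find the largest external contour, take its bounding rect, and crop it.
cv::Mat cropLargestContour(const cv::Mat &binary)
{
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    int largest = -1;
    double maxArea = 0.0;
    for (size_t i = 0; i < contours.size(); ++i)
    {
        double area = cv::contourArea(contours[i]);
        if (area > maxArea)
        {
            maxArea = area;
            largest = static_cast<int>(i);
        }
    }
    if (largest < 0)
        return cv::Mat(); // nothing found

    cv::Rect box = cv::boundingRect(contours[largest]);
    return binary(box).clone(); // cropped region containing the largest blob
}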
Say I have the following binary image created from the output of cv::watershed():
Now I want to find and fill the contours, so I can separate the corresponding objects from the background in the original image (that was segmented by the watershed function).
To segment the image and find the contours I use the code below:
cv::Mat bgr = cv::imread("test.png");

// Some function that provides the rough outline for the segmented regions.
cv::Mat markers = find_markers(bgr);
cv::watershed(bgr, markers);

cv::Mat_<bool> boundaries(bgr.size());
for (int i = 0; i < bgr.rows; i++) {
    for (int j = 0; j < bgr.cols; j++) {
        boundaries.at<bool>(i, j) = (markers.at<int>(i, j) == -1);
    }
}

std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(
    boundaries, contours, hierarchy,
    CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE
);
So far so good. However, if I pass the contours acquired above to cv::drawContours() as below:
cv::Mat regions(bgr.size(), CV_32S);
cv::drawContours(
    regions, contours, -1, cv::Scalar::all(255),
    CV_FILLED, 8, hierarchy, INT_MAX
);
This is what I get:
The leftmost contour was left open by cv::findContours(), and as a result it is not filled by cv::drawContours().
Now I know this is a consequence of cv::findContours() clipping off the 1-pixel border around the image (as mentioned in the documentation), but what to do then? It seems an awful waste to discard a contour just because it happened to touch the image's border. And anyway, how can I even find which contour(s) fall into this category? cv::isContourConvex() is not a solution in this case; a region can be concave but "closed" and thus not have this problem.
Edit: About the suggestion to duplicate the pixels from the borders. The problem is that my marking function is also painting all pixels in the "background", i.e. those regions I'm sure aren't part of any object:
This results in a boundary being drawn around the output. If I somehow prevent cv::findContours() from clipping off that boundary:
The boundary for the background gets merged with that leftmost object:
Which results in a nice white-filled box.
Solution number 1: use image extended by one pixel in each direction:
Mat extended(bgr.size()+Size(2,2), bgr.type());
Mat markers = extended(Rect(1, 1, bgr.cols, bgr.rows));
// all your calculation part
std::vector<std::vector<Point> > contours;
findContours(boundaries, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
Mat regions(bgr.size(), CV_8U);
drawContours(regions, contours, -1, Scalar(255), CV_FILLED, 8, Mat(), INT_MAX, Point(-1,-1));
Note that contours were extracted from extended image, i.e. their x and y values are bigger by 1 from what they should be. This is why I use drawContours with (-1,-1) pixel offset.
Solution number 2: add white pixels from boundary of image to the neighbor row/column:
bitwise_or(boundaries.row(0), boundaries.row(1), boundaries.row(1));
bitwise_or(boundaries.col(0), boundaries.col(1), boundaries.col(1));
bitwise_or(boundaries.row(bgr.rows - 1), boundaries.row(bgr.rows - 2), boundaries.row(bgr.rows - 2));
bitwise_or(boundaries.col(bgr.cols - 1), boundaries.col(bgr.cols - 2), boundaries.col(bgr.cols - 2));
Both solutions are half-dirty workarounds, but this is all I could think of.
Following Burdinov's suggestions I came up with the code below, which correctly fills all extracted regions while ignoring the all-enclosing boundary:
cv::Mat fill_regions(const cv::Mat &bgr, const cv::Mat &prospective) {
    static cv::Scalar WHITE = cv::Scalar::all(255);
    int rows = bgr.rows;
    int cols = bgr.cols;

    // For the given prospective markers, finds
    // object boundaries on the given BGR image.
    cv::Mat markers = prospective.clone();
    cv::watershed(bgr, markers);

    // Copies the boundaries of the objects segmented by cv::watershed().
    // Ensures there is a minimum distance of 1 pixel between boundary
    // pixels and the image border.
    cv::Mat borders(rows + 2, cols + 2, CV_8U, cv::Scalar(0));
    for (int i = 0; i < rows; i++) {
        uchar *u = borders.ptr<uchar>(i + 1) + 1;
        int *v = markers.ptr<int>(i);
        for (int j = 0; j < cols; j++, u++, v++) {
            *u = (*v == -1);
        }
    }

    // Calculates contour vectors for the boundaries extracted above.
    std::vector<std::vector<cv::Point> > contours;
    std::vector<cv::Vec4i> hierarchy;
    cv::findContours(
        borders, contours, hierarchy,
        CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE
    );

    int area = bgr.size().area();

    cv::Mat regions(borders.size(), CV_32S, cv::Scalar(0));
    for (int i = 0, n = contours.size(); i < n; i++) {
        // Ignores contours for which the bounding rectangle's
        // area equals the area of the original image.
        std::vector<cv::Point> &contour = contours[i];
        if (cv::boundingRect(contour).area() == area) {
            continue;
        }

        // Draws the selected contour.
        cv::drawContours(
            regions, contours, i, WHITE,
            CV_FILLED, 8, hierarchy, INT_MAX
        );
    }

    // Removes the 1 pixel-thick border added when the boundaries
    // were first copied from the output of cv::watershed().
    return regions(cv::Rect(1, 1, cols, rows));
}
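For completeness, a usage sketch; find_markers is the placeholder marker function from the question, not a real OpenCV call:
cv::Mat bgr = cv::imread("test.png");
cv::Mat prospective = find_markers(bgr); // hypothetical marker generator
cv::Mat regions = fill_regions(bgr, prospective);

cv::Mat display;
regions.convertTo(display, CV_8U); // regions is CV_32S; convert for display
cv::imshow("Filled regions", display);
cv::waitKey();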