Drawing Polygons in OpenCV? - c++

What am I doing wrong here?
vector <vector<Point> > contourElement;
for (int counter = 0; counter < contours -> size (); counter ++)
{
contourElement.push_back (contours -> at (counter));
const Point *elementPoints [1] = {contourElement.at (0)};
int numberOfPoints [] = {contourElement.at (0).size ()};
fillPoly (contourMask, elementPoints, numberOfPoints, 1, Scalar (0, 0, 0), 8);
I keep getting an error on the const Point part. The compiler says
error: cannot convert 'std::vector<cv::Point_<int>, std::allocator<cv::Point_<int> > >' to 'const cv::Point*' in initialization
What am I doing wrong? (PS: Obviously ignore the missing bracket at the end of the for loop due to this being only part of my code)

Just for the record (and because the OpenCV documentation is very sparse here), a more reduced snippet using the C++ API:
std::vector<cv::Point> fillContSingle;
[...]
//add all points of the contour to the vector
fillContSingle.push_back(cv::Point(x_coord,y_coord));
[...]
std::vector<std::vector<cv::Point> > fillContAll;
//fill the single contour
//(one could add multiple other similar contours to the vector)
fillContAll.push_back(fillContSingle);
cv::fillPoly( image, fillContAll, cv::Scalar(128));

Let's analyse the offending line:
const Point *elementPoints [1] = { contourElement.at(0) };
You declared contourElement as vector <vector<Point> >, which means that contourElement.at(0) returns a vector<Point> and not a const cv::Point*. So that's the first error.
In the end, you need to do something like:
vector<Point> tmp = contourElement.at(0);
const Point* elementPoints[1] = { &tmp[0] };
int numberOfPoints = (int)tmp.size();
Later, call it as:
fillPoly (contourMask, elementPoints, &numberOfPoints, 1, Scalar (0, 0, 0), 8);
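Putting the pieces together, a rough sketch of the corrected loop could look like the following (it keeps the question's variable names; the mask and the contours pointer are assumed to be set up already):
// Sketch only: 'contours' points to a std::vector<std::vector<cv::Point> >,
// 'contourMask' is an already allocated cv::Mat of the right size.
for (size_t counter = 0; counter < contours->size(); ++counter)
{
    const std::vector<cv::Point>& element = contours->at(counter);
    const cv::Point* elementPoints[1] = { &element[0] };
    int numberOfPoints[] = { static_cast<int>(element.size()) };
    cv::fillPoly(contourMask, elementPoints, numberOfPoints, 1, cv::Scalar(0, 0, 0), 8);
}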

contourElement is a vector of vector<Point>, not of Point :)
so instead of:
const Point *elementPoints
put
const vector<Point> *elementPoints

Some people may arrive here due to an apparent bug in samples/cpp/create_mask.cpp from OpenCV. Considering the explanation above, I edited the "if (event == EVENT_RBUTTONUP)" branch to:
...
mask = Mat::zeros(src.size(), CV_8UC1);
vector<Point> tmp = pts;
const Point* elementPoints[1] = { &tmp[0] };
int npts = (int) pts.size();
cout << "elementsPoints=" << elementPoints << endl;
fillPoly(mask, elementPoints, &npts, 1, Scalar(255, 255, 255), 8);
bitwise_and(src, src, final, mask);
...
Hope it may help someone.

Adjust a detected 2D map to a reference 2D map

I have a map with some reference positions that correspond to the center (small cross) of some objects like this:
I take pictures to find my objects but in the pictures I have some noise so I can't always find all of the objects, it can be something like this:
From the few found positions I need to know where in the picture the other, not-found objects should be. I've been reading about this for the last couple of days and experimenting, but I can't find a proper way of doing it. In some examples they start by calculating the centers of mass and translating them together, then rotating; other examples use least-squares minimization and start with a rotation. I can't use OpenCV or any other APIs, just plain C++. I can use the Eigen library if that helps. Can anyone give me some pointers on this?
EDIT:
I've solved the correspondence between points, the picture is never very different from the reference so for each found position I can search for its corresponding reference. In brief, I have one 2D matrix with reference points and another 2D matrix with found points. In the found matrix of points, the not found points are saved as NaN just to keep the same matrix size, the NaN points are not used in the calculations.
Since you have already matched the points to one another, finding the transform is straight forward:
Eigen::Affine2d findAffine(Eigen::Matrix2Xd const& refCloud, Eigen::Matrix2Xd const& targetCloud)
{
// get translation
auto refCom = centerOfMass(refCloud);
auto refAtOrigin = refCloud.colwise() - refCom;
auto targetCom = centerOfMass(targetCloud);
auto targetAtOrigin = targetCloud.colwise() - targetCom;
// get scale
auto scale = targetAtOrigin.rowwise().norm().sum() / refAtOrigin.rowwise().norm().sum();
// get rotation
auto covMat = refAtOrigin * targetAtOrigin.transpose();
auto svd = covMat.jacobiSvd(Eigen::ComputeFullU | Eigen::ComputeFullV);
auto rot = svd.matrixV() * svd.matrixU().transpose();
// combine the transformations
Eigen::Affine2d trans = Eigen::Affine2d::Identity();
trans.translate(targetCom).scale(scale).rotate(rot).translate(-refCom);
return trans;
}
refCloud is your reference point set and targetCloud is the set of points you have found in your image. It is important that the clouds match index-wise, so refCloud[n] must be the point corresponding to targetCloud[n]. This means that you have to remove all NaNs from your matrix and cherry-pick the correspondences in your reference point set.
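As a rough sketch of that cherry-picking step (this assumes missing detections are stored as NaN coordinates in the found matrix, as described in the question; the function name is made up for illustration):
#include <Eigen/Dense>
#include <cmath>

// Drop every column of 'found' that contains a NaN, together with the
// corresponding reference column, so both clouds stay index-matched.
void buildMatchedClouds(const Eigen::Matrix2Xd& reference,
                        const Eigen::Matrix2Xd& found,
                        Eigen::Matrix2Xd& refCloud,
                        Eigen::Matrix2Xd& targetCloud)
{
    refCloud.resize(2, found.cols());
    targetCloud.resize(2, found.cols());
    Eigen::Index kept = 0;
    for (Eigen::Index c = 0; c < found.cols(); ++c)
    {
        if (std::isnan(found(0, c)) || std::isnan(found(1, c)))
            continue; // this object was not detected in the picture
        refCloud.col(kept) = reference.col(c);
        targetCloud.col(kept) = found.col(c);
        ++kept;
    }
    refCloud.conservativeResize(2, kept);
    targetCloud.conservativeResize(2, kept);
}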
Here is a full example. I'm using OpenCV to draw the stuff:
#include <Eigen/Dense>
#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>
using Point = Eigen::Vector2d;
template <typename TMatrix>
Point centerOfMass(TMatrix const& points)
{
return points.rowwise().sum() / points.cols();
}
Eigen::Affine2d findAffine(Eigen::Matrix2Xd const& refCloud, Eigen::Matrix2Xd const& targetCloud)
{
// get translation
auto refCom = centerOfMass(refCloud);
auto refAtOrigin = refCloud.colwise() - refCom;
auto targetCom = centerOfMass(targetCloud);
auto targetAtOrigin = targetCloud.colwise() - targetCom;
// get scale
auto scale = targetAtOrigin.rowwise().norm().sum() / refAtOrigin.rowwise().norm().sum();
// get rotation
auto covMat = refAtOrigin * targetAtOrigin.transpose();
auto svd = covMat.jacobiSvd(Eigen::ComputeFullU | Eigen::ComputeFullV);
auto rot = svd.matrixV() * svd.matrixU().transpose();
// combine the transformations
Eigen::Affine2d trans = Eigen::Affine2d::Identity();
trans.translate(targetCom).scale(scale).rotate(rot).translate(-refCom);
return trans;
}
void drawCloud(cv::Mat& img, Eigen::Matrix2Xd const& cloud, Point const& origin, Point const& scale, cv::Scalar const& color, int thickness = cv::FILLED)
{
for (int c = 0; c < cloud.cols(); c++)
{
auto p = origin + cloud.col(c).cwiseProduct(scale);
cv::circle(img, {int(p.x()), int(p.y())}, 5, color, thickness, cv::LINE_AA);
}
}
int main()
{
// generate sample reference
std::vector<Point> points = {{4, 9}, {4, 4}, {6, 9}, {6, 4}, {8, 9}, {8, 4}, {10, 9}, {10, 4}, {12, 9}, {12, 4}};
Eigen::Matrix2Xd fullRefCloud(2, points.size());
for (int i = 0; i < points.size(); i++)
fullRefCloud.col(i) = points[i];
// generate sample target
Eigen::Matrix2Xd refCloud = fullRefCloud.leftCols(fullRefCloud.cols() * 0.6);
Eigen::Affine2d refTransformation = Eigen::Affine2d::Identity();
refTransformation.translate(Point(8, -4)).rotate(4.3).translate(-centerOfMass(refCloud)).scale(1.5);
Eigen::Matrix2Xd targetCloud = refTransformation * refCloud;
// find the transformation
auto transform = findAffine(refCloud, targetCloud);
std::cout << "Original: \n" << refTransformation.matrix() << "\n\nComputed: \n" << transform.matrix() << "\n";
// apply the computed transformation
Eigen::Matrix2Xd queryCloud = fullRefCloud.rightCols(fullRefCloud.cols() - refCloud.cols());
queryCloud = transform * queryCloud;
// draw it
Point scale = {15, 15}, origin = {100, 300};
cv::Mat img(600, 600, CV_8UC3);
cv::line(img, {0, int(origin.y())}, {800, int(origin.y())}, {});
cv::line(img, {int(origin.x()), 0}, {int(origin.x()), 800}, {});
drawCloud(img, refCloud, origin, scale, {0, 255, 0});
drawCloud(img, fullRefCloud, origin, scale, {255, 0, 0}, 1);
drawCloud(img, targetCloud, origin, scale, {0, 0, 255});
drawCloud(img, queryCloud, origin, scale, {255, 0, 255}, 1);
cv::flip(img, img, 0);
cv::imshow("img", img);
cv::waitKey();
return 0;
}
I managed to make it work with the code from here:
https://github.com/oleg-alexandrov/projects/blob/master/eigen/Kabsch.cpp
I'm calling the Find3DAffineTransform function and passing it my 2D maps; since this function expects 3D maps, I've made all the z coordinates = 0 and it works. If I have some time I'll try to adapt it to 2D.
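For reference, the z = 0 padding is just adding a zero third row; a minimal sketch of that step, assuming the 2D clouds are stored as 2 x N Eigen matrices:
#include <Eigen/Dense>

// Pad a 2 x N cloud to 3 x N with z = 0 so it can be passed to a routine
// that expects 3D point sets (such as Find3DAffineTransform from Kabsch.cpp).
Eigen::Matrix3Xd padTo3d(const Eigen::Matrix2Xd& cloud2d)
{
    Eigen::Matrix3Xd cloud3d = Eigen::Matrix3Xd::Zero(3, cloud2d.cols());
    cloud3d.topRows<2>() = cloud2d;
    return cloud3d;
}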
Meanwhile, a fellow programmer (Regis :-) also found this, which should work:
https://eigen.tuxfamily.org/dox/group__Geometry__Module.html#gab3f5a82a24490b936f8694cf8fef8e60
It's the umeyama() function, which returns the transformation between two point sets. It's part of the Eigen library. I didn't have time to test this.
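Untested, but based on the Eigen documentation a call could look roughly like this (refCloud and targetCloud being 2 x N, index-matched clouds as above):
#include <Eigen/Dense>
#include <Eigen/Geometry>

// umeyama() returns a 3x3 homogeneous matrix mapping refCloud onto targetCloud.
Eigen::Affine2d findTransformUmeyama(const Eigen::Matrix2Xd& refCloud,
                                     const Eigen::Matrix2Xd& targetCloud)
{
    Eigen::Matrix3d T = Eigen::umeyama(refCloud, targetCloud, /*with_scaling=*/true);
    return Eigen::Affine2d(T);
}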

Problems with forloop and Points

So I'm trying to clean up my code, as I had too many Points written out. I came up with the idea to use a for loop instead, but unfortunately I can't seem to get it to work.
I've changed my points into CvPoint arrays and made a for loop, but I can't seem to get it to work.
Anyone know how I can make this work? My error is "cannot convert CvPoint to int".
My functions :
bool FindWhiteLine(Vec3b white)
{
bool color = false;
uchar blue = white.val[0];
uchar green = white.val[1];
uchar red = white.val[2];
if(blue == 255 && green == 255 && red == 255)
{
color = true;
}
return color;
}
// extends the line until a white line is found
CvPoint DrawingLines(Mat img , CvPoint point,bool right)
{
int cols = img.cols;
Vec3b drawingLine = img.at<Vec3b>(point); // the color at the current position
while(point.x != cols){
if(right == true)
{
point.x = point.x +1; // extends the line to the right
drawingLine = img.at<cv::Vec3b>(point);
if(FindWhiteLine(drawingLine)){ // quit in case a white line is found
break;
}
}
else if(right == false)
{
point.x = point.x -1; // extends the line to the left
drawingLine = img.at<cv::Vec3b>(point);
if(FindWhiteLine(drawingLine)){ // quit in case a white line is found
break;
}
}
}
return point;
}
My main :
void LaneDetector::processImage() {
//http://docs.opencv.org/doc/user_guide/ug_mat.html Handeling images
Mat matImg(m_image);
Mat gray; // for converting to gray
cvtColor(matImg, gray, CV_BGR2GRAY); //Let's make the image gray
Mat canny; //Canny for detecting edges ,http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/canny_detector/canny_detector.html
Canny(gray, canny, 50, 170, 3); //inputing Canny limits
cvtColor(canny, matImg, CV_GRAY2BGR); //Converts back from gray
// get matrix size http://docs.opencv.org/modules/core/doc/basic_structures.html
int rows = matImg.rows;
int cols = matImg.cols;
//Points
Point centerPoint; // Old way
Point centerPointEnd;
CvPoint startPos[4] , endXRight[4] , endxLeft[4]; // new way I tried
for (int i = 0; i< 4; i ++) {
startPos[i].x = cols/2;
endXRight[i].x = DrawingLines(matImg,endXRight[i],true); // error here
endxLeft[i].x = DrawingLines(matImg,endxLeft[i],false);
}
if (m_debug) {
line(matImg, centerPoint,centerPointEnd,cvScalar(0, 0, 255),2, 8);
for (i = 0; i< 4; i ++) {
line(matImg, startPos[i],endXRight[i],cvScalar(0, 0, 255),2, 8);
line(matImg, startPos[i],endXLeft[i],cvScalar(0, 0, 255),2, 8);
}
Error code :
/home/nicho/2015-mini-smart-vehicles/project-template/sources/OpenDaVINCI-msv/apps/lanedetector/src/LaneDetector.cpp:176:25: error: cannot convert ‘CvPoint’ to ‘int’ in assignment
endXRight[i].x = DrawingLines(matImg,endXRight[i],true);
/home/nicho/2015-mini-smart-vehicles/project-template/sources/OpenDaVINCI-msv/apps/lanedetector/src/LaneDetector.cpp:177:24: error: cannot convert ‘CvPoint’ to ‘int’ in assignment
endxLeft[i].x = DrawingLines(matImg,endxLeft[i],false);
The error couldn't be much clearer. The function returns a value of type CvPoint, and you try to assign it to a variable of type int. That can't be done because you can't convert CvPoint to int.
It looks like you want to assign to the point itself, not one of its co-ordinates:
endXRight[i] = DrawingLines(matImg,endXRight[i],true);
^ remove .x
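A minimal sketch of how the loop might then look (the starting y coordinates are an assumption, since the original snippet never initializes them, and the start point is passed to DrawingLines rather than the still-uninitialized end point):
for (int i = 0; i < 4; i++) {
    startPos[i] = cvPoint(cols / 2, rows / 2 + i * 10); // assumed starting rows
    endXRight[i] = DrawingLines(matImg, startPos[i], true);
    endxLeft[i]  = DrawingLines(matImg, startPos[i], false);
}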

How to find euclidean distance between keypoints of a single image in opencv

I want to get a distance vector d for each key point in the image. The distance vector should consist of distances from that keypoint to all other keypoints in that image.
Note: Keypoints are found using SIFT.
I'm pretty new to OpenCV. Is there a library function in C++ that can make my task easy?
If you aren't interested in the position distance but rather the descriptor distance, you can use this:
cv::Mat SelfDescriptorDistances(cv::Mat descr)
{
cv::Mat selfDistances = cv::Mat::zeros(descr.rows,descr.rows, CV_64FC1);
for(int keyptNr = 0; keyptNr < descr.rows; ++keyptNr)
{
for(int keyptNr2 = 0; keyptNr2 < descr.rows; ++keyptNr2)
{
double euclideanDistance = 0;
for(int descrDim = 0; descrDim < descr.cols; ++descrDim)
{
double tmp = descr.at<float>(keyptNr,descrDim) - descr.at<float>(keyptNr2, descrDim);
euclideanDistance += tmp*tmp;
}
euclideanDistance = sqrt(euclideanDistance);
selfDistances.at<double>(keyptNr, keyptNr2) = euclideanDistance;
}
}
return selfDistances;
}
which will give you an N x N matrix (N = number of keypoints) where element (i, j) is the euclidean distance between the descriptors of keypoint i and keypoint j.
With this input image I get these outputs:
an image where the keypoints that have a distance of less than 0.05 are marked, and
an image that corresponds to the matrix, where white pixels are dist < 0.05.
REMARK: you can optimize many things in the computation of the matrix, since distances are symmetric!
UPDATE:
Here is another way to do it:
From your chat I know that you would need 13 GB of memory to hold that distance information for 41381 keypoints (which you tried). If you instead want only the N best matches, try this code:
// choose double here if you are worried about precision!
#define intermediatePrecision float
//#define intermediatePrecision double
//
void NBestMatches(cv::Mat descriptors1, cv::Mat descriptors2, unsigned int n, std::vector<std::vector<float> > & distances, std::vector<std::vector<int> > & indices)
{
// TODO: check whether descriptor dimensions and types are the same for both!
// clear vector
// get enough space to create n best matches
distances.clear();
distances.resize(descriptors1.rows);
indices.clear();
indices.resize(descriptors1.rows);
for(int i=0; i<descriptors1.rows; ++i)
{
// references to current elements:
std::vector<float> & cDistances = distances.at(i);
std::vector<int> & cIndices = indices.at(i);
// initialize:
cDistances.resize(n,FLT_MAX);
cIndices.resize(n,-1); // for -1 = "no match found"
// now find the n best matches for descriptor i:
for(int j=0; j<descriptors2.rows; ++j)
{
intermediatePrecision euclideanDistance = 0;
for( int dim = 0; dim < descriptors1.cols; ++dim)
{
intermediatePrecision tmp = descriptors1.at<float>(i,dim) - descriptors2.at<float>(j, dim);
euclideanDistance += tmp*tmp;
}
euclideanDistance = sqrt(euclideanDistance);
float tmpCurrentDist = euclideanDistance;
int tmpCurrentIndex = j;
// update current best n matches:
for(unsigned int k=0; k<n; ++k)
{
if(tmpCurrentDist < cDistances.at(k))
{
int tmpI2 = cIndices.at(k);
float tmpD2 = cDistances.at(k);
// update current k-th best match
cDistances.at(k) = tmpCurrentDist;
cIndices.at(k) = tmpCurrentIndex;
// previous k-th best should be better than k+1-th best //TODO: a simple memcpy would be faster I guess.
tmpCurrentDist = tmpD2;
tmpCurrentIndex =tmpI2;
}
}
}
}
}
It computes the N best matches for each keypoint of the first descriptor set against the second descriptor set. So if you want to do that within a single set of keypoints, set descriptors1 = descriptors2 in your call, as shown below. Remember: the function doesn't know that both descriptor sets are identical, so the first best match (or at least one of them) will always be the keypoint itself with distance 0! Keep that in mind when using the results!
Here's sample code to generate an image similar to the one above:
int main()
{
cv::Mat input = cv::imread("../inputData/MultiLena.png");
cv::Mat gray;
cv::cvtColor(input, gray, CV_BGR2GRAY);
cv::SiftFeatureDetector detector( 7500 );
cv::SiftDescriptorExtractor describer;
std::vector<cv::KeyPoint> keypoints;
detector.detect( gray, keypoints );
// draw keypoints
cv::drawKeypoints(input,keypoints,input);
cv::Mat descriptors;
describer.compute(gray, keypoints, descriptors);
int n = 4;
std::vector<std::vector<float> > dists;
std::vector<std::vector<int> > indices;
// compute the N best matches between the descriptors and themselves.
// REMIND: ONE best match will always be the keypoint itself in this setting!
NBestMatches(descriptors, descriptors, n, dists, indices);
for(unsigned int i=0; i<dists.size(); ++i)
{
for(unsigned int j=0; j<dists.at(i).size(); ++j)
{
if(dists.at(i).at(j) < 0.05)
cv::line(input, keypoints[i].pt, keypoints[indices.at(i).at(j)].pt, cv::Scalar(255,255,255) );
}
}
cv::imshow("input", input);
cv::waitKey(0);
return 0;
}
Create a 2D vector (the size of which would be N x N) -->
std::vector< std::vector< float > > item;
Create 2 for loops to go up to the number of keypoints (N) you have.
Calculate distances as suggested by a-Jays
Point diff = kp1.pt - kp2.pt;
float dist = std::sqrt( diff.x * diff.x + diff.y * diff.y );
Add this to the vector using push_back for each keypoint --> N times.
The keypoint class has a member called pt which in turn has x and y [the (x,y) location of the point] as its own members.
Given two keypoints kp1 and kp2, it's then easy to calculate the euclidean distance as:
Point diff = kp1.pt - kp2.pt;
float dist = std::sqrt( diff.x * diff.x + diff.y * diff.y );
In your case, it is going to be a double loop iterating over all the keypoints.
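Put together, a sketch of that double loop (keypoints as produced by the SIFT detector; the function name is only illustrative):
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Returns an N x N matrix where element (i, j) is the euclidean distance
// between the image positions of keypoint i and keypoint j.
cv::Mat keypointPositionDistances(const std::vector<cv::KeyPoint>& keypoints)
{
    const int n = (int)keypoints.size();
    cv::Mat dists = cv::Mat::zeros(n, n, CV_32FC1);
    for (int i = 0; i < n; ++i)
    {
        for (int j = i + 1; j < n; ++j) // distances are symmetric
        {
            cv::Point2f diff = keypoints[i].pt - keypoints[j].pt;
            float d = std::sqrt(diff.x * diff.x + diff.y * diff.y);
            dists.at<float>(i, j) = d;
            dists.at<float>(j, i) = d;
        }
    }
    return dists;
}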

OpenCV - Finding contour end points?

I'm looking for a way to get the end points of a thin contour extracted from a Canny edge detection. I was wondering if this is possible with some built-in method. I would plan on walking through the contour to find the two points with the largest distance from each other (moving only along the contour), but it would be much easier if a way already exists. I see that cvArcLength exists to get the perimeter of a contour, so it's possible there would be a built-in way to achieve this. Are the points within a contour ordered in such a way that some information can be known about the end points? Any other ideas? Thank you much!
I was looking for the same function. I see HoughLinesP has end points, since lines are used, not contours. I am using findContours, however, so I found it helpful to order the points in the contour as below and then take the first and last points as the start and end points.
struct contoursCmpY {
bool operator()(const Point &a,const Point &b) const {
if (a.y == b.y)
return a.x < b.x;
return a.y < b.y;
}
} contoursCmpY_;
vector<Point> cont;
cont.push_back(Point(194,72));
cont.push_back(Point(253,14));
cont.push_back(Point(293,76));
cont.push_back(Point(245,125));
std::sort(cont.begin(),cont.end(), contoursCmpY_);
int size = cont.size();
printf("start Point x=%d,y=%d end Point x=%d,y=%d", cont[0].x, cont[0].y, cont[size].x, cont[size].y);
As you say, you can always step through the contour points.
The following finds the two points, ptLeft and ptRight, with the greatest separation along x, but could be modified as needed.
CvPoint ptLeft = cvPoint(image->width, image->height);
CvPoint ptRight = cvPoint(0, 0);
CvSlice slice = cvSlice(0, CV_WHOLE_SEQ_END_INDEX);
CvSeqReader reader;
cvStartReadSeq(contour, &reader, 0);
cvSetSeqReaderPos(&reader, slice.start_index);
int count = cvSliceLength(slice, contour);
for(int i = 0; i < count; i++)
{
reader.prev_elem = reader.ptr;
CV_NEXT_SEQ_ELEM(contour->elem_size, reader);
CvPoint* pt = (CvPoint*)reader.ptr;
if( pt->x < ptLeft.x )
ptLeft = *pt;
if( pt->x > ptRight.x )
ptRight = *pt;
}
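For the C++ API, where a contour is simply a std::vector<cv::Point>, a rough equivalent of the scan above could be the following sketch (it assumes a non-empty contour):
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <utility>
#include <vector>

// Returns the leftmost and rightmost contour points (greatest separation along x).
std::pair<cv::Point, cv::Point> contourExtremesX(const std::vector<cv::Point>& contour)
{
    auto cmpX = [](const cv::Point& a, const cv::Point& b) { return a.x < b.x; };
    auto mm = std::minmax_element(contour.begin(), contour.end(), cmpX);
    return std::make_pair(*mm.first, *mm.second);
}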
The solution based on neighbor distances check didn't work for me (Python + opencv 3.0.0-beta), because all contours I get seem to be folded on themselves. What would appear as "open" contours at first glance on an image are actually "closed" contours collapsed on themselves.
So I had to resort to look for "u-turns" in each contour's sequence, an example in Python:
import cv2
import numpy as np
def draw_closing_lines(img, contours):
for cont in contours:
v1 = (np.roll(cont, -2, axis=0) - cont)
v2 = (np.roll(cont, 2, axis=0) - cont)
dotprod = np.sum(v1 * v2, axis=2)
norm1 = np.sqrt(np.sum(v1 ** 2, axis=2))
norm2 = np.sqrt(np.sum(v2 ** 2, axis=2))
cosinus = (dotprod / norm1) / norm2
indexes = np.where(0.95 < cosinus)[0]
if len(indexes) == 1:
# only one u-turn found, mark in yellow
cv2.circle(img, tuple(cont[indexes[0], 0]), 3, (0, 255, 255))
elif len(indexes) == 2:
# two u-turns found, draw the closing line
cv2.line(img, tuple(cont[indexes[0], 0]), tuple(cont[indexes[1], 0]), (0, 0, 255))
else:
# too many u-turns, mark in red
for i in indexes:
cv2.circle(img, tuple(cont[i, 0]), 3, (0, 0, 255))
Not completely robust against polluting cusps and quite time-consuming, but that's a start. I'd be interested in other ideas, naturally :)

OpenCV 2 Centroid

I am trying to find the centroid of a contour but am having trouble implementing the example code in C++ (OpenCV 2.3.1). Can anyone help me out?
To find the centroid of a contour, you can use the method of moments, and the functions are implemented in OpenCV.
Check out the moments() function (central and spatial moments).
The code below is taken from the OpenCV 2.3 docs tutorial. Full code here.
/// Find contours
findContours( canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );
/// Get the moments
vector<Moments> mu(contours.size() );
for( int i = 0; i < contours.size(); i++ )
{ mu[i] = moments( contours[i], false ); }
/// Get the mass centers:
vector<Point2f> mc( contours.size() );
for( int i = 0; i < contours.size(); i++ )
{ mc[i] = Point2f( mu[i].m10/mu[i].m00 , mu[i].m01/mu[i].m00 ); }
Also check out this SOF question; although it is in Python, it would be useful. It finds all the parameters of a contour.
If you have the mask of the contour area, you can find the centroid location as follows:
cv::Point computeCentroid(const cv::Mat &mask) {
cv::Moments m = moments(mask, true);
cv::Point center(m.m10/m.m00, m.m01/m.m00);
return center;
}
This approach is useful when one has the mask but not the contour. In that case the above method is computationally more efficient than using cv::findContours(...) and then finding the mass center.
Here's the source
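If one does start from a contour, a small usage sketch could look like this (the drawContours rasterization step is an assumption, not part of the original answer; when the contour is already at hand, computing moments on the contour directly is usually preferable):
#include <opencv2/opencv.hpp>
#include <vector>

// Rasterize a single contour into a mask, then reuse computeCentroid() from above.
cv::Point contourCentroidViaMask(const std::vector<cv::Point>& contour, cv::Size imageSize)
{
    cv::Mat mask = cv::Mat::zeros(imageSize, CV_8UC1);
    std::vector<std::vector<cv::Point> > wrapped(1, contour);
    cv::drawContours(mask, wrapped, 0, cv::Scalar(255), -1); // -1 = filled
    return computeCentroid(mask);
}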
Given the contour points, and the formula from Wikipedia, the centroid can be efficiently computed like this:
template <typename T>
cv::Point_<T> computeCentroid(const std::vector<cv::Point_<T> >& in) {
if (in.size() > 2) {
T doubleArea = 0;
cv::Point_<T> p(0,0);
cv::Point_<T> p0 = in.back();
for (const cv::Point_<T>& p1 : in) {//C++11
T a = p0.x * p1.y - p0.y * p1.x; //cross product, (signed) double area of triangle of vertices (origin,p0,p1)
p += (p0 + p1) * a;
doubleArea += a;
p0 = p1;
}
if (doubleArea != 0)
return p * (1 / (3 * doubleArea) ); //Operator / does not exist for cv::Point
}
///If we get here,
///All points lies on one line, you can compute a fallback value,
///e.g. the average of the input vertices
[...]
}
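For reference, the formula from Wikipedia that this code implements, for vertices $(x_i, y_i)$ with indices taken modulo $N$, is:

$$A = \frac{1}{2}\sum_{i=0}^{N-1}\left(x_i y_{i+1} - x_{i+1} y_i\right), \qquad C_x = \frac{1}{6A}\sum_{i=0}^{N-1}(x_i + x_{i+1})(x_i y_{i+1} - x_{i+1} y_i), \qquad C_y = \frac{1}{6A}\sum_{i=0}^{N-1}(y_i + y_{i+1})(x_i y_{i+1} - x_{i+1} y_i)$$

In the code, a is one cross-product term, doubleArea accumulates $2A$, and dividing by 3 * doubleArea is the same as dividing by $6A$.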
Note: This formula works with vertices given in both clockwise and counterclockwise order. If the points have integer coordinates, it might be convenient to adapt the type of p and of the return value to Point2f or Point2d, and to add a cast to float or double to the denominator in the return statement.
If all you need is an approximation of the centroid, here are a couple of simple ways to do it:
// array_points is a std::vector<cv::Point>; centroid is a cv::Point declared elsewhere
double sumX = 0, sumY = 0;
size_t size = array_points.size();
if (size > 0) {
    for (const cv::Point& point : array_points) {
        sumX += point.x;
        sumY += point.y;
    }
    centroid.x = sumX / size;
    centroid.y = sumY / size;
}
Or with the help of OpenCV's boundingRect:
// approximate centroid via the bounding rectangle
cv::Rect bRect = cv::boundingRect(array_points);
centroid.x = bRect.x + (bRect.width / 2);
centroid.y = bRect.y + (bRect.height / 2);