value of Points modification - c++

I have two points, top_left and bottom_right. To increase the area covered by the rectangle drawn from these points, I add/subtract offsets to/from them.
top_left -= Point(WIDTH_ADD, HEIGHT_ADD);
bottom_right += Point(WIDTH_ADD, HEIGHT_ADD);
Now I need to check whether they surpass the boundary of current frame (captured from camera).
If they do, I need to check and modify their values.
if ( top_left.x < 0 ) top_left.x = 0;
if ( bottom_right.x > frame.cols ) bottom_right.x = frame.cols;
if ( top_left.y < 0 ) top_left.y = 0;
if( bottom_right.y > frame.rows ) bottom_right.y = frame.rows;
Is there any fancy way of doing this in OpenCV?

I don't know any, but even if there were, your code would probably be faster since you're skipping at least a function call to OpenCV.
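That said, if compactness matters more than speed, the same clamping can be expressed as a rectangle intersection with cv::Rect's & operator. A sketch (it assumes the rectangle still overlaps the frame, and reads the corners back afterwards):

// Clamp by intersecting with the frame rectangle, then read the corners back.
Rect roi(top_left, bottom_right);             // rectangle spanned by the two corners
roi &= Rect(0, 0, frame.cols, frame.rows);    // intersect with the frame bounds
top_left = roi.tl();                          // clamped top-left corner
bottom_right = roi.br();                      // clamped bottom-right corner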

Related

How to use mask in drawMatches function c++

I'm writing a program with OpenCV and a stereo camera. I want to know which detected point in the first camera corresponds to which detected point in the second camera. The thing is, I have several detector, extractor and matcher methods, and following the OpenCV example I have an algorithm that filters the matches and only draws the good ones, but in my case the min_dist parameter depends on my trackbar position.
This is the code of the OpenCV example: http://docs.opencv.org/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html#feature-flann-matcher
And these are the changes I made to adjust the minimum distance between matches.
//TrackBar position
dist_track = getTrackbarPos(nombreTrackbar, BUTTON_WINDOW);
cout << "Posicion de la barra: " << dist_track << endl;
good_matches.clear();
//Obtain good_matches
for (int i = 0; i < descriptors[0].rows; i++)
{
    if (matches[i].distance <= coef * dist_track)
    {
        good_matches.push_back(matches[i]);
    }
}
The main thing is that when the trackbar is at the beginning I get correct matches, but when the trackbar is at the end the matches I find aren't correct. In that case I find a lot of matches, but many of them are wrong.
Now I'm trying to get the matches right. I want to use a mask in the drawMatches function to force the detected second-camera points to lie near the epipolar line. Can someone tell me something about it?
Does someone know how to use the mask parameter to force the found matches to lie on the epipolar line?
Or how to create the mask parameter?
Thanks friends!
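For reference, one way such a mask could be built (a sketch only, not taken from a working program): keep a match when the second-camera keypoint lies near the epipolar line of its first-camera keypoint. The fundamental matrix F, the maxDist threshold and the query/train index convention are assumptions here; drawMatches accepts the result as its matchesMask argument.

#include <opencv2/core.hpp>
#include <vector>
#include <cmath>

std::vector<char> buildEpipolarMask(const std::vector<cv::KeyPoint>& kp1,
                                    const std::vector<cv::KeyPoint>& kp2,
                                    const std::vector<cv::DMatch>& matches,
                                    const cv::Mat& F,      // fundamental matrix, CV_64F 3x3 (assumed available)
                                    double maxDist)        // pixel tolerance to the epipolar line
{
    std::vector<char> mask(matches.size(), 0);
    for (size_t i = 0; i < matches.size(); ++i)
    {
        cv::Point2f p1 = kp1[matches[i].queryIdx].pt;      // assumes query = first camera
        cv::Point2f p2 = kp2[matches[i].trainIdx].pt;      // assumes train = second camera
        // Epipolar line in the second image for p1: l = F * (x, y, 1)^T
        cv::Mat l = F * (cv::Mat_<double>(3, 1) << p1.x, p1.y, 1.0);
        double a = l.at<double>(0), b = l.at<double>(1), c = l.at<double>(2);
        double dist = std::abs(a * p2.x + b * p2.y + c) / std::sqrt(a * a + b * b);
        if (dist < maxDist)
            mask[i] = 1;                                   // drawMatches draws only masked-in matches
    }
    return mask;
}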
Finally I decided to change my approach. I'm trying to crop the original images and keep only the necessary information. I mean I remove the parts of the photo that don't matter for my application and keep only the information I'm going to use.
My idea is to use the epipolar lines of both cameras to determine my area of interest: I calculate where the epipolar lines are in both images, then crop the images and keep only the information where the epipolar lines lie.
Doing this I obtain two new images, and my idea is to pass the new images to the matcher to see if I can obtain more successful matching.
Image before cropping:
Image after cropping:
However, I have a problem with the computation time. My code has a high computational cost and sometimes the program crashes; the error says "Segmentation fault: 11".
"Bus error: 10" appears if I remove the waitKey() line.
My code to copy the main image content into the second one is here:
for (int i = 0; i < RightEpipolarLines.rows; i++) {
    float m = -RightEpipolarLines(i, 0) / RightEpipolarLines(i, 1);
    float n = -RightEpipolarLines(i, 2) / RightEpipolarLines(i, 1);
    for (int x = 0; x < 480; x++) {
        float y_prima = m * x + n;
        int y = int (y_prima);
        (cut_image[0].at<float>(y, x)) = capture[0].at<float>(y, x);
    }
}
waitKey();
for (int i = 0; i < LeftEpipolarLines.rows; i++) {
    float m = LeftEpipolarLines(i, 0) / LeftEpipolarLines(i, 1);
    float n = -LeftEpipolarLines(i, 2) / LeftEpipolarLines(i, 1);
    for (int x = 0; x < 480; x++) {
        float y_prima = m * x + n;
        int y = int (y_prima);
        (cut_image[1].at<float>(y, x)) = capture[1].at<float>(y, x);
    }
}
waitKey();
Does someone know how to copy the information from the real capture into cut_image more efficiently? I would only like to copy the pixel information that is near the epipolar lines.
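One thing worth checking with the code above: y = m*x + n can fall outside the image, and the at<float>() write then touches memory out of range, which would explain the "Segmentation fault: 11". A small sketch of a bounds-checked copy along one line (the helper name is illustrative, and it assumes single-channel CV_32F images, as the at<float>() calls imply):

#include <opencv2/core.hpp>

// Copy src to dst only along the line y = m*x + n, skipping out-of-range rows.
static void copyAlongLine(const cv::Mat& src, cv::Mat& dst, float m, float n)
{
    for (int x = 0; x < src.cols; ++x)
    {
        int y = cvRound(m * x + n);
        if (y < 0 || y >= src.rows)
            continue;                 // the line leaves the image in this column
        dst.at<float>(y, x) = src.at<float>(y, x);
    }
}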

FastFeatureDetector opencv C++ filtering results

I am developing a game bot using OpenCV, and I am trying to make it detect spikes.
The spikes look like this :
What I tried was using a FastFeatureDetector to highlight keypoints; the result was the following:
The spikes are horizontal and change colors. The operation is on a full 1920x1080 screen.
So my thinking was to take one of the points and compare its X to all of the other points' X values, but since I have no way of filtering the results, with 6094 KeyPoints the operation took too long (37,136,836 iterations).
Is there a way to filter FastFeatureDetector results or should I approach this in another way?
my code :
Point * findSpikes( Mat frame , int * num_spikes )
{
    Point * ret = NULL;
    int spikes_counter = 0;
    Mat frame2;
    cvtColor( frame , frame2 , CV_BGR2GRAY );
    Ptr<FastFeatureDetector> myBlobDetector = FastFeatureDetector::create( );
    vector<KeyPoint> myBlobs;
    myBlobDetector->detect( frame2 , myBlobs );
    HWND wnd = FindWindow( NULL , TEXT( "Andy" ) );
    RECT andyRect;
    GetWindowRect( wnd , &andyRect );
    /*Mat blobimg;
    drawKeypoints( frame2 , myBlobs , blobimg );*/
    //imshow( "Blobs" , blobimg );
    //waitKey( 1 );
    printf( "Size of vectors : %d\n" , myBlobs.size( ) );
    for ( vector<KeyPoint>::iterator blobIterator = myBlobs.begin( ); blobIterator != myBlobs.end( ); blobIterator++ )
    {
#pragma region FilteringArea
        //filtering keypoints
        if ( blobIterator->pt.x > andyRect.right || blobIterator->pt.x < andyRect.left
            || blobIterator->pt.y > andyRect.bottom || blobIterator->pt.y < andyRect.top )
        {
            printf( "Filtered\n" );
            continue;
        }
#pragma endregion
        for ( vector<KeyPoint>::iterator comparsion = myBlobs.begin( ); comparsion != myBlobs.end( ); comparsion++ )
        {
            //filtering keypoints
#pragma region FilteringRegion
            if ( comparsion->pt.x > andyRect.right || comparsion->pt.x < andyRect.left
                || comparsion->pt.y > andyRect.bottom || comparsion->pt.y < andyRect.top )
            {
                printf( "Filtered\n" );
                continue;
            }
            printf( "Processing\n" );
            double diffX = abs( blobIterator->pt.x - comparsion->pt.x );
            if ( diffX <= 5 )
            {
                spikes_counter++;
                printf( "Spike added\n" );
                ret = ( Point * ) realloc( ret , sizeof( Point ) * spikes_counter );
                if ( !ret )
                {
                    printf( "Memory error\n" );
                    ret = NULL;
                }
                ret[spikes_counter - 1].y = ( ( blobIterator->pt.y + comparsion->pt.y ) / 2 );
                ret[spikes_counter - 1].x = blobIterator->pt.x;
                break;
            }
#pragma endregion
        }
    }
    ( *( num_spikes ) ) = spikes_counter;
    return ret; //Modify later
}
I'm aware of the usage of realloc and printf in C++; I just don't like cout and new.
Are the spikes actually different sizes and irregularly spaced in real life? In your image they are regularly spaced and identically sized and so once you know the coordinates of one point, you can calculate all of the rest by simply adding a fixed increment to the X coordinate.
If the spikes are irregularly spaced and potentially of different heights, I'd suggest you might try the following (a rough sketch follows these steps):
1. Use a Canny edge detector to find the boundary between the spikes and the background.
2. For each X coordinate in this edge image, search a single column of the edge image using minMaxIdx to find the brightest point in that column.
3. If the Y coordinate of that point is higher up the screen than the Y coordinate of the brightest point in the previous column, then the previous column was a spike; save its (X, Y) coords.
4. If a spike was found in step 3, keep skipping across columns until the brightest Y coordinate in a column is the same as in the previous column. Then repeat the spike detection; otherwise keep searching for the next spike.
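A rough sketch of those steps (the Canny thresholds and names are illustrative; depending on whether the spikes point up or down, the Y comparison in step 3 may need flipping, and step 4's skipping is simplified):

#include <opencv2/imgproc.hpp>
#include <vector>

std::vector<cv::Point> findSpikeTips(const cv::Mat& gray)
{
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);                          // step 1: edge map of spikes vs background

    std::vector<cv::Point> tips;
    int prevY = -1;                                           // brightest row of the previous column (-1 = none yet)
    for (int x = 0; x < edges.cols; ++x)
    {
        double maxVal = 0.0;
        int maxIdx[2] = { -1, -1 };
        cv::minMaxIdx(edges.col(x), 0, &maxVal, 0, maxIdx);   // step 2: brightest point in this column
        if (maxVal == 0) { prevY = -1; continue; }            // no edge pixel in this column
        int y = maxIdx[0];
        if (prevY >= 0 && y < prevY)                          // step 3: this point is higher up than the last column's
            tips.push_back(cv::Point(x - 1, prevY));          // so record the previous column as a spike
        prevY = y;                                            // step 4 (simplified): carry on to the next column
    }
    return tips;
}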
Considering the form of your spikes, I'd suggest template pattern matching. It seems keypoints are a rather indirect approach.
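Something along these lines might work for the template idea (a sketch; spikeTemplate would be a small grayscale crop of one spike, and the 0.8 threshold is arbitrary):

#include <opencv2/imgproc.hpp>
#include <vector>

std::vector<cv::Point> findSpikesByTemplate(const cv::Mat& frameGray, const cv::Mat& spikeTemplate)
{
    cv::Mat result;
    cv::matchTemplate(frameGray, spikeTemplate, result, cv::TM_CCOEFF_NORMED);

    std::vector<cv::Point> hits;                      // top-left corner of each strong match
    for (int y = 0; y < result.rows; ++y)
        for (int x = 0; x < result.cols; ++x)
            if (result.at<float>(y, x) > 0.8f)        // keep strong matches only
                hits.push_back(cv::Point(x, y));      // adjacent hits could still be merged afterwards
    return hits;
}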

Generating random coordinates with differences between them

So I want to generate 5 random coordinates within an area of roughly 640,000 points, with each axis value between 100 and 900. These coordinates must have a distance of more than 100 between them to prevent an overlap. Having searched previous answers, I attempted the piece of code below:
struct point
{
    int x;
    int y;
};
point pointarray[6];
srand ( time(NULL) );
pointarray[1].x = 100+(std::rand()% 801);
pointarray[1].y = 100+(std::rand()% 801);
for (int n=2; n <= 5 ; n++)
{
    double dist;
    int currentx;
    int currenty;
    double xch;
    double ych;
    while (dist < 100)
    {
        srand ( time(NULL) );
        currentx = (100+(std::rand()% 801));
        currenty = (100+(std::rand()% 801));
        xch = (currentx - (pointarray[(n-1)].x));
        ych = (currenty - (pointarray[(n-1)].y));
        dist = sqrt(xch*xch + ych*ych);
        if (dist >= 100 && dist <= 800 )
        {
            currentx = pointarray[n].x;
            currenty = pointarray[n].y;
        }
    }
}
I do not understand why the last 4 points are absolutely huge numbers (in the millions) while only the first is within the required range?
You use an uninitialized dist; that could be the problem:
double dist;
....
while (dist < 100)
Also, I see no place where you write to pointarray (except pointarray[1]). Should not
currentx = pointarray[n].x;
become
pointarray[n].x = currentx;
Also, if dist gets bigger than 800, then nothing happens and we just go to the next element. I guess the intention was to stay within the while loop instead.
Also, we check the distance to one previous point. I'm not sure, but it could be that the intention was to check distances to all previous points. In the latter case, we need an inner loop perhaps. But make sure that there is no possibility that already placed points make it impossible to put the next one.
Also, perhaps you do not want the second srand ( time(NULL) );
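Putting those points together, a sketch of the corrected loop might look like this (seed once, check the distance to every earlier point, and write the accepted candidate into the array):

#include <cstdlib>
#include <ctime>
#include <cmath>

struct point { int x; int y; };

int main()
{
    const int N = 5;
    point pts[N];
    std::srand(static_cast<unsigned>(std::time(NULL)));     // seed once, outside all loops

    for (int n = 0; n < N; n++)
    {
        bool ok = false;
        while (!ok)
        {
            int cx = 100 + (std::rand() % 801);              // candidate in 100..900
            int cy = 100 + (std::rand() % 801);
            ok = true;
            for (int k = 0; k < n; k++)                      // compare against every earlier point
            {
                double xch = cx - pts[k].x;
                double ych = cy - pts[k].y;
                if (std::sqrt(xch * xch + ych * ych) < 100.0) { ok = false; break; }
            }
            if (ok)
            {
                pts[n].x = cx;                               // write the candidate INTO the array
                pts[n].y = cy;
            }
        }
    }
    return 0;
}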

OpenCV - Finding contour end points?

I'm looking for a way to get the end points of a thin contour extracted from a Canny edge detection. I was wondering if this is possible in some built-in way. I would plan on walking through the contour to find the two points with the largest distance from each other (moving only along the contour), but it would be much easier if a way already exists. I see that cvArcLength exists to get the perimeter of a contour, so it's possible there would be a built-in way to achieve this. Are the points within a contour ordered in such a way that some information can be known about the end points? Any other ideas? Thank you much!
I was looking for the same function. I see HoughLinesP has end points, since it works on lines rather than contours. I am using findContours, however, so I found it helpful to order the points in the contour as below and then take the first and last points as the start and end points.
struct contoursCmpY {
    bool operator()(const Point &a, const Point &b) const {
        if (a.y == b.y)
            return a.x < b.x;
        return a.y < b.y;
    }
} contoursCmpY_;
vector<Point> cont;
cont.push_back(Point(194,72));
cont.push_back(Point(253,14));
cont.push_back(Point(293,76));
cont.push_back(Point(245,125));
std::sort(cont.begin(), cont.end(), contoursCmpY_);
int size = cont.size();
printf("start Point x=%d,y=%d end Point x=%d,y=%d", cont[0].x, cont[0].y, cont[size - 1].x, cont[size - 1].y);
As you say, you can always step through the contour points.
The following finds the two points, ptLeft and ptRight, with the greatest separation along x, but could be modified as needed.
CvPoint ptLeft = cvPoint(image->width, image->height);
CvPoint ptRight = cvPoint(0, 0);
CvSlice slice = cvSlice(0, CV_WHOLE_SEQ_END_INDEX);
CvSeqReader reader;
cvStartReadSeq(contour, &reader, 0);
cvSetSeqReaderPos(&reader, slice.start_index);
int count = cvSliceLength(slice, contour);
for(int i = 0; i < count; i++)
{
    reader.prev_elem = reader.ptr;
    CV_NEXT_SEQ_ELEM(contour->elem_size, reader);
    CvPoint* pt = (CvPoint*)reader.ptr;
    if( pt->x < ptLeft.x )
        ptLeft = *pt;
    if( pt->x > ptRight.x )
        ptRight = *pt;
}
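With the C++ API, where findContours gives you a std::vector<cv::Point> per contour, the same left/right scan can be sketched with std::minmax_element (same criterion, greatest separation along x):

#include <opencv2/core.hpp>
#include <algorithm>
#include <utility>
#include <vector>

std::pair<cv::Point, cv::Point> leftRightMost(const std::vector<cv::Point>& contour)
{
    // Compare points by x only, as in the loop above.
    auto byX = [](const cv::Point& a, const cv::Point& b) { return a.x < b.x; };
    auto mm = std::minmax_element(contour.begin(), contour.end(), byX);
    return std::make_pair(*mm.first, *mm.second);   // {leftmost, rightmost}
}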
The solution based on checking neighbor distances didn't work for me (Python + OpenCV 3.0.0-beta), because all the contours I get seem to be folded on themselves. What appear at first glance to be "open" contours in an image are actually "closed" contours collapsed on themselves.
So I had to resort to looking for "u-turns" in each contour's sequence; an example in Python:
import numpy as np
import cv2

def draw_closing_lines(img, contours):
    for cont in contours:
        v1 = (np.roll(cont, -2, axis=0) - cont)
        v2 = (np.roll(cont, 2, axis=0) - cont)
        dotprod = np.sum(v1 * v2, axis=2)
        norm1 = np.sqrt(np.sum(v1 ** 2, axis=2))
        norm2 = np.sqrt(np.sum(v2 ** 2, axis=2))
        cosinus = (dotprod / norm1) / norm2
        indexes = np.where(0.95 < cosinus)[0]
        if len(indexes) == 1:
            # only one u-turn found, mark in yellow
            cv2.circle(img, tuple(cont[indexes[0], 0]), 3, (0, 255, 255))
        elif len(indexes) == 2:
            # two u-turns found, draw the closing line
            cv2.line(img, tuple(cont[indexes[0], 0]), tuple(cont[indexes[1], 0]), (0, 0, 255))
        else:
            # too many u-turns, mark in red
            for i in indexes:
                cv2.circle(img, tuple(cont[i, 0]), 3, (0, 0, 255))
Not completely robust against polluting cusps and quite time-consuming, but that's a start. I'd be interested in other ideas, naturally :)

Render a texture to a window

I've got a dialog that I'd basically like to implement as a texture viewer using DirectX. The source texture can either come from a file on-disk or from an arbitrary D3D texture/surface in memory. The window will be resizable, so I'll need to be able to scale its contents accordingly (preserving aspect ratio, while not necessary, would be useful to know).
What would be the best way to go about implementing the above?
IMHO the easiest way to do this is to create a quad (or two triangles) whose vertices contain the correct UV coordinates. Set the XYZ coordinates to the viewing-cube coordinates. This only works if the identity matrix is set as the projection; you can use -1 to 1 on both the X and Y axes.
EDIT: Here is an example tutorial:
http://www.mvps.org/directx/articles/splash_screen.htm
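As a sketch of what such a quad might look like (the struct and the triangle-strip ordering here are illustrative, not a particular D3D vertex format):

struct TexturedVertex
{
    float x, y, z;    // position inside the -1..1 viewing cube (identity projection)
    float u, v;       // texture coordinates
};

// Full-screen quad as a triangle strip: bottom-left, top-left, bottom-right, top-right.
TexturedVertex quad[4] =
{
    { -1.0f, -1.0f, 0.0f,  0.0f, 1.0f },
    { -1.0f,  1.0f, 0.0f,  0.0f, 0.0f },
    {  1.0f, -1.0f, 0.0f,  1.0f, 1.0f },
    {  1.0f,  1.0f, 0.0f,  1.0f, 0.0f },
};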
This is the code I use to preserve size and scaling for a resizable dialog. My texture is held in a memory bitmap; I am sure you can adapt it if you do not have one. The important bit is the way I determine the right scaling factor to preserve the aspect ratio for any client-area size:
CRect destRect( 0, 0, frameRect.Width(), frameRect.Height() );
if( txBitmapInfo.bmWidth <= frameRect.Width() && txBitmapInfo.bmHeight <= frameRect.Height() )
{
    destRect.left = ( frameRect.Width() - txBitmapInfo.bmWidth ) / 2;
    destRect.right = destRect.left + txBitmapInfo.bmWidth;
    destRect.top = ( frameRect.Height() - txBitmapInfo.bmHeight ) / 2;
    destRect.bottom = destRect.top + txBitmapInfo.bmHeight;
}
else
{
    double hScale = static_cast<double>( frameRect.Width() ) / txBitmapInfo.bmWidth;
    double vScale = static_cast<double>( frameRect.Height() ) / txBitmapInfo.bmHeight;
    if( hScale < vScale )
    {
        int height = static_cast<int>( frameRect.Width() * ( static_cast<double>(txBitmapInfo.bmHeight) / txBitmapInfo.bmWidth ) );
        destRect.top = ( frameRect.Height() - height ) / 2;
        destRect.bottom = destRect.top + height;
    }
    else
    {
        int width = static_cast<int>( frameRect.Height() * ( static_cast<double>(txBitmapInfo.bmWidth) / txBitmapInfo.bmHeight ) );
        destRect.left = ( frameRect.Width() - width ) / 2;
        destRect.right = destRect.left + width;
    }
}
Hope this helps!