Access a circle of rays efficiently (find closest obstacle contours) - c++

Have a binary grid (like black and white pixels with black = empty and white = obstacle).
Starting from a given point on black, I want to emit "rays" in all directions. Those rays should either abort when reaching a given length (for example 100) or when hitting a white pixel (in which case the position of this pixel, i.e. a point on the obstacle contour, shall be marked).
Those marked pixels result in a view of all obstacle contours that are "visible" from the given point (not obstructed by other obstacles).
What I have thought of so far is to simply cast a sufficient number of Bresenham lines. For a radius of 100, that means 100*2*pi ≈ 628 lines to cover all pixels on the outermost circle.
However, that results in the pixels closer to the center being checked many times over. I could split the check into multiple rings, each with a different density of Bresenham lines, but that appears rather inefficient too.
Does someone happen to have a more efficient algorithm idea for this?
Huge thanks in advance!

Unfortunately the hints towards graphics-processing techniques, while fascinating, are not really applicable in my case because I have no access to shaders, a camera, etc.
For now I have found a sufficiently efficient solution myself. The basic idea is to launch rays from the contours, not from the origin, and to use a helper grid named "Reachable" in which every pixel that is successfully visible from the origin is marked. This way only a few pixels are read twice, most are read just once, and some are written at most once. The code is still rather messy, so here is only pseudocode:
Have desired origin O.x/O.y
Have obstacle bool grid Obstacle
Define bool grid Reachable[w][h]
Clear Reachable with false
Reachable[O.x][O.y] = true
For each point C on all obstacle contours   // if not available, compute contours by finding all obstacle pixels adjacent to a non-obstacle pixel
    For all pixels A on the Bresenham line from C to O, excluding C itself
        If Obstacle[A.x][A.y]
            Continue with outer loop on contours   // abort this line due to an obstacle hit
        If Reachable[A.x][A.y]
            For all pixels B on the Bresenham line from C to A
                Reachable[B.x][B.y] = true   // mark all pixels on this line as reachable
            Mark C as a desired result pixel
            Continue with outer loop on contours
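Below is a minimal C++ sketch of the pseudocode above, just to make the flow concrete. The function and type names are mine, the grids are plain row-major vectors, and the contour list is assumed to be precomputed; treat it as a sketch rather than the author's actual implementation.

#include <cstdlib>
#include <vector>

struct Point { int x, y; };

// All grid points on the Bresenham line from a to b, inclusive, starting at a.
static std::vector<Point> bresenhamLine(Point a, Point b) {
    std::vector<Point> pts;
    int dx = std::abs(b.x - a.x), sx = a.x < b.x ? 1 : -1;
    int dy = -std::abs(b.y - a.y), sy = a.y < b.y ? 1 : -1;
    int err = dx + dy;
    for (;;) {
        pts.push_back(a);
        if (a.x == b.x && a.y == b.y) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; a.x += sx; }
        if (e2 <= dx) { err += dx; a.y += sy; }
    }
    return pts;
}

// Returns the contour points visible from the origin O.
// 'obstacle' is a row-major w*h grid, true = obstacle pixel.
// 'contour' lists all obstacle pixels adjacent to a non-obstacle pixel.
std::vector<Point> visibleContourPoints(const std::vector<bool>& obstacle,
                                        int w, int h, Point O,
                                        const std::vector<Point>& contour) {
    std::vector<bool> reachable(static_cast<size_t>(w) * h, false);
    reachable[O.y * w + O.x] = true;
    std::vector<Point> result;

    for (const Point& C : contour) {
        std::vector<Point> line = bresenhamLine(C, O);   // line[0] == C
        for (size_t i = 1; i < line.size(); ++i) {       // skip C itself
            const Point& A = line[i];
            if (obstacle[A.y * w + A.x])
                break;                                   // blocked by another obstacle
            if (reachable[A.y * w + A.x]) {
                for (size_t j = 1; j <= i; ++j)          // everything from C to A is visible
                    reachable[line[j].y * w + line[j].x] = true;
                result.push_back(C);                     // C is a visible contour pixel
                break;
            }
        }
    }
    return result;
}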

Let the "square distance" between two points (x1,y1) and (x2,y2) be max(|x1-x2|,|y1-y2|), so that the points at increasing "square distance" around a center for increasingly large squares.
Now, for each square distance d from your center point, in increasing order, keep track of the angles the center point can see through all the obstacles with distance <= d.
You can use a list of angle intervals starting at [(0,360)] for d=0.
For each new distance, you can inspect the list, examine the new pixels within the given angles, and remove angles from the intervals when obstacles are hit. Each obstacle that causes you to modify an interval is visible from the center point, so you can mark it as such.
This method examines pixels only once, and only examines pixels you can see. The way I wrote it above requires trigonometry, however, which is slow. For a practical implementation, instead of actually using angles, use slopes, which require only simple math, and process each octant separately.
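To make the interval bookkeeping concrete, here is a small sketch in plain C++ (names mine). It uses angles for readability even though, as noted above, a slope-per-octant version is what you would use in practice; intervals are kept in [0, 2*pi) and a span crossing 0 is assumed to be split into two calls by the caller.

#include <utility>
#include <vector>

// Visible angular intervals, kept disjoint, all within [0, 2*pi).
using Intervals = std::vector<std::pair<double, double>>;

// Removes the blocked span [lo, hi] from the visible intervals.
// Returns true if any visible angle was actually removed, i.e. the
// blocking obstacle pixel is (at least partially) visible from the center.
bool block(Intervals& visible, double lo, double hi) {
    Intervals out;
    bool removedSomething = false;
    for (const auto& iv : visible) {
        double a = iv.first, b = iv.second;
        if (hi <= a || lo >= b) {                 // no overlap: keep as is
            out.push_back(iv);
            continue;
        }
        removedSomething = true;
        if (a < lo) out.emplace_back(a, lo);      // keep the left remainder
        if (hi < b) out.emplace_back(hi, b);      // keep the right remainder
    }
    visible.swap(out);
    return removedSomething;
}

Starting from visible = {{0, 2*pi}}, walk the squares of increasing Chebyshev distance; for every obstacle pixel on the current square, compute (or approximate) the angular span it covers as seen from the center, and if block(...) returns true mark that pixel as a visible contour point. Once visible becomes empty you can stop early.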

Related

Finding the set of points with a minimum sum of distances to rectangles. What is the algorithm?

Good day.
I have the task of finding the set of points in 2D space for which the sum of the distances to the rectangles is minimal. For example, for two rectangles the result will be the area shown in the picture; any point in this area has the minimum sum of distances to rectangles A and B.
Which algorithm is suitable for finding the region, all of whose points have the minimum sum of distances? The number of rectangles can vary and they are placed arbitrarily; they can even overlap each other. The sides of the rectangles are parallel to the coordinate axes and cannot be rotated. The region must be either a rectangle, a line, or a point.
Hint:
The distance map of a rectangle (the function that maps any point (x,y) to the closest distance to the rectangle) is made of four slanted planes (slope 45°), four quarter-cones, and the rectangle itself, which lies at ground level, the whole forming a continuous surface.
To obtain the global distance map, it "suffices" to sum the distance maps of the individual rectangles. A pretty complex surface will result. Depending on the geometries, the minimum might be achieved on a single vertex, a whole edge or a whole face.
The construction of the global map seems more difficult than that of a line arrangement, due to the conic patches. A very difficult problem in the general case, though the axis-aligned constraint might ease it.
An add-on to Yves's answer.
As Yves described, each rectangle divides the plane into 9 parts, and each part contributes a different distance term to the sum. The middle part (the rectangle itself) adds distance 0, the side parts add the coordinate distance to that side, and the corner parts add the point distance to that corner. With that approach the plane has to be divided into up to 9^n parts, and the distance sum on each part is obtained by adding the appropriate per-rectangle distance functions. That is feasible if the number of rectangles is not too large.
Probably not all parts need to be evaluated, since it is easy to compute a bound on a part's minimum value and check whether the part needs to be considered at all.
I am not sure, but it seems to me that the global distance map is a convex function (indeed, the distance to a convex set is convex, and a sum of convex functions is convex). If that is the case, then it can be solved iteratively by an idea similar to linear programming.
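For reference, the per-rectangle distance term described above (9 regions: inside, sides, corners) collapses into a simple clamp. A small C++ sketch, with a hypothetical Rect type, evaluating the global distance map at a point:

#include <algorithm>
#include <cmath>
#include <vector>

struct Rect { double xmin, ymin, xmax, ymax; };   // axis-aligned

// Euclidean distance from point (x, y) to one axis-aligned rectangle.
// Clamping handles all 9 regions at once: inside gives 0, a side region
// gives the coordinate distance, a corner region gives the point distance.
double distToRect(double x, double y, const Rect& r) {
    double dx = std::max({r.xmin - x, 0.0, x - r.xmax});
    double dy = std::max({r.ymin - y, 0.0, y - r.ymax});
    return std::hypot(dx, dy);
}

// Value of the global distance map at (x, y): the sum over all rectangles.
double sumOfDistances(double x, double y, const std::vector<Rect>& rects) {
    double sum = 0.0;
    for (const Rect& r : rects) sum += distToRect(x, y, r);
    return sum;
}

If the convexity observation holds, a generic convex or derivative-free minimizer applied to sumOfDistances will locate a point of the optimal region.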

Edge detection / angle

I can successfully threshold images and find edges in an image. What I am struggling with is trying to extract the angle of the black edges accurately.
I am currently taking the extreme points of the black edge and calculating the angle with the atan2 function, but because of aliasing, depending on the point you choose the angle can come out with some degree of variation. Is there a reliable programmable way of choosing the points to calculate the angle from?
Example image:
For example, the GIMP Measure tool reports the angle as 3.12°.
If you're writing your own library, then creating a robust solution for this problem will allow you to develop several independent chunks of code that you can string together to solve other problems, too. I'll assume that you want to find the corners of the checkerboard under arbitrary rotation, under varying lighting conditions, in the presence of image noise, with a little nonlinear pincushion/barrel distortion, and so on.
Although there are simple kernel-based techniques to find whole pixels as edge pixels, when working with filled polygons you'll want to favor algorithms that can find edges with sub-pixel accuracy so that you can perform accurate line fits. Even though the gradient from dark square to white square crosses several pixels, the "true" edge will be found at some sub-pixel point, and very likely not the point you'd guess by manually clicking.
I tried to provide a simple summary of edge finding in this older SO post:
what is the relationship between image edges and gradient?
For problems like yours, a robust solution is to find edge points along the dark-to-light transitions with sub-pixel accuracy, then fit lines to the edge points, and use the line angles. If you are processing a true camera image, and if there is an uncorrected radial distortion in the image, then there are some potential problems with measurement accuracy, but we'll ignore those.
If you want to find an accurate fit for an edge, then it'd be great to scan for sub-pixel edges in a direction perpendicular to that edge. That presupposes that we have some reasonable estimate of the edge direction to begin with. We can first find a rough estimate of the edge orientation, then perform an accurate line fit.
The algorithm below may appear to have too many steps, but my purpose is to point out how to provide a robust solution.
Perform a few iterations of erosion on black pixels to separate the black boxes from one another.
Run a connected components algorithm (blob-finding algorithm) to find the eroded black squares.
Identify the center (x,y) point of each eroded square as well as the (x,y) end points defining the major and minor axes.
Maintain the data for each square in a structure that has the total area in pixels, the center (x,y) point, the (x,y) points of the major and minor axes, etc.
As needed, eliminate all components (blobs) that are too small. For example, you would want to exclude all "salt and pepper" noise blobs. You might also temporarily ignore checkerboard squares that are cut off by the image edges--we can return to those later.
Then you'll loop through your list of blobs and do the following for each blob:
Determine the direction roughly perpendicular to the edges of the checkerboard square. How you accomplish this depends in part on what data you calculate when you run your connected components algorithm. In a general-purpose image processing library, a standard connected components algorithm will determine dozens of properties and measurements for each individual blob: area, roundness, major axis direction, minor axis direction, end points of the major and minor axis, etc. For rectangular figures, it can be sufficient to calculate the topmost, leftmost, rightmost, and bottommost points, as these will define the four corners.
Generate edge scans in the direction roughly perpendicular to the edges. These must be performed on the original, unmodified image. This generally assumes you have bilinear interpolation implemented to find the grayscale values of sub-pixel (x,y) points such as (100.35, 25.72), since your scan lines won't fall exactly on whole pixels (see the sketch after this list).
Use a sub-pixel edge point finding technique. In general, you'll perform a curve fit to the edge points in the direction of the scan, then find the real-valued (x,y) point at maximum gradient. That's the edge point.
Store all sub-pixel edge points in a list/array/collection.
Generate line fits for the edge points. These can use Hough, RANSAC, least squares, or other techniques.
From the line equations for each of your four line fits, calculate the line angle.
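As referenced in the scan step above, here is a hedged sketch (plain C++, my own names, grayscale image stored row-major as floats) of bilinear interpolation and of a 1D scan that locates the maximum-gradient position with sub-pixel precision via a parabolic fit:

#include <cmath>
#include <vector>

// Grayscale value at a sub-pixel position via bilinear interpolation.
// 'img' is row-major, width w; the caller keeps (x, y) inside the image.
double sampleBilinear(const std::vector<float>& img, int w, double x, double y) {
    int x0 = static_cast<int>(std::floor(x)), y0 = static_cast<int>(std::floor(y));
    double fx = x - x0, fy = y - y0;
    double v00 = img[y0 * w + x0],       v10 = img[y0 * w + x0 + 1];
    double v01 = img[(y0 + 1) * w + x0], v11 = img[(y0 + 1) * w + x0 + 1];
    return (1 - fy) * ((1 - fx) * v00 + fx * v10) + fy * ((1 - fx) * v01 + fx * v11);
}

// Walk 'length' one-pixel steps from (sx, sy) along the unit direction (dx, dy),
// sample the gray profile, and return the offset (in pixels along the scan) of
// the steepest change, refined with a parabolic fit around the gradient peak.
double subPixelEdgeOffset(const std::vector<float>& img, int w,
                          double sx, double sy, double dx, double dy, int length) {
    if (length < 3) return 0.0;                       // too short to fit anything
    std::vector<double> profile;
    for (int i = 0; i <= length; ++i)
        profile.push_back(sampleBilinear(img, w, sx + i * dx, sy + i * dy));

    std::vector<double> g(profile.size(), 0.0);       // central-difference gradient
    int best = 1;
    for (size_t i = 1; i + 1 < profile.size(); ++i) {
        g[i] = 0.5 * (profile[i + 1] - profile[i - 1]);
        if (std::abs(g[i]) > std::abs(g[best])) best = static_cast<int>(i);
    }

    // Fit a parabola through |g| at best-1, best, best+1 and take its vertex.
    double gl = std::abs(g[best - 1]), gc = std::abs(g[best]), gr = std::abs(g[best + 1]);
    double denom = gl - 2 * gc + gr;
    double delta = (denom != 0.0) ? 0.5 * (gl - gr) / denom : 0.0;
    return best + delta;
}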
That algorithm finds the angles independently for each black checkerboard square. It may be overkill for this one application, but if you're developing a library maybe it'll give you some ideas about what sub-algorithms to implement and how to structure them. For example, the algorithm would rely on implementations of these techniques:
Image morphology (e.g. erode, dilate, close, open, ...)
Kernel operations to implement morphology
Thresholding to binarize an image -- the Otsu method is worth checking out
Connected components algorithm (a.k.a blob finding, or the OpenCV contours function)
Data structure for blob
Moment calculations for blob data
Bilinear interpolation to find sub-pixel (x,y) values
A linear ray-scanning technique to find (x,y) gray values along a specific direction (which will also rely on bilinear interpolation)
A curve fitting technique and means to determine steepest tangent to find edge points
Robust line fit technique: Hough, RANSAC, and/or least squares
Data structure for line equation, related functions
All that said, if you're willing to settle for a slight loss of accuracy, and if you know that the image does not suffer from radial distortion, etc., and if you just need to find the angle of the parallel lines defined by all checkerboard edges, then you might try:
Simple kernel-based edge point finding technique (Laplacian on Gaussian-smoothed image)
Hough line fit to edge points
Choose the two line fits with the greatest number of votes, which should be one set of horizontal-ish lines and the other set of vertical-ish lines
There are also other techniques that are less accurate but easier to implement:
Use a kernel-based corner-finding operator
Find the angles between corner points.
And so on and so on. As you're developing your library and creating robust implementations of standalone functions that you can string together to create application-specific solutions, you're likely to find that robust solutions rely on more steps than you would have guessed, but it'll also be more clear what the failure mode will be at each incremental step, and how to address that failure mode.
Can I ask, what C++ library are you using to code this?
Jerry is right: if you actually apply a threshold to the image it would be binary, black OR white. What you may have applied is a kind of limiter instead.
You can make a threshold function (if you're coding the image processing yourself) by applying the limiter you may have been using and then turning all non-white pixels black. If you have the right settings, the squares should be isolated and you will be able to calculate the angle.
Once this is done you can use a path finding algorithm to find some edge, any edge will do. If you find a more or less straight path, you can use the extreme points as you are doing now to determine the angle. Since the checker-board rotation is only relevant within 90 degrees, your angle should be modulo 90 degrees or pi over 2 radians.
I'm not sure it's (anywhere close to) the right answer, but my immediate reaction would be to threshold twice: once where anything but black is treated as white, and once where anything but white is treated as black.
Find the angle for each, then interpolate between the two angles.
Your problem has a few solutions, but all of them have one very important issue which you seem to neglect. Note: when you are making geometrical calculations in the image, the points you use must be as far as possible from one another. You are taking 2 points inside a single square. Those points are very close to one another, so a slight error in the pixel location of the points leads to a large error in the angle. Why do you use only a single square when you have many squares in the image?
Here are few solutions:
Find the line angle of every square. You have at least 9 squares in the image and 4 lines in each square, which gives you a total of 36 angles (18 will be roughly at 3[deg] and 18 will be ~93[deg]). Subtract 90[deg] from the second group and you get 36 different measurements of the same angle. Sort them and take the average of the middle 30 (disregarding the 3 lowest and 3 highest measurements). This will give you an accurate result.
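A minimal sketch of that robust averaging step (plain C++, names mine; assumes more than 2*trim measurements, already in degrees):

#include <algorithm>
#include <cmath>
#include <vector>

// Fold every measurement into [0, 90) so the ~3 deg and ~93 deg lines agree,
// then average the middle values, dropping 'trim' outliers on each side.
// (Works as long as the true angle is not right at the 0/90 wrap-around.)
double trimmedAngleMean(std::vector<double> anglesDeg, int trim = 3) {
    for (double& a : anglesDeg) a = std::fmod(std::fmod(a, 90.0) + 90.0, 90.0);
    std::sort(anglesDeg.begin(), anglesDeg.end());
    double sum = 0.0;
    int n = static_cast<int>(anglesDeg.size());
    for (int i = trim; i < n - trim; ++i) sum += anglesDeg[i];
    return sum / (n - 2 * trim);
}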
Second solution: find the left extreme point of the leftmost square and the right extreme point of the rightmost square, and calculate the angle between them. The result will be much more accurate because the points are far apart.
A third algorithm will give you accurate results because it doesn't involve finding any points and needs no thresholding. Just smooth the image, calculate the gradients in the X and Y directions (gx, gy), calculate the angle of the gradient in each pixel with atan2(gy, gx), and make a histogram of the angles. You will have 2 significant peaks near 3[deg] and 93[deg]. Just find the peaks by searching for the maxima in the histogram. This will work even if you have a lot of noise in the image, even with antialiasing and jpg artifacts, and even if there are other drawings on the image. But remember, you must smooth the image a lot before calculating the derivatives.
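A hedged sketch of that histogram approach (plain C++, my own names; the image is assumed to be grayscale, row-major, and already smoothed). It folds everything modulo 90 degrees so the two peaks described above coincide:

#include <cmath>
#include <vector>

// Histogram of gradient directions (1-degree bins, weighted by gradient
// magnitude), folded into [0, 90); returns the angle of the strongest bin.
double dominantEdgeAngle(const std::vector<float>& img, int w, int h) {
    const double kPi = 3.14159265358979323846;
    std::vector<double> hist(90, 0.0);
    for (int y = 1; y + 1 < h; ++y) {
        for (int x = 1; x + 1 < w; ++x) {
            double gx = 0.5 * (img[y * w + x + 1] - img[y * w + x - 1]);
            double gy = 0.5 * (img[(y + 1) * w + x] - img[(y - 1) * w + x]);
            double mag = std::hypot(gx, gy);
            if (mag < 1e-3) continue;                              // skip flat areas
            double deg = std::atan2(gy, gx) * 180.0 / kPi;
            deg = std::fmod(std::fmod(deg, 90.0) + 90.0, 90.0);    // fold into [0, 90)
            hist[static_cast<int>(deg)] += mag;
        }
    }
    int best = 0;
    for (int i = 1; i < 90; ++i)
        if (hist[i] > hist[best]) best = i;
    return best;   // coarse (1 degree); interpolate around the bin for more precision
}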

Divide Contours at overlap

I have an image from which I have extracted several contours with (1) cvCanny and (2) findContours. I'm only interested in the external points, so I got several closed contours that I analyse further. I'm looking for ellipses or circles, and due to some overlap in the image I got some contours that are actually interesting for me, but my algorithm discards them because they do not look elliptic.
Is there a way to divide those contours, e.g. based on the small connecting "bridges" between two overlapping contours detected as one?
In this example I would just want to cut off the rod in the lower right corner.
Due to performance issues, Hough circle detection is not an option.
Thanks!
Never worked with these sorts of algorithms before, but here's an idea: Define a minimum length L between points less than which you'd want to create a bridge. Then for each point on the contour, construct the tangent line segment of length L with its origin at that point. Wherever that tangent line segment intersects two points you will have a place where the contour is effectively getting 'pinched' as with the rod/ellipse junction in your figure. When this happens draw the bridge, which will be the tangent segment itself.
It might be easier to imagine or do if you take a single segment at a single point (say at the top of your curve, oriented to the left) and move the segment around the contour, following the bridges created on the fly whenever the above condition is met.
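This is not the tangent-segment construction itself, but a related, simplified heuristic that captures the same "pinch" idea: walk the contour and bridge any two points that are close in the plane yet far apart along the contour. A sketch in plain C++ (OpenCV-style point vector assumed; all names and thresholds are mine):

#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

struct Pt { int x, y; };

// Finds index pairs (i, j) on a closed contour where the contour nearly
// touches itself: the two points are closer than maxBridge in the plane
// but at least minArc apart along the contour. Splitting the point list at
// such a pair separates two overlapping shapes.
std::vector<std::pair<int, int>> findPinchPoints(const std::vector<Pt>& contour,
                                                 double maxBridge, int minArc) {
    std::vector<std::pair<int, int>> pinches;
    int n = static_cast<int>(contour.size());
    for (int i = 0; i < n; ++i) {
        for (int j = i + minArc; j < n; ++j) {
            int arc = std::min(j - i, n - (j - i));   // shorter way around the loop
            if (arc < minArc) continue;
            double dx = contour[i].x - contour[j].x;
            double dy = contour[i].y - contour[j].y;
            if (std::hypot(dx, dy) < maxBridge)
                pinches.emplace_back(i, j);
        }
    }
    return pinches;                                   // O(n^2); fine for short contours
}

Cutting the contour at the first pinch pair yields two sub-contours that can each be tested for ellipticity again.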

How to render a circle with as few vertices as possible?

I'm trying to figure out how to decide how many vertices I need to have to make my circle look as smooth as possible.
Here is an example of two circles, both having 24 vertices:
As you see, the bigger the circle becomes, the more vertices I need to hide the straight lines.
At first I thought that the minimum length of one line segment on the edge should be 6px, but that approach failed when I increased the circle size: I got way too many vertices. I also thought about calculating the angles, but I quickly realised that the angles don't differ between different sized circles. I also checked this answer, but I don't have a clue how to convert it into code (and there's some weird stuff there: th uses itself for calculating itself), and I think it doesn't even work, since the author is using the angle from one slice to the middle of the circle, which doesn't change if the circle gets larger.
Then I realised that maybe the solution is to check the angle between two vertices at the edges, in this way:
As you see, the fewer vertices, the bigger the lengths are for those triangles. So this has to be the answer, I just don't know how to calculate the number of vertices by using this information.
The answer you link to actually implements exactly the idea you propose at the end of your question.
The decisive formula that you need from that answer is this one:
th = arccos(2 * (1 - e / r)^2 - 1)
This tells you the angle between two vertices, where r is the radius of the circle and e is the maximum error you're willing to tolerate, i.e. the maximum deviation of your polygon from the circle -- this is the error marked in your diagram. For example, you might choose to set e to 0.5 of a pixel.
Because th is measured in radians, and 360 degrees (a full circle) is equal to 2*pi in radians, the number of vertices you need is
num_vertices = ceil(2*pi/th)
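A small sketch of that computation in plain C++ (names mine; maxError is the tolerated deviation e in pixels):

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

const double kPi = 3.14159265358979323846;

// Number of vertices needed so the inscribed regular polygon deviates from a
// circle of the given radius by at most maxError (both in pixels).
int circleVertexCount(double radius, double maxError) {
    if (maxError >= radius) return 3;   // degenerate case: error comparable to the radius
    double c = 1.0 - maxError / radius;
    double th = std::acos(2.0 * c * c - 1.0);
    return std::max(3, static_cast<int>(std::ceil(2.0 * kPi / th)));
}

// Vertices of the approximating polygon, centred at (cx, cy).
std::vector<Vec2> circleVertices(float cx, float cy, float radius, double maxError) {
    int n = circleVertexCount(radius, maxError);
    std::vector<Vec2> v;
    v.reserve(n);
    for (int i = 0; i < n; ++i) {
        double a = 2.0 * kPi * i / n;
        v.push_back({cx + radius * static_cast<float>(std::cos(a)),
                     cy + radius * static_cast<float>(std::sin(a))});
    }
    return v;
}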
In case you want to draw the polygon triangles from the center of the circle, the formula for the required number of sides is:
sides = PI / arccos(1 - error / radius)
where error is the maximum deviation of polygon from the circle, in pixels and radius is also expressed in pixels.
An error of 0.33 seems to produce results indistinguishable from an ideal circle. Circles drawn with an error of 0.5 still show, on close inspection, some subtly visible angles between sides, especially in small circles.
This function obviously breaks down when radius is much smaller than error, and may produce NaN values. You may want to use a special case (for example draw 3 sides) in this situation.
The graph below shows number of sides obtained from the function, with error set to 0.33:
First of all, if you are using OpenGL or DirectX you can significantly decrease the number of vertices by using a triangle fan structure.
As for the problem of the amount of vertices, I would imagine the number of vertices required for a smooth circle to scale with the circumference. This scales with r, so I would advise finding a good factor A such that:
#vertices = A * r
The angles are the same in the two cases of 24 vertices.
But with larger circle, the human eye is able to better see the individual straight lines.
So you need some heuristic that takes into account
the angle between two consecutive line segments in the curve, and
the size, and possibly
the scaling for display.
The third point is difficult since one does not in general know the size at which some graphic will be displayed. E.g., an SVG format picture can be displayed at any size. The most general solution, I think, is to have direct support for various figures (Bezier lines, circles, etc.) in the renderer, and then define the figure with a few parameters instead of as a sequence of points. Or, define it in terms of some figure that the renderer supports, e.g. as a sequence of connected Bezier curves. That way, the renderer can add the necessary number of points to make it look smooth and nice.
However, I guess that you're not creating a renderer, so then perhaps only the first two points above are relevant.

Are there algorithms for computing the bounding rects of sprites drawn on a monochrome background?

Imagine a plain rectangular bitmap of, say, 1024x768 pixels filled with white. There are a few (non-overlapping) sprites drawn onto the bitmap: circles, squares and triangles.
Is there an algorithm (possibly even a C++ implementation) which, given the bitmap and the color which is the background color (white, in the above example), yields a list containing the smallest bounding rectangles for each of the sprites?
Here's a sample: On the left side you can see a sample bitmap which my code is given (together with the information that the 'background' is white). On the right side you can see the same image together with the bounding rectangles of the four shapes (in red); the algorithm I'm looking for computes the geometry of these rectangles.
Some painting programs have a similar feature for selecting shapes: they can even compute seemingly arbitrary bounding polygons. Instead of dragging a selection rectangle manually, you can click the 'background' (what's background and what's not is determined by some threshold) and then the tool automatically computes the shape of the object drawn onto the background. I need something like this, except that I'm perfectly fine if I just have the rectangular bounding areas for objects.
I became aware of OpenCV; it appears to be relevant (it seems to be a library which includes every graphics algorithm I can think of - and then some), but in the vast amount of information I couldn't find the way to the algorithm I'm thinking of. I would be surprised if OpenCV couldn't do this, but I fear you've got to have a PhD to use it. :-)
Here is a great article on the subject:
http://softsurfer.com/Archive/algorithm_0107/algorithm_0107.htm
I think that PhD is not required here :)
These are my first thoughts, none of them complicated, except for the edge detection:
For each square
    if it's not white
        mark as "found"
        if you haven't found one next to it already
            add it to the points list
For each point in the points list
    use basic edge detection to find the outline
    keep track of the bounds while doing so
    add the bounds to the shapes list
Remove duplicates from the shapes list   // this can happen for concave shapes
I just realized this will consider white "holes" (like in your leftmost circle in your sample) to be their own shapes. If the first "loop" is a flood fill, it doesn't have this problem, but it will be much slower / take much more memory.
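For the flood-fill variant mentioned above, a hedged C++ sketch (my own names; the bitmap is a row-major bool grid with true = background/white):

#include <utility>
#include <vector>

struct Box { int minX, minY, maxX, maxY; };

// Finds the bounding rectangle of every connected non-background blob.
// 'background' is row-major, size w*h, true where the pixel is background.
std::vector<Box> findBoundingBoxes(const std::vector<bool>& background, int w, int h) {
    std::vector<bool> visited(background.size(), false);
    std::vector<Box> boxes;
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            if (background[y * w + x] || visited[y * w + x]) continue;
            // Iterative 4-connected flood fill, growing the bounds as we go.
            Box box{x, y, x, y};
            std::vector<std::pair<int, int>> stack{{x, y}};
            visited[y * w + x] = true;
            while (!stack.empty()) {
                auto [cx, cy] = stack.back();
                stack.pop_back();
                if (cx < box.minX) box.minX = cx;
                if (cx > box.maxX) box.maxX = cx;
                if (cy < box.minY) box.minY = cy;
                if (cy > box.maxY) box.maxY = cy;
                const int nx[4] = {cx + 1, cx - 1, cx, cx};
                const int ny[4] = {cy, cy, cy + 1, cy - 1};
                for (int k = 0; k < 4; ++k) {
                    if (nx[k] < 0 || nx[k] >= w || ny[k] < 0 || ny[k] >= h) continue;
                    int idx = ny[k] * w + nx[k];
                    if (!background[idx] && !visited[idx]) {
                        visited[idx] = true;
                        stack.push_back({nx[k], ny[k]});
                    }
                }
            }
            boxes.push_back(box);
        }
    }
    return boxes;
}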
The basic edge detection I was thinking of was simple:
given eight cardinal directions left, downleft, etc...
given two relative directions cw(direction-1) and ccw(direction+1)
starting with a point "begin"
set bounds to that point
find a direction d where begin+d is not white and begin+cw(d) is white
set cur to begin+d
do
    if cur is outside of bounds, increase bounds
    set d = cw(d)
    while (cur+d is white or cur+ccw(d) is not white)
        d = ccw(d)
    cur = cur + d
while (cur != begin)
There are quite a few edge cases not considered here: what if begin is a single point, what if the contour runs to the edge of the picture, what if the start point is only 1 px wide but has blobs on two sides, and probably others... But the basic algorithm isn't that complicated.