Incomplete Delaunay triangulation - c++

I'm using C++ and OpenCV to create a Delaunay triangle mesh from user-specified sample points on an image (which will then be extrapolated across the domain using the FEM for the relevant ODE).
Since the 4 corners of the (rectangular) image are in the list of vertices supplied to Subdiv2D, I expect the outer convex hull of the triangulation to trace the perimeter of the image. However, very frequently, there are missing elements around the outside.
Sometimes I can get the expected result by nudging the coordinates of certain points to avoid high aspect ratio triangles. But this is not a solution, as in general the user must be able to specify any valid coordinates.
An example output is like this: CV Output. Elements are in white with black edges. At the bottom and right edges, no triangles have been added, and you can see through to the black background.
How can I make the outer convex hull of the triangulation trace the image perimeter with no gaps please?
Here is a MWE (with a plotting function included):
#include <opencv2/opencv.hpp>
#include <vector>
void DrawDelaunay(cv::Mat& image,cv::Subdiv2D& subdiv);
int main(int argc,char** argv)
{
// image dim
int width=3440;
int height=2293;
// sample coords
std::vector<int> x={0,width-1,width-1,0,589,1015,1674,2239,2432,3324,2125,2110,3106,3295,1298,1223,277,208,54,54,1749,3245,431,1283,1397,3166};
std::vector<int> y={0,0,height-1,height-1,2125,1739,1154,817,331,143,1377,2006,1952,1501,872,545,812,310,2180,54,2244,2234,1387,1412,118,1040};
// add delaunay nodes
cv::Rect rect(0,0,width,height);
cv::Subdiv2D subdiv(rect);
for(size_t i=0;i<x.size();++i)
{
cv::Point2f p(x[i],y[i]);
subdiv.insert(p);
}
// draw elements
cv::Mat image=cv::Mat::zeros(height,width,CV_8U); // start from a black background
DrawDelaunay(image,subdiv);
cv::resize(image,image,cv::Size(),0.3,0.3);
cv::imshow("Delaunay",image);
cv::waitKey(0);
return 0;
}
void DrawDelaunay(cv::Mat& image,cv::Subdiv2D& subdiv)
{
std::vector<cv::Vec6f> elements;
subdiv.getTriangleList(elements);
std::vector<cv::Point> pt(3);
for(size_t i=0;i<elements.size();++i)
{
// node coords
cv::Vec6f t=elements[i];
pt[0]=cv::Point(cvRound(t[0]),cvRound(t[1]));
pt[1]=cv::Point(cvRound(t[2]),cvRound(t[3]));
pt[2]=cv::Point(cvRound(t[4]),cvRound(t[5]));
// element edges
cv::Scalar black(0,0,0);
cv::line(image,pt[0],pt[1],black,3);
cv::line(image,pt[1],pt[2],black,3);
cv::line(image,pt[2],pt[0],black,3);
// element fill
int nump=3;
const cv::Point* pp[1]={&pt[0]};
cv::fillPoly(image,pp,&nump,1,cv::Scalar(255,0,0));
}
}
If relevant, I coded this in Matlab first where the Delaunay triangulation worked exactly as I expected.

My solution was to add a border around the 'cv::Rect rect' provided to cv::Subdiv2D, making it larger in width and height than the image (20% larger seems to work well).
Then, instead of adding nodes at the corners of the image, I added 4 corner nodes and 4 edge nodes on the perimeter of this enlarged 'cv::Rect rect' that defines the Delaunay domain.
This seems to solve the problem. I think what was happening was that if the user placed any samples near the edge of the image, it resulted in high aspect ratio triangles at the edges. This ticket suggests there is a bug around this in the OpenCV implementation of the Delaunay algorithm.
My solution hopefully means that corner and edge nodes are never too close to user samples, side-stepping the issue.
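For reference, here is a rough sketch of this approach (the helper name, the 20% margin and the exact placement of the frame nodes are my own choices, and I have only lightly tested it):
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

cv::Subdiv2D MakeSubdiv(int width, int height, const std::vector<cv::Point2f>& samples)
{
    // grow the Delaunay domain by ~20% of the larger image dimension on every side
    const int margin = (int)(0.2 * std::max(width, height));
    cv::Rect rect(-margin, -margin, width + 2 * margin, height + 2 * margin);
    cv::Subdiv2D subdiv(rect);

    const float x0 = (float)rect.x,                    y0 = (float)rect.y;
    const float x1 = (float)(rect.x + rect.width - 1), y1 = (float)(rect.y + rect.height - 1); // stay strictly inside rect
    const float xm = (float)(rect.x + rect.width / 2), ym = (float)(rect.y + rect.height / 2);

    // 4 corner nodes and 4 edge-midpoint nodes on the enlarged perimeter,
    // instead of nodes on the image corners themselves
    const cv::Point2f frame[8] = { {x0, y0}, {x1, y0}, {x1, y1}, {x0, y1},
                                   {xm, y0}, {x1, ym}, {xm, y1}, {x0, ym} };
    for (int i = 0; i < 8; ++i) subdiv.insert(frame[i]);

    // user-specified sample points
    for (size_t i = 0; i < samples.size(); ++i) subdiv.insert(samples[i]);
    return subdiv;
}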
I haven't tested this extensively yet. I'm not sure how robust the solution will turn out to be. It has worked so far.
I'm still interested to know of other solutions.

I ran your data points through the Tinfour project's demo application and got the results shown below. It looks like your data is fine. Unfortunately, the Tinfour project is written in Java and you're working in C++, so it will have limited value to you.
Since you plan on using Finite Element Methods, you might want to see whether there is any way you can run a Delaunay Refinement operation over your data to improve the geometry. The skinny triangles sometimes lead to numerical issues when using FEM software.

Related

cv::detail::MultiBandBlender strange white streaks at the end of the photo

I'm working with OpenCV 3.4.8 with C++11 and I'm trying to blend images together.
In this example I have 2 images (their masks are shown in the screenshot below). I have georeferencing, so I can easily calculate the corners of these images in the final image.
The data outside the masks are black.
My code looks like something like that:
std::vector<cv::UMat> inputImages;
std::vector<cv::UMat> masks;
std::vector<cv::Point> corners;
std::vector<cv::Size> imgSizes;
/*
here is code where I load images, create thier masks
(like in the screen above) and calculate corners.
*/
cv::Ptr<cv::detail::SeamFinder> seamFinder = new cv::detail::DpSeamFinder();
seamFinder->find(inputImages, corners, masks);
cv::Ptr<cv::detail::Blender> blender = new cv::detail::MultiBandBlender(false);
blender->prepare(corners, imgSizes);
for(size_t i = 0; i < inputImages.size(); i++)
{
blender->feed(inputImages[i], masks[i], corners[i]);
}
cv::UMat blendedImg, outMask;
blender->blend(blendedImg, outMask);
SeamFinder gives me a result like in the screenshot above. The seam lines it finds look good and I'm very satisfied with them. But another problem occurs in the next step. The MultiBandBlender makes strange white streaks where the seam line runs along the edge of the data.
This is an example:
When I don't use the blender, but just use the masks to cut the original images and add (cv::add()) them together with an additional alpha channel (made from the masks), I get very good results without any holes or strange colors, but I need a smoother transition :/
Can anyone help me? When I create the MultiBandBlender with a smaller num_bands the white streaks are smaller, and with num_bands = 0 the result looks like simply adding the images.
I looked at the feed() and blend() methods in the MultiBandBlender and I think it is connected with the Gaussian and Laplacian pyramids and the final restoration of images from the Laplacian pyramid in the blend() method.
EDIT1:
When the Gaussian and Laplacian pyramids are created, copyMakeBorder() is used, which prevents the MultiBandBlender from producing these white streaks when the images are fully filled with data. So in my case I think I need to create a blender almost the same as MultiBandBlender, but change the copyMakeBorder() call in the feed() method to something that will "extend" my image from inside the mask, as #AlexanderKondratskiy suggested.
Now I don't know how to achieve a correct "extend" similar to BORDER_REFLECT or BORDER_REFLECT_101.
I suspect your input images contain white pixels outside those masks. The white banding occurs around the areas where the seam follows the mask exactly. For the Laplacian pyramid, for example, pixels outside the mask do influence the final result, as each layer of a pyramid is essentially a blurring kernel applied to the image.
If you have some kind of good data outside the mask, keep it. If you do not, I suggest "extending" your image beyond the mask to maintain a smooth transition.
Edit:
Here's two things you could try, unless someone with more experience with OpenCV comes along.
To prove/disprove my hypothesis, fill the black region with just the average or median color within the mask. This should make the transition to the outside region less sharp, and hopefully reduce the artefacts. If that does not happen, my answer is wrong.
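For example, a minimal sketch (using cv::Mat for brevity; the same calls also accept cv::UMat, and 'mask' is assumed to be 8-bit, non-zero inside the valid region):
cv::Scalar avg = cv::mean(image, mask);    // average colour inside the mask
cv::Mat outsideMask;
cv::bitwise_not(mask, outsideMask);        // region outside the mask
image.setTo(avg, outsideMask);             // paint the outside region with the average colour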
In terms of what is probably a good generalization of "BORDER_REFLECT" when the edge is arbitrary, you could try something like this:
Find the centroid c of the mask polygon
For each pixel p outside the mask, think of the line between it and c
Calculate point p' along this line that is the same distance inside the mask area, as p is from the mask edge. (i.e. you're reflecting along the mask edge)
Linearly interpolate the color from the neighbors of p' (as its position may not fall exactly on a pixel center). That's the color of pixel p
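A crude per-pixel sketch of those four steps (my own, slow and only lightly tested; it uses nearest-neighbour lookup instead of linear interpolation and assumes an 8-bit 3-channel image with a single mask region):
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>

// for every pixel outside the mask, copy the colour of its "reflection" across the
// mask edge, along the line towards the mask centroid
void extendBeyondMask(cv::Mat& image, const cv::Mat& mask)   // CV_8UC3 image, CV_8UC1 mask
{
    cv::Moments m = cv::moments(mask, true);
    cv::Point2d c(m.m10 / m.m00, m.m01 / m.m00);             // centroid of the mask

    cv::Mat result = image.clone();
    for (int y = 0; y < image.rows; ++y)
        for (int x = 0; x < image.cols; ++x)
        {
            if (mask.at<uchar>(y, x)) continue;               // inside the mask: keep as is

            cv::Point2d p(x, y);
            cv::Point2d dir = c - p;
            double len = std::sqrt(dir.x * dir.x + dir.y * dir.y);
            if (len < 1.0) continue;
            dir *= 1.0 / len;

            // walk towards the centroid until we cross the mask edge
            double t = 0.0;
            while (t < len && !mask.at<uchar>(cv::Point(p + dir * t))) t += 1.0;

            // reflect: go the same distance again, now inside the mask
            cv::Point q(p + dir * std::min(2.0 * t, len - 1.0));
            if (mask.at<uchar>(q))
                result.at<cv::Vec3b>(y, x) = image.at<cv::Vec3b>(q);
        }
    result.copyTo(image);
}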

How can I detect the position and the radius of the ball using opencv?

I need to detect this ball: and find its position and radius using opencv. I have downloaded many code samples, but none of them works. Any help is highly appreciated.
I see you have quite a setup installed. As mentioned in the comments, please make sure that you have appropriate lighting to capture the ball, as well as making the ball distinguishable from its surroundings by painting it a different colour.
Once your setup is optimized for detection, you may proceed via different ways to track your ball (stationary or not). A few ways may be:
Feature detection: via Hough Circles, detect 2D circles (and their radii) whose pixels lie within a certain color range, as explained below.
There are many more ways to detect objects via feature detection, as this clever blog points out.
Object detection: via SURF, SIFT and many other methods, you may detect your ball, calculate its radius and even predict its motion.
This code uses Hough Circles to compute the ball position, display it in real time and calculate its radius in real time. I am using Qt 5.4 with OpenCV version 2.4.12
void Dialog::TrackMe() {
webcam.read(cim); /*call the read method of the webcam class to take in the live feed from the webcam and store each frame in the OpenCV matrix 'cim'*/
if(cim.empty()==false) /*if there is something stored in cim, ie the webcam is running and there is some form of input*/ {
cv::inRange(cim,cv::Scalar(0,0,175),cv::Scalar(100,100,256),cproc);
/* keep the pixels of cim whose BGR values lie between (0,0,175) and (100,100,256), and store the binary result in the matrix cproc */
cv::HoughCircles(cproc,veccircles,CV_HOUGH_GRADIENT,2,cproc.rows/4,100,50,10,100);
/* take cproc, write the detected circles to the vector veccircles, using the CV_HOUGH_GRADIENT method */
for(itcircles=veccircles.begin(); itcircles!=veccircles.end(); itcircles++)
{
cv::circle(cim,cv::Point((int)(*itcircles)[0],(int)(*itcircles)[1]), 3, cv::Scalar(0,255,0), CV_FILLED); //create center point
cv::circle(cim,cv::Point((int)(*itcircles)[0],(int)(*itcircles)[1]), (int)(*itcircles)[2], cv::Scalar(0,0,255),3); //create circle
}
QImage qimgprocess((uchar*)cproc.data,cproc.cols,cproc.rows,cproc.step,QImage::Format_Indexed8); //convert cv::Mat to Qimage
ui->output->setPixmap(QPixmap::fromImage(qimgprocess));
/*render QImage to screen*/
}
else
return; /*no input, return to calling function*/
}
How does the processing take place?
Once you start taking in live input of your ball, the captured frame should be able to show where the ball is. To do so, the captured frame is divided into buckets, which are further divided into grids. Within each grid, an edge is detected (if it exists) and thus a circle is detected. However, only those circles that pass through the grids that lie within the range mentioned above (in cv::Scalar) are considered. Thus, for every circle that passes through a grid that lies in the specified range, a number corresponding to that grid is incremented. This is known as voting.
Each grid then stores its votes in an accumulator grid. Here, 2 is the accumulator ratio. This means that the accumulator matrix will have half the resolution of the input image cproc. After voting, we can find local maxima in the accumulator matrix. The positions of the local maxima correspond to the circle centers in the original space.
cproc.rows/4 is the minimum distance between centers of the detected circles.
100 and 50 are respectively the higher and lower threshold passed to the canny edge function, which basically detects edges only between the mentioned thresholds
10 and 100 are the minimum and maximum radius to be detected. Anything above or below these values will not be detected.
Now, the for loop processes each circle detected in the frame and stored in veccircles. It draws the detected circle and its center point onto the frame.
For the above, you may visit this link
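For completeness, here is a minimal, Qt-free sketch of the same pipeline (using OpenCV 3+ constant names; the webcam is assumed to be device index 0, and the colour thresholds are copied from the code above and will likely need tuning for your lighting and ball colour):
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap(0);                  // default webcam (assumed to be device 0)
    cv::Mat frame, red;
    std::vector<cv::Vec3f> circles;
    while (cap.read(frame))
    {
        // keep mostly-red pixels (BGR order), then vote for circles in that binary image
        cv::inRange(frame, cv::Scalar(0, 0, 175), cv::Scalar(100, 100, 256), red);
        cv::HoughCircles(red, circles, cv::HOUGH_GRADIENT, 2, red.rows / 4, 100, 50, 10, 100);
        for (size_t i = 0; i < circles.size(); ++i)
        {
            cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
            int radius = cvRound(circles[i][2]);
            cv::circle(frame, center, 3, cv::Scalar(0, 255, 0), cv::FILLED);   // center point
            cv::circle(frame, center, radius, cv::Scalar(0, 0, 255), 3);       // detected circle
        }
        cv::imshow("ball", frame);
        if (cv::waitKey(1) == 27) break;      // press Esc to quit
    }
    return 0;
}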

OpenCV - approxPolyDP for edge maps (not contours)

I have successfully applied the method cv::approxPolyDP on contours (cv::findContours), in order to represent a contour with a simpler polygon and implicitly do some denoising.
I would like to do the same thing on an edge map acquired from an RGBD camera (which is in general very noisy), but with not much success so far, and I cannot find relevant examples online. The reason I need this is that with an edge map one can also use the edges between fingers, edges created by finger occlusion, or edges created in the palm.
Is this method applicable to general edge maps, other than contours?
Could someone pinpoint me to an example?
Some images attached:
Successful example for contours:
Problematic case for edge maps:
Most probably I draw things in the wrong way, but drawing just the pixels returned by the method shows that probably large areas are not represented in the end result (and this doesn't change much according to the epsilon-parameter).
I also attach a depth image, similar to the ones I use in the experimental pipeline described above. This depth image was not acquired by a depth camera, but was synthetically generated by reading the depth buffer of the GPU, using OpenGL.
Just for reference, this is also the edge map of the depth image acquired straight from the depth camera (using the raw image, no smoothing etc applied)
(hand as viewed from a depth camera, palm facing upwards, fingers "closing" towards the palm)
Your issue with approxPolyDP is due to the formatting of the input into approxPolyDP.
Explanation
approxPolyDP expects its input to be a vector of Points. These points define a polygonal curve that will be processed by approxPolyDP. The curve could be open or closed, which can be controlled by a flag.
The ordering of the points in the list is important. Just as one traces out a polygon by hand, each subsequent point in the vector must be the next vertex of the polygon, clockwise or counter-clockwise.
If the list of points is stored in raster order (sorted by Y and then X), then the point[k] and point[k+1] do not necessarily belong to the same curve. This is the cause of the problem.
This issue is explained with illustrations in OpenCV - How to extract edges from the result of the Canny function?. Quote from Mikhail: "Canny doesn't connect pixels into chains or segments."
Illustration of "raster order" that is generated by Canny.
Illustration of "contour order" that is expected by approxPolyDP
What is needed
What you need is a list of "chains of edge pixels". Each chain must contain edge pixels that are adjacent to each other, just like someone tracing out an object's outline by a pencil, without the tip of the pencil leaving the paper.
This is not what is returned from edge detection methods, such as Canny. Further processing is needed to convert an edge map into chains of adjacent (continuous) edge pixels.
Suggested solutions
(1) Use binary threshold instead of edge detection as the input to findContours
This would be applicable if there exists a threshold value that separates the hand from the background, and that this value works for the whole hand (not just part of the hand).
(2) Scan the edge map, and build the list of adjacent pixels by examining the neighbors of each edge pixel.
This is similar to the connected-components algorithm, except that instead of finding a blob (where you only need to know each pixel's membership), you try to find chains of pixels such that you can tell the previous and next edge pixels along the chain. (A rough sketch of this option is given after the option #1 code below.)
(3) Use an alternative edge detection algorithm, such as Edge Drawing.
Details can be found at http://ceng.anadolu.edu.tr/cv/EdgeDrawing/
Unfortunately, this is not provided out-of-the-box from OpenCV, so you may have to find an implementation elsewhere.
Sample code for option #1.
#include <stdint.h>
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>
using namespace std;
using namespace cv;
int main()
{
Mat matInput = imread("~/Data/mA9EE.png", false); // flag false (0) loads the image as grayscale
// ---- Preprocessing of depth map. (Optional.) ----
GaussianBlur(matInput, matInput, cv::Size(9, 9), 4.0);
// ---- Here, we use cv::threshold instead of cv::Canny as explained above ----
Mat matEdge;
//Canny(matInput, matEdge, 0.1, 1.0);
threshold(matInput, matEdge, 192.0, 255.0, THRESH_BINARY_INV);
// ---- Use findContours to find chains of consecutive edge pixels ----
vector<vector<Point> > contours;
findContours(matEdge, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
// ---- Code below is only used for visualizing the result. ----
Mat matContour = Mat::zeros(matEdge.size(), CV_8UC1); // start from an all-black image
for (size_t k = 0; k < contours.size(); ++k)
{
const vector<Point>& contour = contours[k];
for (size_t k2 = 0; k2 < contour.size(); ++k2)
{
const Point& p = contour[k2];
matContour.at<uint8_t>(p) = 255;
}
}
imwrite("~/Data/output.png", matContour);
cout << "Done!" << endl;
return 0;
}
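Rough sketch for option #2. This is my own helper (not an OpenCV function) and only lightly thought through: it greedily walks the edge map and groups 8-connected edge pixels into ordered chains, each of which can then be passed to approxPolyDP (with closed = false).
#include <opencv2/opencv.hpp>
#include <vector>

// edges: CV_8UC1 edge map, non-zero = edge pixel
std::vector<std::vector<cv::Point> > chainEdgePixels(const cv::Mat& edges)
{
    cv::Mat visited = cv::Mat::zeros(edges.size(), CV_8UC1);
    std::vector<std::vector<cv::Point> > chains;
    const int dx[8] = { 1, 1, 0, -1, -1, -1, 0, 1 };
    const int dy[8] = { 0, 1, 1, 1, 0, -1, -1, -1 };

    for (int y = 0; y < edges.rows; ++y)
        for (int x = 0; x < edges.cols; ++x)
        {
            if (!edges.at<uchar>(y, x) || visited.at<uchar>(y, x)) continue;

            // start a new chain and extend it while an unvisited 8-connected neighbour exists
            std::vector<cv::Point> chain;
            cv::Point p(x, y);
            while (true)
            {
                visited.at<uchar>(p) = 255;
                chain.push_back(p);
                bool extended = false;
                for (int k = 0; k < 8; ++k)
                {
                    cv::Point q(p.x + dx[k], p.y + dy[k]);
                    if (q.x < 0 || q.y < 0 || q.x >= edges.cols || q.y >= edges.rows) continue;
                    if (edges.at<uchar>(q) && !visited.at<uchar>(q)) { p = q; extended = true; break; }
                }
                if (!extended) break;
            }
            if (chain.size() > 5) chains.push_back(chain);    // drop tiny fragments
        }
    return chains;
}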

OpenCV, how to use arrays of points for smoothing and sampling contours?

I have a problem to get my head around smoothing and sampling contours in OpenCV (C++ API).
Let's say I have a sequence of points retrieved from cv::findContours (for instance applied on this image):
Ultimately, I want
To smooth a sequence of points using different kernels.
To resize the sequence using different types of interpolations.
After smoothing, I hope to have a result like :
I also considered drawing my contour in a cv::Mat, filtering the Mat (using blur or morphological operations) and re-finding the contours, but this is slow and suboptimal. So, ideally, I could do the job using exclusively the point sequence.
I read a few posts on it and naively thought that I could simply convert a std::vector of cv::Point to a cv::Mat and then OpenCV functions like blur/resize would do the job for me... but they did not.
Here is what I tried:
int main( int argc, char** argv ){
cv::Mat conv,ori;
ori=cv::imread(argv[1]);
ori.copyTo(conv);
cv::cvtColor(ori,ori,CV_BGR2GRAY);
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i > hierarchy;
cv::findContours(ori, contours,hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE);
for(int k=0;k<100;k += 2){
cv::Mat smoothCont;
smoothCont = cv::Mat(contours[0]);
std::cout<<smoothCont.rows<<"\t"<<smoothCont.cols<<std::endl;
/* Try smoothing: no modification of the array*/
// cv::GaussianBlur(smoothCont, smoothCont, cv::Size(k+1,1),k);
/* Try sampling: "Assertion failed (func != 0) in resize"*/
// cv::resize(smoothCont,smoothCont,cv::Size(0,0),1,1);
std::vector<std::vector<cv::Point> > v(1);
smoothCont.copyTo(v[0]);
cv::drawContours(conv,v,0,cv::Scalar(255,0,0),2,CV_AA);
std::cout<<k<<std::endl;
cv::imshow("conv", conv);
cv::waitKey();
}
return 1;
}
Could anyone explain how to do this ?
In addition, since I am likely to work with much smaller contours, I was wondering how this approach would deal with border effect (e.g. when smoothing, since contours are circular, the last elements of a sequence must be used to calculate the new value of the first elements...)
Thank you very much for your advice.
Edit:
I also tried cv::approxPolyDP() but, as you can see, it tends to preserve extremal points (which I want to remove):
Epsilon=0
Epsilon=6
Epsilon=12
Epsilon=24
Edit 2:
As suggested by Ben, it seems that cv::GaussianBlur() is not supported but cv::blur() is. It looks very much closer to my expectation. Here are my results using it:
k=13
k=53
k=103
To get around the border effect, I did:
cv::copyMakeBorder(smoothCont,smoothCont, (k-1)/2,(k-1)/2 ,0, 0, cv::BORDER_WRAP);
cv::blur(smoothCont, result, cv::Size(1,k),cv::Point(-1,-1));
result.rowRange(cv::Range((k-1)/2,1+result.rows-(k-1)/2)).copyTo(v[0]);
I am still looking for solutions to interpolate/sample my contour.
Your Gaussian blurring doesn't work because you're blurring in the column direction, but there is only one column. Using GaussianBlur() leads to a "feature not implemented" error in OpenCV when trying to copy the vector back to a cv::Mat (that's probably why you have this strange resize() in your code), but everything works fine using cv::blur(), no need to resize(). Try Size(0,41) for example. Using cv::BORDER_WRAP for the border issue doesn't seem to work either, but here is another thread where someone found a workaround for that.
Oh... one more thing: you said that your contours are likely to be much smaller. Smoothing your contour that way will shrink it. The extreme case is k = size_of_contour, which results in a single point. So don't choose your k too big.
Another possibility is to use the algorithm openFrameworks uses:
https://github.com/openframeworks/openFrameworks/blob/master/libs/openFrameworks/graphics/ofPolyline.cpp#L416-459
It traverses the contour and essentially applies a low-pass filter using the points around each one. It should do exactly what you want with low overhead (there's no reason to run a big filter over an image that's essentially just a contour).
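For reference, a small sketch of the same idea (my own re-implementation, not the openFrameworks code): each point is replaced by the average of its 2*halfWindow+1 neighbours, with wrap-around indexing so the closed contour has no border effect.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Point> smoothContour(const std::vector<cv::Point>& contour, int halfWindow)
{
    const int n = (int)contour.size();
    const int w = 2 * halfWindow + 1;                              // window size
    std::vector<cv::Point> smoothed(n);
    for (int i = 0; i < n; ++i)
    {
        double sx = 0, sy = 0;
        for (int k = -halfWindow; k <= halfWindow; ++k)
        {
            const cv::Point& p = contour[((i + k) % n + n) % n];   // circular indexing (closed contour)
            sx += p.x;
            sy += p.y;
        }
        smoothed[i] = cv::Point(cvRound(sx / w), cvRound(sy / w));
    }
    return smoothed;
}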
How about approxPolyDP()?
It uses this algorithm to 'smooth' a contour (basically getting rid of most of the contour's points and keeping the ones that represent a good approximation of your contour).
From 2.1 OpenCV doc section Basic Structures:
template<typename T>
explicit Mat::Mat(const vector<T>& vec, bool copyData=false)
You probably want to set 2nd param to true in:
smoothCont = cv::Mat(contours[0]);
and try again (this way cv::GaussianBlur should be able to modify the data).
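That is, something like:
smoothCont = cv::Mat(contours[0], true);   // copyData = true, so the Mat owns its own buffer and filters can modify it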
I know this was written a long time ago, but did you try a big erode followed by a big dilate (opening), and then find the contours? It looks like a simple and fast solution, and I think it could work, at least to some degree.
Basically, sudden changes in the contour correspond to high-frequency content. An easy way to smooth your contour would be to find the Fourier coefficients, treating the coordinates as points in the complex plane (x + iy), and then eliminate the high-frequency coefficients.
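A sketch of that idea using cv::dft (the helper name and parameter are my own; it assumes a closed contour and that 'keep' is well below half the number of points):
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Point> fourierSmooth(const std::vector<cv::Point>& contour, int keep)
{
    const int n = (int)contour.size();
    cv::Mat signal(n, 1, CV_32FC2);                          // each point becomes x + iy
    for (int i = 0; i < n; ++i)
        signal.at<cv::Vec2f>(i) = cv::Vec2f((float)contour[i].x, (float)contour[i].y);

    cv::Mat spectrum;
    cv::dft(signal, spectrum);                               // 1D complex DFT (n x 1 input)

    // zero everything except the 'keep' lowest positive and negative frequencies
    for (int i = keep; i < n - keep; ++i)
        spectrum.at<cv::Vec2f>(i) = cv::Vec2f(0.f, 0.f);

    cv::Mat smoothed;
    cv::dft(spectrum, smoothed, cv::DFT_INVERSE | cv::DFT_SCALE);

    std::vector<cv::Point> result(n);
    for (int i = 0; i < n; ++i)
    {
        cv::Vec2f v = smoothed.at<cv::Vec2f>(i);
        result[i] = cv::Point(cvRound(v[0]), cvRound(v[1]));
    }
    return result;
}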
My take ... many years later ...!
Maybe two easy ways to do it:
Loop a few times with dilate, blur, erode, and find the contours on that updated shape. I found 6-7 iterations give good results.
Create a bounding box of the contour, and draw an ellipse inside the bounding rectangle.
Adding the visual results below:
This works for me. The edges are smoother than before:
medianBlur(mat, mat, 7)
morphologyEx(mat, mat, MORPH_OPEN, getStructuringElement(MORPH_RECT, Size(12.0, 12.0)))
val contours = getContours(mat)
This is opencv4android code.

Robustly find N circles with the same diameter: alternative to bruteforcing Hough transform threshold

I am developing an application to track small animals in Petri dishes (or other circular containers).
Before any tracking takes place, the first few frames are used to define areas.
Each dish will correspond to a circular, independent, static area (i.e. it will not be updated during tracking).
The user can request the program to try to find dishes from the original image and use them as areas.
Here are examples:
In order to perform this task, I am using Hough Circle Transform.
But in practice, different users will have very different settings and images and I do not want to ask the user to manually define the parameters.
I cannot just guess all the parameters either.
However, I have additional information that I would like to use:
I know the exact number of circles to be detected.
All the circles have almost the same dimensions.
The circles cannot overlap.
I have a rough idea of the minimal and maximal size of the circles.
The circles must be entirely in the picture.
I can therefore narrow down the number of parameters to define to one: the threshold.
Using this information, and considering that I have N circles to find, my current solution is to test many threshold values and keep the set of circles whose radii are the most similar, i.e. have the smallest standard deviation (since all the circles should have a similar size):
//at this point, minRad and maxRad were calculated from the size of the image and the number of circles to find.
//assuming circles should altogether fill more than 1/3 of the images but cannot be altogether larger than the image.
//N is the integer number of circles to find.
//img is the picture of the scene (filtered).
//the vectors containing the detected circles and the --so far-- best circles found.
std::vector<cv::Vec3f> circles, bestCircles;
//the score of the --so far-- best set of circles
double bestSsem = 0;
for(int t=5; t<400 ; t=t+2){
//Apply Hough Circles with the threshold t
cv::HoughCircles(img, circles, CV_HOUGH_GRADIENT, 3, minRad*2, t,3, minRad, maxRad );
if(circles.size() >= N){
//call a routine to give a score to this set of circles according to the similarity of their radii
double ssem = scoreSetOfCircles(circles,N);
//if no circles are recorded yet, or if the score of this set of circles is higher than the former best
if( bestCircles.size() < N || ssem > bestSsem){
//this set become the temporary best set of circles
bestCircles=circles;
bestSsem=ssem;
}
}
}
With:
//the method to assess how good a set of circles is (the more similar the radii, the higher the score)
double scoreSetOfCircles(std::vector<cv::Vec3f> circles, int N){
double sum = 0;
for(int j=0;j<N;j++){
sum = sum + circles[j][2];
}
double mean = sum/N;
//sum of squared deviations of the radii from their mean
double sse = 0;
for(int j=0;j<N;j++){
double em = mean - circles[j][2];
sse = sse + em*em;
}
//invert so that more similar radii give a higher score
return 1/sse;
}
I have reached a higher accuracy by performing a second pass in which I repeated this algorithm narrowing the [minRad:maxRad] interval using the result of the first pass.
For instance minRad2 = 0.95 * average radius of best circles and maxRad2 = 1.05 * average radius of best circles.
I had fairly good results using this method so far. However, it is slow and rather dirty.
My questions are:
Can you think of any alternative algorithm to solve this problem in a cleaner/faster manner?
Or what would you suggest to improve this algorithm?
Do you think I should investigate generalised Hough transform ?
Thank you for your answers and suggestions.
The following approach should work pretty well for your case:
Binarize your image (you might need to do this at several threshold levels to make the algorithm independent of the lighting conditions)
Find contours
For each contour calculate the moments
Filter them by area to remove too small contours
Filter contours by circularity:
double area = moms.m00;
double perimeter = arcLength(Mat(contours[contourIdx]), true);
double ratio = 4 * CV_PI * area / (perimeter * perimeter);
ratio close to 1 will give you circles.
Calculate radius and center of each circle
center = Point2d(moms.m10 / moms.m00, moms.m01 / moms.m00);
And you can add more filters to improve the robustness.
Actually you can find an implementation of the whole procedure in OpenCV. Look how the SimpleBlobDetector class and findCirclesGrid function are implemented.
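Putting the steps above together, a rough sketch (using OpenCV 3+ constant names; the threshold of 128, the minArea cut-off and the 0.8 circularity cut-off are placeholders you will need to tune for your images):
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

struct Circle { cv::Point2d center; double radius; };

std::vector<Circle> findDishes(const cv::Mat& gray, double minArea)
{
    cv::Mat bin;
    cv::threshold(gray, bin, 128, 255, cv::THRESH_BINARY);           // or loop over several thresholds

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(bin, contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);

    std::vector<Circle> circles;
    for (size_t i = 0; i < contours.size(); ++i)
    {
        cv::Moments moms = cv::moments(contours[i]);
        double area = moms.m00;
        if (area < minArea) continue;                                 // too small

        double perimeter = cv::arcLength(contours[i], true);
        double ratio = 4 * CV_PI * area / (perimeter * perimeter);    // 1.0 = perfect circle
        if (ratio < 0.8) continue;                                    // not circular enough

        Circle c;
        c.center = cv::Point2d(moms.m10 / moms.m00, moms.m01 / moms.m00);
        c.radius = std::sqrt(area / CV_PI);                           // radius implied by the area
        circles.push_back(c);
    }
    return circles;
}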
Within the current algorithm, the biggest thing that sticks out is the for(int t=5; t<400; t=t+2) loop. Try recording score values for some test images. Graph score(t) versus t. With any luck, it will either suggest a smaller range for t or be a smoothish curve with a single maximum. In the latter case you can change your loop over all t values into a smarter search using hill-climbing methods.
Even if it's fairly noisy, you can first loop over multiples of, say, 30, and for the best 1 or 2 of those loop over nearby multiples of 2.
Also, in your score function, you should disqualify any results with overlapping circles and maybe penalize overly spaced out circles.
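A sketch of that coarse-to-fine search (reusing scoreSetOfCircles and the img/minRad/maxRad/N variables from your question; the step sizes of 30 and 2 are the ones suggested above):
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

double scoreSetOfCircles(std::vector<cv::Vec3f> circles, int N);   // from the question

std::vector<cv::Vec3f> findBestCircles(const cv::Mat& img, int N, int minRad, int maxRad)
{
    std::vector<cv::Vec3f> circles, bestCircles;
    double bestSsem = 0;
    int bestT = -1;

    // coarse pass: thresholds that are multiples of 30
    for (int t = 30; t < 400; t += 30)
    {
        cv::HoughCircles(img, circles, CV_HOUGH_GRADIENT, 3, minRad * 2, t, 3, minRad, maxRad);
        if ((int)circles.size() < N) continue;
        double ssem = scoreSetOfCircles(circles, N);
        if (ssem > bestSsem) { bestSsem = ssem; bestT = t; bestCircles = circles; }
    }

    // fine pass: steps of 2 around the best coarse threshold
    if (bestT > 0)
        for (int t = std::max(5, bestT - 28); t <= bestT + 28; t += 2)
        {
            cv::HoughCircles(img, circles, CV_HOUGH_GRADIENT, 3, minRad * 2, t, 3, minRad, maxRad);
            if ((int)circles.size() < N) continue;
            double ssem = scoreSetOfCircles(circles, N);
            if (ssem > bestSsem) { bestSsem = ssem; bestCircles = circles; }
        }

    return bestCircles;
}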
You don't explain why you are using a black background. Unless you are using a telecentric lens (which seems unlikely, given the apparent field of view), and ignoring radial distortion for the moment, the images of the dishes will be ellipses, so estimating them as circles may lead to significant errors.
All in all, it doesn't seem to me that you are following a good approach. If the goal is simply to remove the background so you can track the bugs inside the dishes, then your goal should be just that: find which pixels are background and mark them. The easiest way to do that is to take a picture of the background without dishes, under the same illumination and camera, and directly detect differences with the picture containing the dishes. A colored background would be preferable for that, with a color unlikely to appear in the dishes (e.g. green or blue velvet). You would then have reduced the problem to bluescreening (or chroma keying), a classic technique in machine vision as applied to visual effects. Do a google search for "matte petro vlahos assumption" to find classic algorithms for solving this problem.