I've been struggling with a small problem for some time now and just can't figure out what is wrong.
So I have a black 126 x 126 image with a 1-pixel blue border ([B,G,R] = [255, 0, 0]).
What I want is the pixel which is furthest away from all blue pixels (such as the border). I understand how this is done: iterate through every pixel; if it is black, compute its distance to every blue pixel, keeping the minimum; then select the black pixel with the largest minimum distance to any blue pixel.
Note: I don't actually need the true distance, so when computing the sum of squares for the distance I don't take the square root; I only want to know which distance is larger (less expensive).
First thing I do is loop through every pixel and if it is blue, add the row and column to a vector. I can confirm this part works correctly. Next, I loop through all pixels again and compare every black pixel's distance to every pixel in the blue pixel vector.
Here blue is a vector of Blue objects (each holding a row and column) and image is the image being processed:
int distance;
int localShortest = 0;
int bestDist = 0;
int posX = 0;
int posY = 0;

for (int i = 0; i < image.rows; i++)
{
    for (int j = 0; j < image.cols; j++)
    {
        // Make sure the pixel is black
        if (image.at<cv::Vec3b>(i,j)[0] == 0
            && image.at<cv::Vec3b>(i,j)[1] == 0
            && image.at<cv::Vec3b>(i,j)[2] == 0)
        {
            for (int k = 0; k < blue.size(); k++)
            {
                // Squared distance between the pixels
                distance = (i - blue.at(k).row)*(i - blue.at(k).row)
                         + (j - blue.at(k).col)*(j - blue.at(k).col);
                if (k == 0)
                {
                    localShortest = distance;
                }
                if (distance < localShortest)
                {
                    localShortest = distance;
                }
            }
            if (localShortest > bestDist)
            {
                posX = i;
                posY = j;
                bestDist = localShortest;
            }
        }
    }
}
This works absolutely fine for a 1 pixel border around the edge.
https://dl.dropboxusercontent.com/u/3879939/works.PNG
Similarly, if I add more blue but keep a square-ish black region, then it also works.
https://dl.dropboxusercontent.com/u/3879939/alsoWorks.PNG
But as soon as the black portion of the image is not square but, say, rectangular, the 'furthest away' pixel is off. Sometimes it even reports a blue pixel as the furthest away from blue, which is just not right.
https://dl.dropboxusercontent.com/u/3879939/off.PNG
Any help much appreciated! Hurting my head a bit.
One possibility, given that you're using OpenCV anyway, is to just use the supplied distance transform function.
For your particular case, you would need to do the following:
Convert your input to a single-channel binary image (e.g. map black to white and blue to black)
Run the cv::distanceTransform function with CV_DIST_L2 (Euclidean distance)
Examine the resulting greyscale image to get the results.
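A minimal sketch of those steps might look like this (the filename is a placeholder, and on newer OpenCV versions the constant is cv::DIST_L2 rather than CV_DIST_L2):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat image = cv::imread("board.png"); // hypothetical input file

    // Map black pixels to white (foreground) and everything else
    // (the blue border) to black, i.e. zero distance.
    cv::Mat binary(image.size(), CV_8UC1);
    for (int i = 0; i < image.rows; i++)
        for (int j = 0; j < image.cols; j++)
            binary.at<uchar>(i, j) =
                (image.at<cv::Vec3b>(i, j) == cv::Vec3b(0, 0, 0)) ? 255 : 0;

    // Each output pixel holds the Euclidean distance to the nearest zero pixel.
    cv::Mat dist;
    cv::distanceTransform(binary, dist, CV_DIST_L2, 3);

    // The brightest pixel of the transform is (one of) the furthest from the border.
    double maxVal;
    cv::Point maxLoc;
    cv::minMaxLoc(dist, 0, &maxVal, 0, &maxLoc);

    return 0;
}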
Note that there may be more than one pixel at the maximum distance from the border, so you need to handle this case according to your application.
The brightest pixels in the distance transform will be the ones that you need. For example, here is a white rectangle and its distance transform:
In a square, due to its symmetry, the furthest black point (the center) is also the furthest no matter in which direction you look from there. But now imagine a very long rectangle with a very short height. There will be multiple points on its horizontal axis whose largest minimum distance is the short distance to both the top and bottom sides, because the left and right sides are far away. In this case the pixel your algorithm finds can be any one on this line, and the result will depend on your pixel scanning order.
It's because there is a line (more than one pixel) meeting your condition for a rectangular region.
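If you keep the brute-force scan, one way to make the choice deterministic is to collect every black pixel that ties for the best minimum distance and then pick among them deliberately, rather than letting scan order decide. A sketch of the selection part, reusing localShortest and bestDist from the question:

std::vector<cv::Point> candidates; // declared before the pixel loops

// Replaces the final if-block in the question's loop: keep all pixels
// whose minimum distance ties the current best.
if (localShortest > bestDist)
{
    bestDist = localShortest;
    candidates.clear();
    candidates.push_back(cv::Point(j, i)); // cv::Point is (x, y) = (col, row)
}
else if (localShortest == bestDist)
{
    candidates.push_back(cv::Point(j, i));
}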
Related
I'm trying to reduce noise in an image by creating a 2D uniform smoothing algorithm in C++. The function I'm using for each pixel computes the average value of neighboring pixels within a square odd window and uses that as the new value.
Whenever I run the code, however, the pixels in the new image become darker (a pixel value of 255=white, and 0=black). Here is the function for obtaining the new pixel value:
int utility::windowAverage (image &src, int x, int y, int window_size)
{
    int sum = 0;
    int avg;

    for (int i = x - (window_size/2); i < x + (window_size/2); ++i)
    {
        for (int j = y - (window_size/2); j < y + (window_size/2); ++j)
        {
            sum += src.getPixel(i,j);
        }
    }
    avg = sum/(window_size*window_size);

    return avg;
}
The parameter image &src is the source image, and the function src.getPixel(i,j) returns an integer from 0 to 255 representing the brightness of the pixel at the specified coordinates (i,j).
I am running the code over gray-level images of the .pgm format.
How can I smooth the image while maintaining the same brightness?
The problem is that you are not actually adding up the pixels in a window of dimension window_size*window_size; you are missing the last pixel in each dimension when computing sum.
You can fix this by using <= instead of < in both of your for loops.
Example of what is going wrong for window_size = 3, x = 0 and y = 0:
The integer division by 2 in your for loops is floored, which means your loops become for (int i = -1; i < 1; i++). This only loops over the (two) pixels -1 and 0 in the given direction, but you still divide by the full window_size of 3, which makes the image darker (in this case by one third, if it has constant color values).
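Put together, a corrected version of the function could look like this (same image/getPixel interface as in the question; border handling is still left to the caller):

// Using <= makes each loop cover all window_size pixels per dimension,
// so the divisor window_size*window_size matches the number of pixels
// actually summed and the average brightness is preserved.
int utility::windowAverage (image &src, int x, int y, int window_size)
{
    int sum = 0;
    for (int i = x - (window_size/2); i <= x + (window_size/2); ++i)
        for (int j = y - (window_size/2); j <= y + (window_size/2); ++j)
            sum += src.getPixel(i, j);
    return sum / (window_size * window_size);
}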
Here I try to find the upper arc and lower arc using an image vector (contours of images), but it couldn't give the exact result. Please suggest any other method to find the upper and lower arc from images and their lengths.
Here is my code:
Mat image = cv::imread("thinning/20d.jpg");
int i = 0, j = 0, k = 0, x = 320;

// Scan a fixed column (x = 320) for white pixels to find where the
// curve crosses it; the x1/y1 arrays are declared elsewhere.
for (int y = 0; y < image.rows; y++)
{
    if (image.at<Vec3b>(Point(x, y))[0] >= 250
        && image.at<Vec3b>(Point(x, y))[1] >= 250
        && image.at<Vec3b>(Point(x, y))[2] >= 250)
    {
        qDebug() << x << y;
        x1[i] = x;
        y1[i] = y;
        i = i + 1;
    }
}
for (i = 0; i <= 1; i++)
{
    qDebug() << x1[i] << y1[i];
}

qDebug() << "UPPER ARC";
for (int x = 0; x < image.cols; x++)
{
    for (int y = 0; y <= (y1[0] + 20); y++)
    {
        if (image.at<Vec3b>(Point(x, y))[0] >= 240
            && image.at<Vec3b>(Point(x, y))[1] >= 240
            && image.at<Vec3b>(Point(x, y))[2] >= 240)
        {
            x2[j] = x;
            y2[j] = y;
            j = j + 1;
            qDebug() << x << y;
        }
    }
}

qDebug() << "Lower ARC";
for (int x = 0; x < image.cols; x++)
{
    for (int y = (y1[1] - 20); y < image.rows; y++)
    {
        if (image.at<Vec3b>(Point(x, y))[0] >= 240
            && image.at<Vec3b>(Point(x, y))[1] >= 240
            && image.at<Vec3b>(Point(x, y))[2] >= 240)
        {
            x3[k] = x;
            y3[k] = y;
            k = k + 1;
            qDebug() << x << y;
        }
    }
}
With the above code I get the coordinates, and using those coordinate points I can find the length of each arc, but it doesn't match the expected result.
Here is the actual image:
Image1:
After thinning I got:
Expected output:
As you haven't defined what exactly the upper/lower arc is, I will assume you cut the ellipse into halves by a horizontal line going through the ellipse's middle point. If that is not the case then you have to adapt this on your own... OK, now how to do it:
binarize image
As you provide a JPG, the colors are distorted, so there is more than just black and white.
thin the border to 1 pixel
Fill the inside with white and then recolor all white pixels not neighboring any black pixels to some unused color or to black. There are many other variations of how to achieve this...
find the bounding box
search all pixels and remember the min and max x,y coordinates over all white pixels. Let's call them x0,y0,x1,y1.
compute center of ellipse
simply find middle point of bounding box
cx=(x0+x1)/2
cy=(y0+y1)/2
count the pixels for each elliptic arc
have a counter for each arc and simply increment the upper-arc counter for any white pixel that has y<=cy, and the lower-arc counter if y>=cy. If your coordinate system is different, the conditions may be reversed.
find ellipse parameters
simply find the white pixel closest to (cx,cy); this will be the endpoint of the minor semi-axis b, let's call it (bx,by). Also find the white pixel farthest from (cx,cy); that will be the major semi-axis endpoint (ax,ay). The distances between them and the center give you a,b, and their positions with the center subtracted give you vectors carrying the rotation of your ellipse. The angle can be obtained by atan2, or use the basis vectors as I do. You can test orthogonality by the dot product. There can be more than 2 points tied for closest or farthest; in that case you should find the middle of each group to enhance precision.
Integrate fitted ellipse
You first need to find the angles at which the ellipse points have y=cy, then integrate the ellipse between these two angles. The other half is the same, just integrate the angles + PI. To determine which half it is, just compute the point in the middle of the angle range and decide according to y>=cy ...
[Edit2] Here is the updated C++ code I busted out for this:
picture pic0,pic1,pic2;
// pic0 - source
// pic1 - output
float a,b,a0,a1,da,xx0,xx1,yy0,yy1,ll0,ll1;
int x,y,i,threshold=127,x0,y0,x1,y1,cx,cy,ax,ay,bx,by,aa,bb,dd,l0,l1;
pic1=pic0;
// bbox,center,recolor (white,black)
x0=pic1.xs; x1=0;
y0=pic1.ys; y1=0;
for (y=0;y<pic1.ys;y++)
for (x=0;x<pic1.xs;x++)
if (pic1.p[y][x].db[0]>=threshold)
{
if (x0>x) x0=x;
if (y0>y) y0=y;
if (x1<x) x1=x;
if (y1<y) y1=y;
pic1.p[y][x].dd=0x00FFFFFF;
} else pic1.p[y][x].dd=0x00000000;
cx=(x0+x1)/2; cy=(y0+y1)/2;
// fill the inside (gray), leaving a single-pixel-wide border (thinning)
for (y=y0;y<=y1;y++)
{
for (x=x0;x<=x1;x++) if (pic1.p[y][x].dd)
{
for (i=x1;i>=x;i--) if (pic1.p[y][i].dd)
{
for (x++;x<i;x++) pic1.p[y][x].dd=0x00202020;
break;
}
break;
}
}
for (x=x0;x<=x1;x++)
{
for (y=y0;y<=y1;y++) if (pic1.p[y][x].dd) { pic1.p[y][x].dd=0x00FFFFFF; break; }
for (y=y1;y>=y0;y--) if (pic1.p[y][x].dd) { pic1.p[y][x].dd=0x00FFFFFF; break; }
}
// find min,max radius (semi-axis endpoints)
bb=pic1.xs+pic1.ys; bb*=bb; aa=0;
ax=cx; ay=cy; bx=cx; by=cy;
for (y=y0;y<=y1;y++)
for (x=x0;x<=x1;x++)
if (pic1.p[y][x].dd==0x00FFFFFF)
{
dd=((x-cx)*(x-cx))+((y-cy)*(y-cy));
if (aa<dd) { ax=x; ay=y; aa=dd; }
if (bb>dd) { bx=x; by=y; bb=dd; }
}
aa=sqrt(aa); ax-=cx; ay-=cy;
bb=sqrt(bb); bx-=cx; by-=cy;
//a=float((ax*bx)+(ay*by))/float(aa*bb); // if (fabs(a)>zero_threshold) not perpendicular semiaxes
// separate/count upper,lower arc by horizontal line
l0=0; l1=0;
for (y=y0;y<=y1;y++)
for (x=x0;x<=x1;x++)
if (pic1.p[y][x].dd==0x00FFFFFF)
{
if (y>=cy) { l0++; pic1.p[y][x].dd=0x000000FF; } // red
if (y<=cy) { l1++; pic1.p[y][x].dd=0x00FF0000; } // blue
}
// here is just VCL/GDI info layer output so you can ignore it...
// arc separator axis
pic1.bmp->Canvas->Pen->Color=0x00808080;
pic1.bmp->Canvas->MoveTo(x0,cy);
pic1.bmp->Canvas->LineTo(x1,cy);
// draw analytical ellipse to compare
pic1.bmp->Canvas->Pen->Color=0x0000FF00;
pic1.bmp->Canvas->MoveTo(cx,cy);
pic1.bmp->Canvas->LineTo(cx+ax,cy+ay);
pic1.bmp->Canvas->MoveTo(cx,cy);
pic1.bmp->Canvas->LineTo(cx+bx,cy+by);
pic1.bmp->Canvas->Pen->Color=0x00FFFF00;
da=0.01*M_PI; // dash step [rad]
a0=0.0; // start
a1=2.0*M_PI; // end
for (i=1,a=a0;i;)
{
a+=da; if (a>=a1) { a=a1; i=0; }
x=cx+(ax*cos(a))+(bx*sin(a));
y=cy+(ay*cos(a))+(by*sin(a));
pic1.bmp->Canvas->MoveTo(x,y);
a+=da; if (a>=a1) { a=a1; i=0; }
x=cx+(ax*cos(a))+(bx*sin(a));
y=cy+(ay*cos(a))+(by*sin(a));
pic1.bmp->Canvas->LineTo(x,y);
}
// integrate the arclengths from fitted ellipse
da=0.001*M_PI; // integration step [rad] (accuracy)
// find start-end angles
ll0=M_PI; ll1=M_PI;
for (i=1,a=0.0;i;)
{
a+=da; if (a>=2.0*M_PI) { a=0.0; i=0; }
xx1=(ax*cos(a))+(bx*sin(a));
yy1=(ay*cos(a))+(by*sin(a));
b=atan2(yy1,xx1);
xx0=fabs(b-0.0); if (xx0>M_PI) xx0=2.0*M_PI-xx0;
xx1=fabs(b-M_PI);if (xx1>M_PI) xx1=2.0*M_PI-xx1;
if (ll0>xx0) { ll0=xx0; a0=a; }
if (ll1>xx1) { ll1=xx1; a1=a; }
}
// [upper half]
ll0=0.0;
xx0=cx+(ax*cos(a0))+(bx*sin(a0));
yy0=cy+(ay*cos(a0))+(by*sin(a0));
for (i=1,a=a0;i;)
{
a+=da; if (a>=a1) { a=a1; i=0; }
xx1=cx+(ax*cos(a))+(bx*sin(a));
yy1=cy+(ay*cos(a))+(by*sin(a));
// sum arc-line sizes
xx0-=xx1; xx0*=xx0;
yy0-=yy1; yy0*=yy0;
ll0+=sqrt(xx0+yy0);
// pic1.p[int(yy1)][int(xx1)].dd=0x0000FF00; // recolor to visually check the right arc selection
xx0=xx1; yy0=yy1;
}
// lower half
a0+=M_PI; a1+=M_PI; ll1=0.0;
xx0=cx+(ax*cos(a0))+(bx*sin(a0));
yy0=cy+(ay*cos(a0))+(by*sin(a0));
for (i=1,a=a0;i;)
{
a+=da; if (a>=a1) { a=a1; i=0; }
xx1=cx+(ax*cos(a))+(bx*sin(a));
yy1=cy+(ay*cos(a))+(by*sin(a));
// sum arc-line sizes
xx0-=xx1; xx0*=xx0;
yy0-=yy1; yy0*=yy0;
ll1+=sqrt(xx0+yy0);
// pic1.p[int(yy1)][int(xx1)].dd=0x00FF00FF; // recolor to visually check the right arc selection
xx0=xx1; yy0=yy1;
}
// handle if the upper/lower parts are swapped
a=a0+0.5*(a1-a0);
if ((ay*cos(a))+(by*sin(a))<0.0) { a=ll0; ll0=ll1; ll1=a; }
// info texts
pic1.bmp->Canvas->Font->Color=0x00FFFF00;
pic1.bmp->Canvas->Brush->Style=bsClear;
x=5; y=5; i=16; y-=i;
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("center = (%i,%i) px",cx,cy));
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("a = %i px",aa));
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("b = %i px",bb));
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("upper = %i px",l0));
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("lower = %i px",l1));
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("upper`= %.3lf px",ll0));
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("lower`= %.3lf px",ll1));
pic1.bmp->Canvas->Brush->Style=bsSolid;
It uses my own picture class with these members:
xs,ys: resolution of the image
p[y][x].dd: pixel access as a 32-bit unsigned integer color
p[y][x].db[4]: pixel access as four 8-bit unsigned integer color channels
You can think of the picture::p member as a simple 2D array of
union color
{
DWORD dd; WORD dw[2]; byte db[4];
int i; short int ii[2];
color(){}; color(color& a){ *this=a; }; ~color(){}; color* operator = (const color *a) { dd=a->dd; return this; }; /*color* operator = (const color &a) { ...copy... return this; };*/
};
int xs,ys;
color p[ys][xs];
Graphics::TBitmap *bmp; // VCL GDI Bitmap object you do not need this...
where each cell can be accessed as a 32-bit pixel p[][].dd, as either 0xAABBGGRR or 0xAARRGGBB (not sure now which). You can also access the channels directly with p[][].db[4] as 8-bit BYTEs.
The bmp member is a GDI bitmap, so bmp->Canvas-> accesses all the GDI stuff, which is not important for you.
Here is the result for your second image:
The gray horizontal line is the arc boundary line going through the center.
Red and blue are the arc halves (recolored during counting).
Green are the semi-axis basis vectors.
The aqua dashes are the analytical ellipse overlay to compare the fit.
As you can see, the fit is pretty close (+/-1 pixel). The counted arc lengths upper,lower are pretty close to the approximated average circle half perimeter (circumference).
You should add an a0 range check to decide whether the start is the upper or lower half, because there is no guarantee which side of the major axis this will find. The integration of both halves is almost the same and saturates around an integration step of 0.001*M_PI, at about 307.3 pixels per arc length, which is only 17 and 22 pixels off the direct pixel counts; that is even better than I anticipated, given the aliasing ...
For more eccentric ellipses the fit is not as good, but the results are still good enough:
So I am trying to write a Raytracer as a personal project, and I have got the basic recursion, mesh geometry, and ray triangle intersection down.
I am trying to get a plausible image out of it but encounter the problem that all pixel rows are the same, giving me straight vertical lines.
I found that all pixel positions generated from the camera function are the same on the y axis, but I cannot find the problem with my vector math here (I use my Vertex structure as vectors too; it's lazy, I know):
void Renderer::CameraShader()
{
    // compute the width and height of the screen based on the angle and distance of the near clip plane
    double widthRad = tan(0.5*m_Cam.angle)*m_Cam.nearClipPlane;
    double heightRad = ((double)m_Cam.pixelRows / (double)m_Cam.pixelCols)*widthRad;
    // get the horizontal vector of the camera by crossing the direction with an up vector (0, 1, 0)
    Vertex cross = ((m_Cam.direction - m_Cam.origin).CrossProduct(Vertex(0, 1, 0)).Normalized(0.0001))*widthRad;
    // get the up/down vector of the camera by crossing the horizontal vector with the direction vector
    Vertex crossDown = m_Cam.direction.CrossProduct(cross).Normalized(0.0001)*heightRad;
    // generate rays per pixel row and column
    for (int i = 0; i < m_Cam.pixelCols; i++)
    {
        for (int j = 0; j < m_Cam.pixelRows; j++)
        {
            Vertex pixelPos = m_Cam.origin + (m_Cam.direction - m_Cam.origin).Normalized(0.0001)*m_Cam.nearClipPlane // vector to the screen center
                - cross + (cross*((i / (double)m_Cam.pixelCols)*widthRad*2))           // horizontal offset based on i
                + crossDown - (crossDown*((j / (double)m_Cam.pixelRows)*heightRad*2)); // vertical offset based on j
            // cast a ray through the corresponding screen pixel to get its color
            m_Image[i][j] = raycast(m_Cam.origin, pixelPos - m_Cam.origin, p_MaxBounces);
        }
    }
}
I hope the comments in the code make clear what is happening.
If anyone sees the problem, help would be much appreciated.
The problem was that I had to subtract the camera origin from the direction point. It now actually renders silhouettes, so I guess I can say it's fixed :)
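For reference, the corrected basis setup could look roughly like this (a sketch using the names from the question; the key change is turning the direction point into a true direction vector before taking the cross products):

// Subtract the origin so we cross an actual direction vector,
// not a point in world space.
Vertex viewDir   = (m_Cam.direction - m_Cam.origin).Normalized(0.0001);
Vertex cross     = viewDir.CrossProduct(Vertex(0, 1, 0)).Normalized(0.0001) * widthRad;
Vertex crossDown = viewDir.CrossProduct(cross).Normalized(0.0001) * heightRad;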
I've been researching here and the rest of the web for over a week now and am unable to come up with anything.
I'm coding using C++ and opencv on linux.
I have this video in black and white of a cloud chamber (http://youtu.be/40wnB8ukI7s). I want to draw contours around the moving particle tracks. Currently I'm using findContours and drawContours; however, it draws contours around all of the white pixels, including the ones that quickly appear and disappear. I don't want to draw contours around my background, the flickering white pixels.
My problem is that the background is also moving so background subtraction doesn't work. Is there a way to:
a) only draw a contour if it exists roughly in the same location over several frames
b) remove a white pixel if it doesn't exist for multiple frames (probably at least 4 or 5 frames)
Thank you for any help you can provide.
Edit: Code for comparing two frames (firstFrame and secondFrame)
Vec3b frameColour;
Vec3b frameColour2;

for (int x = 0; x < firstFrame.cols; x++)
{
    for (int y = 0; y < firstFrame.rows; y++)
    {
        frameColour = firstFrame.at<Vec3b>(Point(x, y));
        frameColour2 = secondFrame.at<Vec3b>(Point(x, y));
        if (frameColour == white && frameColour2 == white)
        {
            secondFrameAfter.at<Vec3b>(Point(x, y)) = white;
        }
        else
        {
            secondFrameAfter.at<Vec3b>(Point(x, y)) = black;
        }
    }
}
You could implement your idea:
For each frame do:
    For each white pixel do:
        If the pixels in the neighbourhood of the last N frames are *mostly* white
            Set the current pixel to white
        Else
            Set the current pixel to black
The neighbourhood can be defined as a 3x3 mask around the pixel.
*Mostly* refers to an appropriate threshold; let's say 80% of the N frames should support (be white at) the pixel position.
The red pixel is the current pixel (x,y) and the green pixels are its neighbourhood.
Comparing the neighbouring pixels of a pixel (x,y) can be achieved as follows:
const int MASK_SIZE = 3;
int numberOfSupportingFrames = 0;

for (int k = 0; k < N; k++)
{
    Mat currentPreviousFrame = previousFrames.at(k);
    bool whitePixelAvailable = false;
    // note the <= so the full MASK_SIZE x MASK_SIZE window is covered
    for (int i = x - (MASK_SIZE/2); i <= x + (MASK_SIZE/2) && !whitePixelAvailable; i++)
    {
        for (int j = y - (MASK_SIZE/2); j <= y + (MASK_SIZE/2) && !whitePixelAvailable; j++)
        {
            if (currentPreviousFrame.at<Vec3b>(Point(i, j)) == white)
            {
                whitePixelAvailable = true;
                numberOfSupportingFrames++;
            }
        }
    }
}
if ((float)numberOfSupportingFrames / (float)N > 0.8)
    secondFrameAfter.at<Vec3b>(Point(x, y)) = white;
else
    secondFrameAfter.at<Vec3b>(Point(x, y)) = black;
The previous frames are stored inside a std::vector<Mat> previousFrames.
The algorithm checks the spatio-temporal neighbourhood of the pixel (x,y). The outer loop iterates over the neighbouring frames (temporal neighbourhood), while the inner two loops iterate over the neighbouring eight pixels (spatial neighbourhood). If there is a white pixel in the current spatial neighbourhood, this previous frame supports the current pixel (x,y). At the end it is checked whether enough frames support the current pixel (80% of the previous frames should contain at least one white pixel in the 8-neighbourhood).
This code should be nested inside your two for-loops, with some modifications (variable names, border handling), roughly as sketched below.
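A rough sketch of that nesting (border pixels are simply skipped here so the mask never leaves the image; N, MASK_SIZE, previousFrames, white, black and secondFrameAfter are the names used above):

const int HALF = MASK_SIZE / 2;
for (int x = HALF; x < firstFrame.cols - HALF; x++)
{
    for (int y = HALF; y < firstFrame.rows - HALF; y++)
    {
        // count how many previous frames have a white pixel near (x, y)
        int numberOfSupportingFrames = 0;
        for (int k = 0; k < N; k++)
        {
            Mat &frame = previousFrames.at(k);
            bool whitePixelAvailable = false;
            for (int i = x - HALF; i <= x + HALF && !whitePixelAvailable; i++)
                for (int j = y - HALF; j <= y + HALF && !whitePixelAvailable; j++)
                    if (frame.at<Vec3b>(Point(i, j)) == white)
                    {
                        whitePixelAvailable = true;
                        numberOfSupportingFrames++;
                    }
        }
        if ((float)numberOfSupportingFrames / (float)N > 0.8f)
            secondFrameAfter.at<Vec3b>(Point(x, y)) = white;
        else
            secondFrameAfter.at<Vec3b>(Point(x, y)) = black;
    }
}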
I'm in the process of creating a 2P Connect 4 game, but I can't seem to get the circular areas to place tokens spaced evenly.
Here's the code that initializes the positions of each circle:
POINT tilePos;
for (int i = 0; i < Board::Dims::MAXX; ++i)
{
    tileXY.push_back (std::vector<POINT> (Board::Dims::MAXY)); //add column
    for (int j = 0; j < Board::Dims::MAXY; ++j)
    {
        tilePos.x = boardPixelDims.left + (i + 1./2) * (boardPixelDims.width / Board::Dims::MAXX);
        tilePos.y = boardPixelDims.top + (j + 1./2) * (boardPixelDims.height / Board::Dims::MAXY);
        tileXY.at (i).push_back (tilePos); //add circle in column
    }
}
I use a 2D vector of POINTs, tileXY, to store the positions. Recall the board is 7 circles wide by 6 circles high.
My logic is such that the first circle's center starts (for X) at:
left + width / #circles * 0 + width / #circles / 2
and increases by width / #circles each time, which is easy to picture for smaller numbers of circles. For example, with left = 0 and width = 700 over 7 circles, the centers fall at x = 50, 150, ..., 650.
Later, I draw the circles like this:
for (const std::vector<POINT> &col : _tileXY)
{
    for (const POINT pos : col)
    {
        if (g.FillEllipse (&red, (int)(pos.x - CIRCLE_RADIUS), pos.y - CIRCLE_RADIUS, CIRCLE_RADIUS, CIRCLE_RADIUS) != Gdiplus::Status::Ok)
            MessageBox (_windows.gameWindow, "FillEllipse failed.", 0, MB_SYSTEMMODAL);
    }
}
Those loops iterate through each element of the vector and draw each circle in red (to stand out for the moment). The int conversion is to disambiguate the function call. The first two arguments after the brush are the top-left corner, and CIRCLE_RADIUS is 50.
The problem is that my board looks like this (sorry if it hurts your eyes a bit):
As you can see, the circles are too far up and to the left. They're also too small, but that's easily fixed. I tried changing some ints to doubles, but this ended up being the closest I ever got to the real pattern. The expanded formula (expanding (i + 1./2)) for the positions produces the same result as well.
Have I missed a small detail, or is my whole logic behind it off?
Edit:
As requested, types:
tilePos.x: LONG (tilePos is the Windows API POINT)
boardPixelDims.*: double
Board::Dims::MAXX/MAXY: enum values (integral, contain 7 and 6 respectively)
Depending on whether CIRCLE_SIZE is intended as a radius or a diameter, two of your parameters seem to be wrong in the FillEllipse call. If it's a diameter, then you should be setting the location to pos.x - CIRCLE_SIZE/2 and pos.y - CIRCLE_SIZE/2. If it's a radius, then the height and width parameters should each be 2*CIRCLE_SIZE rather than CIRCLE_SIZE.
Update - since you changed the variable name to CIRCLE_RADIUS, the latter solution is now obviously the correct one.
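In other words, with CIRCLE_RADIUS as an actual radius the call would become something like this (names as in the question):

// FillEllipse takes the bounding rectangle: top-left corner plus
// width and height, which for a radius r are 2*r each.
g.FillEllipse (&red,
               (int)(pos.x - CIRCLE_RADIUS), (int)(pos.y - CIRCLE_RADIUS),
               2 * CIRCLE_RADIUS, 2 * CIRCLE_RADIUS);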
The easiest way I remember what arguments the shape-related functions take is to always think in rectangles: FillEllipse will just draw an ellipse filling the rectangle you give it via x, y, width and height.
A simple experiment to practice with is to change your calls to FillRect, get everything positioned correctly, and then change them back to FillEllipse.