Program crashes because of std::rotate_copy - c++

I am trying to display two cams next to each other, rotated by 90°. Displaying both cams works fine, but when I try to rotate the cams, the program crashes.
The camera device is identified by a QByteArray and shown through a QCamera variable.
You can choose which camera is displayed in which viewfinder, so the code looks like this:
QActionGroup *videoDevicesGroup = new QActionGroup(this);
videoDevicesGroup->setExclusive(true);
foreach(const QByteArray &deviceName, QCamera::availableDevices()) {
QString description = camera->deviceDescription(deviceName);
QAction *videoDeviceAction = new QAction(description, videoDevicesGroup);
videoDeviceAction->setCheckable(true);
videoDeviceAction->setData(QVariant(deviceName));
if (cameraDevice.isEmpty()) {
cameraDevice = deviceName;
videoDeviceAction->setChecked(true);
}
ui->menuDevices->addAction(videoDeviceAction);
}
connect(videoDevicesGroup, SIGNAL(triggered(QAction*)), SLOT(updateCameraDevice(QAction*)));
if (cameraDevice.isEmpty())
{
camera = new QCamera;
}
else
{
camera = new QCamera(cameraDevice);
}
connect(camera, SIGNAL(stateChanged(QCamera::State)), this, SLOT(updateCameraState(QCamera::State)));
connect(camera, SIGNAL(error(QCamera::Error)), this, SLOT(displayCameraError()));
camera->setViewfinder(ui->viewfinder);
updateCameraState(camera->state());
camera->start();
Now I'm trying to rotate this cam with the command:
std::rotate_copy(cameraDevice.constBegin(), cameraDevice.constEnd(), cameraDevice.constEnd(), reverse.begin());
camera = new QCamera(reverse);
But when I start the program, it crashes without any error message.
How can I fix this?

I think you have a misunderstanding about what std::rotate_copy does.
std::rotate_copy takes a range of data and shifts it as it copies into the location pointed to by the result iterator.
This won't rotate a camera. It just shifts and copies ranges: http://www.cplusplus.com/reference/algorithm/rotate_copy
EDIT:
Think about it this way: say I have std::string("wroybgivb").
Now say I do std::rotate_copy and pick the "y" as my middle; the std::string that I copied into will contain "ybgivbwro".
Now think of that as a 3x3 image where each character represents a color:
wro ybg
ybg => ivb
ivb wro
Note that this is doing a linear array rotation (position shifting). I can never pick a middle such that rows will become columns and columns will become rows.
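For illustration, a minimal sketch of that linear shift using the example string above (the middle iterator is simply chosen to land on the 'y'):
#include <algorithm>
#include <iostream>
#include <string>

int main() {
    std::string src = "wroybgivb";
    std::string dst(src.size(), ' ');
    // Copy src into dst so that the range starting at 'y' comes first.
    std::rotate_copy(src.begin(), src.begin() + 3, src.end(), dst.begin());
    std::cout << dst << '\n'; // prints "ybgivbwro": still a 1D shift, not a 2D rotation
}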
PS:
OK, so say you know the width of the image and have assigned it to the variable width. You can do something like this to rotate 90° clockwise:
for(int i = 0; i < size; ++i){ // size is the total pixel count, width * width for a square image
output[width - 1 - i / width + (i % width) * width] = input[i];
}
To understand this you need to understand indexing a linear array as though it's a 2D array.
Use this to get the x coordinate: i % width
Use this to get the y coordinate: i / width (multiplying by width gives the start of that row in the linear array)
Now you need to take those indices and rotate them, still inside a linear array.
In the rotated image, the x coordinate becomes: width - 1 - i / width
and the y coordinate becomes: i % width (again multiplied by width to get the row offset)
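Putting the pieces together, a minimal self-contained sketch of that 90° clockwise rotation for a square image stored in a linear array (the 3x3 colour string from above is reused as test data):
#include <iostream>
#include <string>

int main() {
    const int width = 3;                    // square image: width x width
    const std::string input = "wroybgivb";  // rows: "wro", "ybg", "ivb"
    std::string output(input.size(), ' ');
    for (int i = 0; i < width * width; ++i) {
        int x = i % width;  // column in the source
        int y = i / width;  // row in the source
        // 90° clockwise: new column = width - 1 - y, new row = x
        output[x * width + (width - 1 - y)] = input[i];
    }
    for (int y = 0; y < width; ++y)
        std::cout << output.substr(y * width, width) << '\n'; // prints "iyw", "vbr", "bgo"
}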

Related

QGraphicsScene.itemAt() only returns zero and whole scene is slow

I am trying to create a dot matrix in a QGraphicsScene. I have a button which fills a 2D array with random numbers, and then I paint a pixel at every position where the array has a 0.
Now, when I want to generate the matrix again, I want to check every pixel and array field to see whether they are empty or not. If the pixel is empty and the array is not, I want to set a pixel. If there is a pixel but the array is empty, I want to remove the pixel. The problem is that the function itemAt() always returns 0, even though I can clearly see existing pixels.
What is my problem?
//creating the scene in the constructor
QPainter MyPainter(this);
scene = new QGraphicsScene(this);
ui.graphicsView->setScene(scene);
//generating matrix
void MaReSX_ClickDummy::generate(void)
{
QGraphicsItem *item;
int x, y;
for(x=0; x< 400; x++)
{
for(y=0; y<400; y++)
{
dataArray[x][y] = rand()%1001;
}
}
for(x=0; x < 400; x++)
{
for(y=0; y<400; y++)
{
item = scene->itemAt(x, y, QTransform());//supposed to check whether there is already a pixel on that place but always returns zero
if(dataArray[x][y] == 0 && item == 0)
scene->addEllipse(x, y, 1, 1);
//does not work
else if(dataArray[x][y] != 0 && item != 0)
scene->removeItem(item);
}
}
}
Also, generating the matrix is very slow. Since the matrix is supposed to show real-time data later, it should run as fast as possible (and the scene will be bigger than the current 400x400 pixels). Any ideas on how to improve the code?
And can somebody explain what the third parameter of itemAt() is supposed to do?
Thank you!
A 400x400 'dot matrix' is up to 160,000 dots, or up to 2,500 characters, which is quite big.
QGraphicsScene is designed to handle a relatively small number of larger shapes, and was probably not designed to handle this many items. Using it to create thousands of identical tiny 'pixel' objects is incredibly inefficient.
Could you create a 400x400 bitmap (QBitmap?) instead, and set the individual pixels that you want?
You are supposed to be using a QGraphicsPixmapItem instead of an array of dots!
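A minimal sketch of that idea, assuming the same 400x400 dataArray and scene as in the question (pixmapItem is an illustrative QGraphicsPixmapItem* member, initialised to nullptr):
// Build the whole matrix as one image, then show it as a single scene item.
QImage image(400, 400, QImage::Format_RGB32);
image.fill(Qt::white);
for (int x = 0; x < 400; ++x)
    for (int y = 0; y < 400; ++y)
        if (dataArray[x][y] == 0)
            image.setPixel(x, y, qRgb(0, 0, 0)); // one black "pixel" per hit
if (!pixmapItem)
    pixmapItem = scene->addPixmap(QPixmap::fromImage(image)); // first run
else
    pixmapItem->setPixmap(QPixmap::fromImage(image));         // later runs just swap the pixmap
This also makes the regenerate step trivial: there is no need to query itemAt() at all, you simply redraw the image.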

Isometric Collision - 'Diamond' shape detection

My project uses an isometric perspective; for the time being I am showing the coordinates in grid format above the tiles for debugging. However, when it comes to collision/grid-locking of the player, I have an issue.
Due to the nature of sprite drawing, my maths is creating some issues with the 'triangular' empty corner areas of the textures. I think the issue is something like the image below (blue is how I think my tiles are being detected, whereas red is how they ideally should be detected for accurate roaming movement on the tiles):
As you can see, the boolean check for the tile I am standing on (which takes the pixel central to the player's feet; the player will later be a car and take a pixel based on the direction of movement) is returning false and denying movement in several scenarios, as well as letting the player move in some places that shouldn't be allowed.
I think it's because the cut-off areas of each texture are (I think) being considered part of the grid area, so when the player is in one of these corner areas it is not truly checking the correct tile, and so returns the wrong results.
The code I'm using for creating the grid is this:
int VisualComponent::TileConversion(Tile* tileToConvert, bool xOrY)
{
int X = (tileToConvert->x - tileToConvert->y) * 64; //change 64 to TILE_WIDTH_HALF
int Y = (tileToConvert->x + tileToConvert->y) * 25;
/*int X = (tileToConvert->x * 128 / 2) + (tileToConvert->y * 128 / 2) + 100;
int Y = (tileToConvert->y * 50 / 2) - (tileToConvert->x * 50 / 2) + 100;*/
if (xOrY)
{
return X;
}
else
{
return Y;
}
}
and the code for checking the player's movement is:
bool Clsentity::CheckMovementTile(int xpos, int ypos, ClsMapData* mapData) //check if the movement will end on a legitimate road tile UNOPTIMISED AS RUNS EVERY FRAME FOR EVERY TILE
{
int x = xpos + 7; //get the center bottom pixel as this is more suitable than the first on an iso grid (more realistic 'foot' placement)
int y = ypos + 45;
int mapX = (x / 64 + y / 25) / 2; //64 is TILE-WIDTH HALF and 25 is TILE HEIGHT
int mapY = (y / 25 - (x / 64)) / 2;
for (int i = 0; i < mapData->tilesList.size(); i++) //for each tile of the map
{
if (mapData->tilesList[i]->x == mapX && mapData->tilesList[i]->y == mapY) //if there is an existing tile that will be entered
{
if (mapData->tilesList[i]->movementTile)
{
HAPI->DebugText(std::to_string(mapX) + " is the x and the y is " + std::to_string(mapY));
return true;
}
}
}
return false;
}
I'm a little stuck on making progress with the game loop until this is fixed. If anyone recognizes the issue or can help, that would be great and I would appreciate it. For reference, my tile textures are 128x64 pixels, and the math behind drawing them to screen treats them as 128x50 (so they link together cleanly).
Rather than writing specific routines for rendering and click mapping, seriously consider thinking of these as two views on the data, which can be transformed in terms of matrix transformations of a coordinate space. You can have two coordinate spaces - one is a nice rectangular grid that you use for positioning and logic. The other is the isometric view that you use for display and input.
If you're not familiar with linear algebra, it'll take a little bit to wrap your head around it, but once you do, it makes everything trivial.
So, how does that work? Your isometric view is merely a rotation of a bog standard grid view, right? Well, close. Isometric view also changes the dimensions if you're starting with a square grid. Anyhow: can we just do a simple coordinate transformation?
Logical coordinate system -> display system (e.g. for rendering)
Texture point => Rotate 45 degrees => Scale by sqrt(2) because a 45 degree rotation changes the dimension of the block by sqrt(1 * 1 + 1 * 1)
Display system -> logical coordinate system (e.g. for mapping clicks into logical space)
Click point => descale by sqrt(2) to unsquish => unrotate by 45 degrees
Why?
If you can do coordinate transformations, then you'd be dealing with a pretty bog-standard rectangular grid for everything else you write, which will make your any other logic MUCH simpler. Your calculations there won't involve computing angles or slopes. E.g. now your "can I move 'down'" logic is much simpler.
Let's say you have 64 x 64 tiles, for simplicity. Now transforming a screen space click to a logical tile is simply:
(int, int) whichTile(clickX, clickY) {
logicalX, logicalY = transform(clickX, clickY)
return (logicalX / 64, logicalY / 64)
}
You can do checks like seeing whether x0,y0 and x1,y1 are on the same tile in the logical space with something as simple as:
bool isSameTile(x0, y0, x1, y1) {
return floor(x0/64) == floor(x1/64) && floor(y0/64) == floor(y1/64)
}
Everything gets much simpler once you define the transforms and work in the logical space.
http://en.wikipedia.org/wiki/Rotation_matrix
http://en.wikipedia.org/wiki/Scaling_%28geometry%29#Matrix_representation
http://www.alcove-games.com/advanced-tutorials/isometric-tile-picking/
If you don't want to deal with some matrix library, you can do the equivalent math pretty straightforwardly, but if you separate concerns of logic management from display / input through these transformations, I suspect you'll have a much easier time of it.
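For completeness, a minimal C++ sketch of such a transform pair, reusing the half-tile constants from the question (64 and 25). It keeps the screen-to-tile division in floating point, since truncating integer division (as in CheckMovementTile) is one plausible source of the diamond-corner misdetection:
#include <cmath>

struct ScreenPos { double x, y; };
struct TilePos   { int x, y; };

// logical tile -> screen position (same math as TileConversion in the question)
ScreenPos tileToScreen(TilePos t) {
    return { (t.x - t.y) * 64.0, (t.x + t.y) * 25.0 };
}

// screen position -> logical tile (the inverse transform, done in floating point
// so nothing is truncated before the diamond shape is accounted for)
TilePos screenToTile(ScreenPos s) {
    double tx = (s.x / 64.0 + s.y / 25.0) / 2.0;
    double ty = (s.y / 25.0 - s.x / 64.0) / 2.0;
    return { (int)std::floor(tx), (int)std::floor(ty) };
}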

How can I pixelate a 1d array

I want to pixelate an image stored in a 1D array, although I am not sure how to do it; this is what I have come up with so far...
The value of pixelation is currently 3 for testing purposes.
Currently it just creates a section of randomly coloured pixels along the left third of the image; if I increase the value of pixelation the amount of randomly coloured pixels decreases, and vice versa, so what am I doing wrong?
I have already implemented the rotation, reading of the image and saving of a new image; this is just a separate function which I need assistance with.
picture pixelate( const std::string& file_name, picture& tempImage, int& pixelation /* TODO: OTHER PARAMETERS HERE */)
{
picture pixelated = tempImage;
RGB tempPixel;
tempPixel.r = 0;
tempPixel.g = 0;
tempPixel.b = 0;
int counter = 0;
int numtimesrun = 0;
for (int x = 1; x<tempImage.width; x+=pixelation)
{
for (int y = 1; y<tempImage.height; y+=pixelation)
{
//RGB tempcol;
//tempcol for pixelate
for (int i = 1; i<pixelation; i++)
{
for (int j = 1; j<pixelation; j++)
{
tempPixel.r +=tempImage.pixel[counter+pixelation*numtimesrun].colour.r;
tempPixel.g +=tempImage.pixel[counter+pixelation*numtimesrun].colour.g;
tempPixel.b +=tempImage.pixel[counter+pixelation*numtimesrun].colour.b;
counter++;
//read colour
}
}
for (int k = 1; k<pixelation; k++)
{
for (int l = 1; l<pixelation; l++)
{
pixelated.pixel[numtimesrun].colour.r = tempPixel.r/pixelation;
pixelated.pixel[numtimesrun].colour.g = tempPixel.g/pixelation;
pixelated.pixel[numtimesrun].colour.b = tempPixel.b/pixelation;
//set colour
}
}
counter = 0;
numtimesrun++;
}
cout << x << endl;
}
cout << "Image successfully pixelated." << endl;
return pixelated;
}
I'm not too sure what you really want to do with your code, but I can see a few problems.
For one, you use for() loops with variables starting at 1. That's certainly wrong. Arrays in C/C++ start at 0.
The other main problem I can see is the pixelation parameter. You use it to increase x and y without knowing (at least in that function) whether the width and height are multiples of it. If they are not, you will definitely be missing pixels on the right edge and at the bottom (which edges will depend on the orientation, of course). Again, it very much depends on what you're trying to achieve.
Also, the i and j loops start at the position defined by counter and numtimesrun, which means that the last line you hit is not bounded by tempImage.width or tempImage.height. With that you are rather likely to have many overflows. Actually, that would also explain the problems you see on the edges (see the update below).
Another potential problem (I cannot tell for sure without seeing the structure declaration): the sum using tempPixel.c += <value> may overflow. If the RGB components are defined as unsigned char (rather common) then you will definitely get overflows, so your averaging is broken if that's the case. If that structure uses floats, then you're good.
Note also that your average is wrong. You are adding source data over pixelation x pixelation pixels, yet your average is calculated as sum / pixelation, so your result is pixelation times too large. You probably wanted sum / (pixelation * pixelation).
Your first loop with i and j computes a sum. The math is most certainly wrong. The counter + pixelation * numtimesrun expression will start reading at the second line, it seems. However, you are reading i * j values. That being said, it may be what you are trying to do (i.e. a moving average) in which case it could be optimized but I'll leave that out for now.
Update
If I understand what you are doing, a representation would be something like a filter. Here is a picture of a 3x3:
.+. *
+*+ =>
.+.
What is on the left is what you are reading. This means the source needs to be at least 3x3. What I show on the right is the result. As we can see, the result needs to be 1x1. From what I see in your code, you do not take that into account at all. (The varied characters represent varied weights; in your case all weights are 1.0.)
You have two ways to handle that problem:
The resulting image has a size of width - pixelation * 2 + 1 by height - pixelation * 2 + 1; in this case you keep one result and do not care about the edges...
You rewrite the code to handle edges. This means you use less source data to compute the resulting edges. Another way is to compute the edge cases and save that in several output pixels (i.e. duplicate the pixels on the edges).
Update 2
Hmmm... looking at your code again, it seems that you compute the average of the 3x3 and save it in the 3x3:
.+. ***
+*+ => ***
.+. ***
Then the problem is different. The numtimesrun is wrong. In your k and l loops you save pixelation * pixelation pixels into the SAME destination pixel, and that destination advances by only one each time... so you are doing what I showed in my first update, but it looks like you were trying to do what is shown in my 2nd update.
The numtimesrun could be increased by pixelation each time:
numtimesrun += pixelation;
However, that's not enough to fix your k and l loops. There you probably need to calculate the correct destination. Maybe something like this (also requires a reset of the counter before the loop):
counter = 0;
... for loops ...
pixelated.pixel[counter+pixelation*numtimesrun].colour.r = ...;
... (take care of g and b)
++counter;
Yet again, I cannot tell for sure what you are trying to do, so I do not know why you'd want to copy the same pixel pixelation x pixelation times. But that explains why you get data only at the left (or top) of the image (it very much depends on the orientation, but one side for sure; and if that's one third of the image, then pixelation is probably 3).
WARNING: if you implement the save properly, you'll experience crashes if you do not take care of the overflows mentioned earlier.
Update 3
As explained by Mark in the comment below, you have an array representing a 2D image. In that case, your counter variable is completely wrong, since it is 100% linear whereas the 2D image is not: the 2nd line is width further along. At this point, you read the first 3 pixels at the top-left, then the next 3 pixels on the same line, and finally the next 3 pixels still on the same line. Of course, it could be that your image is defined that way and these pixels really are one after another, although it is not very likely...
Mark's answer is concise and gives you the information necessary to access the correct pixels. However, you will still be hit by the overflow, and possibly by the fact that the width and height parameters may not be multiples of pixelation...
I don't do a lot of C++, but here's a pixelate function I wrote for Processing. It takes an argument of the width/height of the pixels you want to create.
void pixelateImage(int pxSize) {
// use ratio of height/width...
float ratio;
if (width < height) {
ratio = (float) height / width; // cast so this is not integer division
}
else {
ratio = (float) width / height;
}
// ... to set pixel height
int pxH = int(pxSize * ratio);
noStroke();
for (int x=0; x<width; x+=pxSize) {
for (int y=0; y<height; y+=pxH) {
fill(p.get(x, y));
rect(x, y, pxSize, pxH);
}
}
}
Without the built-in rect() function you'd have to write pixel-by-pixel using another two for loops:
for (int px=0; px<pxSize; px++) {
for (int py=0; py<pxH; py++) {
pixelated.pixel[(y + py) * tempImage.width + (x + px)].colour.r = tempPixel.r;
pixelated.pixel[(y + py) * tempImage.width + (x + px)].colour.g = tempPixel.g;
pixelated.pixel[(y + py) * tempImage.width + (x + px)].colour.b = tempPixel.b;
}
}
Generally when accessing an image stored in a 1D buffer, each row of the image will be stored as consecutive pixels and the next row will follow immediately after. The way to address into such a buffer is:
image[y*width+x]
For your purposes you want both inner loops to generate coordinates that go from the top and left of the pixelation square to the bottom right.
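To make the indexing concrete, here is a minimal sketch of a whole pixelate pass written with that y*width+x addressing. It assumes the picture/RGB types from the question, assumes the colour sums fit in an int, and clamps each block at the edges so a width or height that is not a multiple of pixelation is still covered:
#include <algorithm> // std::min

picture pixelate(picture& tempImage, int pixelation)
{
    picture pixelated = tempImage;
    for (int by = 0; by < tempImage.height; by += pixelation) {
        for (int bx = 0; bx < tempImage.width; bx += pixelation) {
            // clamp the block so the right and bottom edges are still processed
            int bw = std::min(pixelation, tempImage.width  - bx);
            int bh = std::min(pixelation, tempImage.height - by);
            int r = 0, g = 0, b = 0;
            for (int y = 0; y < bh; ++y)
                for (int x = 0; x < bw; ++x) {
                    const RGB &c = tempImage.pixel[(by + y) * tempImage.width + (bx + x)].colour;
                    r += c.r; g += c.g; b += c.b;
                }
            int n = bw * bh; // average over the pixels actually read
            for (int y = 0; y < bh; ++y)
                for (int x = 0; x < bw; ++x) {
                    RGB &c = pixelated.pixel[(by + y) * tempImage.width + (bx + x)].colour;
                    c.r = r / n; c.g = g / n; c.b = b / n;
                }
        }
    }
    return pixelated;
}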

Kinect for Windows v2 depth to color image misalignment

Currently I am developing a tool for the Kinect for Windows v2 (similar to the one in the Xbox One). I tried to follow some examples, and I have a working example that shows the camera image, the depth image, and an image that maps the depth to the RGB using OpenCV. But I see that it duplicates my hand when doing the mapping, and I think it is due to something wrong in the coordinate mapper part.
here is an example of it:
And here is the code snippet that creates the image (rgbd image in the example)
void KinectViewer::create_rgbd(cv::Mat& depth_im, cv::Mat& rgb_im, cv::Mat& rgbd_im){
HRESULT hr = m_pCoordinateMapper->MapDepthFrameToColorSpace(cDepthWidth * cDepthHeight, (UINT16*)depth_im.data, cDepthWidth * cDepthHeight, m_pColorCoordinates);
rgbd_im = cv::Mat::zeros(depth_im.rows, depth_im.cols, CV_8UC3);
double minVal, maxVal;
cv::minMaxLoc(depth_im, &minVal, &maxVal);
for (int i=0; i < cDepthHeight; i++){
for (int j=0; j < cDepthWidth; j++){
if (depth_im.at<UINT16>(i, j) > 0 && depth_im.at<UINT16>(i, j) < maxVal * (max_z / 100) && depth_im.at<UINT16>(i, j) > maxVal * min_z /100){
double a = i * cDepthWidth + j;
ColorSpacePoint colorPoint = m_pColorCoordinates[i*cDepthWidth+j];
int colorX = (int)(floor(colorPoint.X + 0.5));
int colorY = (int)(floor(colorPoint.Y + 0.5));
if ((colorX >= 0) && (colorX < cColorWidth) && (colorY >= 0) && (colorY < cColorHeight))
{
rgbd_im.at<cv::Vec3b>(i, j) = rgb_im.at<cv::Vec3b>(colorY, colorX);
}
}
}
}
}
Does anyone have a clue of how to solve this? How to prevent this duplication?
Thanks in advance
UPDATE:
If I do a simple depth image thresholding I obtain the following image:
This is what more or less I expected to happen, and not having a duplicate hand in the background. Is there a way to prevent this duplicate hand in the background?
I suggest you use the BodyIndexFrame to identify whether a specific value belongs to a player or not. This way, you can reject any RGB pixel that does not belong to a player and keep the rest of them. I do not think that CoordinateMapper is lying.
A few notes:
Include the BodyIndexFrame source to your frame reader
Use MapColorFrameToDepthSpace instead of MapDepthFrameToColorSpace; this way, you'll get the HD image for the foreground
Find the corresponding DepthSpacePoint and depthX, depthY, instead of ColorSpacePoint and colorX, colorY
Here is my approach when a frame arrives (it's in C#):
depthFrame.CopyFrameDataToArray(_depthData);
colorFrame.CopyConvertedFrameDataToArray(_colorData, ColorImageFormat.Bgra);
bodyIndexFrame.CopyFrameDataToArray(_bodyData);
_coordinateMapper.MapColorFrameToDepthSpace(_depthData, _depthPoints);
Array.Clear(_displayPixels, 0, _displayPixels.Length);
for (int colorIndex = 0; colorIndex < _depthPoints.Length; ++colorIndex)
{
DepthSpacePoint depthPoint = _depthPoints[colorIndex];
if (!float.IsNegativeInfinity(depthPoint.X) && !float.IsNegativeInfinity(depthPoint.Y))
{
int depthX = (int)(depthPoint.X + 0.5f);
int depthY = (int)(depthPoint.Y + 0.5f);
if ((depthX >= 0) && (depthX < _depthWidth) && (depthY >= 0) && (depthY < _depthHeight))
{
int depthIndex = (depthY * _depthWidth) + depthX;
byte player = _bodyData[depthIndex];
// Identify whether the point belongs to a player
if (player != 0xff)
{
int sourceIndex = colorIndex * BYTES_PER_PIXEL;
_displayPixels[sourceIndex] = _colorData[sourceIndex++]; // B
_displayPixels[sourceIndex] = _colorData[sourceIndex++]; // G
_displayPixels[sourceIndex] = _colorData[sourceIndex++]; // R
_displayPixels[sourceIndex] = 0xff; // A
}
}
}
}
Here is the initialization of the arrays:
BYTES_PER_PIXEL = (PixelFormats.Bgr32.BitsPerPixel + 7) / 8;
_colorWidth = colorFrame.FrameDescription.Width;
_colorHeight = colorFrame.FrameDescription.Height;
_depthWidth = depthFrame.FrameDescription.Width;
_depthHeight = depthFrame.FrameDescription.Height;
_bodyIndexWidth = bodyIndexFrame.FrameDescription.Width;
_bodyIndexHeight = bodyIndexFrame.FrameDescription.Height;
_depthData = new ushort[_depthWidth * _depthHeight];
_bodyData = new byte[_depthWidth * _depthHeight];
_colorData = new byte[_colorWidth * _colorHeight * BYTES_PER_PIXEL];
_displayPixels = new byte[_colorWidth * _colorHeight * BYTES_PER_PIXEL];
_depthPoints = new DepthSpacePoint[_colorWidth * _colorHeight];
Notice that the _depthPoints array has a 1920x1080 size.
Once again, the most important thing is to use the BodyIndexFrame source.
Finally I got some time to write the long-awaited answer.
Let's start with some theory to understand what is really happening, and then a possible answer.
We should start by knowing the way to pass from a 3D point cloud which has the depth camera as the coordinate system origin to an image in the image plane of the RGB camera. To do that it is enough to use the camera pinhole model:
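In symbols, the pinhole model is roughly:
s [u v 1]^T = K [R | t] [X Y Z 1]^T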
In here, u and v are the coordinates in the image plane of the RGB camera. The first matrix on the right side of the equation is the camera matrix, AKA the intrinsics of the RGB camera. The following matrix is the rotation and translation of the extrinsics, or better said, the transformation needed to go from the depth camera coordinate system to the RGB camera coordinate system. The last part is the 3D point.
Basically, something like this is what the Kinect SDK does. So, what could go wrong that makes the hand get duplicated? Well, actually, more than one point can project to the same pixel...
To put it in other words, in the context of the problem in the question:
The depth image is a representation of an ordered point cloud, and you are querying the u, v values of each of its pixels, which in reality can easily be converted to 3D points. The SDK gives you the projection, but several points can end up on the same pixel (usually, the larger the distance in the z axis between two neighbouring points, the more easily this problem appears).
Now, the big question: how can you avoid this? Well, I am not sure it is possible using the Kinect SDK, since you do not know the Z value of the points AFTER the extrinsics are applied, so it is not possible to use a technique like Z-buffering... However, you may assume the Z values will be quite similar and use those from the original point cloud (at your own risk).
If you were doing it manually, and not with the SDK, you could apply the extrinsics to the points, then project them into the image plane, marking in another matrix which point is mapped to which pixel; if a point is already mapped there, compare the z values and always keep the point closest to the camera. Then you will have a valid mapping without any problems. This is a rather naive approach and you can probably find better ones, now that the problem is clear :)
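A rough illustration of that depth-test idea (the Projected struct, the buffers and the function name here are just assumptions for the sketch, not the SDK's API):
#include <limits>
#include <vector>

// One depth sample after applying the extrinsics and intrinsics:
// (u, v) is the colour-image pixel it projects to, z its camera-space depth.
struct Projected { int u, v; float z; };

// Keep, for every colour pixel, only the projected point closest to the camera.
void buildMapping(const std::vector<Projected>& points, int colorWidth, int colorHeight,
                  std::vector<int>& bestIndex /* depth-sample index per colour pixel, -1 = none */)
{
    std::vector<float> zbuffer(colorWidth * colorHeight, std::numeric_limits<float>::max());
    bestIndex.assign(colorWidth * colorHeight, -1);
    for (int i = 0; i < (int)points.size(); ++i) {
        const Projected& p = points[i];
        if (p.u < 0 || p.u >= colorWidth || p.v < 0 || p.v >= colorHeight)
            continue;                      // outside the colour image
        int idx = p.v * colorWidth + p.u;
        if (p.z < zbuffer[idx]) {          // closer than the point already mapped here?
            zbuffer[idx] = p.z;
            bestIndex[idx] = i;            // remember which depth sample wins this pixel
        }
    }
}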
I hope it is clear enough.
P.S.:
I do not have a Kinect 2 at the moment, so I can't check whether there is an update related to this issue or whether the same thing is still happening. I used the first released version (not a pre-release) of the SDK... so a lot may have changed... If someone knows whether this was solved, just leave a comment :)

Uneven Circles in Connect 4 Board

I'm in the process of creating a 2P Connect 4 game, but I can't seem to get the circular areas to place tokens spaced evenly.
Here's the code that initializes the positions of each circle:
POINT tilePos;
for (int i = 0; i < Board::Dims::MAXX; ++i)
{
tileXY.push_back (std::vector<POINT> (Board::Dims::MAXY)); //add column
for (int j = 0; j < Board::Dims::MAXY; ++j)
{
tilePos.x = boardPixelDims.left + (i + 1./2) * (boardPixelDims.width / Board::Dims::MAXX);
tilePos.y = boardPixelDims.top + (j + 1./2) * (boardPixelDims.height / Board::Dims::MAXY);
tileXY.at (i).push_back (tilePos); //add circle in column
}
}
I use a 2D vector of POINTs, tileXY, to store the positions. Recall the board is 7 circles wide by 6 circles high.
My logic is such that the first circle starts (for X) at:
left + width / #circles * 0 + width / #circles / 2
and increases by width / #circles each time, which is easy to picture for smaller numbers of circles.
Later, I draw the circles like this:
for (const std::vector<POINT> &col : _tileXY)
{
for (const POINT pos : col)
{
if (g.FillEllipse (&red, (int)(pos.x - CIRCLE_RADIUS), pos.y - CIRCLE_RADIUS, CIRCLE_RADIUS, CIRCLE_RADIUS) != Gdiplus::Status::Ok)
MessageBox (_windows.gameWindow, "FillEllipse failed.", 0, MB_SYSTEMMODAL);
}
}
Those loops iterate through each element of the vector and draws each circle in red (to stand out at the moment). The int conversion is to disambiguate the function call. The first two arguments after the brush are the top-left corner, and CIRCLE_RADIUS is 50.
The problem is that my board looks like this (sorry if it hurts your eyes a bit):
As you can see, the circles are too far up and left. They're also too small, but that's easily fixed. I tried changing some ints to doubles, but ultimately ended up with this being the closest I ever got to the real pattern. The expanded formula (expanding (i + 1./2)) for the positions looks the same as well.
Have I missed a small detail, or is my whole logic behind it off?
Edit:
As requested, types:
tilePos.x: POINT (the windows API one, type used is LONG)
boardPixelDims.*: double
Board::Dims::MAXX/MAXY: enum values (integral, contain 7 and 6 respectively)
Depending on whether CIRCLE_SIZE is intended as a radius or a diameter, two of your parameters seem to be wrong in the FillEllipse call. If it's a diameter, then you should be setting the location to pos.x - CIRCLE_SIZE/2 and pos.y - CIRCLE_SIZE/2. If it's a radius, then the height and width parameters should each be 2*CIRCLE_SIZE rather than CIRCLE_SIZE.
Update - since you changed the variable name to CIRCLE_RADIUS, the latter solution is now obviously the correct one.
The easiest way I remember what arguments the shape-related functions take is to always think in rectangles: FillEllipse will just draw an ellipse to fill the rectangle you give it, as x, y, width and height.
A simple experiment to practice with: change your calls to FillRect, get everything positioned okay, and then change them back to FillEllipse.
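For reference, the corrected call under the radius interpretation would look something like this (a sketch reusing g, red, pos and CIRCLE_RADIUS from the question):
// FillEllipse takes the bounding rectangle: top-left corner, width, height.
// For a circle of radius CIRCLE_RADIUS centred on pos, that rectangle starts at
// (pos.x - CIRCLE_RADIUS, pos.y - CIRCLE_RADIUS) and is 2 * CIRCLE_RADIUS on each side.
if (g.FillEllipse(&red,
                  (int)(pos.x - CIRCLE_RADIUS),
                  (int)(pos.y - CIRCLE_RADIUS),
                  2 * CIRCLE_RADIUS,
                  2 * CIRCLE_RADIUS) != Gdiplus::Status::Ok)
    MessageBox(_windows.gameWindow, "FillEllipse failed.", 0, MB_SYSTEMMODAL);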