I have the vertices of a convex polygon in an image, but the standard formula does not give me an accurate area.
For simplicity, say I have a 3x3 square and the vertices are (1,1) (1,3) (3,3) (3,1).
Using the polygon area calculation method depicted here, and dividing the summation by 2, we get the area.
So for the 3x3 data above we get an area of 4 instead of 9.
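Working it through for the example: Sum1 = 1*3 + 1*3 + 3*1 + 3*1 = 12 and Sum2 = 1*1 + 3*3 + 3*3 + 1*1 = 20, so the area is |12 - 20| / 2 = 4, which is the geometric area of the 2x2 square spanned by the pixel centers.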
This is happening because the vertices are not points but pixels.
This is the corresponding code; the coordinates are cyclic (the first vertex is repeated at the end).
int X[] = { 1, 1, 3, 3, 1};
int Y[] = { 1, 3, 3, 1, 1};
double Sum1 = 0;
double Sum2 = 0;
int numElements = 5;
for (int k = 0; k < numElements-1; k++)
{
Sum1 += X[k] * Y[k + 1];
Sum2 += Y[k] * X[k + 1];
}
double area = std::abs((double)(Sum1 - Sum2))/2;
For a square we can add 1 to the width and the height and get the correct area. But what about the irregular polygons in the image?
I hope the question makes sense.
If you don't want to work with pixel corners as vertices, consider the following method (it works for simple figures: all convex ones and some concave ones):
Add an additional fake pixel at the right side of every right-border pixel, at the bottom side of every bottom-border pixel, and at the bottom-right of the bottom-right corner pixel. Here the gray pixels are the initial ones and the light blue ones are the fake ones.
The area can be calculated in the following steps:
1) fetching the pixels between the vertices
2) sorting the pixels by x coordinates (or y coordinates)
3) taking the difference between the min and max y (or x) coordinates for a particular x (or y) value and adding one to that difference
4) summing up the total differences
NOTE: the area might vary (if there are slanted edges in the polygon) depending on the line-drawing method chosen.
int compare(const void * a, const void * b)
{
return (((Point*)a)->x() - ((Point*)b)->x());
}
double CalculateConvexHullArea(vector<int> ConvexHullX, vector<int> ConvexHullY)
{
// collect the rasterized boundary pixels of every edge
std::vector<Point> FillPoints;
for (int k = 0; k < ConvexHullX.size() - 1; k++)
{
drawLine(ConvexHullX[k], ConvexHullX[k+1], ConvexHullY[k], ConvexHullY[k+1], FillPoints);
}
//sorting coordinates
qsort(FillPoints.data(), FillPoints.size(), sizeof(Point), compare);
double area = 0;
int startY = FillPoints[0].y(), endY = FillPoints[0].y();
int currX = FillPoints[0].x();
// traversing x and summing up diff of min and max Y
for (int cnt = 0; cnt < FillPoints.size(); cnt++)
{
if (FillPoints[cnt].x() == currX)
{
startY = startY > FillPoints[cnt].y() ? FillPoints[cnt].y() : startY;
endY = endY < FillPoints[cnt].y() ? FillPoints[cnt].y() : endY;
}
else
{
int diffY = endY - startY + 1;
area += diffY;
currX = FillPoints[cnt].x();
startY = endY = FillPoints[cnt].y();
}
}
return area + endY - startY + 1; // the last column is not added inside the loop
}
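For reference, a minimal usage sketch for the 3x3 square from the question (this assumes drawLine is a Bresenham-style rasterizer that appends every point of the segment to FillPoints, and that Point has x()/y() accessors):
// Hypothetical check against the 3x3 example: the cyclic vertex
// lists from the question should now yield 9 instead of 4.
std::vector<int> xs = { 1, 1, 3, 3, 1 };
std::vector<int> ys = { 1, 3, 3, 1, 1 };
double area = CalculateConvexHullArea(xs, ys); // expected: 9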
I've generated a cubic world using FastNoiseLite, but I don't know how to differentiate the top-level blocks as grass and the lower ones as dirt when using 3D noise.
TArray<float> CalculateNoise(const FVector& ChunkPosition)
{
Densities.Reset();
// ChunkSize is 32
for (int z = 0; z < ChunkSize; z++)
{
for (int y = 0; y < ChunkSize; y++)
{
for (int x = 0; x < ChunkSize; x++)
{
const float Noise = GetNoise(FVector(ChunkPosition.X + x, ChunkPosition.Y + y, ChunkPosition.Z + z));
Densities.Add(Noise - ChunkPosition.Z);
}
}
}
return Densities;
}
void AddCubeMaterial(const FVector& ChunkPosition)
{
const int32 DensityIndex = GetIndex(ChunkPosition);
const float Density = Densities[DensityIndex];
if (Density < 1)
{
// Add Grass block
}
// Add dirt block
}
float GetNoise(const FVector& Position) const
{
    const float Height = 280.f;
    if (bIs3dNoise)
    {
        return FastNoiseLiteObj->GetNoise(Position.X, Position.Y, Position.Z) * Height;
    }
    return FastNoiseLiteObj->GetNoise(Position.X, Position.Y) * Height;
}
This is the result when using 3D noise.
3D Noise result
But if I switch to 2D noise it works perfectly fine.
2D Noise result
This answer applies to Perlin-like noise.
Your integer chunk coordinates are discontiguous in noise space: Position needs to be scaled by 1/Height so the noise is sampled as one contiguous block, and the result then scaled back up by Height.
If you were happy with the XY axes (2D), you could limit the scaling to the Z axis:
FastNoiseLiteObj->GetNoise(Position.X, Position.Y, Position.Z / Height) * Height;
This adjustment gives a Z block location that is continuous in noise space with respect to Position(X,Y).
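Wrapped back into the question's GetNoise, that one-liner might look like this (a sketch; bIs3dNoise, FastNoiseLiteObj and Height are the question's own names):
// Sketch: sample a contiguous Z block, then scale the output back up.
float GetNoise(const FVector& Position) const
{
    const float Height = 280.f;
    if (bIs3dNoise)
    {
        return FastNoiseLiteObj->GetNoise(Position.X, Position.Y, Position.Z / Height) * Height;
    }
    return FastNoiseLiteObj->GetNoise(Position.X, Position.Y) * Height;
}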
Edit in response to comments
Contiguous:
The noise algorithm guarantees continuous output in all dimensions.
By sampling every 32 pixels (discontiguous sampling), that continuity is broken, on purpose(?), and further altered by the Density offset.
To guarantee a top level grass layer:
Densities.Add(Noise + ((ChunkPosition.Z > Threshold) ? 1 : 0));
Your code's - ChunkPosition.Z term made the grass thicker as it went down; add it back if you wish.
To add random overhangs/underhangs reduce the Density threshold randomly:
if (Density < ((rnd() < 0.125) ? 0.5 : 1))
I leave the definition of rnd() to your preferred random distribution.
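For instance, one possible rnd() (an assumption, not part of the answer) returning uniform floats in [0, 1):
#include <random>

// One possible rnd(): uniform floats in [0, 1).
inline float rnd()
{
    static std::mt19937 gen{ std::random_device{}() };
    static std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    return dist(gen);
}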
To almost always have overhangs requires a forward lookup of the next and previous blocks' Z in noise.
Precalculate the noise values for the next line into alternating arrays, 2 wider than the width, to support the edges being set to 0.
The algorithm is:
// declare arrays: currentnoise[ChunkSize + 2] and nextnoise[ChunkSize +2] and alpha=.2; //see text
for (int y = 0; y < ChunkSize; y++) // note the reorder y-z-x
{
// pre load currentnoise for z=0
currentnoise[0] = 0;
currentnoise[ChunkSize+1] = 0;
for (int x = 0; x < ChunkSize; x++)
{
currentnoise[x + 1] = GetNoise(FVector(ChunkPosition.X + x, ChunkPosition.Y + y, ChunkPosition.Z));
}
for (int z = 1; z < ChunkSize -2; z++)
{
nextnoise[0] = 0;
nextnoise[ChunkSize+1] = 0;
// load next
for (int x = 0; x < ChunkSize; x++)
{
nextnoise[x + 1] = GetNoise(FVector(ChunkPosition.X + x, ChunkPosition.Y + y, ChunkPosition.Z + z+1));
}
// apply current with next
for (int x = 0; x < ChunkSize; x++)
{
Densities.Add(currentnoise[x + 1] * .75 + nextnoise[x+2] * alpha + nextnoise[x] * alpha);
}
// move next to current in a memory-safe manner:
// it is faster to swap pointers, but this is much safer for portability
for (int i = 1; i < ChunkSize + 1; i++)
currentnoise[i]=nextnoise[i];
}
// apply last z(no next)
for (int x = 0; x < ChunkSize; x++)
{
Densities.Add(currentnoise[x + 1]);
}
}
Where alpha is approximately between .025 and .25, depending on preferred fill amounts.
The two innermost x loops could be streamlined into one, but are left separate for readability (it requires 2 preloads).
I'm trying to convert an image from polar coordinates to Cartesian coordinates, but after applying the formulas I get float coordinates (r and teta), and I don't know how to represent the points in space using floats for x and y. There might be a way of transforming them into ints while still preserving the distribution, but I don't see how. I know that there are functions in OpenCV, like warpPolar, that do the work, but I would like to implement it myself. Any ideas would help :)
This is my code:
struct Value
{
double r;
double teta;
int value; // pixel intensity
};
void irisNormalization(Mat img, Circle pupilCircle, Circle irisCircle, int &matrixWidth, int &matrixHeight)
{
int w = img.size().width;
int h = img.size().height;
int X, Y;
double r, teta;
int rayOfIris = irisCircle.getRay();
std::vector<Value> polar;
// consider the rectangle the iris circle is confined in
int xstart = irisCircle.getA() - rayOfIris;
int ystart = irisCircle.getB() - rayOfIris;
int xfinish = irisCircle.getA() + rayOfIris;
int yfinish = irisCircle.getB() + rayOfIris;
for (int x = xstart; x < xfinish; x++)
for (int y = ystart; y < yfinish; y++)
{
X = x - xstart - rayOfIris;
Y = y - ystart - rayOfIris;
r = sqrt(X * X + Y * Y);
if (X != 0)
{
teta = atan(std::abs((double)Y / X)) * (180.0 / M_PI); // cast to double to avoid integer division
if (X > 0 && Y > 0) // quadrant 1
teta = teta;
if (X > 0 && Y < 0)
teta = 360 - teta; // quadrant 4
if (X < 0 && Y > 0) // quadrant 2
teta = 180 - teta;
if (X < 0 && Y < 0) // quadrant 3
teta = 180 + teta;
if (r < rayOfIris)
{
polar.push_back({ r, teta, int(((Scalar)(img.at<uchar>(Point(x, y)))).val[0]) });
}
}
}
// sort by r, then by teta; combining the comparisons with && is not a strict weak ordering
std::sort(polar.begin(), polar.end(), [](const Value &left, const Value &right) {
    return left.r < right.r || (left.r == right.r && left.teta < right.teta);
});
for (std::vector<Value>::const_iterator i = polar.begin(); i != polar.end(); ++i)
    std::cout << i->r << ' ' << i->teta << endl;
}
Your implementation attempts to express every integer-coordinate point inside a given circle in polar coordinates. In this way, however, you end up with an array of coordinates together with a value.
If instead you want to geometrically transform your image, you should:
create the destination image with the proper width (rho resolution) and height (theta resolution);
loop through every pixel of the destination image and map it back into the original image with the inverse transformation;
get the value of the back-transformed point from the original image, interpolating between nearby values where necessary.
For interpolating the values, different methods are available. A non-exhaustive list includes:
nearest-neighbor interpolation;
bilinear interpolation;
bicubic interpolation.
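As an illustration, here is a minimal sketch of that inverse-mapping loop with nearest-neighbor interpolation (img, irisCircle and rayOfIris follow the question's names; rhoResolution and thetaResolution are assumed destination sizes):
// Sketch: build the (theta x rho) destination image by mapping each
// destination pixel back into the source and sampling the nearest pixel.
cv::Mat normalized(thetaResolution, rhoResolution, CV_8UC1, cv::Scalar(0));
for (int t = 0; t < thetaResolution; ++t)
{
    double theta = 2.0 * CV_PI * t / thetaResolution;
    for (int rho = 0; rho < rhoResolution; ++rho)
    {
        double r = double(rho) * rayOfIris / rhoResolution;
        // inverse transform: polar (r, theta) -> Cartesian in the source
        int srcX = cvRound(irisCircle.getA() + r * std::cos(theta));
        int srcY = cvRound(irisCircle.getB() + r * std::sin(theta));
        if (srcX >= 0 && srcX < img.cols && srcY >= 0 && srcY < img.rows)
            normalized.at<uchar>(t, rho) = img.at<uchar>(srcY, srcX);
    }
}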
I have a circular brush with a diameter of 200px and a hardness of 0 (the brush is a circular gradient). The spacing between each brush stamp is 25% of the brush diameter. However, when I compare the stroke my program draws with the stroke Photoshop draws, where all settings are equal...
It is clear that Photoshop's is much smoother! I can't reduce the spacing because that causes the edges to become harder.
How can I make my stroke like Photoshop's?
Here is the relevant code from my program...
//defining a circle
Mat alphaBrush(2*outerRadius,2*outerRadius,CV_32FC1);
float floatInnerRadius = outerRadius * hardness;
for(int i = 0; i < alphaBrush.rows; i++ ){
for(int j=0; j<alphaBrush.cols; j++ ){
int x = outerRadius - i;
int y = outerRadius - j;
float radius=hypot((float) x, (float) y );
auto& pixel = alphaBrush.at<float>(i,j);
if(radius>outerRadius){ pixel=0.0; continue;} // transparent
if(radius<floatInnerRadius){ pixel=1.0; continue;} // solid
pixel=1-((radius-floatInnerRadius)/(outerRadius-floatInnerRadius)); // partial
}
}
/*
(...irrelevant stuff)
*/
//drawing the brush onto the canvas
for (int j = 0; j < inMatROI.rows; j++) {
Vec3b *thisBgRow = inMatROI.ptr<Vec3b>(j);
float *thisAlphaRow = brushROI.ptr<float>(j);
for (int i = 0; i < inMatROI.cols; i++) {
for (int c = 0; c < 3; c++) {
thisBgRow[i][c] = saturate_cast<uchar>((brightness * thisAlphaRow[i]) + ((1.0 - thisAlphaRow[i]) * thisBgRow[i][c]));
}
}
}
I have also tried resultValue = max(backgroundValue, brushValue), but the intersection between the two circles is pretty obvious.
This is the approach: draw a solid thin line and afterwards compute the distance of each pixel to that line.
As you can see there are some artifacts, probably mostly because of the merely approximated distance values from cv::distanceTransform. If you compute the distances precisely (and maybe in double precision) you should get very smooth results.
int main()
{
cv::Mat canvas = cv::Mat(768, 768, CV_8UC3, cv::Scalar::all(255));
cv::Mat canvasMask = cv::Mat::zeros(canvas.size(), CV_8UC1);
// make sure the stroke always has >= 2 points, otherwise cv::line will not work
std::vector<cv::Point> strokeSampling;
strokeSampling.push_back(cv::Point(250, 100));
strokeSampling.push_back(cv::Point(250, 200));
strokeSampling.push_back(cv::Point(600, 300));
strokeSampling.push_back(cv::Point(600, 400));
strokeSampling.push_back(cv::Point(250, 500));
strokeSampling.push_back(cv::Point(250, 650));
for (int i = 0; i < strokeSampling.size() - 1; ++i)
cv::line(canvasMask, strokeSampling[i], strokeSampling[i + 1], cv::Scalar::all(255));
// computing a distance map:
cv::Mat tmp1 = 255 - canvasMask;
cv::Mat distMap;
cv::distanceTransform(tmp1, distMap, CV_DIST_L2, CV_DIST_MASK_PRECISE);
float outerRadius = 50;
float innerRadius = 10;
cv::Scalar strokeColor = cv::Scalar::all(0);
for (int y = 0; y < distMap.rows; ++y)
for (int x = 0; x < distMap.cols; ++x)
{
float percentage = 0.0f;
float radius = distMap.at<float>(y, x);
if (radius>outerRadius){ percentage = 0.0; } // transparent
else
if (radius<innerRadius){ percentage = 1.0; } // solid
else
{
percentage = 1 - ((radius - innerRadius) / (outerRadius - innerRadius)); // partial
}
if (percentage > 0)
{
// here you could use the canvasMask if you like to, instead of directly drawing on the canvas
cv::Vec3b canvasColor = canvas.at<cv::Vec3b>(y, x);
cv::Vec3b cColor = cv::Vec3b(strokeColor[0], strokeColor[1], strokeColor[2]);
canvas.at<cv::Vec3b>(y, x) = percentage*cColor + (1 - percentage) * canvasColor;
}
}
cv::imshow("out", canvas);
cv::imwrite("C:/StackOverflow/Output/stroke.png", canvas);
cv::waitKey(0);
}
I'm trying to clone Space Invaders, specifically the collision with the green barriers.
https://www.youtube.com/watch?v=bLAhmnCZym4
Right now I have access to the pixels of the green barrier.
I would like to draw a solid black circle around the collision point of the bullet. Right now I'm using the following code, but it spreads random pixels rather than drawing a solid black circle centered on the bullet's hit point.
The result is shown here:
http://i.stack.imgur.com/mpgkM.png
int radius = 50;
for (int y = -radius; y <= radius; y++)
{
for (int x = -radius; x <= radius; x++)
{
if (x*x + y*y <= radius*radius)
{
int j = x + normX;
int i = y + normY;
uint8 pixelOffset = j + i;
ptr += pixelOffset;
*ptr = 0xff000000;
}
}
}
Pixels are normally stored in raster order, so you need to change
uint8 pixelOffset = j + i;
to
int pixelOffset = j + i*pitch;
where pitch is the number of pixels per row (the row stride) of your image.
In addition, each time you write a pixel you are moving ptr to a new location, so you get a diagonal line.
Replace
ptr += pixelOffset;
*ptr = 0xff000000;
with
ptr[pixelOffset] = 0xff000000;
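Putting both fixes together, the loop might look like this (a sketch; pitch is assumed to be the row stride in pixels and ptr to point at 32-bit pixels):
for (int y = -radius; y <= radius; y++)
{
    for (int x = -radius; x <= radius; x++)
    {
        if (x * x + y * y <= radius * radius)
        {
            int j = x + normX;
            int i = y + normY;
            int pixelOffset = j + i * pitch; // raster order: row i, column j
            ptr[pixelOffset] = 0xff000000;   // ptr itself is never advanced
        }
    }
}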
I'm trying to fill a grid with points and keep only the points inside an imaginary circle.
First I did this with:
createColorDetectionPoints(int xSteps, int ySteps)
But for me it's a lot easier to set it with a target in mind:
void ofxDTangibleFinder::createColorDetectionPoints(int nTargetPoints)
The target doesn't have to be too precise, but at the moment, when I want 1000 points for example, I get 2289 points.
I think my logic is wrong but I can't figure it out.
The idea is to get the right amount of xSteps and ySteps.
Can someone help?
void ofxDTangibleFinder::createColorDetectionPoints(int nTargetPoints) {
colorDetectionVecs.clear();
// xSteps and ySteps needs to be calculated
// the ratio between a rect and ellipse is
// 0.7853982
int xSteps = sqrt(nTargetPoints);
xSteps *= 1.7853982; // make it bigger in proportion to the ratio
int ySteps = xSteps;
float centerX = (float)xSteps/2;
float centerY = (float)ySteps/2;
float fX, fY, d;
float maxDistSquared = 0.5*0.5;
for (int y = 0; y < ySteps; y++) {
for (int x = 0; x < xSteps; x++) {
fX = x;
fY = y;
// normalize
fX /= xSteps-1;
fY /= ySteps-1;
d = ofDistSquared(fX, fY, 0.5, 0.5);
if(d <= maxDistSquared) {
colorDetectionVecs.push_back(ofVec2f(fX, fY));
}
}
}
// for(int i = 0; i < colorDetectionVecs.size(); i++) {
// printf("ellipse(%f, %f, 1, 1);\n", colorDetectionVecs[i].x*100, colorDetectionVecs[i].y*100);
// }
printf("colorDetectionVecs: %lu\n", colorDetectionVecs.size());
}
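For what it's worth, a sketch of the step count that the 0.7853982 (pi/4) ratio implies, assuming a square grid: the circle keeps roughly xSteps * ySteps * 0.7853982 of the points, so the ratio should divide the target before the square root rather than scale the result afterwards:
// Sketch: solve xSteps^2 * (pi/4) ≈ nTargetPoints for xSteps.
int xSteps = (int) ceil(sqrt(nTargetPoints / 0.7853982));
int ySteps = xSteps;
// For nTargetPoints = 1000 this gives a 36x36 grid, which keeps
// roughly 36 * 36 * 0.7853982 ≈ 1018 points.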