I'm trying to fill a grid with points and only keep the points that fall inside an imaginary circle.
First I did this with:
createColorDetectionPoints(int xSteps, int ySteps)
But for me it's a lot easier to set it with a target in mind:
void ofxDTangibleFinder::createColorDetectionPoints(int nTargetPoints)
The target doesn't have to be too precise, but at the moment, when I want 1000 points for example, I get 2289 points.
I think my logic is wrong, but I can't figure it out.
The idea is to get the right amount of xSteps and ySteps.
Can someone help?
void ofxDTangibleFinder::createColorDetectionPoints(int nTargetPoints) {
colorDetectionVecs.clear();
// xSteps and ySteps needs to be calculated
// the ratio between a rect and ellipse is
// 0.7853982
int xSteps = sqrt(nTargetPoints);
xSteps *= 1.7853982; // make it bigger in proportion to the ratio
int ySteps = xSteps;
float centerX = (float)xSteps/2;
float centerY = (float)ySteps/2;
float fX, fY, d;
float maxDistSquared = 0.5*0.5;
for (int y = 0; y < ySteps; y++) {
for (int x = 0; x < xSteps; x++) {
fX = x;
fY = y;
// normalize
fX /= xSteps-1;
fY /= ySteps-1;
d = ofDistSquared(fX, fY, 0.5, 0.5);
if(d <= maxDistSquared) {
colorDetectionVecs.push_back(ofVec2f(fX, fY));
}
}
}
// for(int i = 0; i < colorDetectionVecs.size(); i++) {
// printf("ellipse(%f, %f, 1, 1);\n", colorDetectionVecs[i].x*100, colorDetectionVecs[i].y*100);
// }
printf("colorDetectionVecs: %lu\n", colorDetectionVecs.size());
}
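For reference, here is a sketch of how the step count could follow from the area-ratio comment in the code above: pi/4 ≈ 0.7853982 is the fraction of a square grid that falls inside the inscribed circle, so the square grid needs roughly nTargetPoints / 0.7853982 points in total before filtering. This is only a sketch of the xSteps/ySteps calculation, not a tested drop-in fix:

// A sketch only: the square grid keeps about pi/4 of its points inside the
// inscribed circle, so start from a proportionally larger square total.
int steps = (int) ceil(sqrt(nTargetPoints / 0.7853982));
int xSteps = steps;
int ySteps = steps;
// e.g. nTargetPoints = 1000 -> steps = 36 -> 36 * 36 = 1296 grid points,
// of which roughly 1296 * 0.7853982 ≈ 1018 end up inside the circle.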
Related
I'm facing the following problem: I have a grid and a beam (drawn as a circle). At this stage I just need to draw them.
Grid::render():
for (int i = 0; i < cellsInColumn; i++) {
for (int j = 0; j < cellsInRow; j++) {
SDL_Rect outlineRect = { this->x + this->bord_x + (cellWidth*j), this->y+this->bord_y, this->cellWidth, this->cellHeight };
SDL_RenderDrawRect( this->rend, &outlineRect );
}
y+=cellHeight;
}
Beam::render():
for (int w = 0; w < radius * 2; w++) {
for (int h = 0; h < radius * 2; h++) {
double dx = radius - w;
double dy = radius - h;
if ((dx*dx + dy*dy) <= (radius * radius)) {
SDL_RenderDrawPoint(this->rend, x + dx, y + dy);
}
}
}
But my screen seems to have "eaten" the top line of the grid: it turned out that the top of the grid, along with the "beam", was drawn under the title bar.
(Screenshots compare the result with bord_y == 0 and with bord_y == 70.)
Question for the connoisseurs: how do I draw the grid and the circle now? Does SDL know how many pixels the title bar takes up, or does this offset have to be set by eye? If it does know, where is this information stored?
Update:
Grid and beam values are set in the following function:
void setStartValues(int screenWidth, int screenHeight){
Grid::setBord(screenWidth, screenHeight);
Grid::setCellSize(screenHeight);
Beam::setValues(Grid::getCellHeight(), Grid::getBord());
}
And here are all the getters and setters that are used above:
void setBord(int scrW, int scrH) {
this->bord_x = this->cellsInRow <= this->cellsInColumn? (scrW-scrH)/2 : (scrW-scrH)/6;
this->bord_y = 0;
}
void setCellSize(int scrH) {
this->cellWidth = this->cellHeight = scrH/cellsInColumn;
}
double getCellHeight() {
return this->cellHeight;
}
double getBord() {
return this->bord_x;
}
void setValues(double cellH, double bord) { //Beam
this->x = cellH/2 + bord;
this->y = cellH/2;
this->radius = cellH/4;
}
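On the title-bar part of the question: SDL 2.0.5 and later can report the window decoration sizes through SDL_GetWindowBordersSize, which is where that information would live if the platform exposes it. Whether you actually need it depends on how the window is created; as a minimal sketch (win stands for your SDL_Window*, which is assumed here):

#include <SDL.h>

// Query the window decoration sizes (title bar and borders).
// SDL_GetWindowBordersSize is available since SDL 2.0.5 and returns 0 on
// success; some platforms/window managers do not expose this information.
static void printBorderSizes(SDL_Window* win)
{
    int top = 0, left = 0, bottom = 0, right = 0;
    if (SDL_GetWindowBordersSize(win, &top, &left, &bottom, &right) == 0) {
        SDL_Log("title bar: %d px, borders: left=%d bottom=%d right=%d", top, left, bottom, right);
        // 'top' could serve as bord_y if you really have to offset by hand.
    } else {
        SDL_Log("border sizes not available: %s", SDL_GetError());
    }
}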
I've generated a cubic world using FastNoiseLite, but I don't know how to differentiate the top-level blocks as grass and the ones below as dirt when using 3D noise.
TArray<float> CalculateNoise(const FVector& ChunkPosition)
{
Densities.Reset();
// ChunkSize is 32
for (int z = 0; z < ChunkSize; z++)
{
for (int y = 0; y < ChunkSize; y++)
{
for (int x = 0; x < ChunkSize; x++)
{
const float Noise = GetNoise(FVector(ChunkPosition.X + x, ChunkPosition.Y + y, ChunkPosition.Z + z));
Densities.Add(Noise - ChunkPosition.Z);
}
}
}
return Densities;
}
void AddCubeMaterial(const FVector& ChunkPosition)
{
const int32 DensityIndex = GetIndex(ChunkPosition);
const float Density = Densities[DensityIndex];
if (Density < 1)
{
// Add Grass block
return;
}
// Add dirt block
}
float GetNoise(const FVector& Position) const
{
const float Height = 280.f;
if (bIs3dNoise)
{
return FastNoiseLiteObj->GetNoise(Position.X, Position.Y, Position.Z) * Height;
}
return FastNoiseLiteObj->GetNoise(Position.X, Position.Y) * Height;
}
This is the result when using 3D noise.
3D Noise result
But if I switch to 2D noise it works perfectly fine.
2D Noise result
This answer applies to Perlin-like noise.
Your integer chunk size is discontiguous in noise space.
Position needs to be scaled by 1/Height, so the noise is sampled as one contiguous block, and the result then scaled back up by Height.
If you were happy with the XY axes (2D), you could limit the scaling to the Z axis:
FastNoiseLiteObj->GetNoise(Position.X, Position.Y, Position.Z / Height) * Height;
This adjustment provides a noise continuous Z block location with respect to Position(X,Y).
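Applied to the question's GetNoise, that adjustment could look like the sketch below (keeping the question's member names and the float return):

float GetNoise(const FVector& Position) const
{
    const float Height = 280.f;
    if (bIs3dNoise)
    {
        // Sample Z in noise space (scaled by 1/Height) so vertically adjacent
        // blocks stay contiguous, then scale the result back up by Height.
        return FastNoiseLiteObj->GetNoise(Position.X, Position.Y, Position.Z / Height) * Height;
    }
    return FastNoiseLiteObj->GetNoise(Position.X, Position.Y) * Height;
}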
Edit in response to comments
Contiguous:
The noise algorithm guarantees continuous output in all dimensions.
By sampling every 32 pixels (discontiguous sampling), the continuity is broken, on purpose(?), and augmented by the Density.
To guarantee a top level grass layer:
Densities.Add(Noise + ((ChunkPosition.Z > Threshold) ? 1 : 0));
Your code's - ChunkPosition.Z term made the grass thicker as it went down; add it back if you wish.
To add random overhangs/underhangs, reduce the Density threshold randomly:
if (Density < ((rnd() < 0.125) ? 0.5 : 1))
I leave the definition of rnd() to your preferred random distribution.
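For illustration only, one possible rnd() returning uniform floats in [0, 1), using the standard <random> header (any distribution and range you prefer works just as well):

#include <random>

// Purely illustrative rnd(): uniform floats in [0, 1).
static float rnd()
{
    static std::mt19937 gen{ std::random_device{}() };
    static std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    return dist(gen);
}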
To almost always have overhangs requires a forward lookup of the next and previous blocks' Z in noise.
Precalculate the noise values for the next line into alternating arrays two elements wider than the chunk width, so the edges can be set to 0.
The algorithm is:
// declare arrays: currentnoise[ChunkSize + 2] and nextnoise[ChunkSize +2] and alpha=.2; //see text
for (int y = 0; y < ChunkSize; y++) // note the reorder y-z-x
{
// pre load currentnoise for z=0
currentnoise[0] = 0;
currentnoise[ChunkSize+1] = 0;
for (int x = 0; x < ChunkSize; x++)
{
currentnoise[x + 1] = GetNoise(FVector(ChunkPosition.X + x, ChunkPosition.Y + y, ChunkPosition.Z));
}
for (int z = 1; z < ChunkSize -2; z++)
{
nextnoise[0] = 0;
nextnoise[ChunkSize+1] = 0;
// load next
for (int x = 0; x < ChunkSize; x++)
{
nextnoise[x + 1] = GetNoise(FVector(ChunkPosition.X + x, ChunkPosition.Y + y, ChunkPosition.Z + z+1));
}
// apply current with next
for (int x = 0; x < ChunkSize; x++)
{
Densities.Add(currentnoise[x + 1] * .75 + nextnoise[x+2] * alpha + nextnoise[x] * alpha);
}
// move next to current in a memory-safe manner:
// it is faster to swap pointers, but this is much safer for portability
for (int i = 1; i < ChunkSize + 1; i++)
currentnoise[i]=nextnoise[i];
}
// apply last z(no next)
for (int x = 0; x < ChunkSize; x++)
{
Densities.Add(currentnoise[x + 1]);
}
}
Where alpha is approximately between .025 and .25 depending on preferred fill amounts.
The two innermost x for-loops could be streamlined into one loop, but they are left separate for readability (it requires two preloads).
I have a circular brush with a diameter of 200px and a hardness of 0 (the brush is a circular gradient). The spacing between each brush stamp is 25% of the brush diameter. However, when I compare the stroke my program draws with the stroke Photoshop draws, where all settings are equal...
It is clear that Photoshop's is much smoother! I can't reduce the spacing, because that causes the edges to become harder.
How can I make my stroke look like Photoshop's?
Here is the relevant code from my program...
//defining a circle
Mat alphaBrush(2*outerRadius,2*outerRadius,CV_32FC1);
float floatInnerRadius = outerRadius * hardness;
for(int i = 0; i < alphaBrush.rows; i++ ){
for(int j=0; j<alphaBrush.cols; j++ ){
int x = outerRadius - i;
int y = outerRadius - j;
float radius=hypot((float) x, (float) y );
auto& pixel = alphaBrush.at<float>(i,j);
if(radius>outerRadius){ pixel=0.0; continue;} // transparent
if(radius<floatInnerRadius){ pixel=1.0; continue;} // solid
pixel=1-((radius-floatInnerRadius)/(outerRadius-floatInnerRadius)); // partial
}
}
/*
(...irrelevant stuff)
*/
//drawing the brush onto the canvas
for (int j = 0; j < inMatROI.rows; j++) {
Vec3b *thisBgRow = inMatROI.ptr<Vec3b>(j);
float *thisAlphaRow = brushROI.ptr<float>(j);
for (int i = 0; i < inMatROI.cols; i++) {
for (int c = 0; c < 3; c++) {
thisBgRow[i][c] = saturate_cast<uchar>((brightness * thisAlphaRow[i]) + ((1.0 - thisAlphaRow[i]) * thisBgRow[i][c]));
}
}
}
I have also tried resultValue = max(backgroundValue, brushValue), but the intersection between the two circles is pretty obvious.
Here is the approach: draw a solid thin line and afterwards compute the distance of each pixel to that line.
As you can see there are some artifacts, probably mostly because of the merely approximated distance values from cv::distanceTransform. If you compute the distances precisely (and maybe in double precision) you should get very smooth results; a sketch of that is given after the code.
int main()
{
cv::Mat canvas = cv::Mat(768, 768, CV_8UC3, cv::Scalar::all(255));
cv::Mat canvasMask = cv::Mat::zeros(canvas.size(), CV_8UC1);
// make sure the stroke always has >= 2 points, otherwise cv::line won't work...
std::vector<cv::Point> strokeSampling;
strokeSampling.push_back(cv::Point(250, 100));
strokeSampling.push_back(cv::Point(250, 200));
strokeSampling.push_back(cv::Point(600, 300));
strokeSampling.push_back(cv::Point(600, 400));
strokeSampling.push_back(cv::Point(250, 500));
strokeSampling.push_back(cv::Point(250, 650));
for (int i = 0; i < strokeSampling.size() - 1; ++i)
cv::line(canvasMask, strokeSampling[i], strokeSampling[i + 1], cv::Scalar::all(255));
// computing a distance map:
cv::Mat tmp1 = 255 - canvasMask;
cv::Mat distMap;
cv::distanceTransform(tmp1, distMap, CV_DIST_L2, CV_DIST_MASK_PRECISE);
float outerRadius = 50;
float innerRadius = 10;
cv::Scalar strokeColor = cv::Scalar::all(0);
for (int y = 0; y < distMap.rows; ++y)
for (int x = 0; x < distMap.cols; ++x)
{
float percentage = 0.0f;
float radius = distMap.at<float>(y, x);
if (radius>outerRadius){ percentage = 0.0; } // transparent
else
if (radius<innerRadius){ percentage = 1.0; } // solid
else
{
percentage = 1 - ((radius - innerRadius) / (outerRadius - innerRadius)); // partial
}
if (percentage > 0)
{
// here you could use the canvasMask if you like to, instead of directly drawing on the canvas
cv::Vec3b canvasColor = canvas.at<cv::Vec3b>(y, x);
cv::Vec3b cColor = cv::Vec3b(strokeColor[0], strokeColor[1], strokeColor[2]);
canvas.at<cv::Vec3b>(y, x) = percentage*cColor + (1 - percentage) * canvasColor;
}
}
cv::imshow("out", canvas);
cv::imwrite("C:/StackOverflow/Output/stroke.png", canvas);
cv::waitKey(0);
}
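If the approximation from cv::distanceTransform is not smooth enough, one option (a sketch only; segDistance is a helper name introduced here, not part of the code above) is to compute the exact point-to-segment distance in double precision and take the minimum over all stroke segments instead of reading distMap:

#include <algorithm>
#include <cmath>
#include <opencv2/core.hpp>

// Exact distance from point p to the segment a-b, in double precision.
static double segDistance(const cv::Point2d& p, const cv::Point2d& a, const cv::Point2d& b)
{
    const cv::Point2d ab = b - a;
    const cv::Point2d ap = p - a;
    const double len2 = ab.x * ab.x + ab.y * ab.y;
    if (len2 == 0.0) return std::hypot(ap.x, ap.y);                 // degenerate segment
    const double t = std::max(0.0, std::min(1.0, (ap.x * ab.x + ap.y * ab.y) / len2));
    const cv::Point2d closest(a.x + t * ab.x, a.y + t * ab.y);      // nearest point on the segment
    return std::hypot(p.x - closest.x, p.y - closest.y);
}

In the per-pixel loop, radius would then be the minimum of segDistance(cv::Point2d(x, y), strokeSampling[i], strokeSampling[i + 1]) over all i, at the cost of more computation than the distance transform.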
I've been working on drawing the Julia set using a distance estimator instead of the normalized iteration count. I usually use the code below and play around with the iteration count until I get a decent enough picture.
double Mandelbrot::getJulia(double x, double y)
{
complex<double> z(x, y);
complex<double> c(-0.7269, 0.1889);
double iterations = 0;
while (iterations < MAX)
{
z = z * z + c;
if (abs(z) > 2) {
return iterations + 1.0 - log(log2(abs(z)));
}
iterations++;
}
return double(MAX);
}
I then call this for each point and draw to a bitmap:
ZoomTool zt(WIDTH, HEIGHT);
zt.add(Zoom(WIDTH / 2, HEIGHT / 2, 4.0 / WIDTH));
for (int y = 0; y < HEIGHT; y++) {
for (int x = 0; x < WIDTH; x++) {
pair<double, double> coords = zt.zoomIn(x, y);
double iterations = Mandelbrot::getJulia(coords.first,
coords.second);
double ratio = iterations / Mandelbrot::MAX;
double h = 0;
double s = 0;
double v = 0;
if (ratio != 1)
{
h = 360.0*ratio;
s = 1.0;
v = 1.0;
}
HSV hsv(h, s, v);
RGB rgb(0, 0, 0);
rgb = toRGB(hsv);
bitmap.setPixel(x, y, rgb._r, rgb._g, rgb._b);
}
}
At 600 iterations, I get this:
This is not great, but it is better than what I get with the distance estimator, which I am now attempting to use. I've implemented the distance estimator as below:
double Mandelbrot::getJulia(double x, double y)
{
complex<double> z(x,y);
complex<double> c(-0.7269, 0.1889);
complex<double> dz = 0;
double iterations = 0;
while (iterations < MAX)
{
dz = 2.0 * dz * z + 1.0;
z = z * z + c;
if (abs(z) > 2)
{
return abs(z) * log(abs(z)) / abs(dz);
}
iterations++;
}
return Mandelbrot::MAX;
}
At 600 iterations, I get the following image:
Am I not normalizing the colors correctly? I'm guessing this is happening because I'm normalizing to 360.0 and doing a conversion from HSV to RGB. Since the distances are quite small, I get a very condensed distribution of colors.
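Not from the question, but one common way to deal with the condensed distance values is to compress them before mapping to hue, for example by measuring the distance in pixel units and applying a root or logarithm. A minimal sketch, where pixelWidth (the width of one pixel in complex-plane units, here 4.0 / WIDTH) is an assumed input:

#include <algorithm>
#include <cmath>

// Map a distance estimate to [0, 1] for coloring.
// The 0.25 exponent only controls contrast and can be tuned freely.
double normalizeDistance(double dist, double pixelWidth)
{
    if (dist <= 0.0) return 0.0;                         // guard against non-positive estimates
    double t = dist / pixelWidth;                        // distance measured in pixels
    t = std::min(1.0, std::pow(t, 0.25));                // spread the very small values
    return t;                                            // e.g. h = 360.0 * t
}

Points that never escape (where getJulia returns MAX) can still be drawn black, as the ratio != 1 check already does.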
I am working on a college compsci project and I would like some help with a field-of-view algorithm. It works mostly, but in some situations the algorithm sees through walls and highlights walls the player should not be able to see.
void cMap::los(int x0, int y0, int radius)
{ //Does line of sight from any particular tile
for(int x = 0; x < m_Height; x++) {
for(int y = 0; y < m_Width; y++) {
getTile(x,y)->setVisible(false);
}
}
double xdif = 0;
double ydif = 0;
bool visible = false;
float dist = 0;
for (int x = MAX(x0 - radius,0); x < MIN(x0 + radius, m_Height); x++) { //Loops through x values within view radius
for (int y = MAX(y0 - radius,0); y < MIN(y0 + radius, m_Width); y++) { //Loops through y values within view radius
xdif = pow( (double) x - x0, 2);
ydif = pow( (double) y - y0, 2);
dist = (float) sqrt(xdif + ydif); //Gets the distance between the two points
if (dist <= radius) { //If the tile is within view distance,
visible = line(x0, y0, x, y); //check if it can be seen.
if (visible) { //If it can be seen,
getTile(x,y)->setVisible(true); //Mark that tile as viewable
}
}
}
}
}
bool cMap::line(int x0,int y0,int x1,int y1)
{
bool steep = abs(y1-y0) > abs(x1-x0);
if (steep) {
swap(x0, y0);
swap(x1, y1);
}
if (x0 > x1) {
swap(x0,x1);
swap(y0,y1);
}
int deltax = x1-x0;
int deltay = abs(y1-y0);
int error = deltax/2;
int ystep;
int y = y0;
if (y0 < y1)
ystep = 1;
else
ystep = -1;
for (int x = x0; x < x1; x++) {
if ( steep && getTile(y,x)->isBlocked()) {
getTile(y,x)->setVisible(true);
getTile(y,x)->setDiscovered(true);
return false;
} else if (!steep && getTile(x,y)->isBlocked()) {
getTile(x,y)->setVisible(true);
getTile(x,y)->setDiscovered(true);
return false;
}
error -= deltay;
if (error < 0) {
y = y + ystep;
error = error + deltax;
}
}
return true;
}
If anyone could help me make the first blocked tile visible while stopping sight at the tiles behind it, I would appreciate it.
Thanks,
Manderin87
You seem to be attempting to create a raycasting algorithm. I assume you have knowledge of how Bresenham's lines work, so I'll cut to the chase.
Instead of checking the visibility of each cell in the potential field of view, you only need to launch Bresenham lines from the FOV centre towards each cell at the very perimeter of the potential FOV area (the square you loop through). At each step of the Bresenham line, you check the cell status. The pseudocode for each ray would go like this:
while (current_cell != destination) {
current_cell.visible = true;
if (current_cell.opaque) break;
else current_cell = current_cell.next();
}
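A concrete sketch of that loop in C++, reusing the question's cMap/getTile interface; castRay is a helper name introduced here, and the circular radius check and the initial visibility clearing from the original los() are only noted in comments:

// Walk a Bresenham line from (x0, y0) towards (x1, y1), marking cells visible
// until the first blocking cell (which is also marked) stops the ray.
// As in the question's loops, x is checked against m_Height and y against m_Width.
void cMap::castRay(int x0, int y0, int x1, int y1)
{
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    int x = x0, y = y0;
    while (true) {
        if (x < 0 || x >= m_Height || y < 0 || y >= m_Width) return; // off the map
        getTile(x, y)->setVisible(true);
        if (getTile(x, y)->isBlocked()) {
            getTile(x, y)->setDiscovered(true); // the first wall is seen, then the ray stops
            return;
        }
        if (x == x1 && y == y1) return;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x += sx; }
        if (e2 <= dx) { err += dx; y += sy; }
    }
}

void cMap::los(int x0, int y0, int radius)
{
    // (first clear setVisible(false) for every tile, as in the original los();
    // a distance check against radius could be added inside castRay to keep
    // the field of view circular)
    for (int x = x0 - radius; x <= x0 + radius; x++) {
        castRay(x0, y0, x, y0 - radius);
        castRay(x0, y0, x, y0 + radius);
    }
    for (int y = y0 - radius; y <= y0 + radius; y++) {
        castRay(x0, y0, x0 - radius, y);
        castRay(x0, y0, x0 + radius, y);
    }
}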
Please remember that raycasting produces tons of artifacts and you might also need postprocessing after you have calculated your field of view.
Some useful resources:
ray casting on Roguebasin
ray casting FOV implementation in libtcod (in C, you can dig through the repository for a C++ wrapper to it)
a FOV study