Perlin Noise getting wrong values in Y axis (C++) - c++

Issue
I'm trying to implement the Perlin noise algorithm in 2D with a single octave on a 16x16 grid. I'm using this as heightmap data for a terrain, but it only seems to work along one axis: whenever the sample point crosses into a new Y section of the Perlin noise grid, the gradient is very different from what I expect (for example, it often flips from 0.98 to -0.97, a very sudden change).
This image shows the staggered terrain in the z direction (which is the y axis in the 2D Perlin Noise grid)
Code
I've put the code that calculates which sample point to use at the end, since it's quite long and I believe the issue isn't there; essentially I scale the terrain down to match the Perlin noise grid (16x16) and then sample through all the points.
Gradient At Point
So the code that calculates the gradient at a sample point is the following:
// Find the gradient at a certain sample point
float PerlinNoise::gradientAt(Vector2 point)
{
    // Decimal part of the float
    float relativeX = point.x - (int)point.x;
    float relativeY = point.y - (int)point.y;
    Vector2 relativePoint = Vector2(relativeX, relativeY);

    // Find the weights of the 4 surrounding points
    vector<float> weights = surroundingWeights(point);

    float fadeX = fadeFunction(relativePoint.x);
    float fadeY = fadeFunction(relativePoint.y);

    float lerpA = MathUtils::lerp(weights[0], weights[1], fadeX);
    float lerpB = MathUtils::lerp(weights[2], weights[3], fadeX);
    float lerpC = MathUtils::lerp(lerpA, lerpB, fadeY);
    return lerpC;
}
Surrounding Weights of Point
I believe the issue is somewhere in here, in the function that calculates the weights of the 4 points surrounding a sample point, but I can't figure out what's wrong, since all the values seem sensible when stepping through it.
// Find the surrounding weights of a point
vector<float> PerlinNoise::surroundingWeights(Vector2 point)
{
    // Produces correct values
    vector<Vector2> surroundingPoints = surroundingPointsOf(point);
    vector<float> weights;
    for (unsigned i = 0; i < surroundingPoints.size(); ++i) {
        // Vector from the corner to the sample point
        Vector2 cornerToPoint = surroundingPoints[i].toVector(point);
        // Getting the seeded vector from the grid (corner coords are whole numbers)
        int x = (int)surroundingPoints[i].x;
        int y = (int)surroundingPoints[i].y;
        Vector2 seededVector = baseGrid[x][y];
        // Dot product between the seeded vector and the corner-to-sample-point vector
        float dotProduct = cornerToPoint.dot(seededVector);
        weights.push_back(dotProduct);
    }
    return weights;
}
OpenGL Setup and Sample Point
Setting up the heightmap and getting the sample point. Variables 'wrongA' and 'wrongB' are an example of where the gradient flips and changes suddenly.
void HeightMap::GenerateRandomTerrain()
{
    int perlinGridSize = 16;
    PerlinNoise perlin_noise = PerlinNoise(perlinGridSize, perlinGridSize);

    numVertices = RAW_WIDTH * RAW_HEIGHT;
    numIndices = (RAW_WIDTH - 1) * (RAW_HEIGHT - 1) * 6;
    vertices = new Vector3[numVertices];
    textureCoords = new Vector2[numVertices];
    indices = new GLuint[numIndices];

    float perlinScale = RAW_HEIGHT / (float)(perlinGridSize - 1);
    float height = 50;

    float wrongA = perlin_noise.gradientAt(Vector2(0, 68.0f / perlinScale));
    float wrongB = perlin_noise.gradientAt(Vector2(0, 69.0f / perlinScale));

    for (int x = 0; x < RAW_WIDTH; ++x) {
        for (int z = 0; z < RAW_HEIGHT; ++z) {
            int offset = (x * RAW_WIDTH) + z;
            float xVal = (float)x / perlinScale;
            float yVal = (float)z / perlinScale;
            float noise = perlin_noise.gradientAt(Vector2(xVal, yVal));
            vertices[offset] = Vector3(x * HEIGHTMAP_X, noise * height, z * HEIGHTMAP_Z);
            textureCoords[offset] = Vector2(x * HEIGHTMAP_TEX_X, z * HEIGHTMAP_TEX_Z);
        }
    }

    numIndices = 0;
    for (int x = 0; x < RAW_WIDTH - 1; ++x) {
        for (int z = 0; z < RAW_HEIGHT - 1; ++z) {
            int a = (x * RAW_WIDTH) + z;
            int b = ((x + 1) * RAW_WIDTH) + z;
            int c = ((x + 1) * RAW_WIDTH) + (z + 1);
            int d = (x * RAW_WIDTH) + (z + 1);
            indices[numIndices++] = c;
            indices[numIndices++] = b;
            indices[numIndices++] = a;
            indices[numIndices++] = a;
            indices[numIndices++] = d;
            indices[numIndices++] = c;
        }
    }

    BufferData();
}

Turned out the issue was in the interpolation stage:
float lerpA = MathUtils::lerp(weights[0], weights[1], fadeX);
float lerpB = MathUtils::lerp(weights[2], weights[3], fadeX);
float lerpC = MathUtils::lerp(lerpA, lerpB, fadeY);
I had the interpolation in the y axis the wrong way around, so it should have been:
lerp(lerpB, lerpA, fadeY)
Instead of:
lerp(lerpA, lerpB, fadeY)
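For reference, the tail of gradientAt with the fix applied - a minimal sketch, assuming the corner ordering that surroundingWeights produces (only the final lerp changes):

float lerpA = MathUtils::lerp(weights[0], weights[1], fadeX);
float lerpB = MathUtils::lerp(weights[2], weights[3], fadeX);
// y interpolation reversed: blend from the lerpB row towards the lerpA row
return MathUtils::lerp(lerpB, lerpA, fadeY);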

Related

Is there a method to recalculate an equation in terms of a different variable?

I am currently a senior in AP Calculus BC and have taken the challenge of replicating a topic in C++ Qt. This topic covers integrals as area beneath a curve, and rotations of said areas to form a solid model with a definite volume.
I have successfully rotated a custom equation defined as:
double y = abs(qSin(qPow(graphXValue, graphXValue)) / qPow(2, (qPow(graphXValue, graphXValue) - M_PI/2) / M_PI));
My question is how to rotate such an equation around the Y-axis instead of the X-axis. Are there any methods to approximate solving this equation in terms of y instead of x? Are there any existing implementations of such a task?
Keep in mind, I am calculating each point for the transformation in a 3D coordinate system:
for (float x = 0.0f; x < t_functionMaxX - t_projectionStep; x += t_projectionStep)
{
    currentSet = new QSurfaceDataRow;
    nextSet = new QSurfaceDataRow;

    float x_pos_mapped = x;
    float y_pos_mapped = static_cast<float>(ui->customPlot->graph(0)->data()->findBegin(static_cast<double>(x), true)->value);
    float x_pos_mapped_ahead = x + t_projectionStep;
    float y_pos_mapped_ahead = static_cast<float>(graph1->data()->findBegin(static_cast<double>(x + t_projectionStep), true)->value);

    QList<QVector3D> temp_points;
    for (float currentRotation = static_cast<float>(-2 * M_PI); currentRotation < static_cast<float>(2 * M_PI); currentRotation += static_cast<float>(M_PI / 180))
    {
        float y_pos_calculated = static_cast<float>(qCos(static_cast<qreal>(currentRotation))) * y_pos_mapped;
        float z_pos_calculated = static_cast<float>(qSin(static_cast<qreal>(currentRotation))) * y_pos_mapped;
        float y_pos_calculated_ahead = static_cast<float>(qCos(static_cast<qreal>(currentRotation))) * y_pos_mapped_ahead;
        float z_pos_calculated_ahead = static_cast<float>(qSin(static_cast<qreal>(currentRotation))) * y_pos_mapped_ahead;

        QVector3D point(x_pos_mapped, y_pos_calculated, z_pos_calculated);
        QVector3D point_ahead(x_pos_mapped_ahead, y_pos_calculated_ahead, z_pos_calculated_ahead);
        *currentSet << point;
        *nextSet << point_ahead;
        temp_points << point;
    }
    *data << currentSet << nextSet;
    points << temp_points;
}
Essentially, you rotate the vector (x, f(x), 0) around the Y axis, so the Y value remains the same while the X and Z parts vary according to the rotation.
I also replaced all the static_cast<float> parts with explicit invocations of the float constructor, which (I find) reads a bit better.
// Render the upper part, grow from the inside
for (float x = 0.0f; x < t_functionMaxX - t_projectionStep; x += t_projectionStep)
{
    currentSet = new QSurfaceDataRow;
    nextSet = new QSurfaceDataRow;

    float x_pos_mapped = x;
    float y_pos_mapped = float(ui->customPlot->graph(0)->data()->findBegin(double(x), true)->value);
    float x_pos_mapped_ahead = x + t_projectionStep;
    float y_pos_mapped_ahead = float(graph1->data()->findBegin(double(x + t_projectionStep), true)->value);

    QList<QVector3D> temp_points;
    for (float currentRotation = float(-2 * M_PI); currentRotation < float(2 * M_PI); currentRotation += float(M_PI / 180))
    {
        float x_pos_calculated = float(qCos(qreal(currentRotation))) * x_pos_mapped;
        float z_pos_calculated = float(qSin(qreal(currentRotation))) * x_pos_mapped;
        float x_pos_calculated_ahead = float(qCos(qreal(currentRotation))) * x_pos_mapped_ahead;
        float z_pos_calculated_ahead = float(qSin(qreal(currentRotation))) * x_pos_mapped_ahead;

        QVector3D point(x_pos_calculated, y_pos_mapped, z_pos_calculated);
        QVector3D point_ahead(x_pos_calculated_ahead, y_pos_mapped_ahead, z_pos_calculated_ahead);
        *currentSet << point;
        *nextSet << point_ahead;
        temp_points << point;
    }
    *data << currentSet << nextSet;
    points << temp_points;
}
Next, you need to add the bottom "plate". This is simply a bunch of triangles that connect (0,0,0) with two adjacent points of the rotation of (1,0,0) around the Y axis, just like we did above.
Finally, if f(t_functionMaxX) is not zero, you need to add a side that connects (t_functionMaxX, f(t_functionMaxX), 0) to (t_functionMaxX, 0, 0), again rotating in steps around the Y axis.
Note that this will do weird things if y < 0. How you want to solve that is up to you.
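As a minimal sketch of that bottom plate under the same Qt setup (the names radius and step are illustrative, not from the original code; radius stands in for the rotated point (1,0,0)):

// Bottom "plate": a fan of triangles around (0,0,0) in the y = 0 plane.
const float radius = 1.0f;
const float step = float(M_PI / 180);
for (float a = 0.0f; a < float(2 * M_PI); a += step)
{
    QVector3D centre(0.0f, 0.0f, 0.0f);
    QVector3D p0(radius * float(qCos(qreal(a))), 0.0f, radius * float(qSin(qreal(a))));
    QVector3D p1(radius * float(qCos(qreal(a + step))), 0.0f, radius * float(qSin(qreal(a + step))));
    // Emit the triangle (centre, p0, p1) into whatever buffer or
    // data series the rest of the surface is built from.
}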

weird inaccuracy in line rotation - c++

I have programmed a simple dragon curve fractal. It seems to work for the most part, but there is an odd logical error that shifts the rotation of certain lines by one pixel. This wouldn't normally be an issue, but after a few generations, at the right size, the fractal begins to look wonky.
I am using OpenCV in C++ to generate it, but I'm pretty sure it's a logical error rather than a display error. I have printed the values to the console multiple times and seen for myself that there is an off-by-one difference between values that are intended to be exactly the same - meaning a line may have a y of 200 at one end and 201 at the other.
Here is the full code:
#include <iostream>
#include <cmath>
#include <opencv2/opencv.hpp>

const int width = 500;
const int height = 500;
const double PI = std::atan(1) * 4.0;

struct point {
    double x;
    double y;
    point(double x_, double y_) {
        x = x_;
        y = y_;
    }
};

cv::Mat img(width, height, CV_8UC3, cv::Scalar(255, 255, 255));

double deg_to_rad(double degrees) { return degrees * PI / 180; }

point rotate(int degree, int centx, int centy, int ll) {
    double radians = deg_to_rad(degree);
    return point(centx + (ll * std::cos(radians)), centy + (ll * std::sin(radians)));
}

void generate(point& r, std::vector<point>& verticies, int rotation = 90) {
    int curRotation = 90;
    bool start = true;
    point center = r;
    point rot(0, 0);
    std::vector<point> verticiesc(verticies);
    for (point i : verticiesc) {
        double dx = center.x - i.x;
        double dy = center.y - i.y;
        // distance from centre
        int ll = std::sqrt(dx * dx + dy * dy);
        // angle from centre
        curRotation = std::atan2(dy, dx) * 180 / PI;
        // add 90 degrees of rotation
        rot = rotate(curRotation + rotation, center.x, center.y, ll);
        verticies.push_back(rot);
        // endpoint, where the next centre will be
        if (start) {
            r = rot;
            start = false;
        }
    }
}

void gen(int gens, int bwidth = 1) {
    int ll = 7;
    std::vector<point> verticies = {
        point(width / 2, height / 2 - ll),
        point(width / 2, height / 2)
    };
    point rot(width / 2, height / 2);
    for (int i = 0; i < gens; i++) {
        generate(rot, verticies);
    }
    // draw lines
    for (std::size_t i = 0; i < verticies.size(); i += 2) {
        cv::line(img, cv::Point(verticies[i].x, verticies[i].y),
                 cv::Point(verticies[i + 1].x, verticies[i + 1].y),
                 cv::Scalar(0, 0, 0), 1, 8);
    }
}

int main() {
    gen(10);
    cv::imshow("", img);
    cv::waitKey(0);
    return 0;
}
First, you use int to store point coordinates and distances (in rotate and for ll) - that's a bad idea, because you throw away the fractional part of the position every time. Use double or float.
Second, your method for drawing fractals is not very stable numerically. You'd be better off storing the original shape plus the rotations/translations/scales that describe where and how to draw scaled copies of the original shape.
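For instance, since the dragon curve only ever turns in 90-degree steps, the rotation can be done exactly, with no sqrt/atan2 round trip at all - a minimal sketch (rotate90 is an illustrative helper, not from the posted code):

// Rotate p by exactly 90 degrees (counter-clockwise in standard
// coordinates) about c: (x, y) -> (-y, x) relative to the centre.
point rotate90(const point& p, const point& c) {
    return point(c.x - (p.y - c.y), c.y + (p.x - c.x));
}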
Also, I believe this is a bug:
for (point i : verticies)
{
    ...
    verticies.push_back(rot);
    ...
}
Changing the size of verticies inside such a range-based for loop invalidates its iterators and can cause a crash or undefined behaviour. (The posted code actually iterates over the copy verticiesc, which avoids this.)
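If you'd rather not copy the whole vector every generation, an index-based loop over the original size is also safe - a minimal sketch reusing the names from generate above:

// operator[] is re-evaluated each pass and the element is copied,
// so a reallocation caused by push_back cannot leave anything dangling.
const std::size_t n = verticies.size();
for (std::size_t i = 0; i < n; ++i) {
    point p = verticies[i];   // copy before appending
    // ... compute rot from p, center and rotation as before ...
    verticies.push_back(rot);
}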
Turns out it was to do with floating-point precision. I changed
x=x_;
y=y_;
to
x=std::round(x_);
y=std::round(y_);
and it works.

Intersection of 3 circles not correct c++

I am trying to find the common intersection (x, y) of 3 circles using C++, but I'm not getting the proper output. What am I doing wrong in my code? Here is the program I'm using to calculate the common intersection point. First I calculate the intersection of two circles, which comes from the quadratic equation, as (x0, y0), (x1, y1). Then, assuming the 3rd circle intersects at at least one point, I plug those two intersection points into the 3rd circle; whichever satisfies it is taken as the common intersection point of the 3 circles. Am I doing anything wrong?
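For reference, the two-circle step described above, with centres $(x_0, y_0)$, $(x_1, y_1)$ and radii $r_0$ = p0pi, $r_1$ = p1pi: subtracting the two circle equations cancels the squared terms and leaves the line a1*x + b1*y + c1 = 0 that the code constructs,

$$(x - x_0)^2 + (y - y_0)^2 = r_0^2, \qquad (x - x_1)^2 + (y - y_1)^2 = r_1^2$$
$$\Rightarrow\; 2(x_1 - x_0)\,x + 2(y_1 - y_0)\,y + \left(x_0^2 - x_1^2 + y_0^2 - y_1^2 - r_0^2 + r_1^2\right) = 0.$$

Substituting x = -(b1*y + c1)/a1 back into the first circle equation then gives the quadratic a*y^2 + b*y + c = 0 whose roots the code computes.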
struct pix { int x; int y; };
vector<vector<pix>> obj;

auto p0 = obj[stoi(r[2])][stoi(r[0])];
auto p1 = obj[stoi(r[2])][stoi(r[1])];
int ax = p1.x - p0.x;
int ay = p1.y - p0.y;
int bx = -ay;
int by = ax;
pix pv;
pv.x = p1.x + bx;
pv.y = p1.y + by;
OrigImg.copyTo(cv_ptr->image);
for (auto pi : obj[stoi(r[2])]) {
    float p0pi = sqrt(pow(p0.x - pi.x, 2) + pow(p0.y - pi.y, 2));
    float p1pi = sqrt(pow(p1.x - pi.x, 2) + pow(p1.y - pi.y, 2));
    float pvpi = sqrt(pow(pv.x - pi.x, 2) + pow(pv.y - pi.y, 2));
    float a1 = 2 * (p1.x - p0.x);
    float b1 = 2 * (p1.y - p0.y);
    float c1 = p0.x * p0.x - p1.x * p1.x + p0.y * p0.y - p1.y * p1.y - p0pi * p0pi + p1pi * p1pi;
    float a = a1 * a1 + b1 * b1;
    float b = 2 * (b1 * c1 + b1 * a1 * p0.x - p0.y * a1 * a1);
    float c = c1 * c1 + 2 * c1 * p0.x * a1 + a1 * a1 * (p0.x * p0.x + p0.y * p0.y - p0pi * p0pi);
    int y0 = -(b + sqrt(b * b - 4 * a * c)) / 2 * a;
    int y1 = (b + sqrt(b * b - 4 * a * c)) / 2 * a;
    int x0 = -(b1 * y0 + c1) / a1;
    int x1 = -(b1 * y1 + c1) / a1;
    int x, y;
    cout << "hello" << x0 << "\t" << y0 << "\t" << x1 << "\t" << y1 << endl;
    cout << pow(x0 - pv.x, 2) + pow(y0 - pv.y, 2) << "\t" << pvpi * pvpi << "\t"
         << pow(x1 - pv.x, 2) + pow(y1 - pv.y, 2) << "\t" << pvpi * pvpi << endl;
    if (sqrt(pow(x0 - pv.x, 2) + pow(y0 - pv.y, 2)) == pvpi) {
        x = x0; y = y0;
    }
    else if (sqrt(pow(x1 - pv.x, 2) + pow(y1 - pv.y, 2)) == pvpi) {
        x = x1; y = y1;
    }
    if (x >= 0 && x < OrigImg.rows && y >= 0 && y < OrigImg.cols) {
        cv_ptr->image.at<cv::Vec3b>(y, x)[2] = 0;
        cv_ptr->image.at<cv::Vec3b>(y, x)[1] = 0;
        cv_ptr->image.at<cv::Vec3b>(y, x)[0] = 0;
    }
}
image_pub_.publish(cv_ptr->toImageMsg());
Here p0, p1, pv are the centres of the 3 circles, which are at different positions. What I'm trying to do: I have saved the pixels belonging to one object in a map obj[obj_index][pixel_index], where pixel_index is the index of each unique pixel belonging to that object and obj_index is the index of each unique object.
After applying a pattern-matching algorithm I get r[0] = obj_index, r[1] = p0 index, r[2] = p1 index for the object. Now I'm trying to visualize and check which pixels are present in the currently analysed object w.r.t. the previously saved object.
The output comes out like:
hello 150492 150336 -150180 -150336
4.51763e+10 873 4.52274e+10 873
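Two things stand out, and would explain output of this magnitude. In C++, /2*a parses as (.../2)*a, so the code multiplies by a where it should divide by 2a; and comparing a float distance against pvpi with == will almost never be exactly true once the roots have been truncated to int. A sketch of the quadratic-formula lines with explicit parentheses:

float disc = b * b - 4 * a * c;           // discriminant
float y0 = (-b - sqrt(disc)) / (2 * a);   // note the (2 * a)
float y1 = (-b + sqrt(disc)) / (2 * a);

A more robust selection step would also pick whichever root lies nearer the third circle rather than testing exact equality.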

Texture mapping so close but not quite right

I am building a raytracer and my texture mapping isn't quite right. It's very close, though. I built a cup in Blender and did a UV unwrap to display a texture. I exported the object and loaded it into my raytracer with the same texture. Here are two pictures:
As you can see, the textures look very close, but something is off. If you look at the bottom of the cup, on the sides, you can see they aren't the same; but the textures are all aligned correctly, so it does look somewhat right. The texture coordinates are calculated using barycentric coordinates.
Vect n = getTriangleNormal();
Vect ba = B.add(A.negative()).negative();
Vect ca = C.add(A.negative()).negative();
Vect ap = A.add(point.negative()).negative();
Vect bp = B.add(point.negative()).negative();
Vect cp = C.add(point.negative()).negative();

double areaABC = n.dotProduct(ba.crossProduct(ca));
double areaPBC = n.dotProduct(bp.crossProduct(cp));
double areaPCA = n.dotProduct(cp.crossProduct(ap));

if (areaABC < 0) { areaABC = -areaABC; }
if (areaPBC < 0) { areaPBC = -areaPBC; }
if (areaPCA < 0) { areaPCA = -areaPCA; }

double u = areaPBC / areaABC; // alpha
double v = areaPCA / areaABC; // beta
double w = 1.0f - u - v;      // gamma
Then, to find the color, I take the interpolated point and map it onto the image:
Vect uv = (textA.mult(u)).add(textB.mult(v)).add(textC.mult(w));
int width = texture->columns();
int height = texture->rows();

double x = width * (uv.getX());
x = (int)x;
double y = height * (1 - uv.getY());
y = (int)y;

// vector<unsigned int> c = texture->getPixel(x, y);
// return Color(c[0]/255.0, c[1]/255.0, c[2]/255.0, 0);
int row = y;
int column = x;
Magick::PixelPacket* pixels = texture->getPixels(0, 0, width, height);
Magick::Color color = pixels[width * row + column];

double range = pow(2, texture->modulusDepth());
double r = color.redQuantum() / range;
double g = color.greenQuantum() / range;
double b = color.blueQuantum() / range;
return Color(r, g, b, 0);
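One small robustness note, offered as a guess rather than as the cause of the misalignment: when uv.getX() or 1 - uv.getY() lands exactly on 1.0, the truncated index equals width or height and reads one texel past the pixel cache, so clamping is cheap insurance (clampIndex is an illustrative helper, not part of the original code):

// Keep the sampled texel inside [0, maxExclusive).
static int clampIndex(int v, int maxExclusive) {
    if (v < 0) return 0;
    if (v >= maxExclusive) return maxExclusive - 1;
    return v;
}

int column = clampIndex((int)(width * uv.getX()), width);
int row    = clampIndex((int)(height * (1 - uv.getY())), height);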

A method for indexing triangles from a loaded heightmap?

I am currently writing a method to load in a noisy heightmap, but I lack the triangle indices to draw it. I want an algorithm that takes an image, its width, and its height, and constructs a terrain node out of it.
Here's what I have so far, in somewhat pseudo
Vertex* vertices = new Vertex[image.width * image.height];
Index* indices; // How do I judge how many indices I will have?
float scaleX = 1 / image.width;
float scaleY = 1 / image.height;
float currentYScale = 0;
for (int y = 0; y < image.height; ++y) {
    float currentXScale = 0;
    for (int x = 0; x < image.width; ++x) {
        Vertex* v = vertices[x * y];
        v.x = currentXScale;
        v.y = currentYScale;
        v.z = image[x, y];
        currentXScale += scaleX;
    }
    currentYScale += scaleY;
}
This works well enough for my needs; my only problem is this: how would I calculate the number of indices and their positions for drawing the triangles? I have some familiarity with indices, but not with how to calculate them programmatically; I can only do it statically.
As far as your code above goes, using vertices[x * y] isn't right - if you use that, then e.g. vert(2,3) == vert(3,2). What you want is something like vertices[y * image.width + x], but you can do it more efficiently by incrementing a counter (see below).
Here's the equivalent code I use. It's in C# unfortunately, but hopefully it should illustrate the point:
/// <summary>
/// Constructs the vertex and index buffers for the terrain (for use when rendering the terrain).
/// </summary>
private void ConstructBuffers()
{
    int heightmapHeight = Heightmap.GetLength(0);
    int heightmapWidth = Heightmap.GetLength(1);
    int gridHeight = heightmapHeight - 1;
    int gridWidth = heightmapWidth - 1;

    // Construct the individual vertices for the terrain.
    var vertices = new VertexPositionTexture[heightmapHeight * heightmapWidth];
    int vertIndex = 0;
    for (int y = 0; y < heightmapHeight; ++y)
    {
        for (int x = 0; x < heightmapWidth; ++x)
        {
            var position = new Vector3(x, y, Heightmap[y, x]);
            var texCoords = new Vector2(x * 2f / heightmapWidth, y * 2f / heightmapHeight);
            vertices[vertIndex++] = new VertexPositionTexture(position, texCoords);
        }
    }

    // Create the vertex buffer and fill it with the constructed vertices.
    this.VertexBuffer = new VertexBuffer(Renderer.GraphicsDevice, typeof(VertexPositionTexture), vertices.Length, BufferUsage.WriteOnly);
    this.VertexBuffer.SetData(vertices);

    // Construct the index array.
    var indices = new short[gridHeight * gridWidth * 6]; // 2 triangles per grid square x 3 vertices per triangle
    int indicesIndex = 0;
    for (int y = 0; y < gridHeight; ++y)
    {
        for (int x = 0; x < gridWidth; ++x)
        {
            int start = y * heightmapWidth + x;
            indices[indicesIndex++] = (short)start;
            indices[indicesIndex++] = (short)(start + 1);
            indices[indicesIndex++] = (short)(start + heightmapWidth);
            indices[indicesIndex++] = (short)(start + 1);
            indices[indicesIndex++] = (short)(start + 1 + heightmapWidth);
            indices[indicesIndex++] = (short)(start + heightmapWidth);
        }
    }

    // Create the index buffer.
    this.IndexBuffer = new IndexBuffer(Renderer.GraphicsDevice, typeof(short), indices.Length, BufferUsage.WriteOnly);
    this.IndexBuffer.SetData(indices);
}
I guess the key point is that given a heightmap of size heightmapHeight * heightmapWidth, you need (heightmapHeight - 1) * (heightmapWidth - 1) * 6 indices, since you're drawing:
2 triangles per grid square
3 vertices per triangle
(heightmapHeight - 1) * (heightmapWidth - 1) grid squares in your terrain.
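A C++ port of the same index construction, as a sketch (w and h stand for the heightmap dimensions, and the vertex for grid cell (x, y) is assumed to live at index y * w + x, as above):

#include <vector>

// 6 indices per grid square: two triangles sharing the quad's diagonal.
std::vector<unsigned int> buildIndices(int w, int h) {
    std::vector<unsigned int> indices;
    indices.reserve((h - 1) * (w - 1) * 6);
    for (int y = 0; y < h - 1; ++y) {
        for (int x = 0; x < w - 1; ++x) {
            unsigned int start = y * w + x;
            indices.push_back(start);          // first triangle
            indices.push_back(start + 1);
            indices.push_back(start + w);
            indices.push_back(start + 1);      // second triangle
            indices.push_back(start + 1 + w);
            indices.push_back(start + w);
        }
    }
    return indices;
}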