Texture mapping so close but not quite right - C++

I am building a raytracer and my texture mapping isn't quite right. It's very close, though. I built a cup in Blender and did a UV unwrap to apply a texture, then exported the object and loaded it into my raytracer with the same texture. Here are two pictures:
As you can see, the textures look very close, but something is off. If you look at the sides near the bottom of the cup, you can see they aren't the same, but the textures are still aligned correctly, so it does look somewhat right. The texture coordinates are calculated using barycentric coordinates:
Vect n = getTriangleNormal();
// Difference vectors: ba = A - B, ca = A - C, ap = P - A, bp = P - B, cp = P - C
Vect ba = B.add(A.negative()).negative();
Vect ca = C.add(A.negative()).negative();
Vect ap = A.add(point.negative()).negative();
Vect bp = B.add(point.negative()).negative();
Vect cp = C.add(point.negative()).negative();
// Scalar triple products, proportional to the signed areas of the triangles
double areaABC = n.dotProduct(ba.crossProduct(ca));
double areaPBC = n.dotProduct(bp.crossProduct(cp));
double areaPCA = n.dotProduct(cp.crossProduct(ap));
// Take absolute values before forming the ratios
if (areaABC < 0) { areaABC = -areaABC; }
if (areaPBC < 0) { areaPBC = -areaPBC; }
if (areaPCA < 0) { areaPCA = -areaPCA; }
double u = areaPBC / areaABC; // alpha
double v = areaPCA / areaABC; // beta
double w = 1.0 - u - v;       // gamma
Then, to find the color, I take the interpolated point and map it onto the image:
// Interpolate the per-vertex texture coordinates with the barycentric weights
Vect uv = (textA.mult(u)).add(textB.mult(v)).add(textC.mult(w));
int width = texture->columns();
int height = texture->rows();
// Convert UV to integer pixel coordinates (v is flipped because image rows grow downwards)
double x = width * (uv.getX()); x = (int)x;
double y = height * (1 - uv.getY()); y = (int)y;
//vector<unsigned int> c = texture->getPixel(x, y);
//return Color(c[0]/255.0, c[1]/255.0, c[2]/255.0, 0);
int row = y;
int column = x;
// Read the texel from the ImageMagick pixel cache
Magick::PixelPacket *pixels = texture->getPixels(0, 0, width, height);
Magick::Color color = pixels[width * row + column];
double range = pow(2, texture->modulusDepth());
double r = color.redQuantum() / range;
double g = color.greenQuantum() / range;
double b = color.blueQuantum() / range;
return Color(r, g, b, 0);
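For comparison, here is a minimal self-contained sketch of the same UV-to-texel lookup using plain structs. The Vec2/Rgb types, the row-major texels buffer, and the clamping are my own assumptions rather than the code from the question, so treat it only as a reference for the intended mapping, not as the fix.

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };   // hypothetical 2D vector, standing in for the UV use of Vect
struct Rgb  { double r, g, b; };

// Interpolate the per-vertex UVs with barycentric weights (u for A, v for B,
// w for C), then fetch the nearest texel, clamping so we never index outside
// the image. `texels` is row-major: `height` rows of `width` pixels.
Rgb sampleTexture(const std::vector<Rgb>& texels, int width, int height,
                  Vec2 uvA, Vec2 uvB, Vec2 uvC,
                  double u, double v, double w)
{
    double s = u * uvA.x + v * uvB.x + w * uvC.x;
    double t = u * uvA.y + v * uvB.y + w * uvC.y;

    int px = (int)std::floor(s * width);
    int py = (int)std::floor((1.0 - t) * height);   // flip v: image rows grow downwards

    px = std::clamp(px, 0, width - 1);
    py = std::clamp(py, 0, height - 1);
    return texels[py * width + px];
}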

Related

Intersection of 3 circles not correct - C++

I am trying to find the common intersection (x, y) of 3 circles using C++, but I'm not getting the proper output. What am I doing wrong in my code? Below is the program I'm using to calculate the common intersection point. First I calculate the intersection of the first two circles, which comes from the quadratic equation, as (x0, y0) and (x1, y1). Then, assuming the 3rd circle intersects at least one of those points, I plug both intersection points into the 3rd circle; whichever satisfies it is taken as the common intersection point of the 3 circles. Am I doing anything wrong?
struct pix { int x; int y; };
map<int, vector<pix>> obj;   // obj[obj_index][pixel_index], as described below

auto p0 = obj[stoi(r[2])][stoi(r[0])];
auto p1 = obj[stoi(r[2])][stoi(r[1])];
int ax = p1.x - p0.x;
int ay = p1.y - p0.y;
int bx = -ay;                // perpendicular to (ax, ay)
int by = ax;
pix pv;
pv.x = p1.x + bx;
pv.y = p1.y + by;
OrigImg.copyTo(cv_ptr->image);
for (auto pi : obj[stoi(r[2])]) {
    float p0pi = sqrt(pow(p0.x - pi.x, 2) + pow(p0.y - pi.y, 2));
    float p1pi = sqrt(pow(p1.x - pi.x, 2) + pow(p1.y - pi.y, 2));
    float pvpi = sqrt(pow(pv.x - pi.x, 2) + pow(pv.y - pi.y, 2));
    // Radical line of the first two circles, then the quadratic in y
    float a1 = 2 * (p1.x - p0.x);
    float b1 = 2 * (p1.y - p0.y);
    float c1 = p0.x*p0.x - p1.x*p1.x + p0.y*p0.y - p1.y*p1.y - p0pi*p0pi + p1pi*p1pi;
    float a = a1*a1 + b1*b1;
    float b = 2 * (b1*c1 + b1*a1*p0.x - p0.y*a1*a1);
    float c = c1*c1 + 2*c1*p0.x*a1 + a1*a1*(p0.x*p0.x + p0.y*p0.y - p0pi*p0pi);
    int y0 = -(b + sqrt(b*b - 4*a*c)) / 2 * a;
    int y1 = (b + sqrt(b*b - 4*a*c)) / 2 * a;
    int x0 = -(b1*y0 + c1) / a1;
    int x1 = -(b1*y1 + c1) / a1;
    int x, y;
    cout << "hello" << x0 << "\t" << y0 << "\t" << x1 << "\t" << y1 << endl;
    cout << pow(x0 - pv.x, 2) + pow(y0 - pv.y, 2) << "\t" << pvpi*pvpi << "\t"
         << pow(x1 - pv.x, 2) + pow(y1 - pv.y, 2) << "\t" << pvpi*pvpi << endl;
    if (sqrt(pow(x0 - pv.x, 2) + pow(y0 - pv.y, 2)) == pvpi) {
        x = x0; y = y0;
    }
    else if (sqrt(pow(x1 - pv.x, 2) + pow(y1 - pv.y, 2)) == pvpi) {
        x = x1; y = y1;
    }
    if (x >= 0 && x < OrigImg.rows && y >= 0 && y < OrigImg.cols) {
        cv_ptr->image.at<cv::Vec3b>(y, x)[2] = 0;
        cv_ptr->image.at<cv::Vec3b>(y, x)[1] = 0;
        cv_ptr->image.at<cv::Vec3b>(y, x)[0] = 0;
    }
}
}
image_pub_.publish(cv_ptr->toImageMsg());
Here p0, p1 and pv are the positions of the 3 circles, which are at different locations. What I'm doing is saving the pixels belonging to one object in a map obj[obj_index][pixel_index], where pixel_index is the index of each unique pixel belonging to that object and obj_index is the index of each unique object.
After applying a pattern matching algorithm I get r[0] = obj_index, r[1] = the p0 index, and r[2] = the p1 index of the object. Now what I'm trying to do is visualize and check which pixels are present in the currently analysed object with respect to the previously saved object.
The output looks like this:
hello 150492 150336 -150180 -150336
4.51763e+10 873 4.52274e+10 873
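For reference, below is a minimal floating-point sketch of the standard two-circle intersection; the Circle/Point types and function names are mine, not from the question. Two details worth comparing with the code above are the fully parenthesised denominators and the use of a tolerance instead of == when testing a candidate point against the third circle.

#include <algorithm>
#include <cmath>
#include <optional>
#include <utility>

struct Circle { double x, y, r; };
struct Point  { double x, y; };

// Classic two-circle intersection: returns the two intersection points,
// or nothing if the circles do not meet.
std::optional<std::pair<Point, Point>> intersectCircles(const Circle& c0, const Circle& c1)
{
    double dx = c1.x - c0.x, dy = c1.y - c0.y;
    double d = std::hypot(dx, dy);
    if (d == 0.0 || d > c0.r + c1.r || d < std::fabs(c0.r - c1.r))
        return std::nullopt;                       // separate, contained, or concentric

    double a = (c0.r * c0.r - c1.r * c1.r + d * d) / (2.0 * d);
    double h = std::sqrt(std::max(0.0, c0.r * c0.r - a * a));

    double mx = c0.x + a * dx / d;                 // point on the line of centers
    double my = c0.y + a * dy / d;

    Point p0{ mx + h * dy / d, my - h * dx / d };
    Point p1{ mx - h * dy / d, my + h * dx / d };
    return std::make_pair(p0, p1);
}

// A candidate lies on the third circle if |distance - r| is within a tolerance,
// never by exact floating-point equality.
bool onCircle(const Point& p, const Circle& c, double eps = 1e-6)
{
    return std::fabs(std::hypot(p.x - c.x, p.y - c.y) - c.r) < eps;
}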

Perlin Noise getting wrong values in Y axis (C++)

Issue
I'm trying to implement the Perlin Noise algorithm in 2D, using a single octave with a grid size of 16x16. I'm using this as heightmap data for a terrain; however, it only seems to work correctly along one axis. Whenever the sample point moves to a new Y section in the Perlin Noise grid, the gradient is very different from what I expect (for example, it often flips from 0.98 to -0.97, which is a very sudden change).
This image shows the staggered terrain in the z direction (which is the y axis in the 2D Perlin Noise grid).
Code
I've put the code that calculates which sample point to use at the end since it's quite long and I believe it's not where the issue is, but essentially I scale down the terrain to match the Perlin Noise grid (16x16) and then sample through all the points.
Gradient At Point
So the code that calculates out the gradient at a sample point is the following:
// Find the gradient at a certain sample point
float PerlinNoise::gradientAt(Vector2 point)
{
    // Decimal part of float
    float relativeX = point.x - (int)point.x;
    float relativeY = point.y - (int)point.y;
    Vector2 relativePoint = Vector2(relativeX, relativeY);

    vector<float> weights(4);
    // Find the weights of the 4 surrounding points
    weights = surroundingWeights(point);

    float fadeX = fadeFunction(relativePoint.x);
    float fadeY = fadeFunction(relativePoint.y);

    float lerpA = MathUtils::lerp(weights[0], weights[1], fadeX);
    float lerpB = MathUtils::lerp(weights[2], weights[3], fadeX);
    float lerpC = MathUtils::lerp(lerpA, lerpB, fadeY);
    return lerpC;
}
Surrounding Weights of Point
I believe the issue is somewhere here, in the function that calculates the weights for the 4 surrounding points of a sample point, but I can't seem to figure out what is wrong since all the values seem sensible in the function when stepping through it.
// Find the surrounding weights of a point
vector<float> PerlinNoise::surroundingWeights(Vector2 point) {
    // Produces correct values
    vector<Vector2> surroundingPoints = surroundingPointsOf(point);
    vector<float> weights;

    for (unsigned i = 0; i < surroundingPoints.size(); ++i) {
        // The corner to the sample point
        Vector2 cornerToPoint = surroundingPoints[i].toVector(point);

        // Getting the seeded vector from the grid
        float x = surroundingPoints[i].x;
        float y = surroundingPoints[i].y;
        Vector2 seededVector = baseGrid[x][y];

        // Dot product between the seededVector and corner to the sample point vector
        float dotProduct = cornerToPoint.dot(seededVector);
        weights.push_back(dotProduct);
    }
    return weights;
}
OpenGL Setup and Sample Point
Setting up the heightmap and getting the sample point. The variables 'wrongA' and 'wrongB' are an example of where the gradient flips and changes suddenly.
void HeightMap::GenerateRandomTerrain() {
    int perlinGridSize = 16;
    PerlinNoise perlin_noise = PerlinNoise(perlinGridSize, perlinGridSize);

    numVertices = RAW_WIDTH * RAW_HEIGHT;
    numIndices = (RAW_WIDTH - 1) * (RAW_HEIGHT - 1) * 6;
    vertices = new Vector3[numVertices];
    textureCoords = new Vector2[numVertices];
    indices = new GLuint[numIndices];

    float perlinScale = RAW_HEIGHT / (float)(perlinGridSize - 1);
    float height = 50;

    float wrongA = perlin_noise.gradientAt(Vector2(0, 68.0f / perlinScale));
    float wrongB = perlin_noise.gradientAt(Vector2(0, 69.0f / perlinScale));

    for (int x = 0; x < RAW_WIDTH; ++x) {
        for (int z = 0; z < RAW_HEIGHT; ++z) {
            int offset = (x * RAW_WIDTH) + z;
            float xVal = (float)x / perlinScale;
            float yVal = (float)z / perlinScale;
            float noise = perlin_noise.gradientAt(Vector2(xVal, yVal));
            vertices[offset] = Vector3(x * HEIGHTMAP_X, noise * height, z * HEIGHTMAP_Z);
            textureCoords[offset] = Vector2(x * HEIGHTMAP_TEX_X, z * HEIGHTMAP_TEX_Z);
        }
    }

    numIndices = 0;
    for (int x = 0; x < RAW_WIDTH - 1; ++x) {
        for (int z = 0; z < RAW_HEIGHT - 1; ++z) {
            int a = (x * (RAW_WIDTH)) + z;
            int b = ((x + 1) * (RAW_WIDTH)) + z;
            int c = ((x + 1) * (RAW_WIDTH)) + (z + 1);
            int d = (x * (RAW_WIDTH)) + (z + 1);

            indices[numIndices++] = c;
            indices[numIndices++] = b;
            indices[numIndices++] = a;

            indices[numIndices++] = a;
            indices[numIndices++] = d;
            indices[numIndices++] = c;
        }
    }
    BufferData();
}
It turned out the issue was in the interpolation stage:
float lerpA = MathUtils::lerp(weights[0], weights[1], fadeX);
float lerpB = MathUtils::lerp(weights[2], weights[3], fadeX);
float lerpC = MathUtils::lerp(lerpA, lerpB, fadeY);
I had the interpolation in the y axis the wrong way around, so it should have been:
lerp(lerpB, lerpA, fadeY)
Instead of:
lerp(lerpA, lerpB, fadeY)
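To see what the corrected ordering does at the boundaries, here is a small self-contained sketch. The lerp helper is my stand-in for MathUtils::lerp, and the corner naming is an assumption rather than the poster's actual grid layout.

#include <cstdio>

// Plain linear interpolation, matching what MathUtils::lerp presumably does.
static float lerp(float a, float b, float t) { return a + t * (b - a); }

// Bilinear blend of four corner values using the corrected ordering from the
// fix above: at fadeY = 0 the result sits on the w2/w3 blend, and at fadeY = 1
// on the w0/w1 blend.
static float bilinear(float w0, float w1, float w2, float w3,
                      float fadeX, float fadeY)
{
    float lerpA = lerp(w0, w1, fadeX);
    float lerpB = lerp(w2, w3, fadeX);
    return lerp(lerpB, lerpA, fadeY);   // was lerp(lerpA, lerpB, fadeY)
}

int main()
{
    // Quick check of the endpoints described above.
    std::printf("%f\n", bilinear(1, 1, -1, -1, 0.5f, 0.0f));  // prints -1
    std::printf("%f\n", bilinear(1, 1, -1, -1, 0.5f, 1.0f));  // prints  1
}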

Negative row and column in terrain following algorithm

I'm trying to do terrain following, and I get a negative camera position on the xz plane. Now I get an out-of-bounds exception, because the row or the column is negative. How would I transform the cell of the grid to the origin correctly, given negative camera coordinates?
Here are the two functions:
int cGrid::getHeightmapEntry(int row, int col)
{
    return m_heightmap[row * 300 + col];
}

float cGrid::getHeight(float x, float z, float _width, float _depth, int _cellSpacing)
{
    // Translate on the xz-plane by the transformation that takes
    // the terrain START point to the origin.
    x = ((float)_width / 2.0f) + x;
    z = ((float)_depth / 2.0f) - z;

    // Scale down by the transformation that makes the cell spacing
    // equal to one. This is given by 1 / cellspacing, since
    // cellspacing * 1 / cellspacing = 1.
    x /= (float)_cellSpacing;
    z /= (float)_cellSpacing;

    // From now on, we interpret our positive z-axis as going in the
    // 'down' direction rather than the 'up' direction. This allows us
    // to extract the row and column simply by flooring x and z:
    float col = ::floorf(x);
    float row = ::floorf(z);
    if (row < 0 || col < 0)
    {
        row = 0;
    }

    // Get the heights of the quad we're in:
    //
    //  A   B
    //  *---*
    //  | / |
    //  *---*
    //  C   D
    float A = getHeightmapEntry(row, col);
    float B = getHeightmapEntry(row, col + 1);
    float C = getHeightmapEntry(row + 1, col);
    float D = getHeightmapEntry(row + 1, col + 1);

    // Find the triangle we are in:
    //
    // Translate by the transformation that takes the upper-left corner
    // of the cell we are in to the origin. Recall that our cell spacing
    // was normalized to 1, so we have a unit square at the origin of our
    // +x -> 'right' and +z -> 'down' system.
    float dx = x - col;
    float dz = z - row;

    // Note: the computations of u and v below are unnecessary; we really
    // only need the height, but we compute the entire vector to emphasize
    // the book's discussion.
    float height = 0.0f;
    if (dz < 1.0f - dx) // upper triangle ABC
    {
        float uy = B - A; // A->B
        float vy = C - A; // A->C
        // Linearly interpolate on each vector. The height is the height of
        // the vertex the vectors u and v originate from (A), plus the heights
        // found by interpolating along each of u and v.
        height = A + Lerp(0.0f, uy, dx) + Lerp(0.0f, vy, dz);
    }
    else // lower triangle DCB
    {
        float uy = C - D; // D->C
        float vy = B - D; // D->B
        // Linearly interpolate on each vector. The height is the height of
        // the vertex the vectors u and v originate from (D), plus the heights
        // found by interpolating along each of u and v.
        height = D + Lerp(0.0f, uy, 1.0f - dx) + Lerp(0.0f, vy, 1.0f - dz);
    }
    return height;
}
float height = m_Grid.getHeight(position.x, position.y, 49 * 300, 49 * 300, 6.1224489795918367f);
if (height != 0)
{
    position.y = height + 10.0f;
}
m_Camera.SetPosition(position.x, position.y, position.z);
bool cGrid::readRawFile(std::string fileName, int m, int n)
{
    // A height for each vertex
    std::vector<BYTE> in(m * n);
    std::ifstream inFile(fileName.c_str(), std::ios_base::binary);
    if (!inFile)
        return false;
    inFile.read(
        (char*)&in[0],  // buffer
        in.size());     // number of bytes to read into buffer
    inFile.close();
    // copy BYTE vector to int vector
    m_heightmap.resize(n * m);
    for (int i = 0; i < in.size(); i++)
        m_heightmap[i] = (float)((in[i]) / 255) * 50.0f;
    return true;
}
m_Grid.readRawFile("castlehm257.raw", 50, 50);
I infer that you’re storing a 50 by 50 matrix inside a 300 by 300 matrix, to represent a grid of 49 by 49 cells. I also infer that m_Grid is an object of type cGrid. Your code appears to contain the following errors:
Argument(2) of call m_Grid.getHeight is not a z value.
Argument(3) of call m_Grid.getHeight is inconsistent with argument(5).
Argument(4) of call m_Grid.getHeight is inconsistent with argument(5).
Implicit cast of literal float to int in argument(5) of call m_Grid.getHeight - the value will be truncated.
Try changing your function call to this:
float height = m_Grid.getHeight(position.x, position.z, 49 * cellspacing, 49 * cellspacing, cellspacing);
-- where cellspacing is as defined in your diagram.
Also, try changing parameter(5) of cGrid::getHeight from int _cellSpacing to float _cellSpacing.
(I have edited this answer a couple of times as my understanding of your code has evolved.)
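On the negative row/col issue itself: the snippet above resets only row when either index goes negative, so one minimal, defensive option is to clamp both indices (leaving room for the + 1 neighbours) before indexing. A sketch, where NUM_ROWS/NUM_COLS are hypothetical constants that must match the dimensions of the heightmap you actually store:

#include <algorithm>

// Hypothetical grid dimensions -- set these to the real stored heightmap size.
const int NUM_ROWS = 300;
const int NUM_COLS = 300;

// Clamp a (row, col) cell so that row, row + 1, col and col + 1 all stay
// inside the heightmap before the A/B/C/D lookups in getHeight().
inline void clampCell(float& row, float& col)
{
    row = std::clamp(row, 0.0f, (float)(NUM_ROWS - 2));
    col = std::clamp(col, 0.0f, (float)(NUM_COLS - 2));
}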

Resizing a picture - VC++

Is there any function that quickly resizes a picture in Visual C++? I want to make a copy of the original picture that is x times smaller and then place it at the center of a black bitmap. The black bitmap would be the size of the first picture.
Here is original picture: https://www.dropbox.com/s/6she1kvcby53qgz/term.bmp
and this is effect that i want to receive: https://www.dropbox.com/s/8ah59z0ip6tq4wd/term2.bmp
In my program I use the Pylon libraries. The images are of type CPylonImage.
Some simple code to handle resizes portably:
For all cases the following legend applies:
w1 - the width of the original image
h1 - the height of the original image
pixels - an array of int with the pixel data
w2 - desired width
h2 - desired height
retval - this is the returned value, it is a new pixel array which contains the manipulated image.
For Linear Interpolation:
I cannot find this on my drive at present (issues with a new HDD), so I have included the bilinear version instead:
For Bilinear Interpolation:
Bilinear Interpolation function
int* resizeBilinear(int* pixels, int w1, int h1, int w2, int h2)
{
    int* retval = new int[w2 * h2];
    int a, b, c, d, x, y, index;
    float x_ratio = ((float)(w1 - 1)) / w2;
    float y_ratio = ((float)(h1 - 1)) / h2;
    float x_diff, y_diff, blue, red, green;
    int offset = 0;
    for (int i = 0; i < h2; i++) {
        for (int j = 0; j < w2; j++) {
            x = (int)(x_ratio * j);
            y = (int)(y_ratio * i);
            x_diff = (x_ratio * j) - x;
            y_diff = (y_ratio * i) - y;
            index = (y * w1 + x);
            a = pixels[index];
            b = pixels[index + 1];
            c = pixels[index + w1];
            d = pixels[index + w1 + 1];

            // blue element
            // Yb = Ab(1-w)(1-h) + Bb(w)(1-h) + Cb(h)(1-w) + Db(w.h), with w = x_diff, h = y_diff
            blue = (a & 0xff) * (1 - x_diff) * (1 - y_diff) + (b & 0xff) * (x_diff) * (1 - y_diff) +
                   (c & 0xff) * (y_diff) * (1 - x_diff) + (d & 0xff) * (x_diff * y_diff);

            // green element
            // Yg = Ag(1-w)(1-h) + Bg(w)(1-h) + Cg(h)(1-w) + Dg(w.h)
            green = ((a >> 8) & 0xff) * (1 - x_diff) * (1 - y_diff) + ((b >> 8) & 0xff) * (x_diff) * (1 - y_diff) +
                    ((c >> 8) & 0xff) * (y_diff) * (1 - x_diff) + ((d >> 8) & 0xff) * (x_diff * y_diff);

            // red element
            // Yr = Ar(1-w)(1-h) + Br(w)(1-h) + Cr(h)(1-w) + Dr(w.h)
            red = ((a >> 16) & 0xff) * (1 - x_diff) * (1 - y_diff) + ((b >> 16) & 0xff) * (x_diff) * (1 - y_diff) +
                  ((c >> 16) & 0xff) * (y_diff) * (1 - x_diff) + ((d >> 16) & 0xff) * (x_diff * y_diff);

            retval[offset++] =
                0xff000000 |                       // hardcoded alpha
                ((((int)red) << 16) & 0xff0000) |
                ((((int)green) << 8) & 0xff00) |
                ((int)blue);
        }
    }
    return retval;
}
For Nearest Neighbour:
int* resizePixels(int* pixels, int w1, int h1, int w2, int h2)
{
    int* retval = new int[w2 * h2];
    // EDIT: added +1 to remedy an early rounding problem
    int x_ratio = (int)((w1 << 16) / w2) + 1;
    int y_ratio = (int)((h1 << 16) / h2) + 1;
    //int x_ratio = (int)((w1<<16)/w2) ;
    //int y_ratio = (int)((h1<<16)/h2) ;
    int x2, y2;
    for (int i = 0; i < h2; i++) {
        for (int j = 0; j < w2; j++) {
            x2 = ((j * x_ratio) >> 16);
            y2 = ((i * y_ratio) >> 16);
            retval[(i * w2) + j] = pixels[(y2 * w1) + x2];
        }
    }
    return retval;
}
Now, the code above is designed to be portable and should work with very little modification in C++, C, C# and Java (I have used it in all four when needed), which eliminates the need for an external library and lets you process any array of pixels, as long as you can represent them in the format the code above expects.
To place the manipulated image in the middle of a black background, all you need to do is copy the data into an array the size of the original at the right locations and fill all the other locations with the value for black :)
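As a rough illustration of that last step, here is a small sketch (my own helper, not part of the answer) that pastes a resized image into the centre of a black canvas the size of the original, using the same packed-int pixel format as the functions above:

// Paste a w2 x h2 image into the centre of a black w1 x h1 canvas.
// Pixels are packed ARGB ints, as in resizeBilinear/resizePixels above.
// Assumes w2 <= w1 and h2 <= h1.
int* centerOnBlack(const int* src, int w2, int h2, int w1, int h1)
{
    int* canvas = new int[w1 * h1];
    for (int i = 0; i < w1 * h1; i++)
        canvas[i] = 0xff000000;            // opaque black

    int xOff = (w1 - w2) / 2;
    int yOff = (h1 - h2) / 2;
    for (int y = 0; y < h2; y++)
        for (int x = 0; x < w2; x++)
            canvas[(y + yOff) * w1 + (x + xOff)] = src[y * w2 + x];
    return canvas;
}

Combined with the functions above, shrinking by a factor f and re-centering would be roughly centerOnBlack(resizeBilinear(pixels, w1, h1, w1 / f, h1 / f), w1 / f, h1 / f, w1, h1).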
Hope this helps. I don't have time to comment it all at present, but I can at a later point today or tomorrow if need be :)

Texture repeating

I want to repeat a small 2x2-pixel texture on a bigger quad, for instance 50x50 pixels.
Set vertices -
float X = 100, Y = 100, Width = 50, Height = 50;
float TextureLeft = 0, TextureTop = 0, TextureRight = 25, TextureBottom = 25;
Vertices[0].x = X;
Vertices[0].y = Y + Height;
Vertices[0].z = 0;
Vertices[0].rhw = 1;
Vertices[0].tu = TextureLeft;
Vertices[0].tv = TextureBottom;
Vertices[1].x = X;
Vertices[1].y = Y;
Vertices[1].z = 0;
Vertices[1].rhw = 1;
Vertices[1].tu = TextureLeft;
Vertices[1].tv = TextureTop;
Vertices[2].x = X + Width;
Vertices[2].y = Y;
Vertices[2].z = 0;
Vertices[2].rhw = 1;
Vertices[2].tu = TextureRight;
Vertices[2].tv = TextureTop;
Vertices[3].x = X;
Vertices[3].y = Y + Height;
Vertices[3].z = 0;
Vertices[3].rhw = 1;
Vertices[3].tu = TextureLeft;
Vertices[3].tv = TextureBottom;
Vertices[4].x = X + Width;
Vertices[4].y = Y;
Vertices[4].z = 0;
Vertices[4].rhw = 1;
Vertices[4].tu = TextureRight;
Vertices[4].tv = TextureTop;
Vertices[5].x = X + Width;
Vertices[5].y = Y + Height;
Vertices[5].z = 0;
Vertices[5].rhw = 1;
Vertices[5].tu = TextureRight;
Vertices[5].tv = TextureBottom;
Draw -
DrawPrimitive(D3DPT_TRIANGLELIST, 0, 6);
Problem is "glitch" in the edge between the triangles, probably because of wrong vertices coordinates and also "glitch" on quad borders.
Original texture - http://i.imgur.com/tNqYePs.png
Result - http://i.imgur.com/sgUZvqE.png
Before the call to DrawPrimitive you should set up the texture wrapping as in this article.
// For the textures other than the first one use "D3DVERTEXTEXTURESAMPLER0+index"
YourDevice->SetSamplerState(D3DVERTEXTEXTURESAMPLER0, D3DSAMP_ADDRESSU, D3DTADDRESS_WRAP);
YourDevice->SetSamplerState(D3DVERTEXTEXTURESAMPLER0, D3DSAMP_ADDRESSV, D3DTADDRESS_WRAP);
To eliminate the glitch along the diagonal, you could use a single quad instead of two triangles.
The problem at the edges is considered here. You have to add a small offset to each texture coordinate; "small" means half a pixel, normalized. If your texture resolution is 512x512, then add (0.5 / 512.0) to each of the u/v coordinates.
If you draw in 2D, you must add 0.5 px to the U and V coordinates when texturing. This gives you exact pixel/texel precision; otherwise you lose half a pixel every time and the texture looks blurry.
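As a concrete illustration of the half-texel advice above, applied to the coordinates from the question (the 2x2 texture size is taken from the question; treat this as a sketch rather than a verified fix):

// Half a texel, normalized to the texture size (2x2 texture from the question).
const float TextureSize = 2.0f;
const float HalfTexel   = 0.5f / TextureSize;

// Shift every texture coordinate by half a texel, as suggested above.
float TextureLeft   = 0.0f  + HalfTexel;
float TextureTop    = 0.0f  + HalfTexel;
float TextureRight  = 25.0f + HalfTexel;
float TextureBottom = 25.0f + HalfTexel;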