OpenGL heightmap terrain and normals with low resolution - C++

I did basic terrain generation from a heightmap image in OpenGL. It works fine, but the terrain vertices and normals come out at too low a resolution, as in the following images:
Image of the normals
Image of the vertices in the terrain
The terrain mesh generation doesn't matter here; it's just basic triangle generation. So let's go to the normals and heightmap sampling part:
// Calculate the height of a vertex in the terrain
int Terrain::GetPixelHeight(unsigned char* data, int imageWidth, int x, int y)
{
    if (x < 0 || x >= 256 || y < 0 || y >= 256) // If the coordinates fall outside the image limits, just return 0
        return 0;
    else
    {
        unsigned char* pixelOffset = data + (x + imageWidth * y) * 1; // Locate the pixel in the image data
        unsigned char r = pixelOffset[0]; // Get R value
        unsigned char g = pixelOffset[1]; // Get G value
        unsigned char b = pixelOffset[2]; // Get B value
        float height = (float)r + (float)g + (float)b; // Sum the channels to get the height
        return height / 40; // Return the height, smoothed by a factor of 40
    }
}
Now the normals calculation:
// Calculate the normals based on the terrain height
float heightL = GetPixelHeight(data, y, x - 1, z);
float heightR = GetPixelHeight(data, y, x + 1, z);
float heightD = GetPixelHeight(data, y, x, z - 1);
float heightU = GetPixelHeight(data, y, x, z + 1);
glm::vec3 normalVector = glm::normalize(glm::vec3(heightL - heightR, 2.0f, heightD - heightU));
normals.push_back(normalVector.x);
normals.push_back(normalVector.y);
normals.push_back(normalVector.z);
The shaders are basic too. I can guarantee there are no problems with them, because they are the same shaders used by the other objects in the game, and those objects have no problems with the lighting calculation. So I think the problem is the normals calculation with the heightmap.
PS: The terrain size is the same as the image, 256x256, and the image only has black and white values.
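For comparison, a sampler that keeps fractional heights and reads a single-channel image could look like the sketch below (an illustration only, assuming the 256x256 grayscale layout described in the PS; GetPixelHeightF is a hypothetical name):
// Hypothetical sketch: sample a single-channel 8-bit heightmap and keep
// the fractional height instead of truncating it to an int.
float Terrain::GetPixelHeightF(unsigned char* data, int imageWidth, int x, int y)
{
    if (x < 0 || x >= imageWidth || y < 0 || y >= imageWidth) // square 256x256 image assumed
        return 0.0f;
    unsigned char v = data[x + imageWidth * y]; // one byte per pixel in a grayscale image
    return (float)v / 40.0f; // same smoothing divisor as above, minus the integer truncation
}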

Related

I have a device reporting a left-handed coordinate angle and magnitude; how do I represent that as a line on the screen from the center?

The device I am using generates vectors like this:
How do I translate polar coordinates (angle and magnitude) from a left-handed coordinate system to a Cartesian line, drawn on a screen where the origin point is the middle of the screen?
I am displaying the line on a WT32-SC01 screen using C++. There is a tft.drawLine function, but it works in normal pixel locations, where (0,0) is the upper-left corner of the screen.
This is what I have so far (abbreviated)
....
int screen_height = tft.height();
int screen_width = tft.width();
// Device can read to 12m and reports in mm
float zoom_factor = (screen_width / 2.0) / 12000.0;
int originY = (int)(screen_height / 2);
int originX = (int)(screen_width / 2);
// Offset is for screen scrolling. No screen offset to start
int offsetX = 0;
int offsetY = 0;
...
// ld06 holds the reported angles and distances.
Coord coord = polarToCartesian(ld06.angles[i], ld06.distances[i]);
drawVector(coord, WHITE);
Coord polarToCartesian(float theta, float r) {
    // cos() and sin() take radians
    float rad = theta * 0.017453292519;
    Coord converted = {
        (int)(r * cos(rad)),
        (int)(r * sin(rad))
    };
    return converted;
}
void drawVector(Coord coord, int color) {
    // Cartesian relative to the center of the screen, factoring in zoom and pan
    int destX = (int)(zoom_factor * coord.x) + originX + offsetX;
    int destY = originY - (int)(zoom_factor * coord.y) + offsetY;
    // From the middle of the screen (originX, originY) to the destination (x, y)
    tft.drawLine(originX, originY, destX, destY, color);
}
I have something drawing on the screen, but now I have to translate from a left-handed coordinate system, and the whole plane is rotated 90 degrees. How do I do that?
If I understood correctly, your coordinate system has x pointing to the right and y pointing down, and you used the formula for the standard math coordinate system where y points up, so multiplying your sin by -1 should do the trick (if it doesn't, try multiplying random things by -1; it often works for this kind of problem).
I am assuming (from your image) that your coordinate system has x going right, y going up, the angle measured clockwise from the y axis, (0,0) also being the center of your polar coordinates, and your trigonometric functions accepting radians. Then:
#include <math.h>
float x, y, ang, r;
const float deg = M_PI / 180.0;
// ang = <0,360> // your angle
// r >= 0        // your radius (magnitude)
x = r * sin(ang * deg);
y = r * cos(ang * deg);
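Combining both answers with the question's drawVector() math, the whole conversion to screen pixels might look like this sketch (polarToScreen is a hypothetical name; Coord, zoom_factor, originX/Y and offsetX/Y are the question's variables):
// Sketch: left-handed polar input (degrees, clockwise from +y) straight
// to screen pixels, where (0,0) is the upper-left corner of the display.
Coord polarToScreen(float thetaDeg, float r) {
    float rad = thetaDeg * 0.017453292519; // degrees to radians
    float x = r * sin(rad); // +x to the right
    float y = r * cos(rad); // +y up in device space
    Coord screen = {
        (int)(zoom_factor * x) + originX + offsetX,
        originY - (int)(zoom_factor * y) + offsetY // flip y for the y-down screen
    };
    return screen;
}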

Changing a heightmap so it shows 16 heightmaps on a 4x4 grid

Change the program to display 16 identical heightmaps arranged in a 4 x 4 grid. The edges of the heightmaps should be side by side in X and Z coordinates. However, they will not touch in the Y direction because the heights will be different.
C++
The code below is what I have already; I am just not too sure how to make it show the 16 identical heightmaps arranged in a 4x4 grid. I know it has to do with the squares on the heightmap, but I am very confused.
const int HEIGHTMAP_SIZE = 12;
float heights[HEIGHTMAP_SIZE + 1][HEIGHTMAP_SIZE + 1];

initDisplay();
for (unsigned int x = 0; x <= HEIGHTMAP_SIZE; x++)
{
    for (unsigned int z = 0; z <= HEIGHTMAP_SIZE; z++)
    {
        heights[x][z] = (x % 2) * 0.5f - z * z * 0.05f;
    }
}
//TextureManager::activate("rainbow.bmp");
initHeightmapDisplayList();

void initHeightmapHeights()
{
    for (unsigned int x0 = 0; x0 < HEIGHTMAP_SIZE; x0++)
    {
        unsigned int x1 = x0 + 1;
        float tex_x0 = (float)(x0) / HEIGHTMAP_SIZE;
        float tex_x1 = (float)(x1) / HEIGHTMAP_SIZE;
        glBegin(GL_TRIANGLE_STRIP);
        for (unsigned int z = 0; z <= HEIGHTMAP_SIZE; z++)
        {
            float tex_z = (float)(z) / HEIGHTMAP_SIZE;
            glTexCoord2d(tex_x1, tex_z);
            glVertex3d(x1, heights[x1][z], z);
            glTexCoord2d(tex_x0, tex_z);
            glVertex3d(x0, heights[x0][z], z);
        }
        glEnd();
    }
}

void initHeightmapDisplayList()
{
    heightmap_list.begin();
    glEnable(GL_TEXTURE_2D);
    TextureManager::activate("ground.bmp");
    glColor3d(1.0, 1.0, 1.0);
    initHeightmapHeights();
    glDisable(GL_TEXTURE_2D);
    heightmap_list.end();
}
I suspect that your TextureManager already has a way of doing this without direct calls to OpenGL.
What you should do is make the texture repeat itself.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
By then scaling your texture coordinates up by a factor of 4, you would obtain a 4x4 grid of the original texture.
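For instance, against the initHeightmapHeights() code above, the change could look like this (an untested sketch; set the wrap mode once while the texture is bound):
// Let texture coordinates outside [0,1] wrap around instead of clamping
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
// ...then scale the coordinates passed to glTexCoord2d inside the loops:
float tex_x0 = 4.0f * x0 / HEIGHTMAP_SIZE;
float tex_x1 = 4.0f * x1 / HEIGHTMAP_SIZE;
float tex_z  = 4.0f * z  / HEIGHTMAP_SIZE;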

Drawing a circle in C++ using OpenGL

I've been trying to draw a circle in C++ using OpenGL. So far I have a compressed circle, and it has a random line going across the screen.
This is the function I'm using to get this shape.
void Sprite::init(int x, int y, int width, int height, Type mode, float scale) {
    _x = x;
    _y = y;
    _width = width;
    _height = height;
    // Generate the buffer if it hasn't been generated yet
    if (_vboID == 0) {
        glGenBuffers(1, &_vboID);
    }
    Vertex vertexData[360];
    if (mode == Type::CIRCLE) {
        float rad = 3.14159;
        for (int i = 0; i < 359; i++) {
            vertexData[i].setPosition((rad * scale) * cos(i), (rad * scale) * sin(i));
        }
    }
    // Tell OpenGL to bind our vertex buffer object
    glBindBuffer(GL_ARRAY_BUFFER, _vboID);
    // Upload the data to the GPU
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertexData), vertexData, GL_STATIC_DRAW);
    // Unbind the buffer
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
What is causing the line? Why is my circle being compressed?
Sorry if this is a dumb question or if it doesn't belong on this website; I'm very new to both C++ and this site.
It is difficult to be sure without testing the code myself, but I'll guess anyway.
Your weird line is probably caused by the buffer not being fully initialized. This is wrong:
Vertex vertexData[360];
for (int i = 0; i < 359; i++) {
It should be:
for (int i = 0; i < 360; i++) {
or else the position at vertexData[359] is left uninitialized and contains some far away point.
As for getting an ellipse instead of a circle, that is probably caused by your viewport not having the same scale horizontally and vertically. If you configure the viewport plus transformation matrices to have a viewing frustum of X=-10..10, Y=-10..10, but the actual viewport is X=0..800 and Y=0..600, for example, then the scales differ and your image will be distorted.
The solution would be one of:
Create a square viewport instead of rectangular. Check your arguments to glViewport().
Define a view matrix that accounts for the ratio your viewport has (see the sketch after this list). You don't show how you set the view/world matrix; maybe you are not even using matrices... If that is the case, you should probably use one.
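A minimal sketch of that second option, assuming the 800x600 viewport from the example above and a fixed-function (compatibility) context:
// Match the orthographic volume to the viewport's aspect ratio so a
// circle in world units stays circular on screen.
glViewport(0, 0, 800, 600);
float aspect = 800.0f / 600.0f; // viewport width / height
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-10.0f * aspect, 10.0f * aspect, -10.0f, 10.0f, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);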
I don't understand exactly what you want to obtain, but... cos() and sin() take an argument in radians; so, instead of cos(i) and sin(i), I suppose you need cos((2*rad*i)/360.0) and sin((2*rad*i)/360.0) or, simplified, cos((rad*i)/180.0) and sin((rad*i)/180.0).
And what about the center and the radius of the circle?
(x, y) should be the center of the circle?
scale is the radius?
In this case, I suppose you should write something like (caution: not tested)
Vertex vertexData[360];
float rad = 3.14159;
if (mode == Type::CIRCLE) {
    for (int i = 0; i < 360; ++i) { // 360 iterations, so the last vertex is initialized too
        float angle = (rad / 180) * i; // (thanks Rodrigo)
        vertexData[i].setPosition(x + scale * cos(angle), y + scale * sin(angle));
    }
}
or, losing precision but avoiding some multiplications,
Vertex vertexData[360];
float rad = 3.14159;
float angIncr = rad / 180.0;
if (mode == Type::CIRCLE) {
    float angle = 0.0f; // declared outside the loop: a for-init can't declare two different types
    for (int i = 0; i < 360; ++i, angle += angIncr) {
        vertexData[i].setPosition(x + scale * cos(angle), y + scale * sin(angle));
    }
}
But what about width and height?
p.s.: sorry for my bad English.
--- modified with suggestion from Rodrigo --

Rasterisation Algorithm: finding the "ST" coordinates of a point in a 2D quad, and inverse projection

My goal is to render the image of a quad using the rasterisation algorithm. I have gotten as far as:
creating the quad in 3D
projecting the quad's vertices onto the screen using a perspective divide
converting the resulting coordinates from screen space to raster space, and computing the bounding box of the quad in raster space
looping over all pixels inside this bounding box, and finding out if the current pixel P is contained within the quad. For this I am using a simple test which consists of taking the dot product between the edge AB of the quad and the vector defined between the vertex A and the point P. I repeat this process for all 4 edges, and if the sign is the same for all of them, then the point is inside the quad.
I have implemented this successfully (see code below). But I am stuck on the remaining bit I'd like to play with, which is essentially finding the st or texture coordinates of my quad.
I don't know if it's possible to find the st coordinates of the current pixel P in the quad in raster space, and then convert that back into world space? Could someone please point me in the right direction or tell me how to do this?
Alternatively, how can I compute the z or depth value of the pixel contained in the quad? I guess it's related to finding the st coordinates of the point in the quad, and then interpolating the z values of the vertices?
PS: this is NOT homework. I am doing this to understand the rasterization algorithm; precisely where I am stuck now is the bit I don't understand, which I believe involves some sort of inverse projection in the GPU rendering pipeline, but I am just lost at this point. Thanks for your help.
Vec3f verts[4];   // vertices of the quad in world space
Vec2f vraster[4]; // vertices of the quad in raster space
uint8_t outside = 0; // is the quad in raster space visible at all?
Vec2i bmin(10e8), bmax(-10e8);
for (uint32_t j = 0; j < 4; ++j) {
    // transform the unit quad to its world position by transforming each
    // one of its vertices by a transformation matrix (represented
    // here by 3 unit vectors and a translation value)
    verts[j].x = quads[j].x * right.x + quads[j].y * up.x + quads[j].z * forward.x + pt[i].x;
    verts[j].y = quads[j].x * right.y + quads[j].y * up.y + quads[j].z * forward.y + pt[i].y;
    verts[j].z = quads[j].x * right.z + quads[j].y * up.z + quads[j].z * forward.z + pt[i].z;
    // project the vertices onto the image plane (perspective divide)
    verts[j].x /= -verts[j].z;
    verts[j].y /= -verts[j].z;
    // assume the image plane is 1 unit away from the eye
    // and fov = 90 degrees, thus the bottom-left and top-right
    // coordinates of the screen are (-1,-1) and (1,1) respectively
    if (fabs(verts[j].x) > 1 || fabs(verts[j].y) > 1) outside |= (1 << j);
    // convert image plane coordinates to raster space
    vraster[j].x = (int32_t)((verts[j].x + 1) * 0.5 * width);
    vraster[j].y = (int32_t)((1 - (verts[j].y + 1) * 0.5) * width);
    // compute the bounding box of the quad in raster space
    if (vraster[j].x < bmin.x) bmin.x = (int)std::floor(vraster[j].x);
    if (vraster[j].y < bmin.y) bmin.y = (int)std::floor(vraster[j].y);
    if (vraster[j].x > bmax.x) bmax.x = (int)std::ceil(vraster[j].x);
    if (vraster[j].y > bmax.y) bmax.y = (int)std::ceil(vraster[j].y);
}
// cull if all vertices are outside the canvas boundaries
if (outside == 0x0F) continue;
// precompute the edges of the quad
Vec2f edges[4];
for (uint32_t j = 0; j < 4; ++j) {
    edges[j] = vraster[(j + 1) % 4] - vraster[j];
}
// loop over all pixels contained in the bounding box
for (int32_t y = std::max(0, bmin.y); y <= std::min((int32_t)(width - 1), bmax.y); ++y) {
    for (int32_t x = std::max(0, bmin.x); x <= std::min((int32_t)(width - 1), bmax.x); ++x) {
        bool inside = true;
        for (uint32_t j = 0; j < 4 && inside; ++j) {
            Vec2f v = Vec2f(x + 0.5, y + 0.5) - vraster[j];
            float d = edges[j].x * v.x + edges[j].y * v.y;
            inside &= (d > 0);
        }
        // the pixel is inside the quad, mark it in the image
        if (inside) {
            buffer[y * width + x] = 255;
        }
    }
}
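One common way to fill the gap described in the question (a sketch, not part of the original post): split the quad into two triangles, use edge functions to get barycentric weights, and interpolate 1/z and st/z perspective-correctly. edgeFunction, st0, st1 and st2 are hypothetical names; Vec2f/Vec3f are as in the code above.
// Signed parallelogram area spanned by (b - a) and (p - a); the same
// quantity a rasteriser uses for its inside test.
float edgeFunction(const Vec2f& a, const Vec2f& b, const Vec2f& p) {
    return (p.x - a.x) * (b.y - a.y) - (p.y - a.y) * (b.x - a.x);
}

// For a pixel p inside the triangle (vraster[0], vraster[1], vraster[2]):
Vec2f p(x + 0.5f, y + 0.5f);
float area = edgeFunction(vraster[0], vraster[1], vraster[2]);
float w0 = edgeFunction(vraster[1], vraster[2], p) / area; // weight of vertex 0
float w1 = edgeFunction(vraster[2], vraster[0], p) / area; // weight of vertex 1
float w2 = edgeFunction(vraster[0], vraster[1], p) / area; // weight of vertex 2
// Depth: 1/z interpolates linearly in raster space, so interpolate that and invert
// (verts[j].z still holds the camera-space z, since only x and y were divided).
float z = 1.0f / (w0 / verts[0].z + w1 / verts[1].z + w2 / verts[2].z);
// st: interpolate st/z the same way, then multiply back by z
// (st0, st1, st2 would hold the texture coordinates of the three vertices).
float s = (w0 * st0.x / verts[0].z + w1 * st1.x / verts[1].z + w2 * st2.x / verts[2].z) * z;
float t = (w0 * st0.y / verts[0].z + w1 * st1.y / verts[1].z + w2 * st2.y / verts[2].z) * z;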

Finding the center of an image for rotation in OpenGL

So I have this piece of code, which pretty much draws various 2D textures to the screen, though there are multiple sprites that have to be 'dissected' from the texture (spritesheet). The problem is that rotation is not working properly: while it rotates, it does not rotate about the center of the texture, which is what I am trying to do. I have narrowed it down to the translation being incorrect:
glTranslatef(x + sr->x/2 - sr->w/2,
             y + sr->y/2 - sr->h/2, 0);
glRotatef(ang, 0, 0, 1.f);
glTranslatef(-x + -sr->x/2 - -sr->w/2,
             -y + -sr->y/2 - -sr->h/2, 0);
x and y are the position it's being drawn to; the sheet rect struct contains the x and y position of the sprite within the texture, along with w and h, which are the width and height of the 'sprite' in the texture. I've tried various other formulas, such as:
glTranslatef(x, y, 0);
The three below, also with the negative signs switched to positive (x - y to x + y):
glTranslatef(sr->x/2 - sr->w/2, sr->y/2 - sr->h/2, 0);
glTranslatef(sr->x - sr->w/2, sr->y - sr->h/2, 0 );
glTranslatef(sr->x - sr->w, sr->y - sr->w, 0 );
glTranslatef(.5,.5,0);
It might also be helpful to say that:
glOrtho(0,screen_width,screen_height,0,-2,10);
is in use.
I've tried reading various tutorials, going through various forums, and asking various people, but there doesn't seem to be a solution that works, nor can I find any useful resources that explain how to find the center of the image in order to translate it to '(0,0)'. I'm pretty new to OpenGL, so a lot of this stuff takes a while for me to digest.
Here's the entire function:
void Apply_Surface(float x, float y, Sheet_Container* source, Sheet_Rect* sr, float ang = 0, bool flipx = 0, bool flipy = 0, int e_x = -1, int e_y = -1) {
    float imgwi, imghi;
    glLoadIdentity();
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, source->rt());
    // rotation
    imghi = source->rh();
    imgwi = source->rw();
    Sheet_Rect t_shtrct(0, 0, imgwi, imghi);
    if (sr == NULL) // in case a sheet rect is not provided, assume the
                    // width and height of the texture with x/y of 0/0
        sr = &t_shtrct;
    glPushMatrix();
    //
    int wid, hei;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &wid);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &hei);
    glTranslatef(-sr->x + -sr->w,
                 -sr->y + -sr->h, 0);
    glRotatef(ang, 0, 0, 1.f);
    glTranslatef(sr->x + sr->w,
                 sr->y + sr->h, 0);
    // Yeah, out-dated way of drawing to the screen, but it works for now.
    GLfloat tex[] = {
        (sr->x + sr->w * flipx) / imgwi, 1 - (sr->y + sr->h * !flipy) / imghi,
        (sr->x + sr->w * flipx) / imgwi, 1 - (sr->y + sr->h * flipy) / imghi,
        (sr->x + sr->w * !flipx) / imgwi, 1 - (sr->y + sr->h * flipy) / imghi,
        (sr->x + sr->w * !flipx) / imgwi, 1 - (sr->y + sr->h * !flipy) / imghi
    };
    GLfloat vertices[] = { // vertices to put on screen
        x, (y + sr->h),
        x, y,
        (x + sr->w), y,
        (x + sr->w), (y + sr->h)
    };
    // index array
    GLubyte index[6] = { 0, 1, 2, 2, 3, 0 };
    float fx = (x / (float)screen_width) - (float)sr->w / 2 / (float)imgwi;
    float fy = (y / (float)screen_height) - (float)sr->h / 2 / (float)imghi;
    // activate arrays
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    // pass vertex and texture information
    glVertexPointer(2, GL_FLOAT, 0, vertices);
    glTexCoordPointer(2, GL_FLOAT, 0, tex);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, index);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glPopMatrix();
    glDisable(GL_TEXTURE_2D);
}
Sheet container class:
class Sheet_Container {
    GLuint texture;
    int width, height;
public:
    Sheet_Container();
    Sheet_Container(GLuint, int = -1, int = -1);
    void Load(GLuint, int = -1, int = -1);
    float rw();
    float rh();
    GLuint rt();
};
Sheet rect class:
struct Sheet_Rect {
    float x, y, w, h;
    Sheet_Rect();
    Sheet_Rect(int xx, int yy, int ww, int hh);
};
Image loading function:
Sheet_Container Game_Info::Load_Image(const char* fil) {
    ILuint t_id;
    ilGenImages(1, &t_id);
    ilBindImage(t_id);
    ilLoadImage(const_cast<char*>(fil));
    int width = ilGetInteger(IL_IMAGE_WIDTH), height = ilGetInteger(IL_IMAGE_HEIGHT);
    return Sheet_Container(ilutGLLoadImage(const_cast<char*>(fil)), width, height);
}
Your quad (two triangles) is centered at:
( x + sr->w / 2, y + sr->h / 2 )
You need to move that point to the origin, rotate, and then move it back:
glTranslatef ( (x + sr->w / 2.0f), (y + sr->h / 2.0f), 0.0f); // 3rd
glRotatef (0,0,0,1.f); // 2nd
glTranslatef (-(x + sr->w / 2.0f), -(y + sr->h / 2.0f), 0.0f); // 1st
Here is where I think you are getting tripped up. People naturally assume that OpenGL applies transformations in the order they appear (top-to-bottom); that is not the case. OpenGL effectively swaps the operands every time it multiplies two matrices:
M1 x M2 x M3
(1) = M1 x M2
(2) = (M1 x M2) x M3

(1) M2 * M1
(2) M3 * (M2 * M1) --> M3 * M2 * M1 (row-major / textbook math notation)
The technical term for this is post-multiplication; it all has to do with the way matrices are implemented in OpenGL (column-major). Suffice it to say, you should generally read glTranslatef, glRotatef, glScalef, etc. calls from bottom to top.
With that out of the way, your current rotation does not make any sense.
You are telling GL to rotate 0 degrees around an axis: <0,0,1> (the z-axis, in other words). The axis is correct, but a 0-degree rotation is not going to do anything ;)
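Substituting the actual angle, the corrected sequence reads like this (a sketch using the question's variable names; remember to read it bottom-to-top):
glTranslatef( (x + sr->w / 2.0f),  (y + sr->h / 2.0f), 0.0f); // 3rd: move the center back
glRotatef(ang, 0.0f, 0.0f, 1.0f);                             // 2nd: rotate by ang degrees about z
glTranslatef(-(x + sr->w / 2.0f), -(y + sr->h / 2.0f), 0.0f); // 1st: move the sprite's center to the origin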