I have developed an OpenGL application where we draw strings of text using FreeType and OpenGL.
I want to add rotation capability for the text that I put on the OpenGL window.
For instance, the string "This is a text" should be laid out in a buffer on a plain background and then transformed by a rotation angle, so that the text ends up looking like the image below.
I also have a text background, which is just a regular texture with a buffer. I manually fill this background with a uint8_t buffer that can contain anything from a single colour to an image.
struct Background{
Color color;
Texture* bg_texture;
int x, y;
int w, h;
uint8_t* buffer;
explicit Background(int x, int y):x(x), y(y)
{
};
void create_bg_buffer();
~Background()
{
free(buffer);
}
};
void Background::create_bg_buffer()
{
int w = this->w;
int h = this->h;
if (posix_memalign((void**)&this->buffer, 128, w * h * 4) != 0)
{
VI_ERROR("ERROR::FREETYTPE: Couldn't allocate frame buffer ");
}
int c = 0;
for ( int i = 0; i < w; i++ )
{
for ( int j = 0; j < h; j++ )
{
this->buffer[ c + 0 ] = this->color.get_color_char(Utils::RED);
this->buffer[ c + 1 ] = this->color.get_color_char(Utils::GREEN);
this->buffer[ c + 2 ] = this->color.get_color_char(Utils::BLUE);
this->buffer[ c + 3 ] = 0xFF;
c += 4;
}
}
}
I want users to be able to rotate this text together with its background by a given angle. In and of itself, rotating the two separately is a tedious task, so I want to draw the text into the background's buffer itself and then rotate that.
Please note that, for various reasons, the way I rotate a background is not with an OpenGL transform; instead I take the rectangle's middle point, rotate each corner manually, and pass those points to OpenGL with this code:
...
GLfloat vertices[32] = {
// positions // colors // texture coords
pos.TR_x, pos.TR_y, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, // top right
pos.BR_x, pos.BR_y, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f, // bottom right
pos.BL_x, pos.BL_y, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, // bottom left
pos.TL_x, pos.TL_y, 1.0f, 0.1f, 0.1f, 0.1f, 0.0f, 0.0f // top left
};
unsigned int indices[] = {
0, 1, 3, // first triangle
1, 2, 3 // second triangle
};
glGenVertexArrays(1, &VAO);
glGenBuffers(1, &VBO);
glGenBuffers(1, &EBO);
glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
...
Each pos field holds a rotated corner position; the labels indicate the corner, e.g. TR stands for top-right.
We want to use a framebuffer as the output buffer, and then use this framebuffer for the actual OpenGL output.
How should we alter the render_text function so that it uses the framebuffer to compose the string from its individual characters?
void Text::render_text(float angle_rad, bool has_bg)
{
if(has_bg) background->bg_texture->render(background->w, background->h, background->buffer, 1);
int start_y = ty + background->h;
start_y = ( std::abs(start_y - SCR_HEIGHT) / 2);
int total_h_index = 0;
for(auto& line: lines)
{
line.y = start_y;
line.x = tx;
total_h_index += line.total_height + LINE_GAP;
calc_pos(line.x, line.y, line.total_width, line.total_height, total_h_index);
for (c = line.text.begin(); c != line.text.end(); c++)
{
Character ch = Characters[*c];
line.char_h.push_back(ch.Size.y);
line.chars_y.push_back( line.y - (ch.Size.y - ch.Bearing.y) );
}
}
// glEnable(GL_CULL_FACE);
// glDisable(GL_BLEND);
// glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
shader.use();
glUniform3f(glGetUniformLocation(shader.ID, "textColor"), color.r, color.g, color.b);
glActiveTexture(GL_TEXTURE0);
glBindVertexArray(VAO);
GLfloat vertices[6][4] = {
{ 0.0, 1.0, 0.0, 0.0 },
{ 0.0, 0.0, 0.0, 1.0 },
{ 1.0, 0.0, 1.0, 1.0 },
{ 0.0, 1.0, 0.0, 0.0 },
{ 1.0, 0.0, 1.0, 1.0 },
{ 1.0, 1.0, 1.0, 0.0 }
};
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vertices), vertices); // Be sure to use glBufferSubData and not glBufferData
glBindBuffer(GL_ARRAY_BUFFER, 0);
GLint transition_loc = glGetUniformLocation(shader.ID, "transparency");
glUniform1f(transition_loc, 1.0f);
for(auto& line: lines)
{
GLfloat char_x = 0.0f;
std::string str = line.text;
glm::mat4 transOriginM = glm::translate(glm::mat4(1.0f), glm::vec3(line.x, line.y, 0));
glm::mat4 rotateM = glm::rotate(glm::mat4(1.0f), glm::radians(-angle_rad), glm::vec3(0.0f, 0.0f, 1.0f));
int e = 0;
std::vector<glm::vec2> rotated_pos = calc_rotation(line.chars_x, line.chars_y, -angle_rad, line.total_width);
for (c = str.begin(); c != str.end(); c++)
{
Character ch = Characters[*c];
GLfloat w = ch.Size.x;
GLfloat h = ch.Size.y;
GLfloat xrel = rotated_pos[e].x ; // char_x
GLfloat yrel = rotated_pos[e].y;
e++; // advance to the next precomputed rotated position
glm::mat4 transRelM = glm::translate(glm::mat4(1.0f), glm::vec3(xrel, yrel, 0));
glm::mat4 scaleM = glm::scale(glm::mat4(1.0f), glm::vec3(w, h, 1.0f));
// Keep the translation matrix that sets the position of the text before the rotation matrix
glm::mat4 modelM = transOriginM * transRelM * rotateM * scaleM;
GLint model_loc = glGetUniformLocation(shader.ID, "model");
glUniformMatrix4fv(model_loc, 1, GL_FALSE, glm::value_ptr(modelM));
// Render glyph texture over quad
glBindTexture(GL_TEXTURE_2D, ch.TextureID);
// Render quad
glDrawArrays(GL_TRIANGLES, 0, 6);
}
}
}
As of now, "Adding a character or text" is completely independent from the background operation.
They are just positioned in a way, so it looks like it has a background.
Our aim is to use a single output buffer that will hold both background color and freetype text data.
The following is how we handle the texture and the texture rotation mechanism:
#define _VERTICIZE_X(number, global) _VERTICIZE(number, global) - 1
#define _VERTICIZE_Y(number, global) _VERTICIZE(number, global) + 1
namespace OpenGL
{
Texture::Texture(int x, int y, int w, int h, int gw, int gh, float angle)
{
Utils::Point rotatedPoints[4] = {
{x, y},
{x + w, y},
{x, y + h},
{x + w, y + h},
};
Utils::RotateRectangle(rotatedPoints, angle);
pos.TL_x = _VERTICIZE_X(rotatedPoints[0].x, gw); pos.TL_y = -_VERTICIZE_Y(rotatedPoints[0].y, gh);
pos.TR_x = _VERTICIZE_X(rotatedPoints[1].x, gw); pos.TR_y = -_VERTICIZE_Y(rotatedPoints[1].y, gh);
pos.BL_x = _VERTICIZE_X(rotatedPoints[2].x, gw); pos.BL_y = -_VERTICIZE_Y(rotatedPoints[2].y, gh);
pos.BR_x = _VERTICIZE_X(rotatedPoints[3].x, gw); pos.BR_y = -_VERTICIZE_Y(rotatedPoints[3].y, gh);
}
int Texture::init(float alpha, std::string* filter, Utils::Color proj_filt)
{
shader = Shader("./src/opengl/shaders/texture_shaders/texture.vs", "./src/opengl/shaders/texture_shaders/texture.fs");
void RotateRectangle(Point (&points)[4], float angle) {
// Calculate the center point
Point center = { 0 };
for (int i = 0; i < 4; i++) {
center.x += points[i].x;
center.y += points[i].y;
}
center.x /= 4;
center.y /= 4;
// Rotate each point
float angleRadians = angle * M_PI / 180.0f;
float s = sin(angleRadians);
float c = cos(angleRadians);
for (int i = 0; i < 4; i++) {
// Subtract the center point to get a vector from the center to the point
Point vector = { points[i].x - center.x, points[i].y - center.y };
// Rotate the vector
float x = vector.x;
float y = vector.y;
vector.x = x * c - y * s;
vector.y = x * s + y * c;
// Add the center point back to the rotated vector to get the new point
points[i].x = vector.x + center.x;
points[i].y = vector.y + center.y;
}
}
How can we use a framebuffer so that all the OpenGL and FreeType operations are executed in a single output space, and then rotate the whole text using that single output framebuffer?
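Roughly, what I have in mind is something like the following render-to-texture sketch (names such as fbo, fbo_texture, FB_W, FB_H and SCR_WIDTH are placeholders, not existing code): render the background and the glyphs into a texture-backed FBO once, then draw that single texture as one rotated quad.

// Sketch: create an FBO backed by a texture, render background + glyphs into it,
// then draw that one texture with the rotated quad from the Texture code above.
GLuint fbo, fbo_texture;
glGenFramebuffers(1, &fbo);
glGenTextures(1, &fbo_texture);

glBindTexture(GL_TEXTURE_2D, fbo_texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, FB_W, FB_H, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, fbo_texture, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    VI_ERROR("Framebuffer is not complete");

// Pass 1: render the background quad and the FreeType glyphs into the FBO, unrotated.
glViewport(0, 0, FB_W, FB_H);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
// ... draw the background and call render_text with angle 0 here ...

// Pass 2: back to the default framebuffer; draw fbo_texture as a single quad whose
// corners were rotated with RotateRectangle(), exactly as the background is drawn now.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, SCR_WIDTH, SCR_HEIGHT);
glBindTexture(GL_TEXTURE_2D, fbo_texture);
// ... glDrawElements on the rotated quad ...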
I'm a complete beginner to OpenGL programming and am trying to follow the Breakout tutorial at learnopengl.com but would like to draw the ball as an actual circle, instead of using a textured quad like Joey suggests. However, every result that Google throws back at me for "draw circle opengl 3.3" or similar phrases seems to be at least a few years old, and using even-older-than-that versions of the API :-(
The closest thing that I've found is this SO question, but of course the OP just had to use a custom VertexFormat object to abstract some of the details, without sharing his/her implementation of such! Just my luck! :P
There's also this YouTube tutorial that uses a seemingly-older version of the API, but copying the code verbatim (except for the last few lines which is where the code looks old) still got me nowhere.
My version of SpriteRenderer::initRenderData() from the tutorial:
void SpriteRenderer::initRenderData() {
GLuint vbo;
auto attribSize = 0;
GLfloat* vertices = nullptr;
// Determine whether this sprite is a circle or
// quad and setup the vertices array accordingly
if (!this->isCircle) {
attribSize = 4;
vertices = new GLfloat[24] {...} // works for rendering quads
} else {
// This code is adapted from the YouTube tutorial that I linked
// above and is where things go pear-shaped for me...or at least
// **not** circle-shaped :P
attribSize = 3;
GLfloat x = 0.0f;
GLfloat y = 0.0f;
GLfloat z = 0.0f;
GLfloat r = 100.0f;
GLint numSides = 6;
GLint numVertices = numSides + 2;
GLfloat* xCoords = new GLfloat[numVertices];
GLfloat* yCoords = new GLfloat[numVertices];
GLfloat* zCoords = new GLfloat[numVertices];
xCoords[0] = x;
yCoords[0] = y;
zCoords[0] = z;
for (auto i = 1; i < numVertices; i++) {
xCoords[i] = x + (r * cos(i * (M_PI * 2.0f) / numSides));
yCoords[i] = y + (r * sin(i * (M_PI * 2.0f) / numSides));
zCoords[i] = z;
}
vertices = new GLfloat[numVertices * 3];
for (auto i = 0; i < numVertices; i++) {
vertices[i * 3] = xCoords[i];
vertices[i * 3 + 1] = yCoords[i];
vertices[i * 3 + 2] = zCoords[i];
}
}
// This is where I go back to the learnopengl.com code. Once
// again, the following works for quads but not circles!
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, 24 * sizeof(GLfloat), vertices, GL_STATIC_DRAW);
glBindVertexArray(vao);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, attribSize, GL_FLOAT, GL_FALSE,
attribSize * sizeof(GLfloat), (GLvoid*)0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
}
And here's the SpriteRenderer::DrawSprite() method (the only difference from the original being the if/else around the draw call at the end):
void SpriteRenderer::Draw(vec2 position, vec2 size, GLfloat rotation, vec3 colour) {
// Prepare transformations
shader.Use();
auto model = mat4(1.0f);
model = translate(model, vec3(position, 0.0f));
model = translate(model, vec3(0.5f * size.x, 0.5f * size.y, 0.0f)); // Move origin of rotation to center
model = rotate(model, rotation, vec3(0.0f, 0.0f, 1.0f)); // Rotate quad
model = translate(model, vec3(-0.5f * size.x, -0.5f * size.y, 0.0f)); // Move origin back
model = scale(model, vec3(size, 1.0f)); // Lastly, scale
shader.SetMatrix4("model", model);
// Render textured quad
shader.SetVector3f("spriteColour", colour);
glActiveTexture(GL_TEXTURE0);
texture.Bind();
glBindVertexArray(vao);
if (!isCircular) {
glDrawArrays(GL_TRIANGLES, 0, 6);
} else {
glDrawArrays(GL_TRIANGLE_FAN, 0, 24); // also tried "12" and "8" for the last param, to no avail
}
glBindVertexArray(0);
}
And finally, the shaders (different to the ones used for quads):
// Vertex shader
#version 330 core
layout (location = 0) in vec3 position;
uniform mat4 model;
uniform mat4 projection;
void main() {
gl_Position = projection * model *
vec4(position.xyz, 1.0f);
}
// Fragment shader
#version 330 core
out vec4 colour;
uniform vec3 spriteColour;
void main() {
colour = vec4(spriteColour, 1.0);
}
P.S. I know I could just use a quad but I'm trying to learn how to draw all primitives in OpenGL, not just quads and triangles (thanks anyway Joey)!
P.P.S I just realised that the learnopengl.com site has a whole section devoted to debugging OpenGL apps, so I set that up but to no avail :-( I don't think the error handling is supported by my driver (Intel UHD Graphics 620 latest driver) since the GL_CONTEXT_FLAG_DEBUG_BIT was not set after following the instructions:
Requesting a debug context in GLFW is surprisingly easy as all we have to do is pass a hint to GLFW that we'd like to have a debug output context. We have to do this before we call glfwCreateWindow:
glfwWindowHint(GLFW_OPENGL_DEBUG_CONTEXT, GL_TRUE);
Once we initialize GLFW we should have a debug context if we're using OpenGL version 4.3 or higher, or else we have to take our chances and hope the system is still able to request a debug context. Otherwise we have to request debug output using its OpenGL extension(s).
To check if we successfully initialized a debug context we can query OpenGL:
GLint flags; glGetIntegerv(GL_CONTEXT_FLAGS, &flags);
if (flags & GL_CONTEXT_FLAG_DEBUG_BIT) {
// initialize debug output
}
That if statement is never entered into!
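(For reference, the "// initialize debug output" body from that snippet would look roughly like this; a sketch I have not been able to exercise, since the flag is never set on my machine:)

#include <iostream>

// Sketch of a debug callback (note the APIENTRY calling convention on Windows).
void APIENTRY glDebugOutput(GLenum source, GLenum type, GLuint id, GLenum severity,
                            GLsizei length, const GLchar* message, const void* userParam)
{
    std::cerr << "GL debug message (" << id << "): " << message << std::endl;
}

// ...after creating the context and checking the flag:
if (flags & GL_CONTEXT_FLAG_DEBUG_BIT)
{
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); // report errors at the offending call
    glDebugMessageCallback(glDebugOutput, nullptr);
}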
Thanks to #Mykola's answer to this question I have gotten half-way there:
numVertices = 43;
vertices = new GLfloat[numVertices];
auto i = 2;
auto x = 0.0f,
y = x,
z = x,
r = 0.3f;
auto numSides = 21;
auto TWO_PI = 2.0f * M_PI;
auto increment = TWO_PI / numSides;
for (auto angle = 0.0f; angle <= TWO_PI; angle += increment) {
vertices[i++] = r * cos(angle) + x;
vertices[i++] = r * sin(angle) + y;
}
Which gives me .
Two questions I still have:
1. Why is there an extra line going from the centre to the right side, and how can I fix it?
2. According to #user1118321's comment on a related SO answer, I should be able to prepend another vertex to the array at (0, 0) and use GL_TRIANGLE_FAN instead of GL_LINE_LOOP to get a coloured circle. But this results in no output for me :-( Why?
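For reference, this is the fan-based layout I am trying to get to (a sketch with placeholder values; vao/vbo are assumed to be created as in initRenderData above): the centre vertex comes first, the perimeter loop repeats its first point to close the circle, and the same vertex count is passed to both glBufferData and glDrawArrays.

#include <cmath>
#include <vector>

// numSides and r are placeholder values.
const int   numSides    = 32;
const int   numVertices = numSides + 2;               // centre + perimeter + closing repeat
const float r           = 0.3f;
std::vector<GLfloat> verts(numVertices * 2);           // 2D positions only

verts[0] = 0.0f;                                       // centre x
verts[1] = 0.0f;                                       // centre y
for (int i = 0; i <= numSides; ++i) {                  // <= so the last point repeats the first
    const float angle = i * 2.0f * M_PI / numSides;
    verts[2 + i * 2]     = r * std::cos(angle);
    verts[2 + i * 2 + 1] = r * std::sin(angle);
}

glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(GLfloat), verts.data(), GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), (GLvoid*)0);

// ...and when drawing:
glDrawArrays(GL_TRIANGLE_FAN, 0, numVertices);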
I'm drawing a 10x10 grid of squares at a depth of 0 and trying to highlight the one the mouse is over. I've tried following the tutorial here: http://antongerdelan.net/opengl/raycasting.html
but I don't know if I did it right. I end up with a vector at the end, but I'm not sure what to do with it.
Here's a screenshot of the squares (not sure how it helps..)
http://postimg.org/image/dau330qwt/2
/* Enable attribute index 1 as being used */
glEnableVertexAttribArray(1);
float camera_z = 50;
float camera_x = 0;
float camera_y = 0;
GLuint MatrixID = glGetUniformLocation(program, "MVP");
GLuint ColorID = glGetUniformLocation(program, "input_color");
int mouse_x;
int mouse_y;
while (1) {
int window_width;
int window_height;
SDL_GetWindowSize(win, &window_width, &window_height);
glm::mat4 Projection = glm::perspective(45.0f, ((float)window_width) / window_height, 0.1f, 100.0f);
// printf("Camera at %f %f\n", camera_x, camera_y);
glm::mat4 View = glm::lookAt(glm::vec3(camera_x,camera_y,camera_z), // camera position
glm::vec3(camera_x,camera_y,0), // looking at
glm::vec3(0,1,0)); // up
int map_width = map.width();
int map_height = map.height();
/* Make our background black */
glClearColor(0.0, 0.0, 0.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
// go through my 10x10 map and
for (int i = 0; i < map_width; i++) {
for ( int j = 0; j < map_height; j++) {
glm::mat4 Model = glm::translate(glm::mat4(1.0f), glm::vec3(i, j, 0.0f));
glm::mat4 MVP = Projection * View * Model;
glm::vec3 color = random_color();
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
glUniform3fv(ColorID, 1, &color[0]);
glDrawArrays(GL_LINE_LOOP, 0, 4);
}
}
/* Swap our buffers to make our changes visible */
SDL_GL_SwapWindow(win);
// printf("Window dimensions %d x %d\n", window_width, window_height);
float normalized_mouse_x = (2.0f * mouse_x) / window_width - 1.0f;
float normalized_mouse_y = 1.0f - (2.0f * mouse_y) / window_height;
printf("Normalized mouse position %f x %f\n", normalized_mouse_x, normalized_mouse_y);
glm::vec3 normalized_mouse_vector = glm::vec3(normalized_mouse_x, normalized_mouse_y, 1.0f);
glm::vec4 ray_clip = glm::vec4 (normalized_mouse_vector.x, normalized_mouse_vector.y, -1.0, 1.0);
glm::vec4 ray_eye = glm::inverse(Projection) * ray_clip;
ray_eye = glm::vec4(ray_eye.xy(), -1.0, 0.0);
glm::vec3 ray_world = (glm::inverse(View) * ray_eye).xyz();
ray_world = glm::normalize(ray_world);
// this prints out values like: World ray: 0.000266, 0.000382, 1.000000
printf("World ray: %f, %f, %f\n", ray_world.x, ray_world.y, ray_world.z);
// l = -(camera_z / ray_world.z)
float l = -(camera_z / ray_world.z);
float mouse_world_x = camera_x + l * ray_world.x;
float mouse_world_y = camera_y + l * ray_world.y;
printf("mouse world %f, %f\n", mouse_world_x, mouse_world_y);
}
Updated with code from BDL's comment. The output I get now is:
Normalized mouse position 0.087500 x 0.145833
World ray: 0.065083, 0.081353, 499.000000
World ray: 0.000130, 0.000163, 1.000000
mouse world -0.006521, -0.008152
I'm expecting the "mouse world" line to have numbers in the 1-10 range, not in the .00x range, though, based on the screenshot above showing a grid of squares with x and y ranging from 0-10.
Thanks for looking.
The intersection of a given ray r, starting at point C (in this case the camera position), with the x/y plane at z = 0 can be calculated as follows:
C ... Camera position [cx,cy,cz]
r ... ray direction [rx,ry,rz]
We are searching for the point on the ray that has z=0
C + l*r = [x,y,0]
=>
cz + l*rz = 0
l * rz = -cz
l = -(cz / rz)
The xy-coordinates of the intersection are now:
x = cx + l * rx
y = cy + l * ry
What is left to do is to check in which rectangle these (x, y) coordinates are located.
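A rough sketch of this in code, using the variable names from the question and assuming every square spans one unit starting at its (i, j) translation:

#include <cmath>

// ray_world is the normalized ray direction, (camera_x, camera_y, camera_z) is C.
float l     = -(camera_z / ray_world.z);   // parameter where the ray reaches z = 0
float hit_x = camera_x + l * ray_world.x;
float hit_y = camera_y + l * ray_world.y;

// Each square covers [i, i+1) x [j, j+1), so flooring the hit point gives the cell indices.
int  cell_i  = (int)std::floor(hit_x);
int  cell_j  = (int)std::floor(hit_y);
bool on_grid = cell_i >= 0 && cell_i < map_width &&
               cell_j >= 0 && cell_j < map_height;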
Usually I draw a square with a texture like this:
Create a VBO with 4 coordinates (A, B, C, D for the square)
Create an EBO with 6 indices (A,C,D and B,C,D), telling OpenGL that I want to draw the square out of 2 triangles.
Draw these elements with a texture
Isn't there an easier way, without having an EBO array?
Because it is not very handy to use... If I want to use it like this:
VAO = [-0.8f, 0.5f, 0.0f, ...]
EBO = [0, 1, 3, 1, 2, 3, ...]
Then I need to remove a square from my VAO... then I also need to remove the indices from my EBO array and re-arrange it.
Is there a better way to do this?
If you really only want to draw a square with a texture on it, you should consider making a new empty VAO and just calling glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
The vertex shader then looks like this:
#version 330 core
out vec2 mapping;
void main()
{
float size = 1.0f;
vec2 offset;
switch(gl_VertexID)
{
case 0:
//Bottom-left
mapping = vec2(0.0f, 0.0f);
offset = vec2(-size, -size);
break;
case 1:
//Top-left
mapping = vec2(0.0f, 1.0f);
offset = vec2(-size, size);
break;
case 2:
//Bottom-right
mapping = vec2(1.0, 0.0);
offset = vec2(size, -size);
break;
case 3:
//Top-right
mapping = vec2(1.0, 1.0);
offset = vec2(size, size);
break;
}
gl_Position = vec4(offset, 0.0f, 1.0f);
}
The mapping variable tells the fragment shader what the texture coordinates are.
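A matching fragment shader could look roughly like this (the sampler name tex is just an example):

#version 330 core
in vec2 mapping;            // interpolated texture coordinates from the vertex shader
out vec4 fragColour;
uniform sampler2D tex;      // the currently bound texture

void main()
{
    fragColour = texture(tex, mapping);
}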
Isn't there an easier way, without having an EBO array?
Duplicate your vertices & use glDrawArrays().
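i.e. something along these lines (positions only, buffer and VAO setup omitted):

// Six vertices: the two shared corners are duplicated, so no EBO is needed.
GLfloat quad[] = {
    -0.5f, -0.5f,   // A   (first triangle)
     0.5f, -0.5f,   // B
    -0.5f,  0.5f,   // C
     0.5f, -0.5f,   // B   (second triangle)
     0.5f,  0.5f,   // D
    -0.5f,  0.5f,   // C
};
glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
// ...
glDrawArrays(GL_TRIANGLES, 0, 6);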
You can use glDrawArrays to draw the vertices directly, without indices.
Something like this:
Vertex2D* vertex = (Vertex2D*) vbo->lock();
vertex[0].x = x[0]; vertex[0].y = y[0]; vertex[0].u = u[0]; vertex[0].v = v[0]; vertex[0].color = color;
vertex[1].x = x[0]; vertex[1].y = y[1]; vertex[1].u = u[0]; vertex[1].v = v[1]; vertex[1].color = color;
vertex[2].x = x[1]; vertex[2].y = y[1]; vertex[2].u = u[1]; vertex[2].v = v[1]; vertex[2].color = color;
vertex[3].x = x[1]; vertex[3].y = y[0]; vertex[3].u = u[1]; vertex[3].v = v[0]; vertex[3].color = color;
vbo->unlock();
shader->bind();
vbo->bind();
vao->bind();
tex->bind();
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
tex->unbind();
vao->unbind();
vbo->unbind();
shader->unbind();
void drawTire(void)
{
GLint num_of_tri = 32;
GLfloat vertex[3];
const GLfloat delta_angle = 2.0*PI/float(num_of_tri);
//Draw Front tire
glBegin(GL_TRIANGLE_FAN);
glColor3f(0.5, 0.5, 0.5);
vertex[0] = vertex[1] = vertex[2] = 0.0;
glVertex3fv(vertex);
for(int i = 0; i < num_of_tri ; i++)
{
vertex[0] = cos(delta_angle*i) * wheelRadius; //wheel Radius is 1.0
vertex[1] = sin(delta_angle*i) * wheelRadius;
vertex[2] = 0.0;
glVertex3fv(vertex);
}
vertex[0] = 1.0 * wheelRadius;
vertex[1] = 0.0 * wheelRadius;
vertex[2] = 0.0;
glVertex3fv(vertex);
glEnd();
//Draw Back Tire
const GLfloat depth = -wheelRadius/1.5;
glBegin(GL_TRIANGLE_FAN);
glColor3f(1.0, 0.0, 0.0);
vertex[0] = vertex[1] = 0.0;
vertex[2] = depth;
glVertex3fv(vertex);
for(int i = 0; i < num_of_tri ; i++)
{
vertex[0] = cos(delta_angle*i) * wheelRadius;
vertex[1] = sin(delta_angle*i) * wheelRadius;
vertex[2] = depth;
glVertex3fv(vertex);
}
vertex[0] = 1.0 * wheelRadius;
vertex[1] = 0.0 * wheelRadius;
vertex[2] = depth;
glVertex3fv(vertex);
glEnd();
//Connect Front&Back
glBegin(GL_QUADS);
glColor3f(0.0, 0.0, 0.0);
for(int i = 0; i < num_of_tri; i++)
{
vertex[0] = cos(delta_angle*i) * wheelRadius;
vertex[1] = sin(delta_angle*i) * wheelRadius;
vertex[2] = 0;
glVertex3fv(vertex);
vertex[0] = cos(delta_angle*i) * wheelRadius;
vertex[1] = sin(delta_angle*i) * wheelRadius;
vertex[2] = depth;
glVertex3fv(vertex);
vertex[0] = cos(delta_angle*((i + 1)%num_of_tri)) * wheelRadius;
vertex[1] = sin(delta_angle*((i + 1)%num_of_tri)) * wheelRadius;
vertex[2] = depth;
glVertex3fv(vertex);
vertex[0] = cos(delta_angle*((i + 1)%num_of_tri)) * wheelRadius;
vertex[1] = sin(delta_angle*((i + 1)%num_of_tri)) * wheelRadius;
vertex[2] = 0;
glVertex3fv(vertex);
}
glEnd();
glFlush();
}
I'm using the above code to draw a (kind of) 3D car wheel. This code appears to work.
This is my init function:
void init(void)
{
glClearColor (1.0, 1.0, 1.0, 0.0);
glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
gluPerspective(45.0f, (GLfloat)1366/(GLfloat)768, 0.1f, 100.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, 5.0, 0, 0, 0, 0, 1, 0);
}
This is my display function:
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
drawTire();
glFlush();
}
The result is messed up car wheel as can be seen here: http://postimg.org/image/fuf9o75ub/
The front of the tire (the side that the camera should be looking at) is gray. The camera somehow looks at the back of the tire, which is red.
Also, the side of the tire (the part that touches the ground; I have no idea what to call it) is shown as well, which is weird (the black color).
Why does gluPerspective mess up the 3D object, and how can one fix this? (I have tried changing the FOV... the results were almost the same.)
Right now, you aren't using depth testing. This means that the last primitive rendered is the one "on top", since without depth testing, OpenGL has no way of knowing the 'depth' of a pixel after it has been rendered.
To solve this problem, OpenGL uses a depth buffer, which is a hidden screen-sized buffer that stores how far away each pixel is from the camera. With depth testing enabled, when OpenGL renders a fragment, it first checks the depth of the fragment and compares it to the value in the depth buffer. If the fragment's depth value is smaller than the stored value (note 1), then OpenGL concludes that the fragment is in front of an already rendered object and writes the fragment. Otherwise, it's behind an object and the fragment is ignored.
To use depth testing, you first need to make sure you've allocated a depth buffer when you created your context. This depends on what windowing library you are using, but usually they give you a depth buffer by default.
You then need to call glEnable(GL_DEPTH_TEST) to begin using depth testing.
Additionally, you need to clear the depth values in the depth buffer when you re-render your scene. Change glClear(GL_COLOR_BUFFER_BIT) to glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT).
And that's all you need to use depth testing.
(note 1): More specifically, it uses whatever comparison function was set with glDepthFunc, though 99% of the time this is GL_LESS or GL_LEQUAL.
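Put together, the changes to the code from the question are roughly as follows (assuming the context was created with a depth buffer, e.g. via GLUT_DEPTH if GLUT is used):

// In init(): enable depth testing.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);      // the default comparison; shown here only for completeness

// In display(): clear the depth buffer along with the color buffer every frame.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawTire();
glFlush();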