I've learned how to draw a cube using OpenGL from various tutorials.
For a cube, we consider each face to be composed of two triangles, and then set up the vertex and color buffers appropriately. These buffers are then sent to the shaders.
How do we similarly draw a sphere and a cylinder? All the tutorials online focus on drawing cubes.
Setting up the vertex buffer for a sphere or cylinder doesn't seem trivial; I'm unable to "construct" them from triangles the way we do for cubes.
Here is some code that I use when drawing spheres.
Note: This code uses C++, with the GLM math library.
// Calculate the vertices
for (int i = 0; i <= Stacks; ++i) {
    float V = i / (float)Stacks;
    float phi = V * glm::pi<float>();
    // Loop through slices
    for (int j = 0; j <= Slices; ++j) {
        float U = j / (float)Slices;
        float theta = U * (glm::pi<float>() * 2);
        // Calculate the vertex position on the unit sphere
        float x = cosf(theta) * sinf(phi);
        float y = cosf(phi);
        float z = sinf(theta) * sinf(phi);
        // Push back the vertex, scaled by the radius
        vertices.push_back(glm::vec3(x, y, z) * Radius);
    }
}
// Calculate the triangle indices
for (int i = 0; i < Slices * Stacks + Slices; ++i) {
    indices.push_back(i);
    indices.push_back(i + Slices + 1);
    indices.push_back(i + Slices);

    indices.push_back(i + Slices + 1);
    indices.push_back(i);
    indices.push_back(i + 1);
}
This algorithm creates what is called a UV sphere. 'Slices' is the number of subdivisions around the vertical axis (the meridians), and 'Stacks' is the number of subdivisions from pole to pole (the parallels).
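Since the question is about sending these buffers to the shaders, here is a minimal, hedged sketch of how the vertices and indices vectors above might be uploaded and drawn with modern OpenGL. The attribute location (0) and the assumption that indices is a std::vector<unsigned int> are mine, not from the original code:
// One-time setup (assumes GL and GLM headers are already included,
// indices is std::vector<unsigned int>, vertices is std::vector<glm::vec3>,
// and the vertex shader reads the position from attribute location 0).
GLuint vao, vbo, ebo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);
glGenBuffers(1, &ebo);

glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(glm::vec3),
             vertices.data(), GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(unsigned int),
             indices.data(), GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(glm::vec3), (void*)0);

// Each frame: draw the indexed triangles.
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(), GL_UNSIGNED_INT, 0);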
For cylinders, it is convenient to work in cylindrical coordinates: (angle, radius, height). You compute two polygons (constant angle increment, fixed radius, two height values) and create two fans of triangles for the caps (the bases) and a ring of rectangles (each split in two) for the lateral surface, as sketched below.
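To make that concrete, here is a hedged sketch in the same style as the sphere code above; Segments, Radius, and Height are names I'm introducing for illustration:
// Sketch: a cylinder of radius Radius and height Height around the Y axis.
// (Assumes <vector>, <cmath>, and the GLM headers are included.)
std::vector<glm::vec3> vertices;
std::vector<unsigned int> indices;
const int Segments = 32;
const float Radius = 1.0f, Height = 2.0f;

// Two rings of vertices: bottom at y = 0, top at y = Height.
for (int i = 0; i <= Segments; ++i) {
    float theta = (float)i / Segments * glm::pi<float>() * 2;
    float x = cosf(theta) * Radius;
    float z = sinf(theta) * Radius;
    vertices.push_back(glm::vec3(x, 0.0f,   z)); // bottom ring: index 2*i
    vertices.push_back(glm::vec3(x, Height, z)); // top ring:    index 2*i + 1
}
// Lateral surface: one rectangle per segment, split into two triangles.
// (Winding chosen arbitrarily here; flip it if your face culling disagrees.)
for (unsigned int i = 0; i < (unsigned int)Segments; ++i) {
    unsigned int b0 = 2 * i,       t0 = 2 * i + 1;
    unsigned int b1 = 2 * (i + 1), t1 = 2 * (i + 1) + 1;
    indices.insert(indices.end(), { b0, t0, b1,  b1, t0, t1 });
}
// Caps: triangle fans around a center vertex at each height.
unsigned int bottomCenter = (unsigned int)vertices.size();
vertices.push_back(glm::vec3(0.0f, 0.0f, 0.0f));
unsigned int topCenter = (unsigned int)vertices.size();
vertices.push_back(glm::vec3(0.0f, Height, 0.0f));
for (unsigned int i = 0; i < (unsigned int)Segments; ++i) {
    indices.insert(indices.end(), { bottomCenter, 2 * i,           2 * (i + 1) });
    indices.insert(indices.end(), { topCenter,    2 * (i + 1) + 1, 2 * i + 1 });
}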
For spheres, you will use spherical coordinates: (inclination, azimuth, radius). By varying the two angles (one at a time), you describe parallels and meridians on the sphere. These define a mesh in which every tile is a quadrilateral (except at the poles); split each tile along a diagonal to get triangles.
I'm writing a raytracer for the Linux terminal in C++. First I decided to describe the sphere; here are the class and the algorithm:
class Sphere
{
public:
    float radius;
    vector3 center;

    bool is_intersect(vector3 camera, vector3 ray)
    {
        // vector from the camera to the center
        vector3 v = center - camera;
        // length of that vector
        float abs_v = v.length();
        // the ray must be normalized (done in main)
        float pr_v_on_ray = ray.dot_product(v);
        // squared distance from the center to the ray
        float l2 = abs_v * abs_v - pr_v_on_ray * pr_v_on_ray;
        return l2 - radius * radius <= 0;
    }
};
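For reference, the test in is_intersect is the standard perpendicular-distance check: with a normalized ray direction d and v the vector from the camera to the center, it computes
l² = |v|² − (v · d)²
and reports a hit when l² ≤ R², i.e. when the ray passes within the sphere's radius of its center.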
vector2 and vector3 are self-written types for 2D and 3D vectors with all the standard vector operations (normalization, length, dot product, and so on).
When I create a sphere with center (0,0,0) and some radius, everything works:
// because terminal pixels are not square
float distortion = (8.0 / 16) * (width / height);
Sphere sphere = {0.5, vector3(0, 0, 0)};
for (int i = 0; i < width; ++i)
{
    for (int j = 0; j < height; ++j)
    {
        vector2 xy = (vector2((float)i, (float)j) / vector2(width, height))
                   * vector2(2, 2) - vector2(1, 1); // x,y ∈ [-1.0; 1.0]
        xy.x *= distortion;
        vector3 camera = vector3(0, 0, 1);
        // ray from the camera through this pixel
        vector3 ray = vector3(xy.x, xy.y, -1).normalize();
        if (sphere.is_intersect(camera, ray)) mvaddch(j, i, '#');
    }
}
[screenshot: the sphere renders correctly]
But when I change the coordinates of the center, distortion appears:
Sphere sphere = {0.5, vector3(-0.5, -0.5, 0)};
[screenshot: the sphere appears distorted]
Do I understand the ray "shooting" algorithm correctly? If I need to shoot a ray from point (1,2,3) to point (5,2,1), then the ray's direction is (5-1, 2-2, 1-3) = (4, 0, -2)?
I understand that ray.x and ray.y run over all the pixels on the screen, but what about ray.z?
I don't understand how the camera's coordinates work. (x,y,z) is an offset relative to the origin; if I change z, the size of the sphere's projection changes, which works, but if I change x or y everything goes bad. How can I look at my sphere from all 6 sides? (I will add rotation matrices once I understand how the camera works.)
What causes the distortion when I change the coordinates of the center of the sphere?
My final target is a camera that rotates around the sphere. (I will add lighting later.)
Sorry for my bad English, thank you for your patience.
I'm trying to draw a filled-in circle, but it only shows up in wireframe. Here is the code I'm using to draw it:
void render_circle(Vec2 position, float radius, Vec4 colour) {
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
    glColor4f(colour.x, colour.y, colour.z, colour.w);
    glBegin(GL_LINE_LOOP);
    int num_segments = 30; // #todo: make this scale for larger radius
    for (int i = 0; i < num_segments; i++) {
        float theta = 2.0f * math_pi * (float)i / (float)num_segments;
        float x = radius * cosf(theta);
        float y = radius * sinf(theta);
        glVertex2f(position.x + x, position.y + y);
    }
    glEnd();
}
GL_LINE_LOOP is a line primitive type. If you want to draw a filled polygon, you have to use a polygon primitive type instead, for instance GL_TRIANGLE_FAN.
Note that only convex geometry can be drawn correctly this way; a concave polygon may not be represented correctly by a single primitive. One way to deal with this is to split concave polygons into convex parts.
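For example, a corrected version of the render_circle function from the question could look like the sketch below (keeping its Vec2/Vec4 types and math_pi constant). GL_TRIANGLE_FAN treats the first vertex as the hub, so fanning from the center fills the disc; the loop runs to <= num_segments so the last edge closes the circle:
void render_circle(Vec2 position, float radius, Vec4 colour) {
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
    glColor4f(colour.x, colour.y, colour.z, colour.w);
    glBegin(GL_TRIANGLE_FAN);
    glVertex2f(position.x, position.y); // center of the fan
    int num_segments = 30;
    for (int i = 0; i <= num_segments; i++) { // note <=, closes the circle
        float theta = 2.0f * math_pi * (float)i / (float)num_segments;
        glVertex2f(position.x + radius * cosf(theta),
                   position.y + radius * sinf(theta));
    }
    glEnd();
}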
My goal is to render the image of a quad using the rasterisation algorithm. I have gotten as far as:
creating the quad in 3D
projecting the quad's vertices onto the screen using a perspective divide
converting the resulting coordinates from screen space to raster space, and computing the bounding box of the quad in raster space
looping over all pixels inside this bounding box and finding out whether the current pixel P is contained within the quad. For this I use a simple test which consists of taking the dot product between the edge AB of the quad and the vector defined between the vertex A and the point P. I repeat this process for all 4 edges, and if the sign is the same for all of them, the point is inside the quad.
I have implemented this successfully (see the code below). But I am stuck on the remaining bit I'd like to play with, which is essentially finding the st (texture) coordinates of my quad.
Is it possible to find the st coordinates of the current pixel P in the quad in raster space and then convert that back into world space? Could someone please point me in the right direction or tell me how to do this?
Alternatively, how can I compute the z or depth value of the pixel contained in the quad? I guess it's related to finding the st coordinates of the point in the quad and then interpolating the z values of the vertices?
PS: this is NOT homework. I am doing this to understand the rasterization algorithm, and precisely where I am stuck is the bit I don't understand, which in the GPU rendering pipeline I believe involves some sort of inverse projection. I am just lost at this point. Thanks for your help.
Vec3f verts[4];      // vertices of the quad in world space
Vec2f vraster[4];    // vertices of the quad in raster space
uint8_t outside = 0; // is the quad in raster space visible at all?
Vec2i bmin(10e8), bmax(-10e8);

for (uint32_t j = 0; j < 4; ++j) {
    // transform the unit quad to its world position by transforming each
    // one of its vertices by a transformation matrix (represented
    // here by 3 unit vectors and a translation value)
    verts[j].x = quads[j].x * right.x + quads[j].y * up.x + quads[j].z * forward.x + pt[i].x;
    verts[j].y = quads[j].x * right.y + quads[j].y * up.y + quads[j].z * forward.y + pt[i].y;
    verts[j].z = quads[j].x * right.z + quads[j].y * up.z + quads[j].z * forward.z + pt[i].z;
    // project the vertices onto the image plane (perspective divide)
    verts[j].x /= -verts[j].z;
    verts[j].y /= -verts[j].z;
    // assume the image plane is 1 unit away from the eye
    // and fov = 90 degrees, thus the bottom-left and top-right
    // coordinates of the screen are (-1,-1) and (1,1) respectively
    if (fabs(verts[j].x) > 1 || fabs(verts[j].y) > 1) outside |= (1 << j);
    // convert image plane coordinates to raster space
    // (the image is assumed to be square: width is used for both axes)
    vraster[j].x = (int32_t)((verts[j].x + 1) * 0.5 * width);
    vraster[j].y = (int32_t)((1 - (verts[j].y + 1) * 0.5) * width);
    // grow the bounding box of the quad in raster space
    if (vraster[j].x < bmin.x) bmin.x = (int)std::floor(vraster[j].x);
    if (vraster[j].y < bmin.y) bmin.y = (int)std::floor(vraster[j].y);
    if (vraster[j].x > bmax.x) bmax.x = (int)std::ceil(vraster[j].x);
    if (vraster[j].y > bmax.y) bmax.y = (int)std::ceil(vraster[j].y);
}
// cull if all vertices are outside the canvas boundaries
if (outside == 0x0F) continue;
// precompute the edges of the quad
Vec2f edges[4];
for (uint32_t j = 0; j < 4; ++j) {
    edges[j] = vraster[(j + 1) % 4] - vraster[j];
}
// loop over all pixels contained in the box
for (int32_t y = std::max(0, bmin.y); y <= std::min((int32_t)(width - 1), bmax.y); ++y) {
    for (int32_t x = std::max(0, bmin.x); x <= std::min((int32_t)(width - 1), bmax.x); ++x) {
        bool inside = true;
        for (uint32_t j = 0; j < 4 && inside; ++j) {
            Vec2f v = Vec2f(x + 0.5, y + 0.5) - vraster[j];
            float d = edges[j].x * v.x + edges[j].y * v.y;
            inside &= (d > 0);
        }
        // the pixel is inside the quad, mark it in the image
        if (inside) {
            buffer[y * width + x] = 255;
        }
    }
}
Let's say there is a grid terrain for a game, composed of tiles each made of two triangles (four vertices). How would we find the Y (up) position of a point between the four vertices?
I have tried this:
float diffZ1 = lerp(heights[0], heights[2], zOffset);
float diffZ2 = lerp(heights[1], heights[3], zOffset);
float yPosition = lerp(diffZ1, diffZ2, xOffset);
Where xOffset/zOffset are the x/z offsets from the first vertex of the tile, expressed as a fraction of the tile size (percent / 100). This works for flat surfaces, but not so well on bumpy terrain.
I expect this has something to do with the terrain being made from triangles, whereas the above works on flat planes. I'm not sure, but does anybody know what's going wrong?
This may better explain what's going on here:
In the code above, "heights[]" is an array of the Y coordinates of the surrounding vertices v0-v3.
Triangle 1 is made of vertices 0, 2 and 1.
Triangle 2 is made of vertices 1, 2 and 3.
I wish to find the Y coordinate of point p1 when its x,z coordinates lie between v0-v3.
So I have tried determining which triangle the point lies in with this function:
bool PointInTriangle(float3 pt, float3 pa, float3 pb, float3 pc)
{
    // Compute vectors
    float2 v0 = pc.xz - pa.xz;
    float2 v1 = pb.xz - pa.xz;
    float2 v2 = pt.xz - pa.xz;

    // Compute dot products
    float dot00 = dot(v0, v0);
    float dot01 = dot(v0, v1);
    float dot02 = dot(v0, v2);
    float dot11 = dot(v1, v1);
    float dot12 = dot(v1, v2);

    // Compute barycentric coordinates
    float invDenom = 1.0f / (dot00 * dot11 - dot01 * dot01);
    float u = (dot11 * dot02 - dot01 * dot12) * invDenom;
    float v = (dot00 * dot12 - dot01 * dot02) * invDenom;

    // Check if the point is in the triangle
    return (u >= 0.0f) && (v >= 0.0f) && (u + v <= 1.0f);
}
This isn't giving me the results I expected. I am then trying to find the y coordinate of point p1 inside each triangle:
// Position of point p1
float3 pos = input[0].PosI;

// Calculate a point and normal for each triangle
float3 p1 = tile[0];
float3 n1 = (tile[2] - p1) * (tile[1] - p1); // <-- Error, cross needed
// = cross(tile[2] - p1, tile[1] - p1);
float3 p2 = tile[3];
float3 n2 = (tile[2] - p2) * (tile[1] - p2); // <-- Error
// = cross(tile[2] - p2, tile[1] - p2);

float newY = 0.0f;
// Determine the triangle & get the y coordinate inside the correct triangle
if (PointInTriangle(pos, tile[0], tile[1], tile[2]))
{
    newY = p1.y - ((pos.x - p1.x) * n1.x + (pos.z - p1.z) * n1.z) / n1.y;
}
else if (PointInTriangle(input[0].PosI, tile[3], tile[2], tile[1]))
{
    newY = p2.y - ((pos.x - p2.x) * n2.x + (pos.z - p2.z) * n2.z) / n2.y;
}
Using the following to find the correct triangle:
if ((1.0f - xOffset) <= zOffset)
    inTri1 = true;
And correcting the code above to use the proper cross function seems to have solved the problem.
Because your 4 vertices may not lie on a single plane, you should consider each triangle separately. First find the triangle that the point resides in, and then use the following StackOverflow discussion to solve for the Z value (note the different naming of the axes). I personally like DanielKO's answer much better, but the accepted answer should work too:
Linear interpolation of three 3D points in 3D space
EDIT: For the 2nd part of your problem (finding the triangle that the point is in):
Because the projection of your tiles onto the xz plane (as you define your coordinates) are perfect squares, finding the triangle that the point resides in is a very simple operation. Here I'll use the terms left-right to refer to the x axis (from lower to higher values of x) and bottom-top to refer to the z axis (from lower to higher values of z).
Each tile can only be split in one of two ways. Either (A) via a diagonal line from the bottom-left corner to the top-right corner, or (B) via a diagonal line from the bottom-right corner to the top-left corner.
For any tile that's split as A:
Check if x' > z', where x' is the distance from the left edge of the tile to the point, and z' is the distance from the bottom edge of the tile to the point. If x' > z' then your point is in the bottom-right triangle; otherwise it's in the upper-left triangle.
For any tile that's split as B: Check if x" > z', where x" is the distance from the right edge of your tile to the point, and z' is the distance from the bottom edge of the tile to the point. If x" > z' then your point is in the lower-left triangle; otherwise it's in the upper-right triangle.
(Minor note: Above I assume your tiles aren't rotated in the xz plane; i.e. that they are aligned with the axes. If that's not correct, simply rotate them to align them with the axes before doing the above checks.)
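As a concrete illustration of the A-split case, here is a hedged C++ sketch; the function name and the corner-height naming are mine, not from the question. The fractional offsets select the triangle exactly as described above, and the plane through that triangle's three corners gives the height:
// Sketch: height on a unit tile split along the bottom-left -> top-right
// diagonal ("A" split). hBL, hBR, hTL, hTR are the corner heights at
// (x,z) = (0,0), (1,0), (0,1), (1,1); xOff/zOff are fractional offsets.
float tileHeightA(float hBL, float hBR, float hTL, float hTR,
                  float xOff, float zOff)
{
    if (xOff > zOff) {
        // bottom-right triangle: corners (0,0), (1,0), (1,1)
        return hBL + (hBR - hBL) * xOff + (hTR - hBR) * zOff;
    } else {
        // upper-left triangle: corners (0,0), (1,1), (0,1)
        return hBL + (hTR - hTL) * xOff + (hTL - hBL) * zOff;
    }
}
A B-split tile is handled the same way, using the x" > z' test and the corner heights of its two triangles.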
I'm trying to draw Steiner's Roman surface in OpenGL, and I'm having some trouble getting the right normals so that the surface lights up correctly. I used the parametric equations from Wikipedia: http://en.wikipedia.org/wiki/Roman_surface. For the normals, I took the partial derivatives with respect to theta and then phi, and crossed the partials to get the normal.
This doesn't light the surface properly because the Roman surface is non-orientable. Hence, I was wondering if there's a way to get the right normals out so that the surface lights up correctly. I've tried negating the normals for the whole surface, and for part of the surface (the 1st and last quarter of n), but it doesn't seem to work.
My current code is as follows:
double getRad(double deg, double n) {
    return deg * M_PI / n;
}

int n = 24;
for (int i = 0; i < n; i++) {
    for (int j = 0; j < 2*n; j++) {
        glBegin(GL_POLYGON);
        // normal evaluated at the center of the patch (i+0.5, j+0.5)
        double x = -pow(r,4) * cos(2*getRad(i+0.5,n)) * pow(cos(getRad(j+0.5,n)),2) * cos(2*getRad(j+0.5,n)) * sin(getRad(i+0.5,n)) - 2 * pow(r,4) * pow(cos(getRad(i+0.5,n)),2) * pow(cos(getRad(j+0.5,n)),2) * sin(getRad(i+0.5,n)) * pow(sin(getRad(j+0.5,n)),2);
        double y = pow(r,4) * cos(getRad(i+0.5,n)) * cos(2*getRad(i+0.5,n)) * pow(cos(getRad(j+0.5,n)),2) * cos(2*getRad(j+0.5,n)) - 2 * pow(r,4) * cos(getRad(i+0.5,n)) * pow(cos(getRad(j+0.5,n)),2) * pow(sin(getRad(i+0.5,n)),2) * pow(sin(getRad(j+0.5,n)),2);
        double z = -pow(r,4) * pow(cos(getRad(i+0.5,n)),2) * cos(getRad(j+0.5,n)) * cos(2*getRad(j+0.5,n)) * sin(getRad(j+0.5,n)) - pow(r,4) * cos(getRad(j+0.5,n)) * cos(2*getRad(j+0.5,n)) * pow(sin(getRad(i+0.5,n)),2) * sin(getRad(j+0.5,n));
        glNormal3d(x, y, z);
        glVertex3d(r*r*cos(getRad(i,n))*cos(getRad(j,n))*sin(getRad(j,n)), r*r*sin(getRad(i,n))*cos(getRad(j,n))*sin(getRad(j,n)), r*r*cos(getRad(i,n))*sin(getRad(i,n))*cos(getRad(j,n))*cos(getRad(j,n)));
        glVertex3d(r*r*cos(getRad(i+1,n))*cos(getRad(j,n))*sin(getRad(j,n)), r*r*sin(getRad(i+1,n))*cos(getRad(j,n))*sin(getRad(j,n)), r*r*cos(getRad(i+1,n))*sin(getRad(i+1,n))*cos(getRad(j,n))*cos(getRad(j,n)));
        glVertex3d(r*r*cos(getRad(i+1,n))*cos(getRad(j+1,n))*sin(getRad(j+1,n)), r*r*sin(getRad(i+1,n))*cos(getRad(j+1,n))*sin(getRad(j+1,n)), r*r*cos(getRad(i+1,n))*sin(getRad(i+1,n))*cos(getRad(j+1,n))*cos(getRad(j+1,n)));
        glVertex3d(r*r*cos(getRad(i,n))*cos(getRad(j+1,n))*sin(getRad(j+1,n)), r*r*sin(getRad(i,n))*cos(getRad(j+1,n))*sin(getRad(j+1,n)), r*r*cos(getRad(i,n))*sin(getRad(i,n))*cos(getRad(j+1,n))*cos(getRad(j+1,n)));
        glEnd();
        glFlush();
    }
}
In the case that you're dealing with non-orientable surfaces (like Steiner's Roman surface, or the famous Möbius strip) you have two possibilities: enable double-sided lighting
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);
or enable face culling and render the surface in two passes (front-facing and back-facing); you'll have to negate the normals for the back-face pass.
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK); // backside faces are NOT rendered
draw_with_positive_normals();
glCullFace(GL_FRONT);
draw_with_negative_normals();
You would probably get better results by splitting each polygon into two triangles; each is then guaranteed to be planar. Further, you can generate the normals from each triangle, or smooth them between neighboring triangles.
The other trick is to pre-generate your points into an array and then reference that array in the glVertex calls. That way you have more options for how to generate normals.
Also, you can render the normals themselves with a glBegin(GL_LINES) ... glEnd() sequence.
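A hedged sketch of that, assuming your points and normals are stored in parallel std::vector arrays (the GLM types used earlier in this thread, plus GL headers, are assumed to be included):
// Sketch: visualize per-vertex normals as short line segments.
void draw_normals(const std::vector<glm::vec3>& points,
                  const std::vector<glm::vec3>& normals,
                  float scale)
{
    glBegin(GL_LINES);
    for (size_t k = 0; k < points.size(); ++k) {
        glm::vec3 tip = points[k] + normals[k] * scale;
        glVertex3f(points[k].x, points[k].y, points[k].z);
        glVertex3f(tip.x, tip.y, tip.z);
    }
    glEnd();
}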
For every triangle you generate, create one with the same coordinates and normals, but wound/flipped the other way.
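For instance (a sketch; the Tri struct and flipped function are names I'm introducing):
// Sketch: emit a back-facing copy of each triangle by reversing the winding
// order and negating the normal, so both sides are lit as front faces.
struct Tri { glm::vec3 a, b, c, n; };

Tri flipped(const Tri& t)
{
    return Tri{ t.a, t.c, t.b, -t.n };
}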