I have recently been trying to render a 3D sphere in OpenGL using triangles. I have been testing and modifying code from various websites and have finally found a winning combination. The only problem is that there are visible gaps in the sphere. Any thoughts on what would be causing this?
Code to render sphere
float Slices = 30;
float Stacks = 60;
float Radius = 20.0;

for (int i = 0; i <= Stacks; ++i) {
    float V = i / (float) Stacks;
    float phi = V * glm::pi<float>();

    for (int j = 0; j <= Slices; ++j) {
        float U = j / (float) Slices;
        float theta = U * (glm::pi<float>() * 4);

        float x = cosf(theta) * sinf(phi);
        float y = cosf(phi);
        float z = sinf(theta) * sinf(phi);

        x *= Radius;
        y *= Radius;
        z *= Radius;

        Vertex *v = new Vertex {{x, y, z},    // Position
                                {255, 0, 0}}; // Color
        screenToBuffer(v, 1);
        delete v; // plain delete to match the single-object new above (was: delete []v)
    }
}
Problem
Try setting it to GL_TRIANGLE_STRIP.
What might be the problem is that GL_TRIANGLES treats each group of three vertices as a single, separate triangle.
Like so:
Indices: 0 1 2 3 4 5 ...
Triangles: {0 1 2} {3 4 5}
GL_TRIANGLE_STRIP will do this:
Indices: 0 1 2 3 4 5 ...
Triangles: {0 1 2}
{1 2 3} drawing order is (2 1 3) to maintain proper winding
{2 3 4}
{3 4 5}
See this answer for a proper way to do it.
https://stackoverflow.com/a/7958376/1943599
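In the spirit of the linked answer, here is a minimal sketch of my own (assuming the vertex grid is generated in the nested-loop order of the question, i.e. (Stacks + 1) rows of (Slices + 1) vertices, and that Slices and Stacks are integer counts) showing how two triangles per grid cell could be indexed for GL_TRIANGLES:

#include <vector>

std::vector<unsigned int> indices;
for (int i = 0; i < Stacks; ++i) {
    for (int j = 0; j < Slices; ++j) {
        unsigned int row0 = i * (Slices + 1) + j;       // vertex on the current stack
        unsigned int row1 = (i + 1) * (Slices + 1) + j; // vertex on the next stack

        // first triangle of the quad
        indices.push_back(row0);
        indices.push_back(row1);
        indices.push_back(row0 + 1);

        // second triangle of the quad
        indices.push_back(row0 + 1);
        indices.push_back(row1);
        indices.push_back(row1 + 1);
    }
}
// then e.g.: glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, indices.data());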
Related
Consider the fixed transformation pipeline of OpenGL, with the following parameters:
GL_MODELVIEW_MATRIX (column-major, one column per line)
0, 175, 303.109, 0,
688.503, -2741.84, 1583, 0,
29.3148, 5.52094, -3.18752, 0,
-87.4871, 731.309, -1576.92, 1
GL_PROJECTION_MATRIX (column-major, one column per line)
2.09928, 0, 0, 0,
0, 3.73205, 0, 0,
0, 0, -1.00658, -1,
0, 0, -43.9314, 0
GL_VIEWPORT
0,0,1920,1080
When I draw the faces of the unit cube I get the following:
By looking at the picture, I would expect half of the vertices to have pixel y-coordinate above 1080, and the other half to have a negative y-coordinate.
Instead, with gluProject, all vertices have y > 1080:
model coordinate 0 0 0 -> screen coordinate 848.191 1474.61 0.989359
model coordinate 1 0 0 -> screen coordinate 821.586 1973.88 0.986045
model coordinate 0 1 0 -> screen coordinate -198317 667165 4.61719
model coordinate 1 1 0 -> screen coordinate -2957.48 12504.1 1.07433
model coordinate 0 0 1 -> screen coordinate 885.806 1479.77 0.989388
model coordinate 1 0 1 -> screen coordinate 868.195 1979.01 0.986088
model coordinate 0 1 1 -> screen coordinate -438501 1.39841e+06 8.60228
model coordinate 1 1 1 -> screen coordinate -3191.35 12592.4 1.07507
I could successfully reproduce the gluProject() results with my "custom" calculations.
Why is the y-coordinate of all vertices above 1080?
P.S. To draw the cube I rely on:
glBegin(GL_QUADS);
for (int d = 0; d < 3; ++d)
    for (int s = 0; s < 2; ++s)
        for (int v = 0; v < 4; ++v)
        {
            const int a = (v & 1) ^ (v >> 1 & 1);
            const int b = v >> 1 & 1;
            const int d0 = d;
            const int d1 = (d + 1) % 3;
            const int d2 = (d + 2) % 3;
            double p[3];
            p[d0] = s;
            p[d1] = a;
            p[d2] = b;
            glColor3dv(p);
            glVertex3dv(p);
        }
glEnd();
I found the answer, in part thanks to this post.
The explanation is that the 4 vertices that should have y < 0 in screen space are also behind the camera, and so have w_clip < 0.
Perspective division (y_clip / w_clip) then produces a positive value in normalized device coordinates and screen space.
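For illustration, a small sketch with names of my own (using glm, which is not what the original fixed-function code uses) that reproduces the gluProject steps and exposes w_clip so the behind-the-camera case can be detected:

#include <glm/glm.hpp>

// A vertex behind the camera has clip.w < 0, so y_clip / clip.w can come out
// positive (and huge) even though the vertex is geometrically "below" the screen.
glm::vec3 projectWithCheck(const glm::mat4& modelview,
                           const glm::mat4& projection,
                           const glm::vec4& viewport,   // x, y, width, height
                           const glm::vec3& p,
                           bool& behindCamera)
{
    glm::vec4 clip = projection * modelview * glm::vec4(p, 1.0f);
    behindCamera = (clip.w < 0.0f);                     // the case described above

    glm::vec3 ndc = glm::vec3(clip) / clip.w;           // perspective division
    return glm::vec3(viewport.x + (ndc.x * 0.5f + 0.5f) * viewport.z,
                     viewport.y + (ndc.y * 0.5f + 0.5f) * viewport.w,
                     ndc.z * 0.5f + 0.5f);              // window coordinates
}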
Let's say I have a total number
tN = 12
and a set of elements
elem = [1,2,3,4]
and a probability for each element to be taken
prob = [0.0, 0.5, 0.75, 0.25]
I need to get a random multiset of these elements, such that:
the taken elements reflect the probabilities
the sum of the taken elements is tN
With the example above, here are some possible outcomes:
3 3 2 4
2 3 2 3 2
3 4 2 3
2 2 3 3 2
3 2 3 2 2
At the moment, the maximum tN will be 64, and the elements will be the ones above (1, 2, 3, 4).
Is this a knapsack problem? How would you solve it easily? Both an "on the fly" and a "pre-calculate" approach are allowed (or at least, it depends on the computation time). I'm doing it for a C++ app.
Mission: I don't need the final sequence to match the percentages exactly. I just want an element with a higher probability to be more likely to appear in the final sequence. In short: in the example, I'd prefer sequences with more 3s and 2s than 4s, and no 1s.
Here's an attempt to select elements according to their probabilities, over 10 draws:
Randomizer randomizer;

int tN = 12;
std::vector<int> elem = {2, 3, 4};
std::vector<float> prob = {0.5f, 0.75f, 0.25f};

float probSum = std::accumulate(begin(prob), end(prob), 0.0f, std::plus<float>());

std::vector<float> probScaled;
for (size_t i = 0; i < prob.size(); i++)
{
    probScaled.push_back((i == 0 ? 0.0f : probScaled[i - 1]) + (prob[i] / probSum));
}

for (size_t r = 0; r < 10; r++)
{
    float rnd = randomizer.getRandomValue();

    int index = 0;
    for (size_t i = 0; i < probScaled.size(); i++)
    {
        if (rnd < probScaled[i])
        {
            index = i;
            break;
        }
    }
    std::cout << elem[index] << std::endl;
}
which gives, for example, this choice:
3
3
2
2
4
2
2
4
3
3
Now I just need to build a multiset which sums up to tN. Any tips?
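One possible sketch for this last step, not necessarily the best approach: keep drawing elements with the weighted selection above, and accept a draw only if the remaining total can still be completed from the available values. Assumptions here: the elements are {2, 3, 4}, the cumulative probabilities are precomputed as in the snippet above, and std::mt19937 stands in for the custom Randomizer.

#include <iostream>
#include <random>
#include <vector>

int main()
{
    std::vector<int>   elem       = {2, 3, 4};
    std::vector<float> probScaled = {0.3333f, 0.8333f, 1.0f}; // cumulative, as built above
    int tN = 12;

    std::mt19937 rng(std::random_device{}());
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);

    std::vector<int> result;
    int remaining = tN;
    while (remaining > 0) {
        // weighted draw, same idea as the loop in the question
        float rnd = dist(rng);
        std::size_t index = 0;
        while (index + 1 < probScaled.size() && rnd >= probScaled[index])
            ++index;

        int candidate = elem[index];
        int rest = remaining - candidate;
        // accept only if the rest is 0 or can still be made from {2, 3, 4}
        if (rest == 0 || rest >= 2) {
            result.push_back(candidate);
            remaining = rest;
        }
    }

    for (int e : result) std::cout << e << ' ';
    std::cout << '\n';
}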
I am implementing perspective from scratch for an academic project. I am using "Computer Graphics: principles and practices", by Foley, van Dam, Feiner and Hughes (second edition in C).
I just followed the book, implementing all the matrix transformations needed to translate, rotate, shear, scale, project, transform from the perspective to the parallel canonical view volume, and clip. The book apparently uses a right-handed coordinate system. However, I ended up with primitives appearing in a left-handed coordinate system and I cannot explain why.
Here are the matrices that I used:
Translation:
1, 0, 0, dx
0, 1, 0, dy
0, 0, 1, dz
0, 0, 0, 1
Rotation (to align a coordinate system (rx, ry, rz) to XYZ):
rx1, rx2, rx3, 0
ry1, ry2, ry3, 0
rz1, rz2, rz3, 0
0 , 0 , 0 , 1
Scale:
sx, 0 , 0 , 0
0 , sy, 0 , 0
0 , 0 , sz, 0
0 , 0 , 0 , 1
Shear XY:
1, 0, shx, 0
0, 1, shy, 0
0, 0, 1 , 0
0, 0, 0 , 1
Projecting onto a plane at z = d, with PRP at origin, looking in the positive z direction:
1, 0, 0 , 0
0, 1, 0 , 0
0, 0, 1 , 0
0, 0, 1/d, 0
Then given VRP, VPN, PRP, VUP, f and b (and the direction of projection dop), reduce the space to the canonical viewing volume for perspective using P:
rz = VPN / |VPN|
rx = (VUP x rz) / |VUP x rz|
ry = rz x rx
P = ScaleUniform(-1 / (vrp1Z + b)) *
Scale(-2 * vrp1Z / deltaU, -2 * vrp1Z / deltaV, 1) *
Shear(-dopX / dopZ, -dopY / dopZ) *
T(PRP) *
R(rx, ry, rz) *
T(-VRP)
Where vrp1 is ShearXY * T(-PRP) * (0, 0, 0, 1), deltaU and deltaV the width and height of the viewing window. dop is computed as CW - PRP, where CW is the center of the viewing window.
Then Projection(d) * P gives me the projection matrix.
I projected simple lines representing the unit vectors on x, y and z, but the representation drawn on the screen was clearly a left-handed coordinate system... Now I need to work in a right-handed coordinate system, so is there a way to find out where I went wrong?
Here is the code I used:
As you can see, the Z component of the scale matrix has the opposite sign: clipping wasn't working properly because something was right-handed and something else left-handed, but I couldn't tell what exactly, so I flipped the sign of the scale, since the negation wasn't needed in a left-handed system.
Vector rz = vpn.toUnitVector();
Vector rx = vup.cross(rz).toUnitVector();
Vector ry = rz.cross(rx).toUnitVector();
Vector cw = viewWindow.getCenter();
Vector dop = cw - prp;
Matrix t1 = Matrix::traslation(-vrp[X], -vrp[Y], -vrp[Z]);
Matrix r = Matrix::rotation(rx, ry, rz);
Matrix t2 = Matrix::traslation(-prp[X], -prp[Y], -prp[Z]);
Matrix partial = t2 * r * t1;
Matrix shear = Matrix::shearXY(-dop[X] / dop[Z], -dop[Y] / dop[Z]);
Matrix inverseShear = Matrix::shearXY(dop[X] / dop[Z], dop[Y] / dop[Z]);
Vector vrp1 = shear * t2 * Vector(0, 0, 0, 1);
Matrix scale = Matrix::scale(
    2 * vrp1[Z] / ((viewWindow.xMax - viewWindow.xMin) * (vrp1[Z] + b)),
    2 * vrp1[Z] / ((viewWindow.yMax - viewWindow.yMin) * (vrp1[Z] + b)),
    1 / (vrp1[Z] + b)); // HERE <--- WAS NEGATIVE
Matrix inverseScale = Matrix::scale(
    ((viewWindow.xMax - viewWindow.xMin) * (vrp1[Z] + b)) / (2 * vrp1[Z]),
    ((viewWindow.yMax - viewWindow.yMin) * (vrp1[Z] + b)) / (2 * vrp1[Z]),
    (vrp1[Z] + b));
float zMin = -(vrp1[Z] + f) / (vrp1[Z] + b);
Matrix parallel = Perspective::toParallelCvv(zMin);
Matrix inverseParallel = Perspective::inverseToParallelCvv(zMin);
Matrix perspective = Perspective::copAtOrigin(-vrp1[Z]);
projection = perspective * shear * partial;
canonicalView = parallel * scale * shear * partial;
canonicalToProjection = perspective * inverseScale * inverseParallel;
I have been tasked with writing a linear program that will tell the user where to add weight onto a cylindrical drum in order to balance the center of gravity. The weights are 2 lbs and 5 lbs, and a maximum of 10 lbs can be added at any one location. The 2 lb weights are 2" tall and the 5 lb weights are 6" tall. I think the best way to go about this is to use polar coordinates and assume a perfect cylinder for now, as it will be within 1% of perfect. I also think I should start by only changing the X and Y axes and keep the Z axis at 0 for now. Any tips to head me in the right direction would be appreciated.
!Drum weight problem;
!sets;
Sets:
Weight: Pounds, Height;
Location: X, Y, Angle;
Set(Weight, Location): PX, PY, PAngle;
Endsets
!data;
Data:
Weight = W1 W2 W3 W4;
Location = L1 L2 L3 L4;
!attribute values;
Pounds = 2 4 5 10;
Height = 2 4 6 12;
X = 0 1 2 3;
Y = 0 1 2 3;
Angle = 0 90 180 270;
Enddata
!objective;
Min = #MIN(Set(I, J): Weight (I, J), Location (K, L, M);
!constraints;
#FOR( Weight(I): [Weight_row]
Pounds >= 2;
Height >= 2;
#FOR( Location(J): [Location_row]
X >=0;
Y >=0;
Angle >=0;
End
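As a side note (not part of the LINGO model above), the balance condition any solution ultimately has to satisfy is just that the moments of the added weights about the drum axis cancel the measured imbalance. A rough C++ sketch with assumed names and units, useful for sanity-checking whatever placement the model produces:

#include <cmath>
#include <utility>
#include <vector>

// Assumed representation: each placement is a weight (lb) at a radius (in)
// and angle (degrees) on the drum face. For balance, the returned (x, y)
// moment should equal the negative of the drum's measured imbalance moment.
struct Placement {
    double pounds;
    double radius;
    double angleDeg;
};

std::pair<double, double> addedMoment(const std::vector<Placement>& placements)
{
    const double kPi = 3.14159265358979323846;
    double mx = 0.0, my = 0.0;
    for (const Placement& p : placements) {
        mx += p.pounds * p.radius * std::cos(p.angleDeg * kPi / 180.0);
        my += p.pounds * p.radius * std::sin(p.angleDeg * kPi / 180.0);
    }
    return {mx, my};
}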
I've been trying to implement color picking and it just isn't working right. The problem is that if I initially paint my model in the different colors used for picking (I mean, I give each triangle a different color, which is its ID color), it works fine (without texture or anything), but if I put the texture on the model and only paint each triangle a different color when the mouse is clicked, it doesn't work.
Here is the code:
public int selection(int x, int y) {
    GL11.glDisable(GL11.GL_LIGHTING);
    GL11.glDisable(GL11.GL_TEXTURE_2D);

    IntBuffer viewport = BufferUtils.createIntBuffer(16);
    ByteBuffer pixelbuff = BufferUtils.createByteBuffer(16);
    GL11.glGetInteger(GL11.GL_VIEWPORT, viewport);

    this.render(this.mesh);

    GL11.glReadPixels(x, y, 1, 1, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, pixelbuff);
    for (int m = 0; m < 3; m++)
        System.out.println(pixelbuff.get(m));

    GL11.glEnable(GL11.GL_TEXTURE_2D);
    GL11.glEnable(GL11.GL_LIGHTING);
    return 0;
}
public void render(GL_Mesh m, boolean inPickingMode)
{
    GLMaterial[] materials = m.materials; // loaded from the .mtl file
    GLMaterial mtl;
    GL_Triangle t;
    int currMtl = -1;
    int i = 0;

    // draw all triangles in object
    for (i = 0; i < m.triangles.length; ) {
        t = m.triangles[i];

        // activate new material and texture
        currMtl = t.materialID;
        mtl = (materials != null && materials.length > 0 && currMtl >= 0) ? materials[currMtl] : defaultMtl;
        mtl.apply();
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, mtl.textureHandle);

        // draw triangles until material changes
        for ( ; i < m.triangles.length && (t = m.triangles[i]) != null && currMtl == t.materialID; i++) {
            drawTriangle(t, i, inPickingMode);
        }
    }
}
private void drawTriangle(GL_Triangle t, int i, boolean inPickingMode) {
    if (inPickingMode) {
        byte[] triColor = this.triangleToColor(i);
        GL11.glColor3ub((byte) triColor[2], (byte) triColor[1], (byte) triColor[0]);
    }

    GL11.glBegin(GL11.GL_TRIANGLES);
    GL11.glTexCoord2f(t.uvw1.x, t.uvw1.y);
    GL11.glNormal3f(t.norm1.x, t.norm1.y, t.norm1.z);
    GL11.glVertex3f((float) t.p1.pos.x, (float) t.p1.pos.y, (float) t.p1.pos.z);

    GL11.glTexCoord2f(t.uvw2.x, t.uvw2.y);
    GL11.glNormal3f(t.norm2.x, t.norm2.y, t.norm2.z);
    GL11.glVertex3f((float) t.p2.pos.x, (float) t.p2.pos.y, (float) t.p2.pos.z);

    GL11.glTexCoord2f(t.uvw3.x, t.uvw3.y);
    GL11.glNormal3f(t.norm3.x, t.norm3.y, t.norm3.z);
    GL11.glVertex3f((float) t.p3.pos.x, (float) t.p3.pos.y, (float) t.p3.pos.z);
    GL11.glEnd();
}
As you can see, I have a selection function that's called every time the mouse is clicked. I then disable the lighting and the texture, render the scene again in the unique colors, and then read the pixel buffer. The call
GL11.glReadPixels(x, y, 1, 1, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, pixelbuff);
gives me wrong values, and it's driving me nuts!
By the way, the main render function is render(GL_Mesh m, boolean inPickingMode), as you can see. You can also see that there is a texture on the model before the mouse is clicked.
There are several problems with the example.
First, you're not clearing the color and depth buffers when clicking the mouse (that causes the scene with colored polygons to be mixed into the scene with textured polygons, and then it doesn't work). You need to call:
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
Second, it is probably a bad idea to use materials when color-picking. I'm not familiar with the GLMaterial class, but it might enable GL_COLOR_MATERIAL or some other stuff, which modifies the final color, even if lighting is disabled. Try this:
if (!inPickingMode) { // === add this line ===
    // activate new material and texture
    currMtl = t.materialID;
    mtl = (materials != null && materials.length > 0 && currMtl >= 0) ? materials[currMtl] : defaultMtl;
    mtl.apply();
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, mtl.textureHandle);
} // === and this line ===
Next, and this is not related to color picking, you call glBegin() too often for no good reason. You can call it in render(), before the triangle drawing loop, and remove the glBegin()/glEnd() pair from drawTriangle() (this shouldn't change how the result looks):
GL11.glBegin(GL11.GL_TRIANGLES);
// draw triangles until material changes
for ( ; i < m.triangles.length && (t = m.triangles[i]) != null && currMtl == t.materialID; i++) {
    drawTriangle(t, i, inPickingMode);
}
GL11.glEnd();
--- Now I am answering a little beyond the original question ---
The thing about color picking is that the renderer may have only a limited number of bits to represent colors (as few as 5 bits per channel), so you need to use ID colors that don't rely on the bits that get truncated. It might be a bad idea to do this on a mobile device.
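For example, here is a sketch of my own (in C++ for brevity rather than the LWJGL used above, and assuming 5 usable bits per channel) that packs a triangle id into the top bits of each channel so nothing is lost to truncation:

#include <cstdint>

constexpr uint32_t kBits = 5;                  // assumed usable bits per channel
constexpr uint32_t kMask = (1u << kBits) - 1u; // 0x1F

// expand a 5-bit value to an 8-bit channel (bit replication survives the
// round trip through a 5-bit framebuffer)
uint8_t expand5(uint32_t v)
{
    return static_cast<uint8_t>((v << 3) | (v >> 2));
}

void idToColor(uint32_t id, uint8_t& r, uint8_t& g, uint8_t& b)
{
    r = expand5( id                 & kMask);
    g = expand5((id >> kBits)       & kMask);
    b = expand5((id >> (2 * kBits)) & kMask);
}

uint32_t colorToId(uint8_t r, uint8_t g, uint8_t b)
{
    // keep only the top 5 bits of each channel read back with glReadPixels
    return  (uint32_t(r) >> 3)
         | ((uint32_t(g) >> 3) << kBits)
         | ((uint32_t(b) >> 3) << (2 * kBits));
}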
If your objects are simple enough (they can be represented by, say, a sphere for picking purposes), it might be a good idea to use raytracing for picking instead. It is pretty simple: the idea is that you take the inverse of the modelview-projection matrix and transform the points (mouse_x, mouse_y, -1) and (mouse_x, mouse_y, +1) by it, which gives you the position of the mouse at the near and the far view plane, in object space. All you need to do is subtract them to get the direction of the ray (its origin is at the near plane), and you can pick your objects using this ray (search for ray-sphere intersection).
float[] mvp = new float[16]; // this is your modelview-projection
float mouse_x, mouse_y;      // mouse coordinates (in -1 to +1 range)

// invert the matrix
float[] mvp_inverse = new float[16];
Matrix.invertM(mvp_inverse, 0, mvp, 0);

// transform the near point
float nearX = mvp_inverse[0 * 4 + 0] * mouse_x +
              mvp_inverse[1 * 4 + 0] * mouse_y +
              mvp_inverse[2 * 4 + 0] * -1 +
              mvp_inverse[3 * 4 + 0];
float nearY = mvp_inverse[0 * 4 + 1] * mouse_x +
              mvp_inverse[1 * 4 + 1] * mouse_y +
              mvp_inverse[2 * 4 + 1] * -1 +
              mvp_inverse[3 * 4 + 1];
float nearZ = mvp_inverse[0 * 4 + 2] * mouse_x +
              mvp_inverse[1 * 4 + 2] * mouse_y +
              mvp_inverse[2 * 4 + 2] * -1 +
              mvp_inverse[3 * 4 + 2];
float nearW = mvp_inverse[0 * 4 + 3] * mouse_x +
              mvp_inverse[1 * 4 + 3] * mouse_y +
              mvp_inverse[2 * 4 + 3] * -1 +
              mvp_inverse[3 * 4 + 3];

// dehomogenize the near point
nearX /= nearW;
nearY /= nearW;
nearZ /= nearW;

// transform the far point
float farX = mvp_inverse[0 * 4 + 0] * mouse_x +
             mvp_inverse[1 * 4 + 0] * mouse_y +
             mvp_inverse[2 * 4 + 0] * +1 +
             mvp_inverse[3 * 4 + 0];
float farY = mvp_inverse[0 * 4 + 1] * mouse_x +
             mvp_inverse[1 * 4 + 1] * mouse_y +
             mvp_inverse[2 * 4 + 1] * +1 +
             mvp_inverse[3 * 4 + 1];
float farZ = mvp_inverse[0 * 4 + 2] * mouse_x +
             mvp_inverse[1 * 4 + 2] * mouse_y +
             mvp_inverse[2 * 4 + 2] * +1 +
             mvp_inverse[3 * 4 + 2];
float farW = mvp_inverse[0 * 4 + 3] * mouse_x +
             mvp_inverse[1 * 4 + 3] * mouse_y +
             mvp_inverse[2 * 4 + 3] * +1 +
             mvp_inverse[3 * 4 + 3];

// dehomogenize the far point
farX /= farW;
farY /= farW;
farZ /= farW;

// ray direction
float rayX = farX - nearX, rayY = farY - nearY, rayZ = farZ - nearZ;

// ray origin
float orgX = nearX, orgY = nearY, orgZ = nearZ;
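For completeness, here is a sketch of my own (in C++ rather than the Java above) of the standard ray-sphere test mentioned earlier, using a ray origin and direction like the ones computed above:

#include <cmath>

// Returns true and the distance t along the ray if the sphere (cx, cy, cz, radius) is hit.
bool raySphere(float orgX, float orgY, float orgZ,
               float rayX, float rayY, float rayZ,
               float cx, float cy, float cz, float radius,
               float& tHit)
{
    // solve |org + t * ray - center|^2 = radius^2 for t
    float ox = orgX - cx, oy = orgY - cy, oz = orgZ - cz;
    float a = rayX * rayX + rayY * rayY + rayZ * rayZ;
    float b = 2.0f * (ox * rayX + oy * rayY + oz * rayZ);
    float c = ox * ox + oy * oy + oz * oz - radius * radius;
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f)
        return false;                                // ray misses the sphere
    float t = (-b - std::sqrt(disc)) / (2.0f * a);   // nearer intersection
    if (t < 0.0f)
        t = (-b + std::sqrt(disc)) / (2.0f * a);     // origin inside the sphere
    if (t < 0.0f)
        return false;                                // sphere entirely behind the origin
    tHit = t;
    return true;
}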
And finally, a debugging suggestion: try to render with inPickingMode set to true so you can see on screen what it is that you are actually drawing. If you see texture or lighting, then something went wrong.