Is there a formula that generates a set of coordinates of triangles whose vertices are located on a sphere?
I am probably looking for something similar to gluSphere. However, I need to color the individual triangles in specific colors, so it seems I can't use gluSphere.
Also: I do understand that gluSphere draws edges along lines of equal longitude and latitude, which means the triangles near the poles are small compared to those at the equator. If such a formula generated the triangles so that the variation in their size were minimized, that would be great.
On calculating the normals and the UV map:
Fortunately there is an amazing trick for calculating the normals on a sphere. If you think about it, the normal at any point on a sphere is nothing more than the direction from the centre of the sphere to that point. Furthermore, for a unit sphere centred at the origin, that means the normal literally equals the point: it's the same vector! Just don't forget to normalise the length of the normal.
You can win bar bets on that one: "is there a shape where all the normals happen to be exactly ... equal to the vertices?" At first glance you'd think that's impossible, that no such coincidental shape could exist. But of course the answer is simply "a sphere with radius one, centred at the origin!" Heh!
Regarding the UVs: it is relatively easy on a sphere, assuming you're projecting to 2D in the "obvious" manner, i.e. a rectangular (equirectangular) map projection. In that case u and v are basically just the longitude and latitude of the point, normalised to [0, 1].
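A minimal sketch of both ideas, assuming a unit sphere centred at the origin with y up (the function names are mine, not from any particular library):

    #include <cmath>

    const float PI = 3.14159265358979f;

    // The normal at a point on a unit sphere centred at the origin is
    // just the point itself, normalised.
    void sphereNormal(const float p[3], float n[3])
    {
        float len = std::sqrt(p[0]*p[0] + p[1]*p[1] + p[2]*p[2]);
        n[0] = p[0] / len;
        n[1] = p[1] / len;
        n[2] = p[2] / len;
    }

    // Equirectangular UVs from the normal: u from longitude,
    // v from latitude, both mapped into [0, 1].
    void sphereUV(const float n[3], float &u, float &v)
    {
        u = 0.5f + std::atan2(n[2], n[0]) / (2.0f * PI);
        v = 0.5f - std::asin(n[1]) / PI;
    }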
Hope it helps!
Here's the all-time-classic web page that beautifully explains how to build an icosphere: http://blog.andreaskahler.com/2009/06/creating-icosphere-mesh-in-code.html
Start with a unit icosahedron. Then apply multiple uniform subdivisions to its triangles, normalizing the resulting vertices' distance to the origin.
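A minimal sketch of one subdivision pass in that spirit (simplified from the article: midpoints are not cached here, so shared edge vertices get duplicated):

    #include <cmath>
    #include <vector>

    struct V3 { float x, y, z; };

    // Push the vertex back out onto the unit sphere.
    static V3 normalized(V3 v)
    {
        float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
        return { v.x/len, v.y/len, v.z/len };
    }

    static V3 midpoint(V3 a, V3 b)
    {
        return { (a.x+b.x)*0.5f, (a.y+b.y)*0.5f, (a.z+b.z)*0.5f };
    }

    // Replace each triangle (given as a flat list of vertex triples)
    // with four smaller ones, re-projected onto the sphere.
    std::vector<V3> subdivide(const std::vector<V3> &tris)
    {
        std::vector<V3> out;
        for (size_t i = 0; i + 2 < tris.size(); i += 3) {
            V3 a = tris[i], b = tris[i+1], c = tris[i+2];
            V3 ab = normalized(midpoint(a, b));
            V3 bc = normalized(midpoint(b, c));
            V3 ca = normalized(midpoint(c, a));
            out.insert(out.end(), { a, ab, ca,  b, bc, ab,  c, ca, bc,  ab, bc, ca });
        }
        return out;
    }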
How can I generate a circular grid, made of tiles with uniform area/whose vertices are uniformly distributed?
I'll need to apply the Laplacian operator to the grid at each frame of my program.
Applying the Laplacian was easy with a rectangular grid made of rectangular tiles whose locations were specified in cartesian coordinates, since for a tile at (i,j), I knew the positions of its neighboring tiles to be (i-1,j), (i,j-1), (i+1,j), and (i,j+1).
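For reference, that update is just the five-point Laplacian stencil; a minimal sketch (the field u and grid spacing h are my placeholders):

    #include <vector>

    // Discrete Laplacian at interior tile (i, j) on a rectangular grid.
    float laplacian(const std::vector<std::vector<float>> &u,
                    int i, int j, float h)
    {
        return (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1]
                - 4.0f * u[i][j]) / (h * h);
    }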
While I'd like to use polar coordinates, I'm not sure whether querying a tile's neighborhood would be as easy.
I'm working in OpenGL and could render either triangles or points. Triangles seem more efficient (and have the nice effect of filling the area between their vertices), but also more amenable to Cartesian coordinates. Perhaps I could render points, and then polar coordinates would work fine?
The other concern is the density of tiles. I want waves traveling on the surface of this mesh to have the same resolution whether they're at the center or not.
So the two main concerns are: generating the mesh in a way that allows for easy querying of a tile's neighborhood, and in a way that preserves a uniform density distribution of tiles.
I think you're asking for something impossible.
However, there is a technique for remapping a regular square 2D grid into a circle with a relatively low amount of warping. It might suffice for your problem.
You might want to have a look at this paper, it has been written to sample spheres but you might be able to adapt it for a circle.
One option is to use a polar grid with a constant angular step but varying radial steps, chosen so that all cells have the same area, i.e. (R+dR)² − R² = C, which gives dR as a function of R.
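Concretely, (R+dR)² − R² = C gives dR = sqrt(R² + C) − R, so the ring radii are simply R_k = sqrt(k·C). A minimal sketch (C is a constant you choose to control the cell area):

    #include <cmath>
    #include <vector>

    // Ring radii for an equal-area polar grid: R_{k+1}^2 - R_k^2 = C
    // for every ring, hence R_k = sqrt(k * C).
    std::vector<float> ringRadii(int rings, float C)
    {
        std::vector<float> r(rings + 1);
        for (int k = 0; k <= rings; ++k)
            r[k] = std::sqrt(k * C);
        return r;
    }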
You may want to reduce the anisotropy (some cells becoming very elongated) by changing the number of cells every now and then (e.g. by doubling it). This will introduce singularities in the mesh, i.e. cells with five vertices instead of four.
See the figures in https://mathematica.stackexchange.com/questions/78806/ndsolve-and-fem-support-for-non-conformal-meshes-of-a-disk-with-kernel-crash
I've been searching for information on vector graphics and Flash for quite some time, but I haven't really found what I was looking for. Can anyone tell me exactly what area of mathematics is required for building vector images in 3D space? Is this just vector math? I saw some C++ libraries for it, but I wasn't sure if they used the sort of vectors meant for smaller file sizes, like Flash images. Thanks in advance.
If you want to do something from scratch (there are plenty of open-source libraries out there if you don't), keep in mind that "vector graphics" (different from the idea of a vector in 3D space) are typically based on parametric curves such as Bézier curves, which are essentially third-degree polynomials in each of x, y, and/or z, parameterized by a value t that goes from 0 to 1. Projecting the texture-map image you create with those curves (i.e., the so-called "vector graphics" image) onto a triangle polygon via UV coordinates then involves some interpolation, which is fairly straightforward linear algebra: you use the barycentric coordinates of the 3D point on the surface of the triangle polygon to calculate the UV point you want to look up in the texture.
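For example, a cubic Bézier curve is evaluated like this (a minimal sketch; the point type and names are mine):

    struct P2 { float x, y; };

    // Cubic Bezier: B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3
    P2 bezier(P2 p0, P2 p1, P2 p2, P2 p3, float t)
    {
        float s = 1.0f - t;
        float b0 = s*s*s, b1 = 3*s*s*t, b2 = 3*s*t*t, b3 = t*t*t;
        return { b0*p0.x + b1*p1.x + b2*p2.x + b3*p3.x,
                 b0*p0.y + b1*p1.y + b2*p2.y + b3*p3.y };
    }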
So essentially the steps are:
Create the parametric-curve based image (i.e., the "vector graphic") and make a texture map out of it
That texture map will have uv coordinates
When you rasterize the 3D triangle polygon, you will get a barycentric coordinate on the surface of the triangle from the actual 3D points of the triangle polygon. Those points of the polygon should also have UV coordinates assigned to them.
Use the barycentric coordinates to calculate the uv coordinate on the texture map (a sketch follows these steps).
When you get that color from the texture map, shade the triangle (i.e., calculate lighting, etc., if that's what you're doing, or just save that pixel color if there is no lighting).
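A minimal sketch of that barycentric UV lookup, assuming the weights w0, w1, w2 (summing to 1) have already come out of the rasterizer; the type and function names are mine:

    struct UV { float u, v; };

    // Interpolate the triangle's per-vertex UVs with barycentric
    // weights to get the texture coordinate for one rasterized point.
    UV interpolateUV(UV a, UV b, UV c, float w0, float w1, float w2)
    {
        return { w0*a.u + w1*b.u + w2*c.u,
                 w0*a.v + w1*b.v + w2*c.v };
    }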
Please note I haven't gotten into antialiasing; that's a completely different beast. The best thing if you don't know what you're doing there is to simply brute-force antialias through super-sampling (i.e., render a really big image and then average pixels to shrink it back to the desired size).
If you've taken multivariable calculus, the concepts behind parametric curves and surfaces should be familiar, and a basic understanding of linear algebra would be necessary in order to work with barycentric coordinates and linear interpolation from 3D vectors.
I'm trying to calculate smooth normals for a cone. In looking around for code samples and explanations, I consistently come across directions for face normals. I've posted a couple pictures below of what I'm doing. The first -- which basically just normalizes the vertex position -- gives me decently smooth shading, but the edges are "missing" and the bottom face isn't solid. The second has edges, but the shading is flat (face normals) and my light isn't reflecting off of them correctly.
The cone is built out of GL_TRIANGLES.
[Images (source: bantherewind.com): the two renderings described above.]
At any point on the surface of a cone except the apex, there are two obvious kinds of tangent vectors: one tangent to the cross-sectional circle, or one up the slope. If you express the surface as a parametric equation with two parameters, you can get these tangent vectors as the two partial derivatives. Take the cross product of the tangents, and you get a normal vector. The order of the product determines whether the normal points inward or outward. Of course, the bottom face must be handled separately.
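As a sketch of that recipe, with my axis convention (apex at (0, 0, h), base circle of radius r in the z = 0 plane): parametrize the side as P(theta, t) = ((1-t) r cos theta, (1-t) r sin theta, t h); the cross product of the two partial derivatives reduces to a normal that depends only on theta:

    #include <cmath>

    // Smooth outward normal on the cone's side at angle theta around
    // the axis.  dP/dtheta x dP/dt works out to
    // (h cos(theta), h sin(theta), r), up to scale; normalise it.
    void coneSideNormal(float theta, float r, float h, float n[3])
    {
        float len = std::sqrt(h*h + r*r);
        n[0] = h * std::cos(theta) / len;
        n[1] = h * std::sin(theta) / len;
        n[2] = r / len;
    }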
In addition to the answer by JWWalker, I'd like to point out that a vertex is a whole tuple of vectors that, among other things, includes position and normal. So if you need different normals at a single position, you have multiple distinct vertices there.
In the case of the cone this is important, because the tip of the cone is not one single vertex but a whole set of them (one tip vertex for each triangle of the cone's lateral surface). And for the base circle you get two vertices at each position: one for the triangle going to the tip, and one for the base surface.
Both the tip and the base edge are discontinuities in the normal, and hence call for being drawn using separate vertices.
I am new to OpenGL and I am reading the Red Book. As an exercise I want to draw a sphere manually. For that I am dividing the sphere into slices and stacks, which gives me mostly rectangles, but triangles near the poles of the sphere (I hope it is clear what I am doing). Now, I know that if you draw a polygon with GL_POLYGON and it happens to intersect itself, the behavior is undefined. My question is this: given three points v1, v2, v3 which are not on one line, is it undefined behavior to do this:
    glBegin(GL_POLYGON);
    glVertex3fv(v1);
    glVertex3fv(v1);   // the same vertex submitted twice
    glVertex3fv(v2);
    glVertex3fv(v3);
    glEnd();
This may be combining two unrelated questions into one, but I am also wondering this: if I choose to divide the rectangles in my sphere routine into triangles, does it matter how I do it, that is, by which diagonal I split each rectangle into two triangles? I am guessing that for drawing a single-colored sphere it won't matter, but I don't know about textures, shaders, lighting, etc.
When I was doing OpenGL stuff, I quickly stuck to using just triangles. They are special in that a triangle is not ambiguous in any way.
Your example, though, I would imagine will work, though probably with artefacts.
How you split a rectangle shouldn't matter, just as long as you pay attention to which way your triangles are wound, i.e. the order in which you define the points, as this is what dictates their front and back faces.
But definitely stick to triangles. Imagine these four points of a square:
(0,0,0) (1,0,0) (1,0,1) (0,0,1)
Fairly easy to see it is a flat square, but what if I change them to:
(0,1,0) (1,0,0) (1,1,1) (0,0,1)
What do you have now? It could be drawn like a valley or like a hill. If I define this shape with triangles, you know exactly what I am describing:
(0,1,0) (1,0,0) (1,1,1)
(1,1,1) (0,0,1) (0,1,0)
A hill-like shape.
OK, so I sidetracked a little here. My point is: I don't know what your code would do in practice, but I don't think you should use it anyway. And how you split up rectangles shouldn't matter, as long as your triangles are wound the right way around.
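For what it's worth, a minimal sketch of splitting a quad into two consistently wound triangles (assuming v0..v3 go counter-clockwise around the quad; the helper name is mine):

    #include <GL/gl.h>

    // Split the quad v0-v1-v2-v3 along the v0-v2 diagonal into two
    // triangles with the same (counter-clockwise) winding.
    void emitQuad(const float *v0, const float *v1,
                  const float *v2, const float *v3)
    {
        glBegin(GL_TRIANGLES);
        glVertex3fv(v0); glVertex3fv(v1); glVertex3fv(v2);
        glVertex3fv(v0); glVertex3fv(v2); glVertex3fv(v3);
        glEnd();
    }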
This is no problem whatsoever. OpenGL always has to be able to deal with rasterising geometry where multiple vertices fall in the same location, since even distinct input points may end up at the same output point depending on your modelview and projection matrices (or your geometry and/or vertex shaders if you're on the programmable pipeline). It is designed to behave in the mathematically correct way under a wide variety of edge cases.
OpenGL's primary test for whether to paint a pixel with geometry is whether its centre falls within the mathematical bounds of the primitive being drawn*. So OpenGL can render polygons that paint non-continuous sets of pixels (which generally happens when they become almost vanishingly thin) or that paint no pixels whatsoever (which tends to be when they end up really small, but they may technically be of arbitrarily large size as long as they squeeze between pixel centres).
The exact tests used at the hardware level may vary from vendor to vendor and are guaranteed to be correct only for geometry that is convex on screen — which is why most people say stick to triangles, since they're unavoidably convex.
(*) a separate screen-oriented test is applied to pixels exactly on a boundary to ensure they're attributed to exactly one polygon where polygons meet along a common edge
Greetings all,
As seen in the image below, I draw lots of contours using GL_LINE_STRIP.
But the contours look like a mess, and I am wondering how I can make them look good (so that the depth can be seen, etc.).
I must render contours, so I have to stick with GL_LINE_STRIP. I am wondering how I can enable lighting for this?
Thanks in advance
Original image
http://oi53.tinypic.com/287je40.jpg
Lighting contours isn't going to do much good, but you could use fog or manually set the line colors based on distance (or even altitude) to give a depth effect.
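For the fog route, a minimal sketch using the fixed-function pipeline (the start/end distances and fog color are placeholders to tune for your scene):

    #include <GL/gl.h>

    // Linear fog: lines fade toward the fog color with eye distance,
    // which reads as depth even without lighting.
    void enableDepthFog()
    {
        GLfloat fogColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };  // match your background
        glEnable(GL_FOG);
        glFogi(GL_FOG_MODE, GL_LINEAR);
        glFogfv(GL_FOG_COLOR, fogColor);
        glFogf(GL_FOG_START, 10.0f);   // distance where fading begins
        glFogf(GL_FOG_END, 100.0f);    // distance of full fog
    }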
Updated:
umanga, at first I thought lighting wouldn't work because lighting is based on surface normal vectors, and you have no surfaces. However, @roe pointed out that normal vectors are actually per-vertex in OpenGL, and as such any polyline can have normals. So that would be an option.
It's not entirely clear what the normal should be for a 3D line, as @Julien said. The question is how to define normals for the contour lines such that the resulting lighting makes visual sense and helps clarify the depth.
If all the vertices in each contour are coplanar (e.g. in the XY plane), you could set the 3D normal to be the 2D normal, with 0 as the Z coordinate. The resulting lighting would give a visual sense of shape, though maybe not of depth.
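A minimal sketch of that idea for one segment of a contour lying in the XY plane (the function name is mine):

    #include <cmath>

    // Per-vertex normal for a contour segment in the XY plane:
    // rotate the segment direction 90 degrees and leave z at zero.
    void segmentNormal2D(const float a[3], const float b[3], float n[3])
    {
        float dx = b[0] - a[0], dy = b[1] - a[1];
        float len = std::sqrt(dx*dx + dy*dy);
        n[0] = -dy / len;
        n[1] =  dx / len;
        n[2] =  0.0f;
    }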
If you know the slope of the surface (assuming there is a surface) at each point along the line, you could use the surface normal and do a better job of showing depth; this is essentially like a hill-shading applied only to the contour lines. The question then is why not display the whole surface?
End of update
+1 to Ben's suggestion of setting the line colors based on altitude (are these topographic contours?) or on distance from the viewer. You could also fill the polygon surrounded by each contour with a similar color, as in http://en.wikipedia.org/wiki/File:IsraelCVFRtopography.jpg
Another way to make the lines clearer would be to have fewer of them: can you adjust the density of the contours? E.g. one contour line per 5 ft of height difference instead of per 1 ft, or whatever the units are, depending on what it is you're drawing contours of.
Other techniques for elucidating depth include stereoscopy, and rotating the image in 3D while the viewer is watching.
If you're looking for shading, you would normally convert the contours to a solid. The usual way to do that is to build a mesh: set up four corner points at zero height at the bounds (or beyond), drop the contours into the mesh, and have the mesh triangulate the coordinates in. Once done, you have a triangulated solid hull for which you can find the normals and smooth them over adjacent faces to create smooth terrain.
To triangulate the mesh one normally uses the Delaunay algorithm, which is a bit of a beast, but libraries for it do exist. The best ones I know of are based on the Guibas and Stolfi paper, since that approach is pretty optimal.
To generate the normals, you do a simple cross product, ensure the facing is correct, and manually renormalize them before feeding them to glNormal.
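A minimal sketch of that step for one triangle (vertices a, b, c assumed counter-clockwise when viewed from outside):

    #include <cmath>

    // Face normal of triangle (a, b, c): cross(b - a, c - a),
    // renormalized, ready to hand to glNormal3fv.
    void faceNormal(const float a[3], const float b[3], const float c[3],
                    float n[3])
    {
        float u[3] = { b[0]-a[0], b[1]-a[1], b[2]-a[2] };
        float v[3] = { c[0]-a[0], c[1]-a[1], c[2]-a[2] };
        n[0] = u[1]*v[2] - u[2]*v[1];
        n[1] = u[2]*v[0] - u[0]*v[2];
        n[2] = u[0]*v[1] - u[1]*v[0];
        float len = std::sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
        n[0] /= len; n[1] /= len; n[2] /= len;
    }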
In the old days you used to make a display list out of the result, but the newer way is to build a vertex array. If you want to be extra flash, you can look for coincident planar faces and optimize the mesh down for faster redraw, but that's a bit of a black art: good for games, not so good for CAD.
(thx for bonus last time)