Math Behind Flash Vector Graphics? - C++

I've been searching for vector graphics and Flash for quite some time, but I haven't really found what I was looking for. Can anyone tell me exactly what area of mathematics is required for building vector images in 3D space? Is this just vector math? I saw some C++ libraries for it, but I wasn't sure whether they dealt with the kind of vectors that keep file sizes small, the way Flash images do. Thanks in advance.

If you want to do something from scratch (there are plenty of open-source libraries out there if you don't), keep in mind that "vector graphics" (a different idea from a vector in 3D space) are typically based on parametric curves such as Bezier curves, which are essentially cubic (third-degree) polynomials in each of x, y, and/or z, parameterized by a value t that runs from 0 to 1. Projecting the texture-map image you create with those curves (i.e., the so-called "vector graphics" image) onto a triangle via UV coordinates then involves some interpolation, which is fairly straightforward linear algebra: you use the barycentric coordinates of the 3D point on the surface of the triangle to calculate the UV point you want to look up in the texture.
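As a concrete illustration, here is a minimal sketch of evaluating a cubic Bezier curve at a parameter t (the Vec2 type and the function name are assumptions for the example, not part of any particular library):

    #include <cmath>

    struct Vec2 { double x, y; };

    // Evaluate a cubic Bezier curve at parameter t in [0, 1].
    // p0 and p3 are the endpoints; p1 and p2 are the control points.
    Vec2 cubicBezier(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3, double t)
    {
        double u = 1.0 - t;
        // Bernstein basis: u^3, 3u^2 t, 3u t^2, t^3
        double b0 = u * u * u;
        double b1 = 3.0 * u * u * t;
        double b2 = 3.0 * u * t * t;
        double b3 = t * t * t;
        return { b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x,
                 b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y };
    }

Sampling this at many values of t and connecting the samples is how a "vector" curve gets turned into actual pixels.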
So essentially the steps are:
1. Create the parametric-curve-based image (i.e., the "vector graphic") and make a texture map out of it.
2. That texture map will have UV coordinates.
3. When you rasterize the 3D triangle polygon, you get a barycentric coordinate on the surface of the triangle from the actual 3D points of the polygon. Those points should also have UV coordinates assigned to them.
4. Use the barycentric coordinates to calculate the UV coordinate on the texture map (see the sketch after this list).
5. Once you have that color from the texture map, shade the triangle (i.e., calculate lighting, etc., if that's what you're doing, or just store that color for the pixel if there is no lighting).
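For the barycentric lookup in step 4, a minimal sketch might look like this (the Vec2 type and the names are assumptions for the example):

    struct Vec2 { double u, v; };

    // Interpolate the UV coordinate at a surface point of a triangle,
    // given the barycentric weights (w0, w1, w2) of that point with
    // respect to the triangle's three vertices (w0 + w1 + w2 == 1).
    Vec2 interpolateUV(Vec2 uv0, Vec2 uv1, Vec2 uv2,
                       double w0, double w1, double w2)
    {
        return { w0 * uv0.u + w1 * uv1.u + w2 * uv2.u,
                 w0 * uv0.v + w1 * uv1.v + w2 * uv2.v };
    }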
Please note I haven't gotten into antialiasing; that's a completely different beast. If you don't know what you're doing there, the best thing is to brute-force it through super-sampling (i.e., render a really big image and then average pixels to shrink it back down to the desired size).
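A minimal sketch of that brute-force downsampling, assuming a row-major RGB float image and an integer shrink factor (the types and names are mine):

    #include <vector>

    struct Color { float r, g, b; };

    // Shrink a srcW x srcH image by an integer factor, averaging each
    // factor x factor block into one output pixel (box-filter downsample).
    std::vector<Color> downsample(const std::vector<Color>& src,
                                  int srcW, int srcH, int factor)
    {
        int dstW = srcW / factor, dstH = srcH / factor;
        std::vector<Color> dst(dstW * dstH);
        for (int y = 0; y < dstH; ++y)
            for (int x = 0; x < dstW; ++x) {
                Color sum{0, 0, 0};
                for (int dy = 0; dy < factor; ++dy)
                    for (int dx = 0; dx < factor; ++dx) {
                        const Color& c = src[(y * factor + dy) * srcW
                                             + (x * factor + dx)];
                        sum.r += c.r; sum.g += c.g; sum.b += c.b;
                    }
                float n = float(factor * factor);
                dst[y * dstW + x] = { sum.r / n, sum.g / n, sum.b / n };
            }
        return dst;
    }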
If you've taken multivariable calculus, the concepts behind parametric curves and surfaces should be familiar, and a basic understanding of linear algebra is all you need to work with barycentric coordinates and linear interpolation of 3D vectors.

Related

Simple Texture Mapping for a generic triangle mesh

Suppose that we have a triangle mesh without information about normals and texture coordinates.
(Basically an OBJ file with only vertices and face elements).
The objective is to show something decent using OpenGL with a program written in C.
Calculating the normals of every triangle is easy...
But what about texture mapping?
Can anyone recommend a simple algorithm/documentation/resource to map the normalized UV coordinates of an image onto a generic mesh of triangles?
(For a mesh with a single triangle it is easy, e.g.: (0,0), (1,0), (0,1).)
The result doesn't have to be perfect; even professional software can't do that without UV unwrapping and UV seams.
The only algorithm I know is for 2D screen coordinates (screen space):
I already answered a similar question here; focus on the algorithm (i.e., texturePos = (vPos - 0.5) * 2) for converting between textureCoords and 2D vertices.
EDIT:
Note: the following is just a theory.
There might be a method that works in 3D space. The transformations eventually lead to the vertices being rendered in 2D screen coordinates:
local space --> world space --> view space --> NDC space --> screen coordinates
Using the general convention above and the three matrices (Model, View, Projection), and since the vertices end up in 2D space, you could create some form of algorithm that back-tracks the texture coordinates through the inverse matrices into 3D space and goes on from there.
This, by the way, is still not a well-defined algorithm (maybe there is one, and someone will edit it in here in the future...).
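One quick-and-dirty fallback, offered here only as a hedged sketch (it is not from the answers above): assign UVs by planar projection, normalizing each vertex position against the mesh's bounding box and using two of the three coordinates. It stretches badly on faces seen edge-on, but it gives every vertex a valid coordinate in [0, 1] with no unwrapping:

    #include <vector>
    #include <algorithm>

    struct Vertex { float x, y, z, u, v; };

    // Assign UVs by projecting the mesh onto the XY plane of its
    // bounding box. Crude, but always produces usable coordinates.
    void planarProjectUVs(std::vector<Vertex>& verts)
    {
        float minX = verts[0].x, maxX = verts[0].x;
        float minY = verts[0].y, maxY = verts[0].y;
        for (const Vertex& p : verts) {
            minX = std::min(minX, p.x); maxX = std::max(maxX, p.x);
            minY = std::min(minY, p.y); maxY = std::max(maxY, p.y);
        }
        for (Vertex& p : verts) {
            p.u = (p.x - minX) / (maxX - minX);
            p.v = (p.y - minY) / (maxY - minY);
        }
    }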

Circular grid of uniform density

How can I generate a circular grid, made of tiles with uniform area/whose vertices are uniformly distributed?
I'll need to apply the Laplacian operator to the grid at each frame of my program.
Applying the Laplacian was easy with a rectangular grid made of rectangular tiles whose locations were specified in Cartesian coordinates, since for a tile at (i,j) I knew the positions of its neighboring tiles to be (i-1,j), (i,j-1), (i+1,j), and (i,j+1).
While I'd like to use polar coordinates, I'm not sure whether querying a tile's neighborhood would be as easy.
I'm working in OpenGL and could render either triangles or points. Triangles seem more efficient (and have the nice effect of filling the area between their vertices), but also seem more suited to Cartesian coordinates. Perhaps I could render points instead, and then polar coordinates would work fine?
The other concern is the density of tiles. I want waves traveling on the surface of this mesh to have the same resolution whether they're at the center or not.
So the two main concerns are: generating the mesh in a way that allows easy querying of a tile's neighborhood, and in a way that preserves a uniform density distribution of tiles.
I think you're asking for something impossible.
However, there is a technique for remapping a regular square 2D grid into a circle shape with relatively little warping. It might suffice for your problem.
You might want to have a look at this paper, it has been written to sample spheres but you might be able to adapt it for a circle.
One option is to use a polar grid with a constant angular step but varying radial steps, so that all cells have the same area, i.e. (R+dR)² - R² = const, giving dR as a function of R (see the sketch after the link below).
You may want to reduce the anisotropy (some cells become very elongated) by changing the number of cells every now and then (e.g., by doubling it). This will introduce singularities in the mesh, i.e., cells with five vertices instead of four.
See the figures in https://mathematica.stackexchange.com/questions/78806/ndsolve-and-fem-support-for-non-conformal-meshes-of-a-disk-with-kernel-crash
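A minimal sketch of that equal-area spacing (the names are mine): with a constant angular step, a cell's area is proportional to r_{k+1}² - r_k², and keeping that constant gives r_k = R*sqrt(k/N):

    #include <cmath>
    #include <vector>

    // Radii of N ring boundaries such that, with a constant angular
    // step, every annular cell has the same area:
    // r_{k+1}^2 - r_k^2 = const  =>  r_k = R * sqrt(k / N).
    std::vector<double> equalAreaRadii(double R, int N)
    {
        std::vector<double> r(N + 1);
        for (int k = 0; k <= N; ++k)
            r[k] = R * std::sqrt(double(k) / N);
        return r;
    }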

maximal convex patching in Computer graphics

Given a 3D object in computer graphics, whose surface is represented as a 3D triangular mesh (a mesh of 3D triangle objects), I need to find the maximal continuous convex patches on the surface of the given 3D object.
I am using OpenGL to render the graphics within a C++ program. What kind of methods or algorithms should I use to find the convex patches?
I have to apply different colors to the different convex patches on the object to signify the selection.
Say I have a sphere; then the whole sphere is one maximal convex patch. Any portion of the sphere's surface is a convex patch; by maximal I mean the maximum continuous convex patch that can be found. In the rendering, depending on the viewing angle, the maximal convex patches visible to the viewer will have to be colored.
Start from any triangle. Traverse its edges and check that the angle between the two triangles is less than 180 degrees. If it is, add the neighbor to the current selection and continue expanding.
The check is actually really simple if you use vector geometry. Say A-B is the common edge, with C on the selected side and D on the other. Then just check whether dot(cross(A-B, D-B), cross(A-B, C-B)) < 0.
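A minimal sketch of that edge test in C++ (the vector type and helper names are mine; the inequality is the one from the answer above):

    struct Vec3 { double x, y, z; };

    Vec3 operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

    Vec3 cross(Vec3 a, Vec3 b)
    {
        return { a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
    }

    double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // A and B span the shared edge; C is the apex of the triangle
    // already in the selection, D is the apex of the candidate
    // neighbor. Returns true when the dihedral-angle test passes.
    bool edgeIsConvex(Vec3 A, Vec3 B, Vec3 C, Vec3 D)
    {
        return dot(cross(A - B, D - B), cross(A - B, C - B)) < 0.0;
    }

Region-growing with this predicate over the triangle adjacency gives you one patch per seed triangle.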
Unfortunately OpenGL doesn't help with object algorithms. It only handles converting triangles to pixels.
I need to do it using OpenGL
Then you're out of luck. OpenGL only draws points, lines and triangles. OpenGL is not a 3D modelling library, OpenGL is not a scene graph, OpenGL is not a graphics engine.
It does not do all-purpose geometry processing. (It may be possible to use a combination of geometry/tessellation shaders, transform feedback, and compute shaders to do it, but it would be very cumbersome to implement.)

How to create an even sphere with triangles in OpenGL?

Is there a formula that generates a set of coordinates of triangles whose vertices are located on a sphere?
I am probably looking for something that does something similar to gluSphere. Yet, I need to color the different triangles in specific colors, so it seems I can't use gluSphere.
Also: I do understand that gluSphere draws edges along lines of equal longitude and latitude, which makes the triangles small at the poles compared to their size at the equator. If such a formula generated triangles whose difference in size is minimized, that would be great.
Regarding how to calculate the normals and the UV map:
Fortunately there is an amazing trick for calculating the normals on a sphere. If you think about it, the normals on a sphere are nothing more than the direction from the centre of the sphere to that point! Furthermore, if you think it through, that means the normals literally equal the points, i.e., it's the same vector! Just don't forget to normalise the length for the normal.
You can win bar bets on that one: "is there a shape where all the normals happen to be exactly ... equal to the vertices?" At first glance you'd think, that's impossible, no such coincidental shape could exist. But of course the answer is simply "a sphere with radius one!" Heh!
Regarding the UVs: it is relatively easy on a sphere, assuming you're projecting to 2D in the "obvious" manner, i.e., a "rectangle-style" map projection. In that case the u and v are basically just the longitude and latitude of each point, normalised to [0, 1].
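Putting both tricks together, here is a minimal sketch for a unit sphere centered at the origin (the struct and function names are assumptions for the example):

    #include <cmath>

    const double PI = 3.14159265358979323846;

    struct Vertex { double x, y, z;    // position on the unit sphere
                    double nx, ny, nz; // normal (same vector!)
                    double u, v; };    // texture coordinates

    // On a unit sphere the normal IS the normalized position, and the
    // UVs are just longitude/latitude rescaled to [0, 1].
    Vertex sphereVertex(double x, double y, double z)
    {
        double len = std::sqrt(x * x + y * y + z * z);
        double nx = x / len, ny = y / len, nz = z / len;
        double u = 0.5 + std::atan2(nz, nx) / (2.0 * PI); // longitude
        double v = 0.5 - std::asin(ny) / PI;              // latitude
        return { nx, ny, nz, nx, ny, nz, u, v };
    }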
Hope it helps!
Here's the all-time-classic web page that beautifully explains how to build an icosphere: http://blog.andreaskahler.com/2009/06/creating-icosphere-mesh-in-code.html
Start with a unit icosahedron. Then apply multiple homogeneous subdivisions of the triangles, normalizing the resulting vertices' distance to the origin.
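A minimal sketch of one such subdivision pass (types and names assumed for the example; a real implementation would also deduplicate the midpoint vertices, as the linked article explains):

    #include <cmath>
    #include <vector>

    struct Vec3 { double x, y, z; };
    struct Tri  { Vec3 a, b, c; };

    // Push a point back onto the unit sphere.
    Vec3 normalized(Vec3 p)
    {
        double len = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
        return { p.x / len, p.y / len, p.z / len };
    }

    Vec3 midpointOnSphere(Vec3 p, Vec3 q)
    {
        return normalized({ (p.x + q.x) / 2, (p.y + q.y) / 2, (p.z + q.z) / 2 });
    }

    // Split every triangle into four, re-projecting the new vertices
    // onto the sphere. Start from the 20 faces of a unit icosahedron
    // and repeat until the mesh is fine enough.
    std::vector<Tri> subdivide(const std::vector<Tri>& mesh)
    {
        std::vector<Tri> out;
        out.reserve(mesh.size() * 4);
        for (const Tri& t : mesh) {
            Vec3 ab = midpointOnSphere(t.a, t.b);
            Vec3 bc = midpointOnSphere(t.b, t.c);
            Vec3 ca = midpointOnSphere(t.c, t.a);
            out.push_back({ t.a, ab, ca });
            out.push_back({ t.b, bc, ab });
            out.push_back({ t.c, ca, bc });
            out.push_back({ ab, bc, ca });
        }
        return out;
    }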

GPU Render onto sphere

I am trying to write optimized code that renders a 3D scene onto a sphere using OpenGL and then displays the unwrapped sphere on the screen, i.e., produces a planar map of a purely reflective sphere. In math terms, I would like to produce a projection map where the x axis is the polar angle and the y axis is the azimuth.
I am trying to do this by placing the camera at the center of the sphere probe and taking planar shots around it, so as to approximate spherical quads with planar tiles of the frustum. Then I can use these as textures to apply to a distorted planar patch.
This seems like a pretty tedious approach to me. I wonder if there is a way to tackle this using shaders or some GPU-smart method.
Thank you
S.
I can give you two solutions.
The first is to do a standard render-to-texture, but with a cubemap attached as the destination buffer. If your hardware is recent enough, it can be done in a single pass. This will handle all the needed math in hardware for you, but the data distribution of cubemaps isn't ideal (quite a lot of distortion in the corners). In most cases it should be enough, though.
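A hedged sketch of that cubemap render-target setup, using standard OpenGL 3.2+ calls (the single-pass variant additionally needs a geometry shader that routes each primitive to the right gl_Layer; the variable names and the face size are mine):

    // Assumes an OpenGL 3.2+ context with function pointers loaded
    // (e.g., via GLEW or GLAD).
    const GLsizei size = 1024; // example face resolution
    GLuint cubeTex, fbo;

    // Allocate all six cubemap faces.
    glGenTextures(1, &cubeTex);
    glBindTexture(GL_TEXTURE_CUBE_MAP, cubeTex);
    for (int face = 0; face < 6; ++face)
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_RGBA8,
                     size, size, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // Attach the whole cubemap as a layered color target.
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, cubeTex, 0);
    // ... check glCheckFramebufferStatus, then render the scene six
    // times (or once, with a layered geometry shader).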
After this, you render a quad to the screen, and in a shader you map your UV coordinates to xyz vectors using straightforward spherical mapping. The hardware will compute for you which side of the cubemap to sample, and at which UV.
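The spherical mapping itself is only a few lines; shown here in C++ for clarity, though in practice the same math would live in the fragment shader, feeding the resulting direction to a samplerCube lookup (the names are assumptions):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Map a screen-quad UV in [0,1]^2 to a lookup direction: u becomes
    // the azimuth (longitude), v the polar angle (latitude).
    Vec3 uvToDirection(float u, float v)
    {
        const float PI = 3.14159265f;
        float azimuth = u * 2.0f * PI; // around the equator
        float polar   = v * PI;        // pole to pole
        return { std::sin(polar) * std::cos(azimuth),
                 std::cos(polar),
                 std::sin(polar) * std::sin(azimuth) };
    }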
The second is more or less the same, but with a custom deformation and less hardware support: dual paraboloids. Two paraboloids may not be enough, but you are free to slightly modify the equations and do six passes. The rendering pass is the same, but this time you're on your own to choose the right texture and compute the UVs.
By the time you've bothered to build the model, take the planar shots, apply non-affine transformations and stitch the whole thing together, you've probably gained no performance and considerable complexity. Just project the planar image mathematically and be done with it.
You seem to be asking for OpenGL's sphere mapping. NeHe has a tutorial on sphere mapping that might be useful.