Draw part of a sphere limited by a set of vertices - OpenGL

What's the best way to draw part of a sphere in, for example, OpenGL, given that I have the vertices of the boundary of the region that should be rendered?
I'm drawing the sphere using an octahedron transformation (described here: https://stackoverflow.com/a/7687312/1840136), and I can draw the arcs that represent the boundaries in the same way, by creating intermediate vertices and then "normalizing" them.
To create triangles out of a plane I could use something from this answer: https://math.stackexchange.com/a/1814637, but the thing is the result would still be flat. To get part of a sphere, I definitely need another set of intermediate vertices for additional triangles. What is the algorithm for such a task? And since I may already have the triangles forming the original sphere, can I reuse that data somehow?
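One possible approach, sketched below under the question's own assumptions (the patch's corner vertices already lie on the unit sphere, as produced by the octahedron method), is to keep subdividing each patch triangle and push every new midpoint back onto the sphere, exactly like the arc construction. emit_triangle() is a hypothetical callback for storing or drawing the result.
#include <math.h>

typedef struct { float x, y, z; } vec3;

/* Midpoint of a and b, pushed back onto the unit sphere. */
static vec3 normalized_midpoint(vec3 a, vec3 b)
{
    vec3 m = { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
    float len = sqrtf(m.x * m.x + m.y * m.y + m.z * m.z);
    m.x /= len; m.y /= len; m.z /= len;
    return m;
}

/* Recursively subdivide one spherical triangle of the patch. */
void subdivide_patch(vec3 a, vec3 b, vec3 c, int depth,
                     void (*emit_triangle)(vec3, vec3, vec3))
{
    if (depth == 0) {
        emit_triangle(a, b, c);
        return;
    }
    vec3 ab = normalized_midpoint(a, b);
    vec3 bc = normalized_midpoint(b, c);
    vec3 ca = normalized_midpoint(c, a);
    subdivide_patch(a,  ab, ca, depth - 1, emit_triangle);
    subdivide_patch(ab, b,  bc, depth - 1, emit_triangle);
    subdivide_patch(bc, c,  ca, depth - 1, emit_triangle);
    subdivide_patch(ab, bc, ca, depth - 1, emit_triangle);
}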

Related

What other ways can I draw the outline of an object?

I have a very simple case. I want to draw the outline of an object; in this case I think they'll only be spheres, but I'd rather not rely on this.
I've found methods such as:
Draw the object to the stencil buffer
Turn on wireframe mode
Draw the object with thick lines
Draw the real object on top
The problem I have with this method is that my models have a lot of vertices, and this requires me to draw it three times. I'm getting some significant frame rate drops.
Are there other ways to do this? My next guess would be to draw circles on the final render as a post-process effect, seeing as I'm only looking at spheres. But I'd much rather have something that works for more than just spheres.
Is there something I can do in an existing shader to outline?
I'd also like the outline to appear when the object is behind others.
I'm using OpenGL 4.3.
I know 3 ways of doing contour rendering:
Using the stencil buffer
The first one is a slightly modified version of the one you described: you first render your object as normal with the stencil buffer on, then you slightly scale it and render it in a plain color where the stencil buffer is not filled. You can find an explanation of this technique here.
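A rough fixed-function sketch of that idea is below; draw_object() is a placeholder for your mesh's draw call, 1.05 is an arbitrary scale factor, and in a core-profile context (like your 4.3 one) you would apply the scale through your own model matrix and a plain-color shader instead.
#include <GL/gl.h>

extern void draw_object(void);   /* hypothetical: issues the mesh's draw call */

void draw_with_outline(void)
{
    glEnable(GL_STENCIL_TEST);

    /* Pass 1: draw the object normally, marking its pixels in the stencil. */
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    draw_object();

    /* Pass 2: draw a slightly scaled copy in a plain color, but only where
       the stencil was NOT written; this leaves just the outline. */
    glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glColor3f(1.0f, 0.5f, 0.0f);             /* outline color */
    glPushMatrix();
    glScalef(1.05f, 1.05f, 1.05f);           /* assumes the model is centred at its origin */
    draw_object();
    glPopMatrix();

    glDisable(GL_STENCIL_TEST);
}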
Using image processing techniques
The second one is a post-process step, where you look for edges using image processing filters (like the Sobel operator) and you compose your rendering with your contour detection result. The good thing about the Sobel operator is that it is separable; this means you can do the detection in two 1D passes, which is more efficient than doing one 2D pass.
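To illustrate the separability (in practice you would do this in two fragment-shader passes rather than on the CPU), here is a small sketch that computes the Sobel x-response of a grayscale image as a vertical [1 2 1] smoothing pass followed by a horizontal [-1 0 1] derivative pass. Border pixels are left untouched for brevity.
#include <stdlib.h>

void sobel_x_separable(const unsigned char *src, float *dst, int w, int h)
{
    float *tmp = malloc(sizeof(float) * w * h);

    /* Pass 1: vertical smoothing with [1 2 1]. */
    for (int y = 1; y < h - 1; ++y)
        for (int x = 0; x < w; ++x)
            tmp[y * w + x] = src[(y - 1) * w + x]
                           + 2.0f * src[y * w + x]
                           + src[(y + 1) * w + x];

    /* Pass 2: horizontal derivative with [-1 0 1]. */
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x)
            dst[y * w + x] = tmp[y * w + x + 1] - tmp[y * w + x - 1];

    free(tmp);
}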
Using the geometry shader
Last but not least, you can use the geometry shader to extract the silhouette of your mesh. The idea is to use the adjacent vertices of a triangle to detect whether one edge of this triangle (let's call it t0) is a contour edge.
To do this, for each edge ei of t0:
build a new triangle ti using the vertices of ei and its associated vertex,
compute the normal ni of ti and the normal n0 of t0, and transform both into view space (the silhouette depends on the point of view),
compare the two normals against the view direction: if the dot products have opposite signs (one triangle faces the viewer while the other faces away), the edge ei is a silhouette edge.
You then build a quad around ei, emit each of its vertices and color them the way you want in the fragment shader.
This is the basic idea of this algorithm. Using only this will result in aliased edges, with holes between them, but this can be improved. You can read this paper, and this blog post, for further information.
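A minimal CPU-side sketch of that edge test is below, assuming view-space positions (camera at the origin) and consistent counter-clockwise winding; in a real implementation the same math runs in a geometry shader with triangles_adjacency input.
#include <stdbool.h>

typedef struct { float x, y, z; } vec3;

static vec3 sub3(vec3 a, vec3 b) { vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static float dot3(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static vec3 cross3(vec3 a, vec3 b)
{
    vec3 r = { a.y * b.z - a.z * b.y,
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    return r;
}

/* Edge (a, b) of triangle t0 = (a, b, c); adj is the vertex of the
   neighbouring triangle opposite that edge. */
bool edge_is_silhouette(vec3 a, vec3 b, vec3 c, vec3 adj)
{
    vec3 n0     = cross3(sub3(b, a), sub3(c, a));    /* normal of t0          */
    vec3 ni     = cross3(sub3(a, b), sub3(adj, b));  /* normal of (b, a, adj) */
    vec3 to_eye = { -a.x, -a.y, -a.z };              /* edge towards the eye  */
    /* Silhouette: one face points toward the viewer, the other away. */
    return (dot3(n0, to_eye) > 0.0f) != (dot3(ni, to_eye) > 0.0f);
}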
In my experience you get good results if you render the outlined object in white (unlit) to a texture as big as the final framebuffer, then draw a framebuffer-sized quad with that texture and have the fragment shader blur or otherwise process it and apply the desired color.
I have an example here

OpenGL - parameterized meshes

Given a human 3D model, I want to change its shape by giving parameters, like height, waist, bust etc.
From what I gathered, the 3D model should have some 'hooks' around the areas I can change.
Any pointers on how to do this with OpenGL, Three.js or any other means would be very helpful. I don't want to do it in Blender or other 3D manipulation tools; I want it done programmatically.
Here's a Sample 3D model
What you should do is "tag" a group of vertices together.
Then apply a vertex shader to those groups, which changes the position of the vertices to shrink/expand the mesh.
One way to do this is to place a point inside the mesh, and give it a radius. This pretty much means you're creating a sphere.
Run the shader on all the vertices inside the sphere.
What the shader should do is "inflate" the sphere - moving the vertices away from the center point.
Just transform each vertex away from the center by a certain amount.
(Make a vector from the center to the current vertex, extend that vector, and move the vertex there.)
This should work well for the belly.
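A CPU-side sketch of that idea follows (the same math can live in a vertex shader); the center, radius and amount parameters, and the flat xyz vertex array, are assumptions made for the example.
#include <math.h>

void inflate_region(float *vertices, int vertex_count,
                    float cx, float cy, float cz,
                    float radius, float amount)
{
    for (int i = 0; i < vertex_count; ++i) {
        float *v = &vertices[i * 3];
        float dx = v[0] - cx, dy = v[1] - cy, dz = v[2] - cz;
        float dist = sqrtf(dx * dx + dy * dy + dz * dz);

        if (dist > radius || dist == 0.0f)
            continue;                      /* only vertices inside the sphere */

        /* Continue the center-to-vertex vector by "amount". */
        float s = amount / dist;
        v[0] += dx * s;
        v[1] += dy * s;
        v[2] += dz * s;
    }
}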
Another shader you can do is to stretch the mesh vertically (for the person's height).
This is more straightforward.
Just run over all the vertices and add to their height.
How much to add is what you need to figure out. My intuition says it can't be a constant; I think it's a linear function, but I'm not sure.
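One way to make it linear, sketched below as an assumption rather than a known-correct model: scale each vertex's height about a reference height (e.g. the feet), so higher vertices move more and the feet stay planted.
void stretch_height(float *vertices, int vertex_count, float base_y, float scale)
{
    for (int i = 0; i < vertex_count; ++i) {
        float *y = &vertices[i * 3 + 1];          /* assuming y is "up"           */
        *y = base_y + (*y - base_y) * scale;      /* linear in the vertex height  */
    }
}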

How to clip texture with arbitrary shape?

I am rendering complex 3d objects. Here is a simple example with a sphere-like object:
Next I am applying a clipping plane to these objects and rendering a texture on this plane, giving the impression you are looking at the inside of the object, as if it was sliced. For example:
The problem is the jagged edge of the texture. It will stick out past the boundary of the surface. Here's another angle where you can see it sticking out. The surface and the texture both derive from the same source data, but the surface is smoothed and has a higher resolution than the texture.
What I want is to be able to somehow clip the texture, so that it never sticks out past the boundary of the surface. Also, I don't want to simply scale down the texture, since although this might prevent it from sticking outside, it would create interior gaps between the texture edge and the surface edge. I would rather the texture be a little too big and have it clipped so that it sits flush against the edge of the surface.
Here's where I am:
I figured the first step would be to define the intersection of the plane and the surface. So now I have that, as an ordered list of line segments. However, I'm not sure how to proceed with this info (or if this is even the best approach).
I've been reading up on stencil buffers. One approach might be to turn the intersection line into a 2d shape and draw this into a stencil buffer. Then apply this when drawing the texture. (Although I think it's a lot of work since the shapes can be complicated.)
I am wondering if I can somehow use the already drawn surface (in conjunction with a stencil buffer or some other technique) to somehow clip the texture -- without having to go through the extra trouble of deriving the intersection line, etc.
What's the best approach here? (Any online examples you can point me to would also be really helpful.)
If you're clipping convex objects and know the coordinates of the clipped points, you can create the polygonal "cap" yourself: just draw the clipped points in the proper order using GL_TRIANGLE_FAN, and that's it. This won't work with non-convex objects; those would require a triangulation algorithm. You could use GLU tessellators to triangulate polygons, but that can be tricky.
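An immediate-mode sketch of that fan, assuming cap_points holds the ordered intersection points of a convex outline (the names are placeholders):
#include <GL/gl.h>

typedef struct { float x, y, z; } vec3;

void draw_cap(const vec3 *cap_points, int cap_count)
{
    glBegin(GL_TRIANGLE_FAN);
    for (int i = 0; i < cap_count; ++i)
        glVertex3f(cap_points[i].x, cap_points[i].y, cap_points[i].z);
    glEnd();
}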
If the clipped area can be defined by a formula, you can write a shader that precisely clips pixels beyond a certain distance (i.e. if x^2+y^2+z^2 > r^2, do not draw the pixel).
You could also draw the back-facing faces with a shader that renders every back-facing pixel as if it were on the clip plane, using simple raytracing. That's complicated, and might be overkill in your case. Dead Rising used a similar technique in its game engine.
You can also use the stencil buffer.
Draw the back-facing faces first with GL_INCR (glStencilOp(GL_KEEP, GL_INCR, GL_INCR)), then draw the front-facing surfaces with GL_DECR (glStencilOp(GL_KEEP, GL_DECR, GL_DECR)). Then draw the texture only where the stencil is non-zero (glStencilFunc(GL_NOTEQUAL, 0, 0xff); glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);). If you have many overlapping shapes, however, you'll need to take special care of them.
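Put together, a rough sketch of that sequence might look like this; draw_object() and draw_cap_quad() are placeholders for the clipped mesh and for a large textured quad lying on the clip plane.
#include <GL/gl.h>

extern void draw_object(void);     /* hypothetical: the clipped mesh          */
extern void draw_cap_quad(void);   /* hypothetical: textured quad on the plane */

void draw_clipped_with_cap(void)
{
    glEnable(GL_STENCIL_TEST);
    glEnable(GL_CULL_FACE);
    glClear(GL_STENCIL_BUFFER_BIT);
    glStencilFunc(GL_ALWAYS, 0, 0xFF);

    glCullFace(GL_FRONT);                     /* keep only back faces  */
    glStencilOp(GL_KEEP, GL_INCR, GL_INCR);
    draw_object();

    glCullFace(GL_BACK);                      /* keep only front faces */
    glStencilOp(GL_KEEP, GL_DECR, GL_DECR);
    draw_object();

    glDisable(GL_CULL_FACE);

    /* The cross-section is wherever the stencil count ended up non-zero. */
    glStencilFunc(GL_NOTEQUAL, 0, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    draw_cap_quad();

    glDisable(GL_STENCIL_TEST);
}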
--edit--
However, I'm not sure how to proceed with this info (or if this is even the best approach).
Draw it as a triangle fan. For convex objects, that's all you need. For non-convex objects that won't work.
I've been reading up on stencil buffers. One approach might be to turn the intersection line into a 2d shape
No, it won't work like that. The region you want to fill with the texture should hold a certain stencil value. That's how stencil clipping works.
to somehow clip the texture
In OpenGL you have at least 6 user clip planes (the exact number is given by GL_MAX_CLIP_PLANES). If you need more than that, you'll need more advanced techniques: the stencil buffer, deriving the intersection line, shaders, or triangulation.
Any online examples you can point me to would also be really helpful
Drawing Filled, Concave Polygons Using the Stencil Buffer

Fill curved object with color

I am new to OpenGL.
I want to draw an object which has 4 vertices. It is like a quad, but for the bottom side I need to draw an arc. The other sides are connected with straight lines. I want to fill the object.
Can anybody guide me to do this please?
Triangulate your shape and render those triangles any way you prefer (immediate mode / VBO / VAO).
Convert your arc into line segments. The number of vertices depends on the level of detail/smoothness you want to achieve.
Triangulate the shape. With simple shapes like this one, you can do it manually in code (draw it on paper like I did and write down the vertex indices that form triangles). With more complicated shapes you can use a triangulation algorithm (many are available online). When shapes are even more complicated (e.g. an animal outline), you might need special 2D/3D modelling software just to make them, and it will do the triangulation there.
Render the triangles.
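For this particular shape the outline is convex, so a single triangle fan is enough once the arc is sampled. The coordinates and segment count below are made-up example values.
#include <GL/gl.h>
#include <math.h>

#define ARC_SEGS 32

void draw_arc_quad(void)
{
    /* Example outline: top edge from (-1, 1) to (1, 1), and a half-circle
       arc bulging downward between (-1, 0) and (1, 0). */
    glBegin(GL_TRIANGLE_FAN);
    glVertex2f(-1.0f, 1.0f);                   /* fan centre: top-left corner */
    for (int i = 0; i <= ARC_SEGS; ++i) {      /* arc from (-1, 0) to (1, 0)  */
        float t = 3.14159265f * (1.0f - (float)i / ARC_SEGS);
        glVertex2f(cosf(t), -sinf(t));
    }
    glVertex2f(1.0f, 1.0f);                    /* top-right corner            */
    glEnd();
}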

Are there any easy ways to generate OpenGL code for drawing shapes from a GUI?

I have enjoyed learning to use OpenGL under the context of games programming, and I have experimented with creating small shapes. I'm wondering if there are any resources or apps that will generate code similar to the following with a simple paint-like interface.
glColor3f(1.0, 0.0, 0.0);
glBegin(GL_LINE_STRIP);
glVertex2f(1, 0);
glVertex2f(2, 3);
glVertex2f(4, 5);
glEnd();
I'm having trouble thinking of the correct dimensions to generate shapes and coming up with the correct co-ordinates.
To clarify, I'm not looking for a program I can just freely draw stuff in and expect it to create good code to use. Just more of a visual way of representing and modifying the sets of coordinates that you need.
I solved this to a degree by drawing a shape in paint and measuring the distances between the pixels relative to a single point, but it's not that elegant.
It sounds like you are looking for a way to import 2d geometry into your application. The best approach in my opinion would be to develop a content pipeline. It goes something like this:
You would create your content in a 3d modeling program like Google's Sketchup. In your case you would draw 2d shapes using polygons.
You need a conversion tool to get the data out of the original format and into a format that your target application can understand. One way to get polygon and vertex data out of Sketchup is to export to Collada and have your tool read and process it. (The simplest format would be a list of triangles or lines.)
Write a geometry loader in your code that reads the data created by your conversion tool. You need to write OpenGL code that uses vertex arrays to display the geometry.
The coordinates you'll use just depend on how you define your viewport and the resolution you're operating in. In fact, you might think about collecting the coordinates of the mouse clicks in whatever arbitrary coordinate system you want and then mapping those coordinates to OpenGL coordinates.
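As a small sketch of that mapping, here is the usual conversion from window pixel coordinates (origin at the top-left) to normalized device coordinates in [-1, 1]:
void pixel_to_ndc(int px, int py, int width, int height,
                  float *ndc_x, float *ndc_y)
{
    *ndc_x = 2.0f * px / width - 1.0f;
    *ndc_y = 1.0f - 2.0f * py / height;   /* flip: window y grows downward */
}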
What kind of library are you expecting?
something like
drawSquare(dx,dy);?
drawCircle(radius);?
drawPoly(x1,y1,x2,y2....);?
Isn't that exactly the same as glVertex but with a different name? Where is the abstraction?
I made one of these... it would take a bitmap image and generate geometry from it. Try looking up triangulation.
The first step is generating the edge of the shape, converting it from pixels to vertices and edges: find all the edge pixels and put a vertex at each one, then cull vertices based on either the distance between them or (better) the difference in gradient between edges, to reduce the poly count of the mesh.
If your shape-drawing program works with 'vector graphics' rather than pixels, i.e. plotting points and having lines drawn between them, then you can skip that first step and you just need to do triangulation.
The second step, once you have your edges and vertices, is triangulation in order to generate triangles; ear clipping, for instance, is a simple method.
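A minimal ear-clipping sketch is below, assuming a simple polygon with counter-clockwise vertices, no holes, and at most 256 vertices; it emits index triples into out_indices and returns the triangle count. It is not robust against collinear or degenerate input.
#include <stdbool.h>

typedef struct { float x, y; } vec2;

static float cross2(vec2 o, vec2 a, vec2 b)
{
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

static bool point_in_triangle(vec2 p, vec2 a, vec2 b, vec2 c)
{
    /* p is inside a CCW triangle if it lies to the left of every edge. */
    return cross2(a, b, p) >= 0.0f &&
           cross2(b, c, p) >= 0.0f &&
           cross2(c, a, p) >= 0.0f;
}

int ear_clip(const vec2 *poly, int n, int *out_indices)
{
    int idx[256];                 /* remaining vertex indices (assumes n <= 256) */
    int tri_count = 0;

    for (int i = 0; i < n; ++i)
        idx[i] = i;

    while (n > 3) {
        bool clipped = false;

        for (int i = 0; i < n; ++i) {
            int ia = idx[(i + n - 1) % n], ib = idx[i], ic = idx[(i + 1) % n];
            vec2 a = poly[ia], b = poly[ib], c = poly[ic];

            if (cross2(a, b, c) <= 0.0f)          /* reflex corner: not an ear */
                continue;

            bool contains_other = false;          /* any other vertex inside?  */
            for (int j = 0; j < n; ++j) {
                int ij = idx[j];
                if (ij == ia || ij == ib || ij == ic)
                    continue;
                if (point_in_triangle(poly[ij], a, b, c)) {
                    contains_other = true;
                    break;
                }
            }
            if (contains_other)
                continue;

            /* (ia, ib, ic) is an ear: record it and remove ib. */
            out_indices[tri_count * 3 + 0] = ia;
            out_indices[tri_count * 3 + 1] = ib;
            out_indices[tri_count * 3 + 2] = ic;
            ++tri_count;

            for (int k = i; k < n - 1; ++k)
                idx[k] = idx[k + 1];
            --n;
            clipped = true;
            break;
        }
        if (!clipped)
            break;                                /* degenerate input: bail out */
    }

    /* Last remaining triangle. */
    out_indices[tri_count * 3 + 0] = idx[0];
    out_indices[tri_count * 3 + 1] = idx[1];
    out_indices[tri_count * 3 + 2] = idx[2];
    return ++tri_count;
}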
As for the coordinates to use: that's entirely up to you, as others have said. To keep it simple, I'd just work in pixel coordinates.
You can then scale and translate as needed to transform the shape for use.