Working with .off in OpenGL with normals - C++

How can I assign the right normals to the object? Is there a way to convert a .off file into an object that contains normals?

OFF files usually don't carry extra per-face or per-vertex attributes, so they have no normals.
The simplest thing you can do is simply calculate the normals yourself using one of the well-known and easy-to-implement face-normal algorithms. Here are a few examples.
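For instance, here is a minimal sketch of the usual cross-product approach (the Vec3 type and helper names are just illustrative):

#include <cmath>

struct Vec3 { float x, y, z; };

// Cross product of two edge vectors is perpendicular to the face.
Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Face normal of triangle (p0, p1, p2), assuming counter-clockwise winding.
Vec3 faceNormal(const Vec3& p0, const Vec3& p1, const Vec3& p2) {
    Vec3 e1 = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
    Vec3 e2 = { p2.x - p0.x, p2.y - p0.y, p2.z - p0.z };
    return normalize(cross(e1, e2));
}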

You can also look at doing normals via a texture (aka bump/normal mapping). You basically need to generate a normal map of your models in a 3D program, use some kind of per-pixel shading, and bind your shaders to use the normal texture.
The advantage to doing it that way is you can have higher detail without increasing the number of polygons. You can do things like generate a single grainy bumpy stone pattern on a single flat polygon.
Also if you generate your normals procedurally you can't mix flat and smooth shading.
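As a rough sketch of the "bind your shaders to use the normal texture" step (the program handle, texture id, and the sampler name "normalMap" are all illustrative, and an extension loader such as GLEW or GLAD is assumed to be set up):

// Attach an already-loaded normal-map texture to a shader program.
void bindNormalMap(GLuint program, GLuint normalTexId) {
    glUseProgram(program);
    glActiveTexture(GL_TEXTURE1);               // unit 1, leaving unit 0 for the colour map
    glBindTexture(GL_TEXTURE_2D, normalTexId);
    glUniform1i(glGetUniformLocation(program, "normalMap"), 1);
    // The fragment shader samples normalMap, remaps RGB from [0,1] to [-1,1],
    // and uses the result as the per-pixel normal in the lighting calculation.
}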

Related

Recreating Blender's Edge Split Algorithm

I'm working on an OpenGL project right now. I use per-vertex normals for my lighting calculations which causes them to be automatically smoothed (I think this is called gara-something shading).
I'm beginning to work on some low-poly designs using hardcoded models, but they look weird because of the auto-smoothing.
Blender has a mesh edge-split option that does exactly what I'm looking for. It turns models
from this
to this
Instead of having to rewrite my render system to allow for both per-vertex normal smooth lighting and per-face normal hard lighting, I was wondering if anyone knows how Blender's edge-split algorithm works so I can recreate it in my hardcoded models.
Using Blender I've compared normally exported .obj files with their edge-split counterparts, but I haven't been able to fully work out the difference.
Many thanks.
I don't have enough points for a comment, so here goes an answer..
In general, I agree with lfgtm: if each vertex has one normal, but the vertex is shared (reused, i.e. indexed) by several triangles, then you cannot achieve a 'flat' look or have split edges, because you lack the extra normals. You would have to duplicate the vertex.
But since you mention OBJ export, have you tried ticking the "Smooth groups" option in the OBJ export options:
This will create smoothing groups according to the edges you have marked sharp (In edit mode: Mesh -> Edges -> Mark sharp). If your engine supports these smoothing groups in the OBJ file, you will keep this sharp edge effect. You can preview these sharp edges in Blender using the Edge Split modifier (tick the "Sharp Edges" option):
Blender's edge split modifier achieves its result by breaking the mesh: if you apply the edge split modifier you get a mesh with faces that are not connected.
What you want to look into is the difference between smooth and flat shading
If you google "opengl flat shading" you will find several items like this one suggesting the use of glShadeModel(GL_FLAT);
That would be Gouraud shading. There's no geometry processing going on here by the looks of things; it's simply a different shading model.
In OpenGL, for each triangle, you need to draw all three vertices with the same normal; this normal is the cross product of two of the triangle's edges. You will then see a mesh with the same flat shading as your image.
If each vertex has its own normal, and these normals are not all equal within the triangle, OpenGL will interpolate the difference across the surface of the face, producing a much smoother transition of the shading from vertex to vertex. This is smooth (Gouraud) shading.
So, in summary, you can achieve this by changing how you specify the normals in relation to the triangles.
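A minimal sketch of that, reusing the Vec3 and faceNormal helpers sketched earlier and assuming an indexed triangle mesh (all names illustrative): unshare the vertices and give all three corners of each triangle the face normal.

#include <vector>

struct Vertex { Vec3 position; Vec3 normal; };

// Expand an indexed mesh into unshared triangles with per-face normals,
// which produces the hard-edged "edge split" look when drawn.
std::vector<Vertex> makeFlatShaded(const std::vector<Vec3>& positions,
                                   const std::vector<unsigned>& indices) {
    std::vector<Vertex> out;
    out.reserve(indices.size());
    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        const Vec3& p0 = positions[indices[i]];
        const Vec3& p1 = positions[indices[i + 1]];
        const Vec3& p2 = positions[indices[i + 2]];
        Vec3 n = faceNormal(p0, p1, p2);    // same normal for all three corners
        out.push_back({ p0, n });
        out.push_back({ p1, n });
        out.push_back({ p2, n });
    }
    return out;
}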

Is my situation a good case to use GL_STATIC_DRAW?

I have a textured polygon mesh that I plan to be move-able based on the user's various inputs.
For example: the user can move the vertices in various directions. But the number of vertices and the texture coordinates will always be constant.
Is this a good situation to use GL_STATIC_DRAW, or should I use something else, like GL_STREAM_DRAW?
Instead of updating a VBO every time the vertices are moved, I would suggest using transformations. With transformations, you can create a matrix that can translate, rotate, or scale the vertices by simply multiplying the transformation matrix by the position vector. This multiplication can be done on the graphics card with a GLSL shader. Using this method, your vertex buffer would never have to change.
I would suggest reading this article for more information on how to use transformations in OpenGL: https://open.gl/transformations
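As a rough sketch of that idea, using GLM for the matrix math (the uniform name "model" and the function name are illustrative, and GL headers plus a loader are assumed):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Build a model matrix on the CPU and hand it to the vertex shader, which
// multiplies it with every vertex position; the VBO itself never changes.
void uploadModelMatrix(GLuint program, const glm::vec3& offset, float angleRadians) {
    glm::mat4 model(1.0f);
    model = glm::translate(model, offset);
    model = glm::rotate(model, angleRadians, glm::vec3(0.0f, 1.0f, 0.0f));
    glUseProgram(program);
    glUniformMatrix4fv(glGetUniformLocation(program, "model"),
                       1, GL_FALSE, glm::value_ptr(model));
}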
No, your situation is not a good case for GL_STATIC_DRAW. As h4lcOn's link suggests, you should use dynamic or stream. Though if I understand correctly what you are trying to do, I wouldn't even use a VBO at all. There will not be much overhead (if any at all) if you push the coordinates every draw call for a simple polygon. Use a VBO in cases when you have a large quantity of polygons or when you make a large number of draw calls with the same vertex data in a single frame.
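And if you do keep a VBO for a larger mesh, here is a minimal sketch of updating it in place with a dynamic usage hint (the Vertex struct and the buffer handle are illustrative):

#include <vector>

// Allocate once with a usage hint that matches frequent updates...
void allocateDynamicVbo(GLuint vbo, size_t vertexCount) {
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(Vertex),
                 nullptr, GL_DYNAMIC_DRAW);
}

// ...and re-upload only the contents whenever the user moves vertices.
void updateDynamicVbo(GLuint vbo, const std::vector<Vertex>& vertices) {
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0,
                    vertices.size() * sizeof(Vertex), vertices.data());
}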

OpenGL: create complex and smoothed polygons

In my OpenGL project, I want to dynamically create smoothed polygons, similar to this one:
The problem lies mainly in the smoothing process. My procedure up to this point is to first create a VBO with randomly placed vertices.
Then, in my fragment shader (I'm using the programmable pipeline), the smoothing should happen; in other words, the curves should be created out of the previously defined "lines" between the vertices.
And exactly here is the problem: I am not very familiar with those complex mathematical algorithms that would determine whether a point is inside the "smoothed polygon" or not.
First up, you can't really do it in the fragment shader. The fragment shader is limited to setting the final(ish) color of a "pixel" (which is basically, but not exactly, an actual pixel) before it gets written to the screen. It can't create new points on a curve.
This page gives a nice overview of the different algorithms for creating smooth curves.
The general approach is to break a couple of points into multiple points using a geometry shader, and then render them just like a normal polygon. But I don't know the details. Try a Google search for "bezier geometry shader", for example.
Wait, I lie. I found a program here that does it in the fragment shader.
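Whichever stage does the work, the curve math itself is simple. As an illustration (not the linked program's code), here is a CPU-side sketch that subdivides a quadratic Bézier segment into points, which is essentially what a geometry-shader approach would emit on the GPU:

#include <vector>

struct Vec2 { float x, y; };

// Evaluate a quadratic Bezier defined by endpoints p0, p1 and control point c
// at steps + 1 positions along the curve.
std::vector<Vec2> subdivideQuadraticBezier(Vec2 p0, Vec2 c, Vec2 p1, int steps) {
    std::vector<Vec2> out;
    out.reserve(steps + 1);
    for (int i = 0; i <= steps; ++i) {
        float t = static_cast<float>(i) / steps;
        float u = 1.0f - t;
        // B(t) = (1-t)^2 * p0 + 2(1-t)t * c + t^2 * p1
        out.push_back({ u * u * p0.x + 2.0f * u * t * c.x + t * t * p1.x,
                        u * u * p0.y + 2.0f * u * t * c.y + t * t * p1.y });
    }
    return out;
}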

OpenGL, applying texture from image to isosurface

I have a program in which I need to apply a 2-dimensional texture (simple image) to a surface generated using the marching-cubes algorithm. I have access to the geometry and can add texture coordinates with relative ease, but the best way to generate the coordinates is eluding me.
Each point in the volume represents a single unit of data, and each unit of data may have different properties. To simplify things, I'm looking at sorting them into "types" and assigning each type a texture (or portion of a single large texture atlas).
My problem is I have no idea how to generate the appropriate coordinates. I can store the location of the type's texture in the type class and use that, but then seams will be horribly stretched (if two neighboring points use different parts of the atlas). If possible, I'd like to blend the textures on seams, but I'm not sure the best manner to do that. Blending is optional, but I need to texture the vertices in some fashion. It's possible, but undesirable, to split the geometry into parts for each type, or to duplicate vertices for texturing purposes.
I'd like to avoid using shaders if possible, but if necessary I can use a vertex and/or fragment shader to do the texture blending. If I do use shaders, what would be the most efficient way of telling them which texture or portion to sample? It seems like passing the type through a parameter would be the simplest way, but possibly slow.
My volumes are relatively small, 8-16 points in each dimension (I'm keeping them smaller to speed up generation, but there are many on-screen at a given time). I briefly considered making the isosurface twice the resolution of the volume, so each point has more vertices (8, in theory), which may simplify texturing. It doesn't seem like that would make blending any easier, though.
To build the surfaces, I'm using the Visualization Library for OpenGL and its marching cubes and volume system. I have the geometry generated fine, just need to figure out how to texture it.
Is there a way to do this efficiently, and if so what? If not, does anyone have an idea of a better way to handle texturing a volume?
Edit: Just to note, the texture isn't simply a gradient of colors. It's actually a texture, usually with patterns. Hence the difficulty in mapping it, a gradient would've been trivial.
Edit 2: To help clarify the problem, I'm going to add some examples. They may just confuse things, so consider everything above as definite fact and these only as extra help where they can be.
My geometry is in cubes, always (loaded, generated and saved in cubes). If shape influences possible solutions, that's it.
I need to apply textures, consisting of patterns and/or colors (unique ones depending on the point's "type") to the geometry, in a technique similar to the splatting done for terrain (this isn't terrain, however, so I don't know if the same techniques could be used).
Shaders are a quick and easy solution, although I'd like to avoid them if possible, as I mentioned before. Something usable in a fixed-function pipeline is preferable, mostly for the minor increase in compatibility and development time. Since it's only a minor increase, I will go with shaders and multipass rendering if necessary.
Not sure if any other clarification is necessary, but I'll update the question as needed.
On the texture combination part of the question:
Have you looked into 3D textures? As we're talking marching cubes, I should probably say immediately that I'm explicitly not talking about volumetric textures. Instead you stack all your 2D textures into a 3D texture. You then encode each texture coordinate as the 2D position it would normally be, with the texture it references as the third coordinate. This works best if your textures are generally of the type where, logically, to transition from one kind of pattern to another you have to pass through the intermediaries.
An obvious use example is texture mapping to a simple height map — you might have a snow texture on top, a rocky texture below that, a grassy texture below that and a water texture at the bottom. If a vertex that references the water is next to one that references the snow then it is acceptable for the geometry fill to transition through the rock and grass texture.
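A minimal sketch of building such a stacked texture (function and parameter names are illustrative; it assumes same-sized RGBA8 layers and an OpenGL 1.2+ context):

#include <vector>

// Stack 'layerCount' equally sized 2D RGBA images into one 3D texture;
// layers[i] points at width * height RGBA8 texels.
GLuint createStackedTexture(int width, int height, int layerCount,
                            const std::vector<const unsigned char*>& layers) {
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, width, height, layerCount,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    for (int i = 0; i < layerCount; ++i) {
        // Upload one 2D slice at depth i.
        glTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, i, width, height, 1,
                        GL_RGBA, GL_UNSIGNED_BYTE, layers[i]);
    }
    // Linear filtering along the depth axis is what blends neighbouring
    // textures when the third coordinate falls between two layers.
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}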
An alternative is to do it in multiple passes using additive blending. For each texture, draw every face that uses that texture and draw a fade to transparent extending across any faces that switch from one texture to another.
You'll probably want to prep the depth buffer with a complete draw (with the colour masks all set to reject changes to the colour buffer) then switch to a GL_EQUAL depth test and draw again with writing to the depth buffer disabled. Drawing exactly the same geometry through exactly the same transformation should produce exactly the same depth values irrespective of issues of accuracy and precision. Use glPolygonOffset if you have issues.
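A rough sketch of that pass structure (the draw callbacks stand in for your own draw code and are purely illustrative):

#include <functional>
#include <vector>

// Depth prepass plus one additive pass per texture, as described above.
// 'drawAll' draws every face; each entry in 'drawForTexture' binds one
// texture and draws only the faces that use it.
void drawMultipassBlended(const std::function<void()>& drawAll,
                          const std::vector<std::function<void()>>& drawForTexture) {
    // Pass 1: fill the depth buffer only; colour writes are rejected.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    drawAll();

    // Passes 2..n: depth writes off, equal depth test, additive blending.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);
    glDepthFunc(GL_EQUAL);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE);
    for (const auto& pass : drawForTexture)
        pass();

    // Restore common defaults.
    glDisable(GL_BLEND);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
}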
On the coordinates part:
Popular and easy mappings are cylindrical, box and spherical. Conceptualise that your shape is bounded by a cylinder, box or sphere with a well defined mapping from surface points to texture locations. Then for each vertex in your shape, start at it and follow the normal out until you strike the bounding geometry. Then grab the texture location that would be at that position on the bounding geometry.
I guess there's a potential problem that normals tend not to be brilliant after marching cubes, but I'll wager you know more about that problem than I do.
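For completeness, a minimal sketch of the spherical variant, which maps a normalized direction (the vertex normal, or the direction from the object's centre) straight to texture coordinates; a strict "follow the normal until it hits the bounding geometry" version would intersect a ray instead. Vec3 is the simple type sketched earlier:

#include <cmath>

struct TexCoord { float u, v; };

// Spherical mapping for a normalized, y-up direction vector.
TexCoord sphericalUV(const Vec3& dir) {
    const float PI = 3.14159265358979f;
    float u = 0.5f + std::atan2(dir.z, dir.x) / (2.0f * PI); // longitude -> [0,1]
    float v = std::acos(dir.y) / PI;                         // latitude  -> [0,1]
    return { u, v };
}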
This is a hard and interesting problem.
The simplest way is to avoid the issue completely by using 3D texture maps, especially if you just want to add some random surface detail to your isosurface geometry. Perlin noise based procedural textures implemented in a shader work very well for this.
The difficult way is to look into various algorithms for conformal texture mapping (also known as conformal surface parametrization), which aim to produce a mapping between 2D texture space and the surface of the 3D geometry that is in some sense optimal (least distorting). This paper has some good pictures. Be aware that the topology of the geometry is very important: it's easy to generate a conformal mapping to map a texture onto a closed surface like a brain, but considerably more complex for higher-genus objects, where it's necessary to introduce cuts/tears/joins.
You might want to try making a UV Map of a mesh in a tool like Blender to see how they do it. If I understand your problem, you have a 3D field which defines a solid volume as well as a (continuous) color. You've created a mesh from the volume, and now you need to UV-map the mesh to a 2D texture with texels extracted from the continuous color space. In a tool you would define "seams" in the 3D mesh which you could cut apart so that the whole mesh could be laid flat to make a UV map. There may be aliasing in your texture at the seams, so when you render the mesh it will also be discontinuous at those seams (ie a triangle strip can't cross over the seam because it's a discontinuity in the texture).
I don't know any formal methods for flattening the mesh, but you could imagine cutting it along the seams and then treating the whole thing as a spring/constraint system that you drop onto a flat surface. I'm all about solving things the hard way. ;-)
Due to the issues with texturing and some of the constraints I have, I've chosen to write a different algorithm to build the geometry and handle texturing directly in that as it produces surfaces. It's somewhat less smooth than the marching cubes, but allows me to apply the texcoords in a way that works for my project (and is a bit faster).
For anyone interested in texturing marching cubes, or just blending textures, Tommy's answer is a very interesting technique and the links timday posted are excellent resources on flattening meshes for texturing. Thanks to both of them for their answers, hopefully they can be of use to others. :)

Techniques to smooth face edges in OpenGL

When I light my human model in OpenGL using face normals it is very apparent where each face is on the model. The lighting becomes considerably smoother using vertex normals but still noticeable. Is there any technique available to smooth organic models without adding additional vertices (ex. subdivisions)?
If you are seeing individual faces even with per-vertex normals, you might have forgotten to enable smooth shading:
glShadeModel(GL_SMOOTH);
If that's not helping, make sure all vertices with the same position also have the same normal. This should be done by Blender automatically, however. It's also possible that your model just has too few polygons. If that's the case you either have to use more polygons or do what basszero said: create a normal map and use it together with a GLSL shader.
If you've got access to a higher resolution model/mesh or perhaps a decent 3d modeling tool that can create a highres model via subdivisions or other fancy methods...
GLSL per-pixel-lighting + normal maps
UPDATE:
http://www.blender.org/development/release-logs/blender-246/render-baking/
Yes, there are many different techniques. One is to first calculate a normal for each face (which you are probably already using for the vertices in that face) and then use these face normals to calculate new, averaged normals for each vertex.
Here's one where you calculate the average of five adjacent triangles.
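A minimal sketch of that averaging idea for an indexed triangle mesh, reusing the Vec3, faceNormal and normalize helpers sketched earlier (all names illustrative):

#include <vector>

// Average the normals of all faces that touch each vertex.
std::vector<Vec3> computeSmoothNormals(const std::vector<Vec3>& positions,
                                       const std::vector<unsigned>& indices) {
    std::vector<Vec3> normals(positions.size(), Vec3{0.0f, 0.0f, 0.0f});
    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        Vec3 n = faceNormal(positions[indices[i]],
                            positions[indices[i + 1]],
                            positions[indices[i + 2]]);
        for (int k = 0; k < 3; ++k) {            // accumulate onto each corner
            Vec3& v = normals[indices[i + k]];
            v.x += n.x; v.y += n.y; v.z += n.z;
        }
    }
    for (Vec3& n : normals)
        n = normalize(n);                        // renormalize the sums
    return normals;
}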
Cheers !