Mesh deformation and VBOs - opengl

In my current project I render a series of basically cubic 3D models arranged in a grid. These 3D tiles form the walls of a dungeon level in a game, so they're not perfectly cubic, but I pay special attention to be certain all the edges line up and everything tiles correctly.
I'm interested in implementing a height-map deformation, which seems like it'd require me to manually deform the vertices of the 3D tiles, first by raising or lowering a corner, then by calculating a line between two corners and shifting all the vertices based on the height of that line. Seems pretty straightforward.
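A sketch of that deformation step, assuming each tile stores just its four corner heights (the function and parameter names here are mine, not from any particular engine):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch: given the four corner heights of a tile, offset any
// vertex inside the tile by bilinearly interpolating those heights.
// (x, z) are the vertex's local coordinates in [0, 1] across the tile;
// h00/h10/h01/h11 are the corner heights at (0,0), (1,0), (0,1), (1,1).
float cornerHeightOffset(float h00, float h10, float h01, float h11,
                         float x, float z)
{
    float bottom = h00 + (h10 - h00) * x; // interpolate along the z = 0 edge
    float top    = h01 + (h11 - h01) * x; // interpolate along the z = 1 edge
    return bottom + (top - bottom) * z;   // interpolate between the two edges
}
```

Because a vertex's offset depends only on the corner heights and its position within the tile, two tiles that share an edge (and therefore share the corner heights along it) produce identical offsets for the shared vertices, so no gaps open up between tiles.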
My current issue is this: I'm using OpenGL, which provides an optimization called VBOs, which are (to my understanding) static copies of the mesh kept in GPU memory for speed. I render using VBOs because I only use three basic models (L-corner, straight wall, and a cap to join walls when they don't meet in an L). If I have to manually fiddle with the vertices of my models, it seems I'd have to replace the contents of the VBO for every tile, which pretty much negates the point of using them.
It seems to me that I might be able to use simple rotation and translation transforms to achieve a similar effect, but I can't figure out how to do it without leaving gaps between the tiles. Any thoughts?

You may be able to use a vertex program on your GPU. The main difficulty (if I understand your problem correctly) is that vertex programs must rely on either global or per-vertex parameters, and there is a strictly limited amount of space available for each.
Without more details, I can only suggest being clever about how you set up the parameters...

Related

OpenGL - How to render many different models?

I'm currently struggling to find a good approach to render many (thousands of) slightly different models. The model itself is a simple cube with some vertex offset; think of a skewed quad face. Each 'block' has a different offset for its vertices, so basically I have a voxel engine on steroids: each block is not a perfect cube but rather a skewed cuboid. Rendering this shape takes 48 vertices, which can be cut to 24 as only 3 faces are visible, and with indexing down to 12 (4 for each face).
But, now that I have the vertices for each block in the world, how do I render them?
What I've tried:
Instanced rendering. Sounds good, but doesn't work, as my models are not all the same.
I could simplify distant blocks to plain cubes and render those with glDrawArraysInstanced/glDrawElementsInstanced.
Putting everything in one giant VBO. This performs better than rendering each cube individually, but has the downside of producing one large mesh. This is not desirable, as I need every cube to have different textures, lighting, etc... Selecting a single cube within that huge mesh is not possible.
I am aware of frustum culling and occlusion culling, but I already have performance problems with just the cubes in front of me (tested with a 128x128 world).
My requirements:
Draw some thousand models.
Each model has vertex offsets to make the block less cubic, stored in another VBO.
Each block has to be an individual object, as you should be able to place/remove blocks.
Any good performance advice?
"This is not desirable as I need every cube to have different textures, lighting, etc... Selecting a single cube within that huge mesh is not possible."
Programmers should avoid declaring that something is "impossible"; it limits your thinking.
Giving each face of these cubes different textures has many solutions. The Minecraft approach uses texture atlases. Each "texture" is really just a sub-section of one large texture, and you use texture coordinates to select which sub-section a particular face uses. But you can get more complex.
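The atlas lookup described above can be sketched as follows (a minimal illustration, assuming the atlas is a uniform grid of equally sized sub-textures; the names are mine):

```cpp
#include <cassert>

// Hypothetical sketch of the atlas approach: the atlas is a grid of
// tilesPerRow x tilesPerRow sub-textures. Given a tile index and a face's
// local (u, v) in [0, 1], compute the coordinates into the big atlas texture.
struct UV { float u, v; };

UV atlasUV(int tileIndex, int tilesPerRow, float u, float v)
{
    float tileSize = 1.0f / tilesPerRow;       // width of one sub-texture in UV space
    int col = tileIndex % tilesPerRow;         // column of the sub-texture
    int row = tileIndex / tilesPerRow;         // row of the sub-texture
    return { (col + u) * tileSize, (row + v) * tileSize };
}
```

In practice you would bake these coordinates into the vertex data (or compute them in the vertex shader from a per-face tile index), so every face simply points at its own sub-section of the one bound texture.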
Array textures allow for a more direct way to solve this problem. Here, the texture coordinates would be the same, but you use a per-vertex integer to select the correct texture for a face. All of the vertices of a particular face would carry the same index. And if you're clever, you don't even really need texture coordinates: you can generate them in your vertex shader, based on per-vertex values like gl_VertexID and the like.
Lighting parameters would work the same way: use some per-vertex data to select parameters from a UBO or SSBO.
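One possible vertex layout for this scheme, as a sketch (the struct and field names are mine, not a standard):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical per-vertex data for the selection idea above: one integer
// picks the array-texture layer for the face, another indexes a material
// (lighting parameters) in a UBO/SSBO. All four vertices of a face carry
// the same two indices.
struct BlockVertex {
    float    position[3];
    uint32_t textureLayer;  // layer in a GL_TEXTURE_2D_ARRAY
    uint32_t materialIndex; // slot in a materials UBO/SSBO
};
// On the GL side, position would be set up with glVertexAttribPointer and the
// two integer attributes with glVertexAttribIPointer, using
// sizeof(BlockVertex) as the stride and offsetof(...) for the offsets.
```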
As for the "individual object" bit, that's merely a matter of how you're thinking about the problem. Do not confuse what happens in the player's mind with what happens in your code. Games are an elaborate illusion; just because something appears to the user to be an "individual object" doesn't mean it is one to your rendering engine.
What you need is the ability to modify your world's data to remove and add new blocks. And if you need to show a block as "selected" or something, then you simply need another per-block value (like the lighting parameters and index for the texture) which tells you whether to draw it as a "selected" block or as an "unselected" one. Or you can just redraw that specific selected block. There are many ways of handling it.
Any decent graphics card (since about 2010) can render a few million vertices in the blink of an eye.
The right approach depends on how much changes per frame; in other words, how much data must be transferred to the GPU each frame.
For a small number of changes, storing the data in one big VBO or in many smaller VBOs (with their VAOs), sending the changes via uniforms, and issuing several glDraw* calls all show similar performance. Different hardware behaves with little difference. Indexed data may improve speed.
When most of the data changes every frame and those changes are hard or impossible to do in the shaders, then your app is memory-transfer bound. Streaming is good advice there.

Creating Huge Low Poly Terrain

I am starting a new project in C++ using GLFW and GLEW.
The plan is to have a fairly big Low Poly terrain. It will NOT be randomly generated; I am planning on making it in Blender.
My problem is that I cannot create a huge Low Poly terrain in Blender, because the program becomes really slow with the number of vertices the terrain has. I created a 500m x 500m terrain and subdivided it by 1000. That gave me a LOT of vertices, making the program unusable.
What would be the best approach to creating a huge terrain?
I'm not sure how I would go about creating chunks of the terrain, since I have to model them.
How do I create a big Low Poly terrain without the program becoming slow?
Another concern of mine is obviously loading the world into a custom game engine of mine. I suppose a big world like this would have huge problems with the load times.
Terrain in game engines like Unity, Unreal Engine and CryEngine is treated differently from your average static or skeletal mesh. Creation of the different levels of detail is usually done at runtime, whereas ordinary meshes have their LODs pre-created. Loading a mesh from a 3D program like Blender or 3DS Max as your entire terrain just isn't doable.
The Direct3D tutorials at rastertek are very good for learning, though obviously they aren't OpenGL. Here is a basic tutorial on creating a basic terrain in Java OpenGL (I don't think it goes into LOD handling):
Java OpenGL terrain
Most commonly I think I've seen a quad-tree system, where you have terrain patches and each patch is subdivided into four child patches depending on a condition (distance to camera, or screen-space size).
This is what a standard quad-tree LOD system looks like, in particular for the game Kerbal Space Program.
Along the way you'll need to solve some problems, like getting rid of the cracks and gaps between two terrain patches at different LOD levels. Kerbal Space Program solved this by treating the edge vertices specially so they line up, and by not allowing any two adjacent terrain patches to differ by more than one LOD level.
One method I tried was to upload two positions for each vertex, the current LOD position and the position of the corresponding vertex one LOD level down, and linearly interpolate between the two based on camera distance. I'm pretty sure there are more elegant ways than this, though.
I've posted a video from a while ago of me messing around with this stuff; it shows the basic quad-tree pattern, the problem of cracks, and then the vertex interpolation method. Some people create the patches on the CPU, others on the GPU, reading back any necessary info (for physics, for example) using transform feedback. There are lots of ways of doing things, and I hope to get back into it.
TerrainPatches
I did something similar many years ago and found this tutorial very helpful:
http://www.rastertek.com/tertut05.html
It describes creating a quad tree with the triangles of your terrain mesh partitioned into AABBs; using frustum culling, huge parts of your terrain can be culled at runtime, and your application's performance should improve. As long as you are confident importing meshes exported from Blender (are they in .obj?), you should easily be able to partition the triangles using the strategies outlined in the tutorial.
A further optimization could be to have various LODs for the nodes in your quadtree depending on distance from the camera, i.e. if a node is beyond a set distance from the camera, render a lower-poly mesh by skipping certain vertices so the smaller triangles "collapse" into larger ones. I'd recommend generating specific index lists for this and reusing the same vertex data, as opposed to keeping separate pre-generated chunks of mesh, to save memory.
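The index-list idea above can be sketched like this (a minimal illustration over a regular vertex grid; the function name and layout are my own assumptions):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical sketch: for a (size+1) x (size+1) vertex grid stored in one
// VBO (row-major), build an index list for a given LOD by stepping over
// vertices. step = 1 gives the full-resolution mesh; step = 2 collapses four
// quads into one; and so on. The vertex data itself is never duplicated.
std::vector<uint32_t> gridIndices(int size, int step)
{
    std::vector<uint32_t> indices;
    int stride = size + 1; // vertices per row
    for (int z = 0; z < size; z += step)
        for (int x = 0; x < size; x += step) {
            uint32_t i0 = z * stride + x;              // top-left corner
            uint32_t i1 = i0 + step;                   // top-right corner
            uint32_t i2 = (z + step) * stride + x;     // bottom-left corner
            uint32_t i3 = i2 + step;                   // bottom-right corner
            // two triangles per quad
            indices.insert(indices.end(), { i0, i2, i1, i1, i2, i3 });
        }
    return indices;
}
```

Each LOD is then just a different index buffer drawn with glDrawElements against the same vertex buffer.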

OpenGL 3.2+ Drawing Cubes around existing vertices

So I have a cool program that renders a pretty cube in the centre of the screen.
I'm trying to now create a tiny cube on each corner of the existing cube (so 8 tiny cubes), centred on each of the existing cubes corners (or vertices).
I'm assuming an efficient way to implement this would be with a loop of some kind, to minimise the amount of code.
My query is, how does this affect the VAOs/VBOs? Even in a loop, would each one need its own buffer, or could they all be sent at the same time?
Secondly, if it can be done, what would the structure of this loop look like, given that each cube is centred on a different vertex with different coordinates?
As Vaughn Cato said, each object (using the same VBOs) can simply be drawn at a different location in world space, so you do not need to define separate VBOs for each object.
To complete this task, you simply need a loop that modifies the model matrix before each cube is rendered, changing the origin at which each cube is drawn.
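The loop can be sketched as follows (a minimal illustration; the helper name is mine, and the parent cube is assumed to be centred on the origin with half-extent h):

```cpp
#include <array>
#include <cassert>
#include <vector>

// Hypothetical sketch: the parent cube's 8 corners are (±h, ±h, ±h). Loop
// over the 3 sign bits of i to get the translation for each tiny cube; in
// the render loop you would translate the model matrix by each offset,
// upload it as a uniform, and draw the same cube VBO eight times.
std::vector<std::array<float, 3>> cornerOffsets(float h)
{
    std::vector<std::array<float, 3>> offsets;
    for (int i = 0; i < 8; ++i) {
        float x = (i & 1) ? h : -h;  // bit 0 selects the x sign
        float y = (i & 2) ? h : -h;  // bit 1 selects the y sign
        float z = (i & 4) ? h : -h;  // bit 2 selects the z sign
        offsets.push_back({ x, y, z });
    }
    return offsets;
}
```

Only one VAO/VBO pair is needed; the eight draws differ only in the model matrix uniform.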

Using vertex array, quad array to create mesh of quad

I'm doing an OpenGL program and I'm required to create a quad mesh of user-defined dimensions.
From what I understand so far, I use an array of vertices to draw quads that together form a quad mesh. It's a pretty simple concept, but I'm having a tough time understanding it.
So please correct me if I understand it incorrectly.
So if a user wants a 4x4 mesh, there will be 16 quads altogether and 64 vertices to place them.
So as user defines the resolution of mesh (oh by the way, the boundary size is already given at the beginning), I create those 64 vertices.
Am I getting it correct so far?
I'm going to interact with those quads and reshape them to form mountain-like shapes.
Of course I would need a bigger resolution, probably 32x32 or even more, to properly display such a thing.
If I understand you correctly, you want to make a mesh of quads that in turn creates one big quad.
It would look something like this for a 4x4
_ _ _ _
|_|_|_|_|
|_|_|_|_|
|_|_|_|_|
|_|_|_|_|
In this case you would only need (4+1)x(4+1) vertices, so 25 vertices. You could specify the four vertices of each quad uniquely, but aside from being unnecessary for a mesh (and wasting memory and speed), it might end up making it harder to add that "mountain" functionality (something like a terrain, I'm guessing). If you move a vertex in the middle of the mesh, you probably want it to move for all four quads that use it; if the quads share vertices (as they do in a mesh), moving one point changes all the quads using that point.
Basically, specify all the unique vertices and connect them as quads. Redundancy is best avoided.
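The shared-vertex layout above can be sketched like this (a minimal illustration; the struct and function names are my own, not from any library):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical sketch: build an n x n quad mesh over a square of side
// `extent`, sharing vertices between neighboring quads. There are
// (n+1)*(n+1) vertices and n*n quads; each quad references its four
// corners by index.
struct QuadMesh {
    std::vector<float>    vertices; // x, y, z triples (z = 0, displaced later)
    std::vector<uint32_t> quads;    // 4 indices per quad, counter-clockwise
};

QuadMesh makeQuadMesh(int n, float extent)
{
    QuadMesh m;
    float step = extent / n;
    for (int j = 0; j <= n; ++j)        // one row of vertices per grid line
        for (int i = 0; i <= n; ++i)
            m.vertices.insert(m.vertices.end(), { i * step, j * step, 0.0f });
    int stride = n + 1;                 // vertices per row
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i) {
            uint32_t v = j * stride + i;
            m.quads.insert(m.quads.end(),
                           { v, v + 1, v + stride + 1, v + stride });
        }
    return m;
}
```

Raising the "mountain" then means changing a single entry in `vertices`; every quad that indexes that vertex follows automatically.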
Also, if you're learning OpenGL, make sure you're not learning with the fixed function pipeline (if you have a glBegin() or a glVertex3f() in there, you're using the fixed function pipeline). There are far too many reasons why you shouldn't use it (it's decades old, it's slower, it's not nearly as flexible), but perhaps the biggest reason is that it's a waste of time if you want to do real graphics. You'd have to learn how to use the programmable pipeline, and having fixed function pipeline habits and ways of thinking in your head will only make it harder than it has to be.

Perfect filled triangle rendering algorithm?

Where can I get an algorithm to render filled triangles? Edit3: I can't use OpenGL to render it; I need a per-pixel algorithm.
My goal is to render a regular polygon from triangles, so if I use this triangle-filling algorithm, the edges of neighboring triangles mustn't overlap or leave gaps between them, because that would produce rendering errors if I use, for example, XOR to render the pixels.
Therefore, the render quality should match OpenGL rendering: I should be able to define, for example, a circle with N vertices, and it would render correctly as a circle at any size; so it shouldn't use only integer coordinates like some triangle-filling algorithms do.
I would need the ability to control the triangle filling myself: I could add my own logic on how each of the individual pixels is rendered. So I need the bare code behind the rendering, to have full control over it. It should be efficient enough to draw tens of thousands of triangles in under a second or so. (I'm not sure how fast it can be at best, but I hope it won't take more than 10 seconds.)
Preferred language would be C++, but I can convert other languages to my needs.
If there are no free algorithms for this, where can I learn to build one myself, and how hard would that actually be? (me = math noob)
I added OpenGL tag since this is somehow related to it.
Edit2: I tried the algorithm here: http://joshbeam.com/articles/triangle_rasterization/ But it seems to be slightly broken; here is a circle of 64 triangles rendered with it:
But if you zoom in, you can see the errors:
Explanation: there are 2 pixels overlapping the other triangle's color, which should not happen! (Otherwise transparency or XOR effects will produce bad rendering.)
It seems like the errors are more visible on smaller circles. This is not acceptable if I want a XOR effect on the pixels.
What can I do to fix these, so it will fill it perfectly without overlapped pixels or gaps?
Edit4: I noticed that rendering very small circles wasn't very good; I realised this was because the coordinates were indeed being converted to integers. How can I treat the coordinates as floats and make it render the circle as precisely and perfectly as OpenGL does? Here is an example of how bad the small circles look:
Notice how perfect the OpenGL render is! THAT is what I want to achieve without using OpenGL. NOTE: I don't just want to render a perfect circle, but any polygon shape.
There's always the half-space method.
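A minimal sketch of the half-space method with a top-left style fill rule, which is exactly what prevents the overlaps and gaps described above: each pixel on a shared edge is claimed by exactly one of the two triangles. The function names and the y-up, counter-clockwise convention are my own choices for this illustration:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Half-space triangle fill with a top-left fill rule. Vertices must be given
// counter-clockwise (y up); pixel centers are sampled at (x + 0.5, y + 0.5),
// and coordinates may be arbitrary floats.
struct Pt { float x, y; };

// Signed area test: positive when p is to the left of the directed edge a->b.
static float edgeFn(Pt a, Pt b, float px, float py)
{
    return (b.x - a.x) * (py - a.y) - (b.y - a.y) * (px - a.x);
}

// An edge "owns" pixels lying exactly on it only if it is a top or left edge.
static bool isTopLeft(Pt a, Pt b)
{
    return (a.y == b.y && b.x < a.x)  // horizontal top edge (goes left)
        || (a.y > b.y);               // left edge (goes down)
}

// Increment coverage[y * width + x] for every pixel the triangle fills.
void fillTriangle(Pt v0, Pt v1, Pt v2, int width, int height,
                  std::vector<int>& coverage)
{
    int minX = std::max(0, (int)std::floor(std::min({v0.x, v1.x, v2.x})));
    int maxX = std::min(width - 1, (int)std::ceil(std::max({v0.x, v1.x, v2.x})));
    int minY = std::max(0, (int)std::floor(std::min({v0.y, v1.y, v2.y})));
    int maxY = std::min(height - 1, (int)std::ceil(std::max({v0.y, v1.y, v2.y})));
    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x) {
            float px = x + 0.5f, py = y + 0.5f;
            float e0 = edgeFn(v0, v1, px, py);
            float e1 = edgeFn(v1, v2, px, py);
            float e2 = edgeFn(v2, v0, px, py);
            // Boundary pixels (e == 0) count only for top-left edges.
            bool in0 = isTopLeft(v0, v1) ? e0 >= 0 : e0 > 0;
            bool in1 = isTopLeft(v1, v2) ? e1 >= 0 : e1 > 0;
            bool in2 = isTopLeft(v2, v0) ? e2 >= 0 : e2 > 0;
            if (in0 && in1 && in2)
                coverage[y * width + x]++;
        }
}
```

Splitting a square into two triangles along its diagonal and rasterizing both touches every pixel exactly once, which is the property an XOR-based renderer needs.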
OpenGL uses the GPU to perform this job. This is accelerated in hardware and is called rasterization.
As far as I know, the hardware implementation is based on the scan-line algorithm.
This used to be done by creating the outline and then filling in the horizontal lines. See this link for more details: http://joshbeam.com/articles/triangle_rasterization/
Edit: I don't think this will produce the lone pixels you are after; there should be a pixel on every line.
Your problem looks a lot like the classic problem of triangles sharing the very same edge. The usual solution is that one triangle is allowed to conquer the shared pixels while the other has to leave them blank.
On a graphics card you usually get this behavior by applying a drawing order from left to right while also enabling a z-buffer test, or by testing whether the pixel has already been drawn: if a pixel with the very same z-value is already set, changing it is not allowed.
In your example with the circles, the shared line of two neighboring circle segments is not exact. You have to check whether the edges are calculated differently, and why.
Whenever you draw two different shapes and see something like that, you can either fix your model (so they share all the edge vertices), go for a z-buffer test, or use a color test.
You can also minimize the effect by drawing edges into a sub-buffer with a higher resolution and down-sampling it. Since this does not affect the whole area, it is more cost-effective in space and time than down-sampling the whole scene.