Using a vertex array and quad array to create a mesh of quads - C++

I'm writing an OpenGL program and I'm required to create a quad mesh with user-defined dimensions.
What I understand so far is that I use an array of vertices to draw the quads that together form the mesh. It's a pretty simple concept, but I'm having a tough time understanding it.
So please correct me if I've understood it incorrectly.
If a user wants a 4x4 mesh, there will be 16 quads altogether and 64 vertices to place them.
So as the user defines the resolution of the mesh (by the way, the boundary size is already given at the beginning), I create those 64 vertices.
Am I getting it right so far?
I'm going to interact with those quads and reshape them to form mountain-like shapes.
Of course I would need a bigger resolution, probably 32x32 or even more, to display such a thing properly.

If I understand you correctly, you want to make a mesh of quads that in turn creates one big quad.
It would look something like this for a 4x4:
_ _ _ _
|_|_|_|_|
|_|_|_|_|
|_|_|_|_|
|_|_|_|_|
In this case you would only need (4+1)x(4+1) vertices, so 25 vertices. You could specify the four vertices of each quad uniquely, but aside from being unnecessary for a mesh (and wasting memory and speed), it might make that "mountain" functionality (something like a terrain, I'm guessing) harder to add. If you move a vertex in the middle of the mesh, you probably want all four quads that use that vertex to follow, and if the quads share vertices (as they do in a mesh), moving one point automatically updates every quad that uses it.
Basically, specify all the unique vertices and connect them as quads. Redundancy is best avoided.
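To make that concrete, here is a minimal C++ sketch of generating such a grid (the function name, the XZ-plane layout and the split of each quad into two triangles are my own choices, not anything mandated by the question):

    #include <vector>

    struct Vec3 { float x, y, z; };

    // Build an n x n quad mesh spanning [0, size] on the XZ plane.
    // Produces (n+1)*(n+1) shared vertices and 6 indices per quad
    // (two triangles), suitable for glDrawElements with GL_TRIANGLES.
    void buildGrid(int n, float size,
                   std::vector<Vec3>& vertices,
                   std::vector<unsigned>& indices)
    {
        const float step = size / n;
        for (int row = 0; row <= n; ++row)
            for (int col = 0; col <= n; ++col)
                vertices.push_back({col * step, 0.0f, row * step});

        for (int row = 0; row < n; ++row)
            for (int col = 0; col < n; ++col) {
                unsigned i = row * (n + 1) + col; // corner vertex of this quad
                indices.insert(indices.end(),
                               {i, i + n + 1u, i + n + 2u,   // first triangle
                                i, i + n + 2u, i + 1u});     // second triangle
            }
    }

Raising the terrain later is then just a matter of changing the y of one shared vertex; every quad referencing it follows automatically.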
Also, if you're learning OpenGL, make sure you're not learning with the fixed function pipeline (if you have a glBegin() or a glVertex3f() in there, you're using the fixed function pipeline). There are far too many reasons why you shouldn't use it (it's decades old, it's slower, it's not nearly as flexible), but perhaps the biggest reason is that it's a waste of time if you want to do real graphics. You'd have to learn how to use the programmable pipeline, and having fixed function pipeline habits and ways of thinking in your head will only make it harder than it has to be.

Related

Texture tiling with continuous random offset?

I have a texture and a mesh; if I apply the texture to the mesh, it tiles continuously, as one would expect. The offset of each tile is the same.
The problem:
Non-tilable textures, or textures with some outstanding elements, look repetitive and cheap.
Solution Attempt
My first attempt was to programmatically generate a texture the size of the mesh, with a randomised offset for each tile. Of course the size of that texture became a problem, not to mention the GPU's limit on the maximum size of a single texture.
What I would like to do
I would like to know if there is a way to make a Unity shader or material that loads a single texture and tiles it with a random offset for each tile, doing this only once so that performance stays high.
I believe you might try one of the techniques described by Inigo Quilez (http://www.iquilezles.org/www/articles/texturerepetition/texturerepetition.htm).
Basically, non-tilable textures and textures with some outstanding elements are different problems.
Non-tilable textures
There are 2 ways of solving it:
Fixing the texture itself;
Mirrored repeat can be used in some cases (see GL_MIRRORED_REPEAT)
Textures with some outstanding elements
This can be solved in the following ways (or a combination of them):
Modifying the texture (this includes enlargement as well);
Using multitexturing;
Well, maybe mirrored repeat can be used as well in some cases.
Shifting texture coordinates randomly
Unfortunately, I can't think of any case of these 2 problems (except, maybe, white noise textures) where shifting texture coordinates is a solution.
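For completeness: the Quilez article linked above does use random per-tile offsets, but hides the seams by smoothly blending between neighbouring offsets near tile borders, which is why it avoids the problem described here. Its raw ingredient is a hash of the integer tile index; here is a CPU-side C++ illustration of just that ingredient (the constants are the common shader-hash idiom, not taken verbatim from the article):

    #include <cmath>

    // Pseudo-random value in [0, 1) per integer tile index, mimicking
    // the usual shader hash: fract(sin(k) * large_number).
    float tileHash(int tileX, int tileY)
    {
        float h = std::sin(tileX * 127.1f + tileY * 311.7f) * 43758.5453f;
        return h - std::floor(h); // fract()
    }

    // Shift a UV coordinate by a constant random offset per tile.
    void randomizedUV(float u, float v, float& outU, float& outV)
    {
        int tx = (int)std::floor(u);
        int ty = (int)std::floor(v);
        outU = u + tileHash(tx, ty); // same offset for every texel in a tile
        outV = v + tileHash(ty, tx);
    }

In a real shader this runs per fragment, and the blending step is what keeps the tile borders from showing.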
You are looking at this problem the wrong way. All games face this issue. They hide it simply by a) varying textures a lot instead of texturing large areas with the same texture, and b) through level design. Imagine this plane filled with barns, grass, trees, fences and what not - suddenly the mono-textured surface blends in with its surroundings. Camera angle also plays a huge role in this. Try moving your camera close to the ground and the repeating texture becomes much less noticeable.
Your plane is just a very extreme example. You should not try to fix it at this point, but rather continue to build your game. Or design your textures to repeat well without showing clear patterns. The extreme would be a flat-colored texture. Generally, large outdoor terrain textures simply have very little structure, almost like noise, and they avoid contrasting colors, using just shades of the same color.
Your offset idea won't work. It might work technically (though it may be inefficient), but random offsets can't cover up the patterns; instead they will create new ones, because the textures will no longer interpolate smoothly at their edges, so you would clearly see a grid of squares. That, I'd guess, would be even uglier and more noticeable.
Lastly, you can increase texture size or scale (any blurriness may need to be covered up as explained above). In relation to camera angle this would be the easiest, most effective fix. Or at least an improvement.
Old thread, but relevant to many, I think. You can do this in a shader by randomizing the vertex position on the XZ plane, or (better) the UV coordinates, based on the world-space position.
The texture will still tile... but instead of the repetition running in a straight line, it will run along a random wiggly line. This is great for things like terrain and grass, but obviously no good if you want to maintain straight lines in your textures.
A second option is a diffuse-detail shader. It tiles one texture up close to the camera and another when further away (which you can make softer / more blurry).
A third option: blend 2 textures together with a different UV tiling scale on each (non-divisible, e.g. not scales 2 and 4, but 1 and 2.334556), so the pattern is harder to see. A sketch of this follows below.
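As a rough illustration of that third option, the fragment shader boils down to sampling the same texture at two non-divisible scales and mixing the results (GLSL kept in a C++ string literal for self-containment; the 2.334556 scale is the one suggested above, and the 0.5 blend weight is an arbitrary choice):

    // Fragment shader sketch: two samples of one texture at
    // non-divisible UV scales, blended so neither grid lines up.
    const char* blendFragSrc = R"GLSL(
    #version 330 core
    uniform sampler2D tex;
    in vec2 uv;
    out vec4 fragColor;
    void main()
    {
        vec4 a = texture(tex, uv);              // scale 1
        vec4 b = texture(tex, uv * 2.334556);   // non-divisible scale
        fragColor = mix(a, b, 0.5);
    }
    )GLSL";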

OpenGL - drawArrays or drawElements?

I'm making a small 2D game demo, and from what I've read, it's better to use drawElements() to draw an indexed triangle list than to use drawArrays() to draw an unindexed one.
But as far as I know, it doesn't seem possible with drawElements() to draw multiple elements that are not connected in a single draw call.
So for my 2D game demo, where I'm only ever going to draw squares made of two triangles, what would be the best approach so I don't end up with one draw call per object?
Yes, it's better to use indices in many cases, since you don't have to store, transfer, or process duplicate vertices (the vertex shader only needs to run once per vertex). In the case of quads, you reduce 6 vertices to 4, plus a small amount of index data. Down to two thirds is quite a good improvement, especially if your vertex data is more than just position.
In summary, glDrawElements results in:
Less data (mostly), which means more GPU memory for other things
Faster updating if the data changes
Faster transfer to the GPU
Faster vertex processing (no duplicates)
Indexing can affect cache performance if the indices reference vertices that aren't near each other in memory. Modellers commonly produce meshes which are optimized with this in mind.
For multiple elements, if you're referring to GL_TRIANGLE_STRIP you could use glPrimitiveRestartIndex to draw multiple strips of triangles with the one glDrawElements call. In your case it's easy enough to use GL_TRIANGLES and reference 4 vertices with 6 indices for each quad. Your vertex array then needs to store all the vertices for all your quads. If they're moving you still need to send that data to the GPU every frame. You could position all the moving quads at the front of the array and only update the active ones. You could also store static vertex data in a separate array.
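A sketch of that 4-vertices/6-indices layout, assuming a position-only vertex format and the usual VBO/element-buffer setup around it (the names here are mine):

    #include <vector>
    // GL types/functions from your loader header, e.g. <GL/glew.h>

    struct Vertex { float x, y; };

    // Append one axis-aligned quad (two triangles) to a shared batch.
    void appendQuad(std::vector<Vertex>& verts, std::vector<GLuint>& idx,
                    float x, float y, float w, float h)
    {
        GLuint base = (GLuint)verts.size();
        verts.push_back({x,     y    });
        verts.push_back({x + w, y    });
        verts.push_back({x + w, y + h});
        verts.push_back({x,     y + h});
        const GLuint quad[6] = {base, base + 1, base + 2,
                                base, base + 2, base + 3};
        idx.insert(idx.end(), quad, quad + 6);
    }

    // After uploading verts and idx to their buffers, the whole batch
    // draws in one call:
    //   glDrawElements(GL_TRIANGLES, (GLsizei)idx.size(), GL_UNSIGNED_INT, 0);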
The typical approach to drawing a 3D model is to provide a list of fixed vertices for the geometry and move the whole thing with the model matrix (as part of the model-view). The confusing part here is that the mesh data is so small that, as you say, the overhead of the draw calls may become quite prominent. I think you'll have to draw a LOT of quads before you get to the stage where it'll be a problem. However, if you do, instancing or some similar idea such as particle systems is where you should look.
Perhaps only go down the following track if the draw calls or data transfer becomes a problem as there's a lot involved. A good way of implementing particle systems entirely on the GPU is to store instance attributes such as position/colour in a texture. Each frame you use an FBO/render-to-texture to "ping-pong" this data between another texture and update the attributes in a fragment shader. To draw the particles, you can set up a static VBO which stores quads with the attribute-data texture coordinates for use in the vertex shader where the particle position can be read and applied. I'm sure there's a bunch of good tutorials/implementations to follow out there (please comment if you know of a good one).
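In terms of GL state, the ping-pong mentioned above looks roughly like this (shader compilation and the fullscreen-quad draw are elided; treat it as a sketch of the buffer juggling, not a complete implementation):

    #include <utility> // std::swap

    // Two float textures hold the particle attributes; each frame one
    // is read and the other written, then the roles swap.
    // width/height: dimensions of the attribute texture.
    GLuint tex[2], fbo[2];
    glGenTextures(2, tex);
    glGenFramebuffers(2, fbo);
    for (int i = 0; i < 2; ++i) {
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
                     GL_RGBA, GL_FLOAT, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo[i]);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex[i], 0);
    }

    int src = 0, dst = 1;
    // Per frame: run the update shader over a fullscreen quad,
    // reading tex[src] and writing into tex[dst] via fbo[dst].
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);
    glBindTexture(GL_TEXTURE_2D, tex[src]);
    // ... bind update shader, draw fullscreen quad ...
    std::swap(src, dst); // next frame reads what was just written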

Rendering thousands of moving quads

I have to render large quantities of particles. These particles are simple non-textured quads (squares actually). Oh, and they're moving all the time since they're particles.
I have considered 2 options, but as I'm not an OpenGL expert, I don't know which is best.
Use VBOs to render them all.
Pros: faster than immediate mode.
Cons: (I don't know much about VBOs but) from what I gather the quads' coordinates need to be stored in some buffer in RAM... and all of these coordinates need to be computed by the CPU. So for particle P1(x,y) I would have to compute 4 other coordinates (P2(x-1,y-1), P3(x-1,y+1), P4(x+1,y+1), P5(x+1,y-1)) - that's a lot of work for the CPU!
Use a display list: first create a tiny display list for a single square quad. Then, to render each particle, do a pushMatrix, glTranslate, callList, popMatrix.
Pros: I don't have to compute the 4 coordinates manually - glTranslate does that.
Supposedly display lists are faster than VBOs.
Cons: Are they faster than VBOs when they contain just one quad?
Mind you: I'm calling OpenGL stuff from Java so there's no smooth way of transforming Java arrays to GPU arrays (everything has to be stored in intermediary FloatBuffers before transfers).
Display Lists are for static geometry. Rendering just one single quad, then changing the transformation, rinse and repeat is horribly inefficient.
Updating VBOs is better but still not optimal.
You should look into instanced rendering. Here's a tutorial:
http://ogldev.atspace.co.uk/www/tutorial33/tutorial33.html
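The key call in that approach is glVertexAttribDivisor: the quad's geometry is stored once, and a second buffer of per-particle positions advances once per instance instead of once per vertex. A sketch, with quadVbo, instanceVbo and particleCount assumed to exist, and the shader that adds the two attributes together omitted:

    // Attribute 0: quad corner, per vertex.
    // Attribute 1: particle position, per instance.
    glBindBuffer(GL_ARRAY_BUFFER, quadVbo);      // 4 corners of a unit quad
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);

    glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);  // one vec2 per particle
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, 0);
    glVertexAttribDivisor(1, 1); // advance attribute 1 per instance

    // One draw call for all particles:
    glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, particleCount);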
In the case of simple quads, a geometry shader turning single GL_POINTS into two GL_TRIANGLES would do the trick as well.
To do particle simulations, I think what you are looking for is transform feedback. Here is a nice demonstration with some code on how to do it: http://prideout.net/blog/?tag=opengl-transform-feedback

OpenGL VBO storage and templates

I have a question about VBOs. Let's say, just as an example, I'm trying to build a voxel-style engine that renders even a single 16x16x16 chunk.
Do I store the map information in the VBO? How do I get the vertices for a cube? The way I'm thinking about it, the VBO would require 24 vector3 variables per cube (four vertices for each of the six faces, at each location). That seems like a lot.
Is there some way to have a single 'cube' VBO template, then somehow change the coordinates for each cube I want to draw - calling the template (I hope that makes sense) and using bufferData to update it for every location? Do I have to actually store those 24 vectors for every single location in the 16x16x16 chunk, or could I just store the map coordinates and have the cube's polygons drawn by a shader?
It seems expensive memory-wise to load up something that stores 24 vectors per location, and resource-intensive to call bufferData 16x16x16 times per frame... so the last option, using the vertex shader, seems the most viable. But I'm new to shaders, so is something like that possible?
What is the most common method used?
Geometry shaders can, indeed, emit multiple primitives for a single input primitive. So drawing all 6 faces of a cube from a single input point is certainly possible. Though for "voxel" engines you might be better served by point sprites, as often the orientation of the cube isn't useful. A point sprite draws a single screen-aligned quad from an input point. Beyond that you'll need to be more specific about what you're doing.
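A sketch of that point-to-quad expansion as a geometry shader (GLSL in a C++ string literal; the halfSize uniform is my own name, and the quad is expanded in clip space for simplicity):

    // Geometry shader: expands each GL_POINTS primitive into a quad
    // (four vertices emitted as a triangle strip).
    const char* geomSrc = R"GLSL(
    #version 330 core
    layout(points) in;
    layout(triangle_strip, max_vertices = 4) out;
    uniform float halfSize; // half the quad's extent in clip space
    void main()
    {
        vec4 p = gl_in[0].gl_Position;
        gl_Position = p + vec4(-halfSize, -halfSize, 0.0, 0.0); EmitVertex();
        gl_Position = p + vec4( halfSize, -halfSize, 0.0, 0.0); EmitVertex();
        gl_Position = p + vec4(-halfSize,  halfSize, 0.0, 0.0); EmitVertex();
        gl_Position = p + vec4( halfSize,  halfSize, 0.0, 0.0); EmitVertex();
        EndPrimitive();
    }
    )GLSL";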

Mesh deformation and VBOs

In my current project I render a series of basically cubic 3D models arranged in a grid. These 3D tiles form the walls of a dungeon level in a game, so they're not perfectly cubic, but I pay special attention to be certain all the edges line up and everything tiles correctly.
I'm interested in implementing a height-map deformation, which seems like it'd require me to manually deform the vertices of the 3D tiles, first by raising or lowering a corner, then by calculating a line between two corners and shifting all the vertices based on the height of that line. Seems pretty straightforward.
My current issue is this: I'm using OpenGL, which provides an optimization called VBOs, which (to my understanding) are basically static copies of the mesh kept in GPU memory for speed. I render using VBOs because I only use three basic models (L-corner, straight wall, and a cap to join walls when they don't meet in an L). If I have to manually fiddle with the vertices of my models, it seems like I'd have to replace the contents of the VBO for every tile, which pretty much negates the point of using them.
It seems to me that I might be able to use simple rotation and translation transforms to achieve a similar effect, but I can't figure out how to do it without leaving gaps between the tiles. Any thoughts?
You may be able to use a vertex program on your GPU. The main difficulty (if I understand your problem correctly) is that vertex programs must rely on either global or per-vertex parameters, and there is a strictly limited amount of space available for each.
Without more details, I can only suggest being clever about how you set up the parameters...
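One common way to stay within those limits (an assumption about what fits this case, not the only option): keep the tile VBOs completely static and sample a height map texture in the vertex shader, so the deformation costs no per-vertex parameters at all. A sketch, with uniform names of my own choosing, assuming the tile vertices are already in world space (or that the model matrix is applied first):

    // Vertex shader: the tile mesh stays static in the VBO; the height
    // offset comes from a texture sampled at the vertex's world XZ.
    const char* vertSrc = R"GLSL(
    #version 330 core
    layout(location = 0) in vec3 position;
    uniform sampler2D heightMap;
    uniform mat4 modelViewProj;
    uniform vec2 levelSize; // world-space extent of the level, for UVs
    void main()
    {
        vec2 uv = position.xz / levelSize;
        float h = textureLod(heightMap, uv, 0.0).r;
        gl_Position = modelViewProj * vec4(position.x,
                                           position.y + h,
                                           position.z, 1.0);
    }
    )GLSL";

Because adjacent tiles sample the height map at the same world coordinates along their shared edge, they land at the same height, which avoids the gaps mentioned in the question.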