I have the following problem (no code yet):
We have a data set of 4000 x 256 samples with 16-bit resolution, and I need to write a program to display this data.
I wanted to use DirectX or OpenGL to do so, but I don't know what the proper approach is.
Do I create a buffer with 4000 x 256 triangles, with the resolution on the y axis, or would I go ahead and create a single quad and then manipulate the data using tessellation?
When would I use a big vertex buffer over tessellation and vice versa?
It really depends on a lot of factors.
You want to render a map of about 1 million pixels/vertices. Depending on your hardware, this could be doable with the most straightforward technique.
Off the top of my head, I can think of 3 techniques:
1) Create a grid of 4000x256 vertices and set their height according to the height map image of your data.
You set the data once upon creation. The shaders will just draw the static buffer and apply a single transform matrix (world/view/projection) to all the vertices.
2) Create a grid of 4000x256 vertices with height 0 and offset each vertex's height inside the vertex shader by the sampled height map value (see the sketch after this list).
3) The same as 2) only you add a tessellation phase.
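For illustration, here is a minimal sketch of what the vertex shader for option 2 could look like in GLSL, assuming the 16-bit data is uploaded as a single-channel texture (e.g. GL_R16) and that uniform names like heightMap, gridSize and heightScale are my own:

// Option 2 (sketch): flat grid in the vertex buffer, height sampled per vertex.
const char* heightVS = R"(
#version 330 core
layout(location = 0) in vec2 gridPos;   // x,z position on the flat grid
uniform sampler2D heightMap;            // the 4000x256 data as a texture
uniform vec2 gridSize;                  // e.g. vec2(4000.0, 256.0)
uniform float heightScale;              // how tall the data should appear
uniform mat4 worldViewProj;
void main()
{
    vec2 uv = gridPos / gridSize;                             // normalized sample position
    float h = textureLod(heightMap, uv, 0.0).r * heightScale; // sampled height
    gl_Position = worldViewProj * vec4(gridPos.x, h, gridPos.y, 1.0);
}
)";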
The advantage of doing tessellation is that you can use a smaller vertex buffer AND you can dynamically tessellate at run time.
This means you can make part of your grid more tessellated and part of it less tessellated. For instance, maybe you want to tessellate more only where the user is viewing the grid.
By the way, you can't tessellate one quad into a million quads; there is a limit to how much a single quad can be tessellated (the maximum tessellation factor is 64 in DirectX 11). But you can tessellate it quite a lot, and in any case you will reduce the grid size by a large factor.
If you have never used DirectX or OpenGL, I would go with 1). See if it's fast enough, and only if it isn't, go with 2), and only then move to 3).
The fact that you know the theory behind 3D graphics rendering doesn't mean it will be easy for you to learn DirectX or OpenGL. They are difficult to understand and learn because they are quite complex APIs.
If you want you can take a look at some tessellation stuff I did using DirectX11:
http://pompidev.net/2012/09/25/tessellation-simplified/
http://pompidev.net/2012/09/29/tessellation-update/
I'm starting to learn OpenGL (working with version 3.3) with the intent of getting a small 3D falling sand simulation up, akin to this:
https://www.youtube.com/watch?v=R3Ji8J2Kprw&t=41s
I have a little experience with setting up a voxel environment like Minecraft from some Udemy tutorials for Unity, but I want to build something simple from the ground up and not deal with all the systems already laid on top of things with Unity.
The first issue I've run into comes early. I want to build a system for rendering quads, because instancing a ton of cubes is ridiculously inefficient. I also want to be efficient with storage of vertices, colors, etc. Thus far, in the OpenGL tutorials I've worked through, the way to do this is to store each vertex in a float array with both position and color data, and to set up the buffer object to read every set of six entries as three floats for position and three for color, using glVertexAttribPointer. The problem is that for each neighboring quad the same vertices will be repeated, because if they belong to different "blocks" they will be different colors, and I want to avoid this.
What I want to do instead to make things more efficient is store the vertices of a cube in one int array (positions will all be ints), then add each quad of the terrain to an indices array (which will probably turn into each chunk's mesh later on). The indices array will store each quad's position, and a separate array will store each quad's color. I'm a little confused about how to set this up since I am rather new to OpenGL, but I know this should be doable based on what other people have done with Minecraft clones, if not even easier since I don't need textures.
I just really want to get the framework for the chunks, blocks, world, etc., up and running so that I can get to the fun stuff like adding new elements. If anyone is able to verify that this is a sensible way to do it (lol) and offer guidance on how to set it up in the rendering code, I would very much appreciate it.
Thus far, in the OpenGL tutorials I've worked through, the way to do this is to store each vertex in a float array with both position and color data, and to set up the buffer object to read every set of six entries as three floats for position and three for color, using glVertexAttribPointer. The problem is that for each neighboring quad the same vertices will be repeated, because if they belong to different "blocks" they will be different colors, and I want to avoid this.
Yes, and perhaps there's a reason for that. You seem to be trying to save... what, a few bytes of RAM? Your graphics card has 8 GB of RAM on it; what it doesn't have is a general-purpose processing unit or an unlimited bus to do random lookups in other buffers for every single rendered pixel.
The indices array will store each quad's position, and a separate array will store each quad's color.
If you insist on doing it this way, nothing's stopping you. You don't even need the quad vertices; you can synthesize them in a geometry shader.
Just fill a buffer with X|Y|Width|Height|Color(RGB) using glVertexAttribPointer like you already know, run a geometry shader to synthesize two triangles (a quad) for each entry in your input buffer, project them to clip space (you mentioned integers, so you're not in world units initially), and then let your fragment shader color each rasterized pixel according to its color entry.
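A rough sketch of such a geometry shader in GLSL 3.30 (the names vSize/vColor/viewProj are mine, and it assumes a pass-through vertex shader that forwards position, size and color):

// One input point = one quad record; the shader expands it into a triangle strip.
const char* quadGS = R"(
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
in vec2 vSize[];                  // width/height from the pass-through vertex shader
in vec3 vColor[];
out vec3 gColor;
uniform mat4 viewProj;            // maps block coordinates to clip space
void main()
{
    vec4 p = gl_in[0].gl_Position;   // lower-left corner
    vec2 s = vSize[0];
    gColor = vColor[0]; gl_Position = viewProj * p;                              EmitVertex();
    gColor = vColor[0]; gl_Position = viewProj * (p + vec4(s.x, 0.0, 0.0, 0.0)); EmitVertex();
    gColor = vColor[0]; gl_Position = viewProj * (p + vec4(0.0, s.y, 0.0, 0.0)); EmitVertex();
    gColor = vColor[0]; gl_Position = viewProj * (p + vec4(s.x, s.y, 0.0, 0.0)); EmitVertex();
    EndPrimitive();
}
)";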
ridiculously inefficient
Indeed, if that sounds ridiculously inefficient to you, it's because it is. You're essentially packing your data on the CPU, transferring it to the GPU, unpacking it and then processing it as normal. You can skip at least two of the steps, and even more if you consider that vertex shader outputs get cached within rasterized primitives.
There are many more variations of this insanity, like:
store vertex positions unpacked as normal, and store an index for the colors. Then store the colors in a linear buffer of some kind (texture, SSBO, generic buffer, etc) and look up each color index. That's even more inefficient, but it's closer to the algorithm you were suggesting.
store vertex positions for one quad and set up instanced rendering with a multi-draw command and a buffer to feed individual instance data (positions and colors); see the sketch after this list. If you also have textures, you can use bindless textures for each quad instance. It's still rendering multiple objects, but it's slightly more optimized by your graphics driver.
or just store per-vertex data in a buffer and render it. Done. No pre-computations, no unlimited expansions, no crazy code, you have your vertex data and you render it.
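For the instanced variation above, a minimal sketch (assuming a GL 3.3 core context with a VAO already bound; the struct and buffer names are mine):

#include <vector>

struct QuadInstance { float x, y; float r, g, b; };          // per-quad data
std::vector<QuadInstance> instances;                         // filled from your terrain

float unitQuad[8] = { 0,0,  1,0,  0,1,  1,1 };               // one triangle-strip quad

GLuint quadVBO, instVBO;
glGenBuffers(1, &quadVBO);
glBindBuffer(GL_ARRAY_BUFFER, quadVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(unitQuad), unitQuad, GL_STATIC_DRAW);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);          // corner position
glEnableVertexAttribArray(0);

glGenBuffers(1, &instVBO);
glBindBuffer(GL_ARRAY_BUFFER, instVBO);
glBufferData(GL_ARRAY_BUFFER, instances.size() * sizeof(QuadInstance),
             instances.data(), GL_STATIC_DRAW);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(QuadInstance), (void*)0);                   // instance position
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, sizeof(QuadInstance), (void*)(2 * sizeof(float))); // instance color
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glVertexAttribDivisor(1, 1);   // advance once per instance, not per vertex
glVertexAttribDivisor(2, 1);

glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, (GLsizei)instances.size());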
A cube with different colored faces in immediate mode is very simple. But doing the same thing with shaders seems to be quite a challenge.
I have read that in order to create a cube with different coloured faces, I should create 24 vertices instead of 8 vertices for the cube - in other words, I visualise this as 6 squares that don't quite touch.
Is another (better?) solution perhaps to texture the faces of the cube using a really simple texture of a flat color - perhaps a 1x1 pixel texture?
My texturing idea seems simpler to me - from a coder's point of view - but which method would be the most efficient from a GPU/graphics card perspective?
I'm not sure what your overall goal is (e.g. what you're learning to do in the long term), but generally for high-performance applications (e.g. games) your goal is to minimize state changes. Every time you switch certain states (e.g. change textures, render targets, shader uniform values, etc.) the GPU stalls while reconfiguring itself to meet your demands.
So, you can pass in a 1x1 pixel texture for each face, but then you'd need six draw calls (usually not so bad, but there is some prep work and potential cache misses) and six texture binds (can be very bad, often as bad as changing shader uniform values).
Suppose you wanted to pass in one texture and use that as a texture map for the cube. This is a little less trivial than it sounds -- you need to lay out each cube face on the texture in a way that maps to the vertices. Often you need to pass in a texture coordinate for each vertex, and due to the spatial configuration of the texture this normally doesn't end up meaning one texture coordinate per spatial vertex.
However, if you use an environment/reflection map, the complexities of mapping are handled for you. In this way, you could draw a single texture on all sides of your cube. (Or on your sphere, or whatever sphere-mapped shape you wanted.) I'm not sure I'd call this easier, since you have to form the environment texture carefully, and you still have to set a different texture for each new color you want to represent -- or change the texture either via the GPU or in step with the GPU, and that's tricky and usually not performant.
Which brings us back to the canonical way of doing as you mentioned: use vertex values -- they're fast, you can draw many, many cubes very quickly by only specifying different vertex data, and it's easy to understand. It really is the best way, and how GPUs are designed to run quickly.
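To make the per-vertex approach concrete, here is a sketch of the vertex data for such a 24-vertex cube (only two faces shown; positions are interleaved with a per-face color, and the attribute indices are my own choice):

// Interleaved position (x,y,z) + color (r,g,b); four vertices per face so each
// face can get its own color. The remaining four faces follow the same pattern.
float cubeVerts[] = {
    // front face (red)
    -1,-1, 1,  1,0,0,    1,-1, 1,  1,0,0,    1, 1, 1,  1,0,0,   -1, 1, 1,  1,0,0,
    // back face (green)
    -1,-1,-1,  0,1,0,   -1, 1,-1,  0,1,0,    1, 1,-1,  0,1,0,    1,-1,-1,  0,1,0,
    // ... left, right, top, bottom faces go here
};
// Two triangles per face, indexing the four face vertices.
unsigned short cubeIndices[] = { 0,1,2,  0,2,3,   4,5,6,  4,6,7 /* ... */ };

// With the buffers bound, describe the interleaved layout:
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)0);                   // position
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)(3 * sizeof(float))); // color
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);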
Additionally..
And yes, you can do this with just shaders... but it'd be ugly and slow, and the GPU would end up computing it for each pixel. Pass the object-space coordinates to the fragment shader, and in the fragment shader test which side you're on and output the corresponding color. Highly not recommended: it's not particularly easier, and it's definitely not faster for the GPU -- to change colors you'd again end up changing uniform values for the shaders.
I've searched for a while and I've heard of different ways to do this, so I thought I'd come here and see what I should do.
From what I've gathered, I should use glBitmap with 0 and 0xFF values in the array to make the terrain. Any input on this?
I tried switching it to quads, but I'm not sure that is efficient or the way it's meant to be done.
I want the terrain to be able to have tunnels, like in Worms. It's two-dimensional.
Here is what I've tried so far,
I've tried to make a glBitmap, so..
int pixels = (2 * radius) * (2 * radius);  // pow() returns a double; plain integer math is enough
GLubyte* ras = new GLubyte[pixels];
and then set them all to 0xFF, and drew it using glBitmap(width, height, 0, 0, 0, 0, ras);
This could then be checked for explosions and whatnot, and the affected pixels could be set to zero. Is this a plausible approach? I'm not too good with OpenGL; can I put a texture on a glBitmap? From what I've seen, I don't think you can.
I would suggest using the stencil buffer. You mark destroyed parts of the terrain in the stencil buffer and then draw your terrain as a simple quad with stencil testing enabled, without manually testing each pixel.
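Roughly, that could look like the sketch below (assuming the framebuffer was created with a stencil attachment; drawExplosionCircle and drawTerrainQuad are hypothetical helpers standing in for your own draw calls):

// Pass 1: whenever something explodes, stamp the hole into the stencil buffer.
glEnable(GL_STENCIL_TEST);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   // don't touch the color buffer
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);             // write 1 wherever the hole is drawn
drawExplosionCircle(x, y, radius);                     // hypothetical: circle geometry at the blast

// Pass 2: each frame, draw the terrain quad only where the stencil is still 0.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 0, 0xFF);                      // pass only over undestroyed pixels
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawTerrainQuad();                                     // hypothetical: one big terrain quad

// Keep the stencil contents across frames (don't clear GL_STENCIL_BUFFER_BIT),
// or re-stamp all holes each frame after clearing.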
OK, this is a high-level overview, and I'm assuming you're familiar with OpenGL basics like buffer objects already. Let me know if something doesn't make sense or if you'd like more details.
The most common way to represent terrain in computer graphics is a heightfield: a grid of points that are spaced regularly on the X and Y axes, but whose Z (height) can vary. A heightfield can only have one Z value per (X,Y) grid point, so you can't have "overhangs" in the terrain, but it's usually sufficient anyway.
A simple way to draw a heightfield terrain is with a triangle strip (or quads, but they're deprecated). For simplicity, start in one corner and issue vertices in a zig-zag order down the column, then go back to the top and do the next column, and so on. There are optimizations that can be done for better performance, and more sophisticated ways of constructing the geometry for better appearance, but that'll get you started.
(I'm assuming a rectangular terrain here since that's how it's commonly done; if you really want a circle, you can substitute r and θ for X and Y so you have a polar grid.)
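As a rough sketch of that column-by-column strip (plain C++; W and H are the grid dimensions, vertices are assumed to be stored column-major, and degenerate triangles stitch one column to the next so a single draw call suffices):

#include <vector>

const int W = 128, H = 128;                    // grid dimensions (use yours)
std::vector<unsigned int> idx;
for (int c = 0; c < W - 1; ++c) {
    for (int r = 0; r < H; ++r) {
        idx.push_back(c * H + r);              // vertex in this column
        idx.push_back((c + 1) * H + r);        // matching vertex in the next column
    }
    if (c < W - 2) {                           // degenerate stitch to the next column pair
        idx.push_back((c + 1) * H + (H - 1));
        idx.push_back((c + 1) * H);
    }
}

GLuint ebo;
glGenBuffers(1, &ebo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, idx.size() * sizeof(unsigned int),
             idx.data(), GL_STATIC_DRAW);
// later, with the VAO and shaders bound:
glDrawElements(GL_TRIANGLE_STRIP, (GLsizei)idx.size(), GL_UNSIGNED_INT, (void*)0);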
The coordinates for each vertex will need to be stored in a buffer object, as usual. When you call glBufferData() to load the vertex data into the GPU, specify a usage parameter of either GL_STREAM_DRAW if the terrain will usually change from one frame to the next, or GL_DYNAMIC_DRAW if it will change often but not (close to) every frame. To change the terrain, call glBufferData() again to copy a different set of vertex data to the GPU.
For the vertex data itself, you can specify all three coordinates (X, Y, and Z) for each vertex; that's the simplest thing to do. Or, if you're using a recent enough GL version and you want to be sophisticated, you should be able to calculate the X and Y coordinates in the vertex shader using gl_VertexID and the dimensions of the grid (passed to the shader as a uniform value). That way, you only have to store the Z values in the buffer, which means less GPU memory and bandwidth consumed.
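A sketch of that gl_VertexID trick, assuming only the height is stored per vertex and the grid dimensions arrive in a uniform (names are mine):

const char* terrainVS = R"(
#version 330 core
layout(location = 0) in float height;   // only Z is stored in the buffer
uniform ivec2 gridDims;                 // grid width and height, as a uniform
uniform mat4 mvp;
void main()
{
    // Recover the grid position from the vertex index instead of storing X and Y.
    float x = float(gl_VertexID % gridDims.x);
    float y = float(gl_VertexID / gridDims.x);
    gl_Position = mvp * vec4(x, y, height, 1.0);
}
)";

With indexed drawing, gl_VertexID is the fetched index, which still identifies the grid vertex, so this also works with triangle-strip indices.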
I'm doing a 2D turn based RTS game with 32x32 tiles (400-500 tiles per frame). I could use a VBO for this, but I may have to change almost all the VBO data each frame, as the background is a scrolling one and the visible tiles will change every time the map scrolls. Will using VBOs rather than client side vertex arrays still yield a performance benefit here? Also if using VBOs which data format is most efficient (float, or int16, or ...)?
If you are simply scrolling, you can use the vertex shader to manipulate the position rather than update the vertices themselves. Pass in a 'scroll' value as a uniform to your background and simply add that value to the x (or y, or whatever applies to your case) value of each vertex.
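For instance, something along these lines (a sketch; the uniform and attribute names are mine):

const char* backgroundVS = R"(
#version 330 core
layout(location = 0) in vec2 pos;
layout(location = 1) in vec2 uv;
out vec2 texCoord;
uniform vec2 scroll;        // how far the map has scrolled, in world units
uniform mat4 projection;
void main()
{
    texCoord = uv;
    gl_Position = projection * vec4(pos + scroll, 0.0, 1.0);  // shift instead of rewriting the VBO
}
)";

// each frame, just update the uniform:
glUseProgram(program);
glUniform2f(glGetUniformLocation(program, "scroll"), scrollX, scrollY);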
Update:
If you intend to modify the VBO often, you can tell the driver this using the usage param of glBufferData. This page has a good description of how that works: http://www.opengl.org/wiki/Vertex_Buffer_Object, under Accessing VBOs. In your case, it looks like you should specify GL_DYNAMIC_DRAW to glBufferData so that the driver puts your VBO in the best place in memory for your application.
The regular approach is to move the camera and perform culling instead of updating the contents of the VBOs. For a 2D game, culling comes down to a simple rectangle intersection test, which you will need anyway for unit selection in the game. As a bonus, manipulating the camera will allow you to rotate the camera and zoom in and out. You could also combine several tiles (4, 9 or 16) into one VBO.
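The rectangle test itself is tiny; for example (a sketch, struct name mine):

// Axis-aligned rectangle overlap test, usable for both camera culling and unit selection.
struct Rect { float x, y, w, h; };

bool intersects(const Rect& a, const Rect& b)
{
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}

// e.g. draw only the tiles whose rectangle overlaps the camera's visible rectangle:
// if (intersects(tileRect, cameraRect)) drawTile(tile);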
I would strongly advise against writing logic to move the tiles instead of the camera. It will take you longer, have more bugs, and be less flexible.
The format will depend on what data you are storing in the VBOs. When in doubt, just use uint8 for color and float32 for everything else. That said, for a 2D game your VBOs or vertex arrays are going to be very small compared to 3D applications, so it's highly unlikely the choice will make any difference.
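As an illustration of that layout (a sketch; the struct and attribute indices are mine, offsetof comes from <cstddef>):

#include <cstddef>   // offsetof

// Interleaved vertex: float32 position + normalized uint8 color, as suggested above.
struct TileVertex {
    float x, y;                  // position
    unsigned char r, g, b, a;    // color, 0-255, normalized to 0.0-1.0 by GL
};

glVertexAttribPointer(0, 2, GL_FLOAT,         GL_FALSE, sizeof(TileVertex), (void*)0);
glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE,  sizeof(TileVertex),
                      (void*)offsetof(TileVertex, r));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);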
I'm creating a tile-based game in C# with OpenGL and I'm trying to optimize my code as best as possible.
I've read several articles and sections in books and all come to the same conclusion (as you may know) that use of VBOs greatly increases performance.
I'm not quite sure, however, how they work exactly.
My game will have tiles on the screen, some will change and some will stay the same. To use a VBO for this, I would need to add the coordinates of each tile to an array, correct?
Also, to texture these tiles, I would have to create a separate VBO for this?
I'm not quite sure what the code would look like for tiling these coordinates if I've got tiles that are animated and tiles that will be static on the screen.
Could anyone give me a quick rundown of this?
I plan on using a texture atlas of all of my tiles. I'm not sure where to begin to use this atlas for the textured tiles.
Would I need to compute the coordinates of the tile in the atlas to be applied? Is there any way I could simply use the coordinates of the atlas to apply a texture?
If anyone could clear up these questions it would be greatly appreciated. I could even possibly reimburse someone for their time & help if wanted.
Thanks,
Greg
OK, so let's split this into parts. You didn't specify which version of OpenGL you want to use - I'll assume GL 3.3.
VBO
Vertex buffer objects, when considered as an alternative to client vertex arrays, mostly save bus bandwidth. A tile map is not really a lot of geometry. However, in recent GL versions vertex buffer objects are the only way of specifying the vertices (which makes a lot of sense), so we cannot really talk about "increasing performance" here. If you mean "compared to deprecated vertex specification methods like immediate mode or client-side arrays", then yes, you'll get a performance boost, but you'd probably only feel it with 10k+ vertices per frame, I suppose.
Texture atlases
Texture atlases are indeed a nice way to save on texture switching. However, on GL3 (and DX10)-enabled GPUs you can save yourself a LOT of the trouble characteristic of this technique, because a more modern and convenient approach is available. Check the GL reference docs for TEXTURE_2D_ARRAY - you'll like it. If GL3 cards are your target, forget texture atlases. If not, have a google to see which older cards support texture arrays as an extension; I'm not familiar with the details.
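Creating one is straightforward; roughly (a sketch, where tileCount and tilePixels stand in for your own tile data):

GLuint texArray;
glGenTextures(1, &texArray);
glBindTexture(GL_TEXTURE_2D_ARRAY, texArray);
// Allocate one 32x32 layer per tile material, then fill each layer.
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, 32, 32, tileCount,
             0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
for (int i = 0; i < tileCount; ++i)
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, i, 32, 32, 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, tilePixels[i]);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);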
Rendering
So how do we draw a tile map efficiently? Let's focus on the data. There are lots of tiles and each tile has the following information:
grid position (x,y)
material (let's call it "material" not "texture" because as you said the image might be animated and change in time; the "material" would then be interpreted as "one texture or set of textures which change in time" or anything you want).
That should be all the "per-tile" data you'd need to send to the GPU. You want to render each tile as a quad or triangle strip, so you have two alternatives:
send 4 vertices (x,y),(x+w,y),(x+w,y+h),(x,y+h) instead of (x,y) per tile,
use a geometry shader to calculate the 4 points along with texture coords for every 1 point sent.
Pick your favourite. Also note that this choice directly corresponds to what your VBO is going to contain - the latter solution would make it 4x smaller.
For the material, you can pass it as a symbolic integer, and in your fragment shader - based on the current time (passed as a uniform variable) and the material ID for a given tile - you can decide which layer of the texture array to use. In this way you can make a simple texture animation.
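For example, a fragment shader along these lines (a sketch; it assumes each material occupies framesPerMaterial consecutive layers of the texture array, and all names are mine):

const char* tileFS = R"(
#version 330 core
flat in int material;           // per-tile material ID from the earlier stages
in vec2 uv;
out vec4 fragColor;
uniform sampler2DArray tiles;   // the texture array from above
uniform float time;             // current time in seconds
uniform int framesPerMaterial;  // layers reserved per material (>= 1)
void main()
{
    // Pick the layer: the material's base layer plus an animation frame from time.
    int frame = int(time * 8.0) % framesPerMaterial;   // e.g. 8 frames per second
    float layer = float(material * framesPerMaterial + frame);
    fragColor = texture(tiles, vec3(uv, layer));
}
)";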