C++ OpenGL array of coordinates to draw lines/borders and filled rectangles?

I'm working on a simple GUI for my application in OpenGL, and all I need is to draw a bunch of rectangles with a 1px border around them. Instead of going with glBegin and glEnd for each widget that has to be drawn (which can reduce performance), I'd like to know if this can be done with some sort of arrays/lists (batched data) of coordinates and their colors.
Requirements:
Rectangles are simply filled with one color, or each corner has its own color (mainly to form gradients).
Lines/borders are simple, one color and 1px thick, but they may not always be closed (i.e., they do not form a loop).
Use of textures/images is excluded. Only geometry data.
Must be compatible with older OpenGL versions (down to version 1.3)
Is there a way to achieve this with some sort of arrays and not glBegin and glEnd? I'm not sure how to do this for lines/borders.
I've seen this kind of implementation in Gwen GUI but it uses textures.
Example: jQuery EasyUI Metro Theme

In any case, in modern OpenGL you should refrain from using old-fashioned API calls like glBegin and the like. You should use the purer approach that was introduced with core contexts in OpenGL 3.0. The philosophy behind it is to get much closer to the way modern hardware actually functions. DirectX 10 took this approach, and so, to some extent, did OpenGL ES.
It means no more display lists, no more immediate mode, no more glVertex or glTexCoord. In any case, the drivers were already constructing VBOs behind this API, because the hardware only understands that. So the OpenGL core "initiative" is to reduce OpenGL implementation complexity, letting the vendors focus on hardware instead of producing bad drivers with buggy support.
Considering that, you should go with VBOs: make one interleaved buffer, or multiple separate buffers, to store positions and color information, then bind them to vertex attributes and use a shader combination to render the whole thing. The attributes you declare in the vertex shader are the attributes you bind with glVertexAttribPointer (or glBindVertexBuffer on newer versions).
There is a good explanation here:
http://www.opengl.org/wiki/Vertex_Specification
The recommended way is then to make one vertex buffer for the whole GUI and every element should just be put one after another in the buffer, then you can render your whole GUI in one draw call. This is how you will get the best performance.
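For illustration, here is a minimal sketch of that idea, kept compatible with the asker's OpenGL 1.3 requirement by using client-side vertex arrays (available since OpenGL 1.1) instead of shaders; all type and function names apart from the GL calls are assumptions:

    // One interleaved array per primitive type for the whole GUI.
    // Layout per vertex: x, y, r, g, b, a (6 floats).
    #include <GL/gl.h>
    #include <vector>

    struct GuiBatch {
        std::vector<GLfloat> tris;   // filled rectangles, two triangles each
        std::vector<GLfloat> lines;  // 1px borders, independent GL_LINES segments

        static void push(std::vector<GLfloat>& v, float x, float y, const float c[4]) {
            v.insert(v.end(), { x, y, c[0], c[1], c[2], c[3] });
        }

        // A filled rectangle; each corner may have its own color (gradients).
        void addRect(float x0, float y0, float x1, float y1, const float c[4][4]) {
            push(tris, x0, y0, c[0]); push(tris, x1, y0, c[1]); push(tris, x1, y1, c[2]);
            push(tris, x0, y0, c[0]); push(tris, x1, y1, c[2]); push(tris, x0, y1, c[3]);
        }

        static void draw(const std::vector<GLfloat>& v, GLenum mode) {
            const GLsizei stride = 6 * sizeof(GLfloat);
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_COLOR_ARRAY);
            glVertexPointer(2, GL_FLOAT, stride, v.data());
            glColorPointer(4, GL_FLOAT, stride, v.data() + 2);
            glDrawArrays(mode, 0, (GLsizei)(v.size() / 6));
            glDisableClientState(GL_COLOR_ARRAY);
            glDisableClientState(GL_VERTEX_ARRAY);
        }

        void drawAll() { draw(tris, GL_TRIANGLES); draw(lines, GL_LINES); }
    };

On OpenGL 1.5 or later you would upload the same arrays into a VBO once and keep the same two glDrawArrays calls.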
If your GUI has dynamic elements, this is no longer possible, except by using glBufferSubData or the like, which has complex performance implications. In that case you are better off splitting your vertex buffer into as many buffers as necessary to compose the independent parts; then you can render with uniforms modified between each draw call to configure whatever change of look is needed for the dynamic part.
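If only one widget changes, a partial update of the shared buffer looks roughly like this (glBufferSubData is the actual GL call; the offset/size/pointer names are assumptions):

    // Replace just this widget's vertices inside the shared VBO.
    glBindBuffer(GL_ARRAY_BUFFER, guiVbo);        // guiVbo: the GUI's existing buffer
    glBufferSubData(GL_ARRAY_BUFFER,
                    widgetOffsetBytes,            // where this widget's data starts
                    widgetSizeBytes,              // how many bytes to replace
                    updatedVertices);             // new interleaved vertex data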

Related

opengl - possibility of a mirroring shader?

Until today, when I wanted to create reflections (a mirror) in opengl, I rendered a view into a texture and displayed that texture on the mirroring surface.
What I want to know is: are there any other methods to create a mirror in OpenGL?
And second, can this be done solely in shaders (e.g., a geometry shader)?
Ray-tracing. You can write a ray-tracer in the fragment shader (every fragment follows a ray). Ray-tracers can perfectly deal with reflection (mirroring) on all kinds of surfaces.
You can find an OpenGL example here and a WebGL example including mirroring here.
There is no universal way to do that in any 3D API I know of.
Depending on your case there are several possible techniques with different downsides.
Planar reflections: That's what you are doing already.
Note that your mirror needs to be flat, and you have to clip so that anything closer than the mirror isn't rendered into the texture.
Good old cubemaps: attach a cubemap to each mirror, then sample it in the reflection direction. This works for any surface, but you will need to render the cubemaps (which can be done only once if you don't care about moving objects being reflected). I don't think you can do this without shaders, but only the mirror will need one. It's a very common technique, as it's easy to implement, can be dynamic, and is fairly cheap, while also being easy to integrate into an existing engine.
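As a hedged illustration, the mirror's fragment shader for the cubemap approach can be as small as this (variable names are assumptions; the vertex shader is assumed to pass a world-space normal and view direction):

    // GLSL 1.20 fragment shader: reflect the view direction around the
    // normal and look the result up in the environment cubemap.
    const char* mirrorFrag = R"(
        #version 120
        uniform samplerCube envMap;
        varying vec3 vNormal;    // world-space surface normal
        varying vec3 vViewDir;   // surface position minus camera position
        void main() {
            vec3 r = reflect(normalize(vViewDir), normalize(vNormal));
            gl_FragColor = textureCube(envMap, r);
        }
    )";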
Screen-space ray marching: this is what danny-ruijters suggested. It's a bit like SSAO: for each pixel, sample the depth buffer along the reflection vector until you hit something. This has the advantage of being applicable anywhere (on arbitrarily complex surfaces); however, it can only reflect things that appear on screen, which can introduce lots of small artifacts, but it's completely dynamic and very simple to implement. Note that you will need an additional pass (or to render normals into a buffer) to access your scene's final color while computing the reflections. You absolutely need shaders for this, but it's a post-process, so it won't interfere with the scene rendering if that's what you fear.
Some modern game engines use this to add small details to reflective surfaces without the burden of having to compute/store cubemaps.
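For the curious, here is a very rough sketch of the marching loop, under heavy assumptions: the G-buffer names are made up, the fragment's view-space position is assumed to be available (in practice you reconstruct it from the depth buffer), gDepth is assumed to store positive view-space depth, and the fixed step size stands in for the refinement a real implementation needs:

    // GLSL 3.30 fragment shader: march along the view-space reflection ray,
    // reproject each sample to screen space, stop when it falls behind the
    // stored depth, and fetch the scene color there.
    const char* ssrFrag = R"(
        #version 330 core
        uniform sampler2D gDepth;    // positive view-space depth per pixel
        uniform sampler2D gNormal;   // view-space normals
        uniform sampler2D gColor;    // scene color from the previous pass
        uniform mat4 uProj;          // projection matrix
        in vec3 vViewPos;            // view-space position of this fragment
        out vec4 fragColor;

        void main() {
            vec2 res = vec2(textureSize(gColor, 0));
            vec3 n = normalize(texture(gNormal, gl_FragCoord.xy / res).xyz);
            vec3 dir = normalize(reflect(normalize(vViewPos), n));
            vec3 p = vViewPos;
            for (int i = 0; i < 64; ++i) {
                p += dir * 0.1;                          // fixed step for brevity
                vec4 clip = uProj * vec4(p, 1.0);
                vec2 uv = clip.xy / clip.w * 0.5 + 0.5;
                if (uv != clamp(uv, 0.0, 1.0)) break;    // ray left the screen
                if (-p.z > texture(gDepth, uv).r) {      // ray fell behind geometry
                    fragColor = texture(gColor, uv);
                    return;
                }
            }
            fragColor = vec4(0.0);                       // no hit: no reflection
        }
    )";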
There are probably many other ways to render mirrors, but these are the three main ways of doing reflections (at least that I know of).

add color to VBOs - best practices

I am working on a visualisation engine (simple CAD style) (with python and pyopengl bindings) that will display and animate up to 10-20 bodies simultaneously.
I am using VBO data objects to store vertex data and to display each body. I would like to know the best (most practical, easiest, and least GPU-expensive) method to assign color to a VBO. Each body has a uniform color, and its appearance can optionally be set to transparent. As far as I know, this can be done with the following methods (I tested methods 1 and 2):
glColor4f(R, G, B, A)
glMaterialfv(GL_FRONT_AND_BACK, , [R, G, B, A])
assign color to each vertex and create interleaved VBO
Are there any other methods? And which one is most suited for the job?
I would also like to ask how many vertices per VBO are recommended, and how many vertices do, let's say, small, medium, and large VBOs have? Just to give me a better sense of the size of the displayed objects.
The coloring part depends on which version of OpenGL you are using now, which version in the future, and whether or not you want lighting.
If you are using OpenGL 2.1, perhaps because you like having a built-in matrix stack and gluPerspective, then glColor4f is the easiest way to set the uniform color for a non-lit object. If you want to use lighting, add a glColorMaterial call as well. Or, for lighting, you could use glMaterial.
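A minimal sketch of that fixed-function path, assuming an app-side Body type that holds the color and a VBO (normals, needed for real lighting, are omitted for brevity):

    // GL 2.1 style: one glColor4f per body, routed into the lit material
    // via glColorMaterial.
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnable(GL_COLOR_MATERIAL);
    glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);
    for (const Body& b : bodies) {
        glColor4f(b.r, b.g, b.b, b.a);            // one uniform color per body
        glBindBuffer(GL_ARRAY_BUFFER, b.vbo);
        glVertexPointer(3, GL_FLOAT, 0, nullptr); // offset 0 into the bound VBO
        glDrawArrays(GL_TRIANGLES, 0, b.vertexCount);
    }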
As Andon points out, these will stop working if you have to move to OpenGL 3 or 4. So if this program is going to be updated in the future, or you have plans to add extra capabilities based on programmable GPU shaders, grab a copy of the OpenGL SuperBible 6th ed and start coding. The easiest way will be to add another VBO with per-vertex colors, or interleave colors with VBO as you've already discovered. In theory this is wasting space because a single color gets duplicated many times, but if you're not changing the color every frame, so what? (Gigabyte graphics cards are wonderful.)
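The separate-buffer variant, sketched under the assumption that the shader declares the color at attribute location 1 and that colors holds one RGBA float quadruple per vertex:

    // Modern GL: per-vertex colors in their own VBO, bound to attribute 1.
    GLuint colorVbo;
    glGenBuffers(1, &colorVbo);
    glBindBuffer(GL_ARRAY_BUFFER, colorVbo);
    glBufferData(GL_ARRAY_BUFFER, colors.size() * sizeof(GLfloat),
                 colors.data(), GL_STATIC_DRAW);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, nullptr);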
OK, the recommended number of vertices per VBO. Again, do you care? You say you have 10-20 objects to be rendered. That's not an excessive number of OpenGL calls per frame. Perhaps if you needed to render thousands of objects per frame it would be worth thinking about, but my advice is always to do the simplest thing that works first, because very often that's fast enough.
To get maximum performance from OpenGL you generally need to minimise the number of calls per frame. So if you have too many individual OpenGL calls, it doesn't really matter whether the data is one big VBO or lots of little ones. Stuffing more data into big VBOs (read up on primitive restart) usually does allow you to reduce the number of calls, and modern graphics cards let you store megabytes or even gigabytes per VBO. Read Real-Time Rendering by Moller and Haines, or the indirect drawing section of the SuperBible for more detail.
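Primitive restart, for reference, looks like this (core since OpenGL 3.1; the VAO and index count are assumed to exist already):

    // Many triangle strips packed into one index buffer, separated by a
    // sentinel index, all drawn with a single call.
    const GLuint RESTART_INDEX = 0xFFFFFFFFu;
    glEnable(GL_PRIMITIVE_RESTART);
    glPrimitiveRestartIndex(RESTART_INDEX);
    // index buffer: strip A ..., RESTART_INDEX, strip B ..., RESTART_INDEX, strip C ...
    glBindVertexArray(vao);
    glDrawElements(GL_TRIANGLE_STRIP, indexCount, GL_UNSIGNED_INT, nullptr);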
Hope this helps.

Techniques for drawing tiles with OpenGL

I've been using XNA for essentially all of my programming so far and would like to move on to OpenGL (along with SFML for IO, creating the window, etc.) with C++. For starters I'd like to create a tile-based game, and I've mostly looked at LazyFoo's tutorials.
I just have a two questions:
How should I draw the tiles? Should I use immediate drawing, arrays, VBOs or what? VBOs feel like overkill for this but I'm not sure. It's very tempting to use immediate drawing but apparently it's deprecated. Maybe it's fine for this purpose since it's 2D and only for a bunch of quads.
I'd like a lot of different tiles, and thus all of my tiles will not fit into a single texture without making it massive. I've read that calling glBindTexture isn't very cheap, and thus I should avoid as many calls as I can. I thought that maybe I could create a manager for my textures and stitch them all together into one big texture and bind that, but then the dimensions of that texture become an issue.
Don't use immediate mode! It's cumbersome to work with and has been removed from recent OpenGL versions. Use Vertex Arrays, ideally through VBOs. In the end they're much easier to use, believe me.
Regarding the switching of textures: that matters when optimizing texture-switch patterns in very complex scenes. In your case it will hardly matter at all.
Update
Right now you worry about things without even having used them. That's worse than premature optimization. I suggest you first get a good grip on OpenGL, then start worrying about state-switch management.
With regards to the texture atlas: this is usually done by stitching textures into groups of power-of-two sized textures. For example, in a tile-based game you might have a particular tile set (say, tiles for an ice world) grouped together on 2 or 3 textures. When you want to render, you determine which tiles are visible, then bind each texture once and render all visible tiles that use it.
This requires quite a lot of set-up time to get right; you need to keep information on each sub-texture of the atlas so you can find the right texture and render the appropriate region of it whenever a tile is referenced. You also need a good way of grouping rendering operations so that they occur while the appropriate texture is bound.
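A sketch of that bookkeeping, with all the container and variable names being assumptions (this needs C++17 for the structured bindings):

    #include <map>
    #include <unordered_map>
    #include <vector>

    struct AtlasRegion {
        GLuint texture;          // which atlas page the tile lives on
        float u0, v0, u1, v1;    // UV sub-rectangle inside that page
    };

    // tileId -> region; assumed to be filled when the atlases are loaded.
    std::unordered_map<int, AtlasRegion> atlas;

    // Group visible tiles by texture so each page is bound exactly once.
    std::map<GLuint, std::vector<int>> byTexture;
    for (int tileId : visibleTiles)
        byTexture[atlas[tileId].texture].push_back(tileId);
    for (const auto& [tex, tiles] : byTexture) {
        glBindTexture(GL_TEXTURE_2D, tex);
        // ... emit quads using atlas[tileId].u0..v1 for each tile in tiles ...
    }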
Like datenwolf said, I wouldn't focus too much on complicated texture systems early on; eager binding of textures will be plenty fast enough until you get further down the road.

GLSL Shaders: blending, primitive-specific behavior, and discarding a vertex

Criteria: I’m using OpenGL with shaders (GLSL) and trying to stay with modern techniques (e.g., trying to stay away from deprecated concepts).
My questions, in a very general sense (see below for more detail), are as follows:
Do shaders allow you to do custom blending that help eliminate z-order transparency issues found when using GL_BLEND?
Is there a way for a shader to know what type of primitive is being drawn without “manually” passing it some sort of flag?
Is there a way for a shader to “ignore” or “discard” a vertex (especially when drawing points)?
Background: My application draws points connected with lines in an ortho projection (vertices have varying depth in the projection). I’ve only recently started using shaders in the project (trying to get away from deprecated concepts). I understand that standard blending has ordering issues with alpha testing and depth testing: basically, if a “translucent” pixel at a higher z level is drawn first (thus blending with whatever colors were already drawn to that pixel at a lower z level), and an opaque object is then drawn at that pixel but at a lower z level, depth testing prevents changing the pixel that was already drawn for the “higher” z level, thus causing blending issues. To overcome this, you need to draw opaque items first, then translucent items in ascending z order. My gut feeling is that shaders wouldn’t provide an (efficient) way to change this behavior—am I wrong?
Further, for speed and convenience, I pass information for each vertex (along with a couple of uniform variables) to the shaders, and they use the information to find a subset of the vertices that need special attention. Without duplicating that logic in the app itself (and slowing things down), I can't know a priori what subset of vertices that is, so I send all vertices to the shader. However, when I draw "points" I'd like the shader to ignore all the vertices that aren't in the subset it determines. I think I can get the effect by setting alpha to zero and using an alpha function in the GL context that prevents drawing anything with alpha less than, say, 0.01. However, is there a better or more "correct" GLSL way for a shader to say "just ignore this vertex"?
Do shaders allow you to do custom blending that help eliminate z-order transparency issues found when using GL_BLEND?
Sort of. If you have access to GL 4.x-class hardware (Radeon HD 5xxx or better, or GeForce 4xx or better), then you can perform order-independent transparency. Earlier versions have techniques like depth peeling, but they're quite expensive.
The GL 4.x-class version essentially builds per-pixel "linked lists" of transparent samples, which you then resolve into the final sample color with a full-screen pass. It's not free, of course, but it isn't as expensive as other OIT methods. How expensive it would be in your case is uncertain; it is proportional to how many overlapping pixels you have.
You still have to draw opaque stuff first, and you have to draw transparent stuff using special shader code.
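A very rough sketch of the build pass, under heavy assumptions (GL 4.3 for the shader storage buffer; the head-pointer image must be cleared to a sentinel each frame, and the overflow check on the node counter is omitted):

    // Fragment shader: each transparent fragment appends a node and swaps
    // itself in as the new head of its pixel's list. A later full-screen
    // pass walks each list, sorts by depth, and blends.
    const char* oitBuildFrag = R"(
        #version 430 core
        layout(binding = 0, r32ui) coherent uniform uimage2D headPointers;
        layout(binding = 0) uniform atomic_uint nodeCounter;
        struct Node { vec4 color; float depth; uint next; };
        layout(std430, binding = 0) buffer NodeBuffer { Node nodes[]; };
        in vec4 vColor;
        void main() {
            uint idx = atomicCounterIncrement(nodeCounter);
            uint prev = imageAtomicExchange(headPointers,
                                            ivec2(gl_FragCoord.xy), idx);
            nodes[idx].color = vColor;
            nodes[idx].depth = gl_FragCoord.z;
            nodes[idx].next  = prev;
        }
    )";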
Is there a way for a shader to know what type of primitive is being drawn without “manually” passing it some sort of flag?
No.
Is there a way for a shader to “ignore” or “discard” a vertex (especially when drawing points)?
No in general, but yes for points. A Geometry shader can conditionally emit vertices, thus allowing you to discard any vertex for arbitrary reasons.
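A minimal geometry shader for the point case might look like this (the vKeep flag is an assumed varying produced by the vertex shader from your per-vertex data):

    // Forward the vertex only when the flag says to keep it; emitting
    // nothing makes the point disappear entirely.
    const char* keepPointsGeom = R"(
        #version 330 core
        layout(points) in;
        layout(points, max_vertices = 1) out;
        in float vKeep[];    // 1.0 = render this point, 0.0 = drop it
        void main() {
            if (vKeep[0] > 0.5) {
                gl_Position = gl_in[0].gl_Position;
                EmitVertex();
                EndPrimitive();
            }
        }
    )";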
Discarding a vertex in non-point primitives is possible, but it will also affect the interpretation of that primitive. The reason it's simple for points is because a vertex is a primitive, while a vertex in a triangle isn't a whole primitive. You can discard lines, but discarding a vertex within a line is... of dubious value.
That being said, your explanation for why you want to do this is of dubious merit. You want to update vertex data with essentially a boolean value that says "do stuff with me" or not. That means that, every frame, you have to modify your data to say which points should be rendered and which shouldn't.
The simplest and most efficient way to do this is to simply not render with them. That is, arrange your data so that the only thing on the GPU are the points you want to render. Thus, there's no need to do anything special at all. If you're going to be constantly updating your vertex data, then you're already condemned to dealing with streaming vertex data. So you may as well stream it in a way that makes rendering efficient.

Is there a way of keeping track of the relationship between vertices and pixels using either OpenGL or DirectX 11?

I would like to know if there is a way to generate a single static image of a 3D object (one object represented as a triangle list), using OpenGL or DirectX, that lets you know which specific triangles of the object were used to generate each of the pixels in the rendered image. I've cited OpenGL and DirectX because they are widely used graphics APIs, but if somebody knows other ways of achieving this at high speed, I would also be interested in their answer. I currently use my own software implementation of the rendering pipeline to keep track of the relationship, but I would like to use the power and effects (mainly antialiasing, shadows, and specific skin rendering techniques) that graphics cards offer.
Thanks very much for your help
Sure, just output a triangle identifier to a separate render target (using MRT). In GLSL terms this is gl_PrimitiveID, and in HLSL terms it's SV_PrimitiveID. If you are using multi-sampling, then your multi-sample buffer for that render target becomes a list of the primitives that contribute to each pixel.
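In GLSL the fragment shader side is tiny; a sketch, assuming the second color attachment is an R32UI texture (the shading itself is a placeholder):

    const char* idFrag = R"(
        #version 330 core
        layout(location = 0) out vec4 fragColor;  // normal shaded output
        layout(location = 1) out uint primId;     // written to the R32UI attachment
        void main() {
            fragColor = vec4(1.0);                // placeholder shading
            primId = uint(gl_PrimitiveID);        // triangle index within the draw call
        }
    )";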
Draw each triangle in a different colour. R8G8B8 offers about 16.7 million possible colours, so you can index that many triangles with it. You don't have to draw to an on-screen buffer: you can render the picture as usual to one target and render the triangle indices to a second, off-screen target.
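Packing and unpacking the index is then just bit shifting; a small assumed helper pair:

    // Pack a triangle index into an RGB8 color and recover it when reading
    // the off-screen buffer back.
    inline void idToRgb(unsigned id, unsigned char rgb[3]) {
        rgb[0] = (id >> 16) & 0xFF;
        rgb[1] = (id >>  8) & 0xFF;
        rgb[2] =  id        & 0xFF;
    }
    inline unsigned rgbToId(const unsigned char rgb[3]) {
        return (unsigned(rgb[0]) << 16) | (unsigned(rgb[1]) << 8) | rgb[2];
    }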