OpenGL: defining colors using shaders - C++

I am learning OpenGL. At the moment I know how to define primitives using VBOs.
I implemented a simple Mesh class and derived some primitives from it, like Square.
Now I want to learn a good way to define colors. I am thinking about using shaders. My idea is to get something like this:
class ColorShader {
public:
    static GLuint red = LoadShaders( "SimpleVertexShader.vertexshader", "red.fragmentshader" );
};
But I am not sure whether that is a good way to do it. I think the plus of this method is that I will use 30-50% less memory per triangle, but the minus is that I will need to prepare more fragment shaders.
Per-vertex color gives me more power to define objects, but it consumes more memory, and I don't like the idea of setting colors and vertices in the same place.

If I understand you correctly, you want to set the color per object instead of per vertex? I would go with sending a float4 value to the fragment shader for every object in a constant buffer (at least that is what it is called in DirectX). It is really simple and the most basic thing, really.
Edit: http://www.opengl.org/wiki/Uniform_Buffer_Object They are called uniform buffers in OpenGL.
The basic idea is that you declare a uniform variable in your shader, query its location in your C++ code, and upload 4 floats to it every time you draw an object (or set it once and the same color will be used for every object).
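A minimal sketch of that approach (the program handle and the uColor name are just placeholders):

// Fragment shader (GLSL):
//   uniform vec4 uColor;
//   out vec4 fragColor;
//   void main() { fragColor = uColor; }

// C++ side, before drawing each object:
glUseProgram(program);                                    // uniforms apply to the bound program
GLint colorLoc = glGetUniformLocation(program, "uColor"); // look the uniform up by name
glUniform4f(colorLoc, 1.0f, 0.0f, 0.0f, 1.0f);            // e.g. red for this object
// ...then issue the object's draw call as usual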

Related

Design for good relation between shader program and VAO

I am learning OpenGL and trying to get a grasp of best practices. I am working on a simple demonstration project in C++ which, however, aims to be a bit more generic and better structured (not everything just put in main()) than most of the tutorials I have seen on the web. I want to use the modern OpenGL way, which means VAOs and shaders. My biggest concern is about the relation between VAOs and shader programs. Maybe I am missing something here.
I am now thinking about the best design. Consider the following scenario:
there is a scene which contains multiple objects
each object has its individual size, position and rotation (i.e. transform matrix)
each object has a certain basic shape (e.g. box, ball), there can be multiple objects of the same shape
there can be multiple shader programs (e.g. one with plain interpolated RGBA colors, another with textures)
This leads me to the three basic components of my design:
ShaderProgram class - each instance contains a vertex shader and fragment shader (initialized from given strings)
Object class - has transform matrix and reference to a shape instance
Shape base class - and derived classes e.g. BoxShape, SphereShape; each derived class knows how to generate its mesh, turn it into a buffer and map it to vertex attributes, in other words it will initialize its own VAO; it also knows which glDraw... function(s) to use to render itself
When a scene is being rendered, I will call glUseProgram(rgbaShaderProgram). Then I will go through all objects which can be rendered using this program and render them. Then I will switch to glUseProgram(textureShaderProgram) and go through all textured objects.
When rendering an individual object:
1) I will call glUniformMatrix4fv() to set the individual transformation matrix (of course including the projection matrix etc.)
2) then I will ask the shape with which the object is associated to render itself
3) when the shape renders, it will bind its VAO, call its specific glDraw...() function and then unbind the VAO
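In code, I imagine the loop roughly like this (glm and the names here are just illustrative):

glUseProgram(rgbaShaderProgram);
for (Object* obj : objectsUsingRgbaProgram) {
    // 1) upload the object's combined transform (projection * view * model)
    glm::mat4 mvp = projection * view * obj->transform();
    glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, &mvp[0][0]);
    // 2) + 3) the shape binds its VAO, issues its glDraw...() call, unbinds
    obj->shape()->render();
}
// ...then the same loop again with glUseProgram(textureShaderProgram)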
In my design I wanted to decouple Shape from ShaderProgram, as in theory they should be interchangeable. But some dependency still seems to be there. When generating vertices in a specific ...Shape class and setting up buffers for them, I already need to know that, for example, I need to generate texture coordinates rather than RGBA components for each vertex. And when setting vertex attribute pointers with glVertexAttribPointer, I already must know that the shader program will use, for example, floats rather than integers (otherwise I would have to call glVertexAttribIPointer). I also need to know which attribute will be at which location in the shader program. In other words, I am mixing the responsibility for pure shape geometry with prior knowledge about how it will be rendered. And as a consequence I cannot render a shape with a shader program that is not compatible with it.
So finally my question: how can I improve my design to achieve the goal (render the scene) and at the same time keep the versatility (interchangeability of shaders and shapes), force correct usage (not allow mixing wrong shapes with incompatible shaders), have the best performance possible (avoid unnecessary program or context switching) and maintain good design principles (one class, one responsibility)?
What I would do is prepare ShaderProgram templates, which I then adapt to the VAO attributes.
After all, shader programs are initially just text. What you might do is write the main functions for your vertex and fragment programs, and attach a "header" to them depending on the bound VAO. It is useful to use standardized names for the variables you use inside the shaders, such as InPos, InNor, InTan, InTex.
That way, you can scan for elements that are missing in the VAO but used inside the shader, and simply append them in the header as const values with some default setting.
I do this via a ShaderManager with a RequestShader(template, VAO) kind of function, which adapts the template to the VAO and caches the compiled shader for later use.
If another VAO has the same attributes and requires the same template, the precompiled cached version is returned, avoiding a repeat of the adaptation process.
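A minimal sketch of how I mean that, with hypothetical ShaderTemplate/VaoDescription types and a compile() helper standing in for the real compile-and-link code:

#include <map>
#include <set>
#include <string>
#include <vector>

struct ShaderTemplate {
    std::string name;
    std::string body;                    // the hand-written main part
    std::vector<std::string> usedInputs; // e.g. {"InPos", "InNor", "InTex"}
};

struct VaoDescription {
    std::set<std::string> attributes;    // attributes the VAO actually provides
};

GLuint compile(const std::string& source); // hypothetical: compile + link, returns program

class ShaderManager {
    std::map<std::string, GLuint> cache;   // key: template name + attribute set
public:
    GLuint RequestShader(const ShaderTemplate& tmpl, const VaoDescription& vao) {
        std::string key = tmpl.name;
        for (const auto& attr : vao.attributes) key += ";" + attr;
        auto it = cache.find(key);
        if (it != cache.end()) return it->second;  // reuse the precompiled version

        std::string header;
        for (const auto& input : tmpl.usedInputs) {
            if (vao.attributes.count(input))
                header += "in vec4 " + input + ";\n"; // fed by the VAO
            else                                      // missing in the VAO: default it
                header += "const vec4 " + input + " = vec4(0.0, 0.0, 0.0, 1.0);\n";
        }
        GLuint program = compile(header + tmpl.body);
        cache[key] = program;
        return program;
    }
};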

Is it possible to check the drawing mode in a shader and change some vertex attributes accordingly?

I'm working on an OpenGL project where I have to draw a colored grid block as well as (white/black) lines bordering each cell in the block.
The vertex locations composing the cells are the same as the ones used for the lines (borders); however, they will be colored when used to draw the faces (triangles), and have a static color when used to draw the lines.
So my question is: is there a way to know the drawing mode inside the shader and assign the static color if GL_LINES was used, and otherwise use the color in the VBO?
EDIT: a 2nd question just popped into my head: if I use the same vertices to draw both the triangles and the lines, will the lines be obscured by the faces or is it the other way around?
Vertex shaders have no way of knowing which primitive type they are being used for.
The general ways of solving this are:
Change programs between drawing static and dynamic colors. Have a different program based on whether the color comes from an input array or a uniform.
Uber-shader style. Have a uniform specify whether to use a static or dynamic color. This, for example, is totally legal:
uniform bool is_color_static;
in vec4 dyn_color;
uniform vec4 static_color;

void use_color(in vec4 color) { ... }

void main()
{
    if (is_color_static)
        use_color(static_color);
    else
        use_color(dyn_color);
}
This is typically used in high-end games to avoid having to swap shaders a lot. For your use case it's probably excessive, but it does have the advantage that you have fewer shader files to worry about and bug-fix.
Use unarrayed attributes. When you call glDisableVertexAttribArray for an attribute and your shader reads that value anyway, the value the shader gets comes from a piece of global state, which can be set with the glVertexAttrib functions. The performance characteristics of this are unknown, as little code uses it. It's also likely to be buggy.
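A sketch of this last option, assuming the color attribute sits at location 1 and separate index buffers exist for the faces and the lines:

// Faces: per-vertex colors streamed from the VBO as usual.
glEnableVertexAttribArray(1);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, faceIndexBuffer);
glDrawElements(GL_TRIANGLES, faceIndexCount, GL_UNSIGNED_INT, 0);

// Lines: disable the array; the shader now reads the single "current"
// value set by glVertexAttrib4f for every vertex.
glDisableVertexAttribArray(1);
glVertexAttrib4f(1, 0.0f, 0.0f, 0.0f, 1.0f); // static black border color
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, lineIndexBuffer);
glDrawElements(GL_LINES, lineIndexCount, GL_UNSIGNED_INT, 0);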

OpenGL and OOP programming

I want to structure my OpenGL program in classes. At first I think I'll have Mesh, Shader, Vertex and Face (indices) classes. Shader is a class for handling shader operations like create, compile, link, bind, unbind. Mesh contains a shader, a list of Vertex and a list of Face; it will create the buffers and draw the object to the screen using the attached shader. Hence all objects (like Box, Sphere, Monkey, TeaPot, ...) will inherit from the Mesh class (or even use the Mesh class directly). The problem is in the draw method of Mesh: each shader has different attribute and uniform variables, and I can't find a way to draw an object with a dynamic shader. Is my OOP model not good, or are there any ways to do this?
I just want my program will like below:
Vertex vertices[] = {....};
Face faces[] = {...}; // Indices
Shader color_shader = ColorShader::getOne();
Mesh box;
box.pushVertices(vertices);
box.pushFaces(faces);
box.shader = color_shader;
box.buildObject();
In the render method, something like this:
box.draw(MVP);
Can anyone suggest a way to draw objects with a dynamic shader, or some other way to achieve this?
Assembled from comments:
Conventionally, a mesh is just geometry. A material is a shader (more generally, an effect) and its parameters. And something like a 'sceneobject' is a virtual node entity that contains a reference to a mesh (or other object type) and materials. I highly doubt a mesh should draw itself - this approach lacks flexibility and doesn't make much sense.
I think you should use something like a 'renderer' or 'render_queue' or whatever you call it, that takes objects to draw (bundles of meshes and materials) and draws them at some point. It will even allow you to sort the rendering queue to minimise state switches (switching shaders/shader parameters/textures has a performance cost).
Attributes come from the mesh (it is probably easiest to use some standard names like 'position', 'normal', 'texcoord0', ...). As for uniforms, there's more than one way. E.g. you could have a material file with contents like:
use_effect phong_single_texture;
texture t0 = "path_to_texture";
vec3 diffuse_colour = { 1, 0, 0 };
Then you load this file into a material structure, link it with the effect (what you called a 'shader'), textures, etc. Then the renderer pulls this all together and sets up attributes and uniforms by name.
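A sketch of how the renderer side might pull this together (the Material fields and the GLM usage here are assumptions, not a fixed API):

#include <map>
#include <string>
#include <glm/glm.hpp>

struct Material {
    GLuint program;                              // the linked effect
    std::map<std::string, glm::vec3> vec3Params; // e.g. "diffuse_colour" -> {1,0,0}
    std::map<std::string, GLuint>    textures;   // e.g. "t0" -> texture handle
};

void applyMaterial(const Material& m) {
    glUseProgram(m.program);
    for (const auto& [name, value] : m.vec3Params) {
        GLint loc = glGetUniformLocation(m.program, name.c_str());
        if (loc != -1) glUniform3fv(loc, 1, &value[0]); // uniform looked up by name
    }
    int unit = 0;
    for (const auto& [name, tex] : m.textures) {
        glActiveTexture(GL_TEXTURE0 + unit);
        glBindTexture(GL_TEXTURE_2D, tex);
        GLint loc = glGetUniformLocation(m.program, name.c_str());
        if (loc != -1) glUniform1i(loc, unit);          // sampler reads this unit
        ++unit;
    }
}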
Side note:
OOP everywhere isn't very good if you aim for high performance, but it is good for educational purposes. Whenever your experience is good enough, google something like 'data-oriented design', but it isn't an easy topic.

Using multiple texture atlases in GLSL when using gl.DrawElements

So I am making a fairly complex 2d game using modern OpenGL. Right now I am passing a VBO with model matrix, texture coords, etc. for all of my sprites (it’s an “Entity”-based game, so everything is essentially a sprite) to the shader and using glDrawElements to draw them. Everything is working fine: I can draw many thousands of transformed sprites, and I have a camera system working with zoom etc. However, I am using a single sampler2D uniform with my texture atlas. The problem is that I want to be able to use multiple texture atlases. How do other games/engines handle this?
The only thing I can think of is something like this:
I know I can have up to GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS textures bound, but in order to draw them I need to set a sampler2D uniform for each texture and pass them to the shader. Then, in the shader, I need to know which sampler2D to use when I draw; I would somehow need to figure this out from the data I have in my VBO (I assume I would pass a texture id or something for each vertex). Is this approach even possible using glDrawElements? Is there a better/saner way to do this? I realize that I could sort my sprites by texture atlas and use multiple glDrawElements calls, but the problem is that I need the sprites to be in a specific order for layering.
If this isn’t very clear please let me know so I can try and reword it. I am new to OpenGL so it’s hard for me to explain what I am trying to do when I don’t even know what I’m doing.
bind atlas 1
draw objects that use atlas 1
bind atlas 2
draw objects that use atlas 2
...
would be the simple way. Other methods tend to be over-complicated for a 2D game, where performance isn't that important. When working with VBOs, you just need an index buffer for every atlas you use.
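In code that could look roughly like this (the Atlas struct and the shared sprite VAO are assumptions):

struct Atlas {
    GLuint  texture;     // the atlas texture
    GLuint  indexBuffer; // EBO holding only the sprites that use this atlas
    GLsizei indexCount;
};

// All sprites share one VAO/VBO; only the index buffer and texture change.
glBindVertexArray(spriteVao);
for (const Atlas& atlas : atlases) {
    glBindTexture(GL_TEXTURE_2D, atlas.texture);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, atlas.indexBuffer);
    glDrawElements(GL_TRIANGLES, atlas.indexCount, GL_UNSIGNED_INT, 0);
}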

Proper way to manage shaders in OpenGL

I'm writing code in C++ to deal with the usual 3D stuff - models, materials, lights, etc. However, I'm finding that for anything to work, it needs to know about the shader. E.g., to set uniform variables for a material, you need to know their handles from the shader; to load a mesh into memory, you need to know the locations of the different in variables.
I've ended up having models, materials, etc. each hold a handle to the shader so they can do things like glUniform1f(shader->getKdLocation(), kd), but this kind of hot potato seems like bad design to me. I've seen tutorials where uniforms and ins and outs are hardcoded in the shader (e.g. layout(location = 0)) and then just bound with glUniform1f(0, kd). However, this means the rest of my code will only function with my specially laid out shaders, which seems like a suboptimal solution. Also, I don't think you can do this for subroutines, which makes it an inconsistent option.
It seems like a choice between everything getting a shader reference, and in some cases being unable even to be properly instantiated without one (e.g. meshes), OR hardcoding numbers in more places and having to deal with the problems that follow from hardcoding.
I should be able to have these models, lights, etc. live and operate independently and only "work" when I set a shader for the scene, but I can't seem to find a solid way to do this. My bottom-line question is: what is the best practice for handling shaders? I'm okay if it doesn't solve all my problems; I just want to know what a good design decision is and why.
The problem is your engine architecture, or rather the lack of one. In many game engines, game objects are divided into several categories, such as Mesh objects which take care of geometry (vertex buffers etc.) and Material objects (responsible for the appearance of the mesh object). In such a design the material is usually the entity which holds the info for the uniforms that are passed to the shaders during rendering. Such a design allows a good amount of flexibility, as you can reuse and reassign different materials between different renderable objects. So for a start you can tie a specific shader program to a specific material type, so that each time a mesh is drawn, its material interacts with the shader program, passing in all the needed uniforms.
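A stripped-down sketch of that separation (all the names here are illustrative, not a real engine's API):

#include <map>
#include <string>

struct Mesh {                 // geometry only: knows nothing about shading
    GLuint  vao;
    GLsizei indexCount;
};

struct Material {             // appearance: a program plus its uniform values
    GLuint program;
    std::map<std::string, float> floatParams; // e.g. "Kd" -> 0.8f
};

struct RenderObject {         // ties the two together for the renderer
    Mesh*     mesh;
    Material* material;
};

void drawObject(const RenderObject& obj) {
    glUseProgram(obj.material->program);
    for (const auto& [name, v] : obj.material->floatParams) {
        GLint loc = glGetUniformLocation(obj.material->program, name.c_str());
        if (loc != -1) glUniform1f(loc, v); // e.g. the kd value from the question
    }
    glBindVertexArray(obj.mesh->vao);
    glDrawElements(GL_TRIANGLES, obj.mesh->indexCount, GL_UNSIGNED_INT, 0);
}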
I would suggest you take a look at an open source OpenGL engine to get an idea of how it works.
I had the same concern regarding the loose association between the uniform specification on the client and the actual uniform location as laid out in the shader.
One tack you can take is to use glGetUniformLocation to determine the location of a uniform variable based on its name. It may still not be perfect, but it's probably the best you can do.
Example:
// Old way:
GLfloat myfloat = ...;
glUniform1f(0, myfloat); // Hope that myfloat was supposed to go at location 0...
vs
// Slightly better...
GLfloat myfloat = ...;
GLint location = glGetUniformLocation(myprogram, "myfloat");
glUniform1f(location, myfloat); // We know that a uniform with name "myfloat" goes at this location
Some extra work with glGetActiveUniform would also allow you to make sure that "myfloat" has the type/size/etc. that you expected.
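For example, a sketch that enumerates the active uniforms and verifies the one named "myfloat" (reusing myprogram from above):

GLint uniformCount = 0;
glGetProgramiv(myprogram, GL_ACTIVE_UNIFORMS, &uniformCount);
for (GLint i = 0; i < uniformCount; ++i) {
    GLchar  name[256];
    GLsizei length = 0;
    GLint   size   = 0; // array size; 1 for non-array uniforms
    GLenum  type   = 0; // e.g. GL_FLOAT, GL_FLOAT_VEC4, GL_SAMPLER_2D, ...
    glGetActiveUniform(myprogram, i, sizeof(name), &length, &size, &type, name);
    if (std::string(name) == "myfloat" && (type != GL_FLOAT || size != 1)) {
        // "myfloat" exists but is not the single float we expected -- report it
    }
}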