I am trying to attach different shaders to different programs inside my OpenGL application so as to render different objects. More specifically, I have a planet object (obj) that I want to render as two different planets: one with bump mapping, the other with multiple textures.
I came up with the following framework but it didn't work.
void setUp(){
planet_1_programID = createProgramAndAttachShader(parameters...)
planet_2_programID = createProgramAndAttachShader(parameters...)
}
void sendDataToOpenGL(){
create VAO and bind the VAO
load data into a VBO
send the data to OpenGL
}
void paintGL(void){
glm::mat4 lookatMatrix = ...
glm::mat4 projectionMatrix = ...
glm::mat4 transformationMatrix = ...
// planet 1
glUseProgram(planet_1_programID)
bind to the previous created VAO
bind texture information
some model transformation
glDrawArrays();
// planet 2
glUseProgram(planet_2_programID)
bind to the previous created VAO
bind texture information
some model transformation
glDrawArrays();
}
The rendering result is that only one planet shows up in the final scene. I am wondering if I am misunderstanding the function glUseProgram. Can anyone give me some hints?
[update] Is it because I need to define all the variables all over again after glUseProgram? Currently I define most of the variables before glUseProgram because these two planets will have basically the same variables.
Is it because I need to define all the variables all over again after glUseProgram? Currently I define most of the variables before glUseProgram because these two planets will have basically the same variables.
If you're referring to "uniforms", then something like that is indeed your problem. Uniform values are stored per program object, so you have to set the uniform values at least once for each program. It's not necessary to do it after each and every glUseProgram call, but every program needs its uniforms set at least once (and again whenever their values should change).
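For illustration, a minimal sketch of what that could look like in paintGL(), assuming GLM and a mat4 uniform called "mvpMatrix" here (the uniform name and matrix variables are placeholders; in practice you would query and cache the locations once after linking rather than every frame):
// each program object keeps its own copy of "mvpMatrix", so upload it to both
glUseProgram(planet_1_programID);
GLint loc1 = glGetUniformLocation(planet_1_programID, "mvpMatrix"); // hypothetical name
glUniformMatrix4fv(loc1, 1, GL_FALSE, glm::value_ptr(mvpPlanet1));
// ... bind VAO, bind textures, glDrawArrays() for planet 1 ...

glUseProgram(planet_2_programID);
GLint loc2 = glGetUniformLocation(planet_2_programID, "mvpMatrix"); // same name, separate program state
glUniformMatrix4fv(loc2, 1, GL_FALSE, glm::value_ptr(mvpPlanet2));
// ... bind VAO, bind textures, glDrawArrays() for planet 2 ...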
Related
I have a program with two textures: one from a video, and one from an image.
For the image texture, do I have to pass it to the program at each rendering pass, or can I do it just once? I.e. can I do
glActiveTexture(GLenum(GL_TEXTURE1))
glBindTexture(GLenum(GL_TEXTURE_2D), texture.id)
glUniform1i(textureLocation, 1)
just once? I believed so, but in my experiments this works fine if there is no video texture involved; as soon as I add the video texture, which I re-attach at every rendering pass (since it's changing), the only way to get the image texture to show is to run the above code every frame as well.
Let's dissect what you're doing, including some unnecessary stuff, and what the GL does.
First of all, none of the C-style casts you're doing in your code are necessary. Just use GL_TEXTURE_2D and so on instead of GLenum(GL_TEXTURE_2D).
glActiveTexture(GL_TEXTURE0 + i), where i is in the range [0, GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS - 1], selects the currently active texture unit. Commands that alter texture unit state will affect unit i as long as you don't call glActiveTexture with another valid unit identifier.
As soon as you call glBindTexture(target, name) with texture unit i currently active, the state of that texture unit is changed to refer to name for the specified target when sampling it with the appropriate sampler in a shader (i.e. name might be bound to TEXTURE_2D and the corresponding sampler would have to be a sampler2D). You can only bind one texture object to a specific target for the currently active texture unit, so if you need to sample two 2D textures in your shader, you need two texture units.
From the above, it should be obvious what glUniform1i(samplerLocation, i) does.
So, if you have two 2D textures you need to sample in a shader, you need two texture units and two samplers, each referring to one specific unit:
GLuint regularTextureName = 0;
GLuint videoTextureName = 0;

GLint regularTextureSamplerLocation = ...;
GLint videoTextureSamplerLocation = ...;

GLint regularTextureUnit = 0;
GLint videoTextureUnit = 1;

// setup texture objects and shaders ...

// make the successfully linked shader program current and query
// the locations, or better yet, assign locations explicitly in
// the shader (see below) ...

glActiveTexture(GL_TEXTURE0 + regularTextureUnit);
glBindTexture(GL_TEXTURE_2D, regularTextureName);
glUniform1i(regularTextureSamplerLocation, regularTextureUnit);

glActiveTexture(GL_TEXTURE0 + videoTextureUnit);
glBindTexture(GL_TEXTURE_2D, videoTextureName);
glUniform1i(videoTextureSamplerLocation, videoTextureUnit);
Your fragment shader, where I assume you'll be doing the sampling, would have to have the corresponding samplers:
layout(binding = 0) uniform sampler2D regularTextureSampler;
layout(binding = 1) uniform sampler2D videoTextureSampler;
And that's it. If both texture objects bound to the above units are set up correctly, it doesn't matter if the contents of a texture change dynamically before each fragment shader invocation - there are numerous scenarios where this is commonplace, e.g. deferred rendering or any other render-to-texture algorithm, so you're not exactly breaking new ground with a video texture.
As to the question on how often you need to do this: you need to do it when you need to do it - don't change state that doesn't need changing. If you never change the bindings of the corresponding texture unit, you don't need to rebind the texture at all. Set them up once correctly and leave them alone.
The same goes for the sampler bindings: if you don't sample other texture objects with your shader, you don't need to change the shader program state at all. Set it up once and leave it alone.
In short: don't change state if you don't have to.
EDIT: I'm not quite sure if this is your case or not, but if you're using the same shader with one sampler for both textures in separate draw calls, you do have to change something, but it's as simple as pointing the sampler at another texture unit:
// same texture unit setup as before
// shader program is current
while (rendering)
{
    glUniform1i(samplerLocation, regularTextureUnit);
    // draw call sampling the regular texture

    glUniform1i(samplerLocation, videoTextureUnit);
    // draw call sampling the video texture
}
You should bind the texture before every draw call; you only need to set the sampler location once. You can also use layout(binding = 1) in your shader code for that. The sampler uniform stays with the program, but the texture binding is global GL state. Also be careful with glActiveTexture: it is global GL state as well.
Good practice would be:
On program creation, once, set texture location (uniform)
On draw: SetActive(i), Bind(texture), Draw, then Bind(0) and SetActive(0) to restore defaults (sketched below)
Then optimize later for redundant calls.
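For illustration, a rough sketch of that per-draw sequence, assuming the sampler uniform was already set once at program creation (the names "unit", "tex" and "vertexCount" are placeholders):
glActiveTexture(GL_TEXTURE0 + unit);        // select the unit the sampler refers to
glBindTexture(GL_TEXTURE_2D, tex);          // bind this draw's texture to that unit
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
glBindTexture(GL_TEXTURE_2D, 0);            // unbind to leave clean state
glActiveTexture(GL_TEXTURE0);               // restore unit 0 as the active unit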
For example, if I have a quad (for textures) or even a mesh that I wish to draw multiple times, but the model matrix (position, rotation, scale, etc.) is different for each instance each frame, is it better to have a VBO for a single instance of the quad/mesh and then, per instance, update the shader's uniforms (such as the model matrix) and call glDrawArrays()?
I am thinking it would work like this:
init() {
create VBO of a quad/mesh;
}
loop {
create view matrix based on camera position and send to the shader;
for each object {
create model matrix based on position, rotation, scale;
send the model matrix to the shader using glUniformMatrix4fv();
draw the object using glDrawArrays();
}
}
Basically this is for a game where the objects move around every frame, so I would like to make sure I'm getting this right from the get-go before I start writing a huge wrapper for it.
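For what it's worth, a minimal sketch of the loop described above, assuming GLM, a linked program with "view" and "model" mat4 uniforms, and a simple Object struct (all names here are placeholders, not a definitive implementation):
glUseProgram(program);

// view matrix once per frame
glm::mat4 view = glm::lookAt(cameraPos, cameraTarget, glm::vec3(0.0f, 1.0f, 0.0f));
glUniformMatrix4fv(glGetUniformLocation(program, "view"), 1, GL_FALSE, glm::value_ptr(view));

glBindVertexArray(quadVAO);
for (const Object& obj : objects)
{
    // model matrix per object: translate * rotate * scale
    glm::mat4 model = glm::translate(glm::mat4(1.0f), obj.position)
                    * glm::rotate(glm::mat4(1.0f), obj.angle, obj.axis)
                    * glm::scale(glm::mat4(1.0f), obj.scale);
    glUniformMatrix4fv(glGetUniformLocation(program, "model"), 1, GL_FALSE, glm::value_ptr(model));
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
Caching the uniform locations once after linking avoids the per-draw glGetUniformLocation calls.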
I am writing an OpenGL3+ application and have some confusion about the use of VAOs. Right now I just have one VAO, a normalised quad set around the origin. This single VAO contains 3 VBOs; one for positions, one for surface normals, and one GL_ELEMENT_ARRAY_BUFFER for indexing (so I can store just 4 vertices, rather than 6).
I have set up some helper methods to draw objects to the scene, such as drawCube(), which takes position and rotation values and follows this procedure:
Bind the quad VAO.
Per cube face:
Create a model matrix that represents this face.
Upload the model matrix to the uniform mat4 model vertex shader variable.
Call glDrawElements() to draw the quad into the position for this face.
I have just set about the task of adding per-cube colors and realised that I can't add my color VBO to the single VAO as it will change with each cube, and this doesn't feel right.
I have just read the question OpenGL VAO best practices, which tells me that my approach is wrong and that I should use more VAOs to save the work of setting the whole scene up every time.
How many VAOs should be used? Clearly my approach of having 1 is not optimal, should there be a VAO for every static surface in the scene? What about ones that move?
I am writing to a uniform variable for each vertex, is that correct? I read that uniform shader variables should not change mid-frame, if I am able to write different values to my uniform variable, how do uniforms differ from simple in variables in a vertex shader?
Clearly my approach of having 1 is not optimal, should there be a VAO for every static surface in the scene?
Absolutely not. Switching VAOs is costly. If you allocate one VAO per object in your scene, you need to switch the VAO before rendering each such object. Scale that up to a few hundred or thousand objects currently visible and you get just as many VAO changes. The question is: if you have multiple objects which share a common memory layout, i.e. the sizes/types/normalization/strides of their elements are the same, why would you want to define multiple VAOs that all store the same information? You control the offset where you want to start pulling vertex attributes from directly with the corresponding draw call.
For non-indexed geometry this is trivial, since you provide a first (or an array of offsets in the multi-draw case) argument to gl[Multi]DrawArrays*() which defines the offset into the associated ARRAY_BUFFER's data store.
For indexed geometry, and if you store indices for multiple objects in a single ELEMENT_ARRAY_BUFFER, you can use gl[Multi]DrawElementsBaseVertex to provide a constant offset for indices or manually offset your indices by adding a constant offset before uploading them to the buffer object.
Being able to provide offsets into a buffer store also implies that you can store multiple distinct objects in a single ARRAY_BUFFER and corresponding indices in a single ELEMENT_ARRAY_BUFFER. However, how large buffer objects should be depends on your hardware and vendors differ in their recommendations.
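As a hedged example of that last point, two meshes packed into one ARRAY_BUFFER/ELEMENT_ARRAY_BUFFER pair behind a single VAO could be drawn like this (the "meshA"/"meshB" counts and offsets are placeholders for whatever your loader records):
glBindVertexArray(sharedVAO);

// Mesh A: its indices start at byte offset 0 and its vertices start at vertex 0.
glDrawElementsBaseVertex(GL_TRIANGLES, meshA.indexCount, GL_UNSIGNED_SHORT,
                         (void*)0, 0);

// Mesh B: its indices follow A's in the ELEMENT_ARRAY_BUFFER and its vertices
// follow A's in the ARRAY_BUFFER, so offset both accordingly.
glDrawElementsBaseVertex(GL_TRIANGLES, meshB.indexCount, GL_UNSIGNED_SHORT,
                         (void*)(meshA.indexCount * sizeof(GLushort)),
                         meshA.vertexCount);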
I am writing to a uniform variable for each vertex, is that correct? I read that uniform shader variables should not change mid-frame, if I am able to write different values to my uniform variable, how do uniforms differ from simple in variables in a vertex shader?
First of all, uniforms and shader input/output variables declared as in/out differ in several respects:
input/output variables define an interface between shader stages, i.e. output variables in one shader stage are backed by a corresponding and equally named input variable in the following stage. A uniform is available in all stages if declared with the same name and is constant until changed by the application.
input variables inside a vertex shader are filled from an ARRAY_BUFFER. Uniforms inside a uniform block are backed by a UNIFORM_BUFFER.
input variables can also be written directly using the glVertexAttrib*() family of functions; single uniforms are written using the glUniform*() family of functions.
the values of uniforms are program state; the values of input variables are not.
The semantic difference should also be obvious: uniforms, as their name suggests, are usually constant among a set of primitives, whereas input variables usually change per vertex or fragment (due to interpolation).
EDIT: To clarify, and to factor in Nicol Bolas' remark: uniforms cannot be changed by the application for a set of vertices submitted by a single draw call, and neither can vertex attributes set with glVertexAttrib*(). Vertex shader inputs backed by a buffer object will change either once per vertex or at a specific rate set by glVertexAttribDivisor.
EDIT2: To clarify how a VAO can theoretically store multiple layouts, you can simply define multiple arrays with different indices but equal semantics. For instance,
glVertexAttribPointer(0, 4, ....);
and
glVertexAttribPointer(1, 3, ....);
could define two arrays at indices 0 and 1, with component sizes 4 and 3, that both refer to position attributes of vertices. However, depending on what you want to render, you can bind a hypothetical vertex shader input
// if you have GL_ARB_explicit_attrib_location or GL3.3 available, use explicit
// locations
/*layout(location = 0)*/ in vec4 Position;
or
/*layout(location = 1)*/ in vec3 Position;
to either index 0 or 1, explicitly or with glBindAttribLocation(), and still use the same VAO. AFAIK the spec says nothing about what happens if an attribute array is enabled but not sourced by the current shader, but I suspect implementations simply ignore the attribute in that case.
Whether you source the data for those attributes from the same or from different buffer objects is another question, but of course it's possible.
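For illustration, a sketch of such a VAO with both layouts set up, sourcing each array from its own buffer (all buffer and program names here are hypothetical; glBindAttribLocation must be called before linking the respective program):
glBindVertexArray(vao);

// array 0: 4-component positions
glBindBuffer(GL_ARRAY_BUFFER, vboVec4Positions);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(0);

// array 1: 3-component positions
glBindBuffer(GL_ARRAY_BUFFER, vboVec3Positions);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(1);

// without explicit layout qualifiers, tie each shader's "Position" input
// to the index that matches the layout it expects (before linking):
glBindAttribLocation(programVec4, 0, "Position");
glBindAttribLocation(programVec3, 1, "Position");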
Personally I tend to use one VBO and VAO per layout, i.e. if my data is made up of an equal number of attributes with the same properties, I put them into a single VBO and a single VAO.
In general: You can experiment with this stuff a lot. Do it!
I am coding a Wavefront (.obj) loader with VBOs.
When "usemtl" is called, I am thinking about sending textureID together with vertex, texCoord and normal data.
With that texture ID can I bind the texture inside vertex/fragment shader without calling glBindTexture?
With that texture ID can I bind the texture inside vertex/fragment shader without calling glBindTexture?
No. Textures are not bound to shaders; they're bound to the context.
If you want to get technical, NV_bindless_texture allows such functionality, but that's NVIDIA-specific.
That is a problem with materials in general: they need to be switched before rendering the geometry.
The simplest way is:
foreach object in renderQueue
set_material()
draw_geometry()
And of course we get into trouble when one object has to be rendered with two different materials. Another problem is performance: you would usually sort objects by material and save on the switching (of textures, shaders and other data).
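A hedged sketch of that sorting idea, with RenderItem, applyMaterial() and drawGeometry() as hypothetical placeholders:
// sort the queue so items sharing a material are adjacent
std::sort(renderQueue.begin(), renderQueue.end(),
          [](const RenderItem& a, const RenderItem& b) { return a.materialID < b.materialID; });

GLuint currentMaterial = 0;                 // 0 = nothing applied yet
for (const RenderItem& item : renderQueue)
{
    if (item.materialID != currentMaterial)
    {
        applyMaterial(item.materialID);     // bind program, textures, material uniforms
        currentMaterial = item.materialID;
    }
    drawGeometry(item);                     // issue the draw call(s) for this object
}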
I'm trying to write a simple maze game without using any deprecated OpenGL API (i.e. no immediate mode). I'm using one Vertex Buffer Object for each tile I have in my maze, which is essentially a combination of four Vertex objects:
class Vertex {
public:
GLfloat x, y, z; // coords
GLfloat tx, ty; // texture coords
Vertex();
};
and are stored in VBOs like this:
void initVBO()
{
Vertex vertices[4];
vertices[0].x = -0.5;
vertices[0].y = -0.5;
vertices[0].z = 0.0;
vertices[0].tx = 0.0;
vertices[0].ty = 1.0;
vertices[1].x = -0.5;
vertices[1].y = 0.5;
vertices[1].z = 0.0;
vertices[1].tx = 0.0;
vertices[1].ty = 0.0;
vertices[2].x = 0.5;
vertices[2].y = 0.5;
vertices[2].z = 0.0;
vertices[2].tx = 1.0;
vertices[2].ty = 0.0;
vertices[3].x = 0.5;
vertices[3].y = -0.5;
vertices[3].z = 0.0;
vertices[3].tx = 1.0;
vertices[3].ty = 1.0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex)*4, &vertices[0].x, GL_STATIC_DRAW);
GLushort indices[4];
indices[0] = 0;
indices[1] = 1;
indices[2] = 2;
indices[3] = 3;
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLushort) * 4, indices, GL_STATIC_DRAW);
}
Now I'm stuck on camera movement. In a previous version of my project, I used glRotatef and glTranslatef to translate and rotate the scene, and then rendered every tile in glBegin()/glEnd() mode. But these two functions are now deprecated, and I didn't find any tutorial about creating a camera in a context using only VBOs. What is the correct way to proceed? Should I loop over every tile, modifying the position of its vertices according to the new camera position?
But these two functions are now deprecated, and I didn't find any tutorial about creating a camera in a context using only VBOs.
VBOs have nothing to do with this.
Immediate mode and the matrix stack are two different pairs of shoes. VBOs deal with getting geometry data to the renderer, the matrix stack deals with getting the transformation there. It's only geometry data that's affected by VBOs.
As for your question: you calculate the matrices yourself and pass them to the shader as uniforms. It's also important to understand that OpenGL's matrix functions never were GPU accelerated (except on one single machine, SGI's Onyx), so they didn't offer any performance gain. Actually, using OpenGL's matrix stack had a negative impact on overall performance, because it carries out redundant operations that have to be done somewhere else in the program as well.
For a simple matrix math library look at my linmath.h http://github.com/datenwolf/linmath.h
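If GLM is more to your taste than linmath.h, an equivalent sketch looks like this ("program", the uniform name "mvp" and the camera/tile variables are placeholders; this replaces the old glTranslatef/glRotatef/gluPerspective calls):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

glm::mat4 projection = glm::perspective(glm::radians(60.0f), width / (float)height, 0.1f, 100.0f);
glm::mat4 view       = glm::lookAt(cameraPos, cameraPos + cameraForward, glm::vec3(0.0f, 1.0f, 0.0f));
glm::mat4 model      = glm::translate(glm::mat4(1.0f), tilePosition);
glm::mat4 mvp        = projection * view * model;

glUseProgram(program);
glUniformMatrix4fv(glGetUniformLocation(program, "mvp"), 1, GL_FALSE, glm::value_ptr(mvp));
// bind the VAO/IBO and call glDrawElements() as before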
I will add to datenwolf's answer. I am assuming that only the shader pipeline is available to you.
Requirements
In OpenGL 4.0+, OpenGL does not do any rendering for you whatsoever, as it has moved away from the fixed-function pipeline. If you are rendering your geometry without a shader right now, you are using the deprecated pipeline. Getting up and running without some base framework will be difficult (not impossible, but I would recommend using one). As a start, I would recommend GLUT (this will create a window for you and has basic callbacks for the idle function and input), GLEW (to set up the rendering context) and GLTools (matrix stack, generic shaders and a shader manager for a quick setup so that you can at least start rendering).
Setup
I will give the important pieces here, which you can then piece together. At this point I am assuming you have GLUT set up properly (search for how to set it up), that you are able to register the update loop with it, and that you can create a window (that is, the loop which calls one of your selected functions [note: this cannot be a method] every frame). Refer to the link above for help on this.
First, initialize GLEW (by calling glewInit()).
Set up your scene. This includes using the GLBatch class (from GLTools) to create a set of vertices to render as triangles, and initializing the GLShaderManager class (also from GLTools) and its stock shaders by calling its InitializeStockShaders() function.
In your idle loop, call the UseStockShader() function of the shader manager to start a new batch, then call the Draw() function on your vertex batch. For a complete overview of GLTools, go here.
Don't forget to clear the window before rendering and to swap the buffers after rendering, by calling glClear() and glutSwapBuffers() respectively.
Note that most of the functions I gave above accept arguments. You should be able to figure those out by looking at the respective library's documentations.
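To make that concrete, here is a rough sketch modelled on the SuperBible/GLTools triangle example (GLUT/GLEW initialization and callback registration are omitted, and exact header names and signatures may differ between GLTools versions):
#include <GLTools.h>
#include <GLShaderManager.h>
#include <GL/glut.h>

GLShaderManager shaderManager;
GLBatch         triangleBatch;

void SetupRC()                               // called once after glewInit()
{
    glClearColor(0.0f, 0.0f, 0.2f, 1.0f);
    shaderManager.InitializeStockShaders();

    GLfloat verts[] = { -0.5f, 0.0f, 0.0f,
                         0.5f, 0.0f, 0.0f,
                         0.0f, 0.5f, 0.0f };
    triangleBatch.Begin(GL_TRIANGLES, 3);
    triangleBatch.CopyVertexData3f(verts);
    triangleBatch.End();
}

void RenderScene()                           // registered as the display callback
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    GLfloat red[] = { 1.0f, 0.0f, 0.0f, 1.0f };
    shaderManager.UseStockShader(GLT_SHADER_IDENTITY, red);
    triangleBatch.Draw();
    glutSwapBuffers();
}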
MVP Matrix (EDIT: Forgot to add this section)
OpenGL renders everything that falls within the -1 to 1 coordinate range, looking down the z-axis. It has no notion of a camera and does not care about anything that falls outside these coordinates. The model-view-projection matrix is what transforms your scene to fit these coordinates.
As a starting point, don't worry about this until you have something rendered on the screen (make sure all the coordinates that you give your vertex batch are less than 1). Once you do, set up your projection matrix (the default projection is orthographic) by using the GLFrustum class in GLTools. You get your projection matrix from this class, and you multiply it with your model-view matrix. The model-view matrix is a combination of the model's transformation matrix and your camera's transformation (remember, there is no camera, so essentially you are moving the scene instead). Once you have multiplied all of them into one matrix, you pass it to the shader using the UseStockShader() function.
Use a stock shader in GLTools (e.g. GLT_SHADER_FLAT) then start creating your own.
Reference
Lastly, I would highly recommend getting this book: OpenGL SuperBible, Comprehensive Tutorial and Reference (Fifth edition - make sure it is this edition)
If you really want to stick with the newest OpenGL API, where lots of features were removed in favor of a programmable pipeline (OpenGL 4 and OpenGL ES 2), you will have to write the vertex and fragment shaders yourself and implement the transformation stuff there. You will have to manually create all the attributes you use in the shader, specifically the coords and texture coords from your example. You will also need two uniform variables, one for the model-view matrix and another for the projection matrix, if you want to mimic the behavior of the old fixed-functionality OpenGL.
The rotations/translations you are used to are matrix operations. During the vertex transformation stage of the pipeline, now performed by the vertex shader you supply, you must multiply a 4x4 transformation matrix by the vertex position (4 coordinates, interpreted as a 4x1 matrix, where the 4th coordinate is usually 1 if you are not doing anything too fancy). The resulting vector will be at the correct relative position according to that transformation. Then you multiply the projection matrix by that vector and output the result to the fragment shader.
You can learn how all those matrices are built by looking at the documentation of glRotate, glTranslate and gluPerspective. Remember that matrix multiplication is non-commutative, so the order in which you multiply them matters (this is exactly why the order in which you call glRotate and glTranslate also matters).
About learning GLSL and how to use shaders, I learned from here, but those tutorials relate to OpenGL 1.4 and 2 and are now very old. The main difference is that the predefined vertex shader input variables, such as gl_Vertex and gl_ModelViewMatrix, no longer exist; you have to create them yourself.
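For illustration only, a minimal vertex shader in that style might look like this (all names are arbitrary; the application has to supply the attribute data and the two matrix uniforms itself):
#version 150

in vec3 position;          // replaces gl_Vertex
in vec2 texCoord;          // replaces gl_MultiTexCoord0

uniform mat4 modelView;    // replaces gl_ModelViewMatrix
uniform mat4 projection;   // replaces gl_ProjectionMatrix

out vec2 fragTexCoord;

void main()
{
    fragTexCoord = texCoord;
    gl_Position  = projection * modelView * vec4(position, 1.0);
}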