OpenGL Trying to render 2 textures, but only 1 does? - c++

I am trying to render 2 textures in OpenGL 3.
I created two arrays of vertices of GLfloat type, generated and bound the buffers, etc.
Note: the texture loading function works fine; I have already loaded a texture before. Now I just need 2 textures rendered at the same time.
Then I load my textures like this:
GLuint grass = texture.loadTexture("grass.bmp");
GLuint grassLoc = glGetUniformLocation(programID, "grassSampler");
glUniform1i(grassLoc, 0);
GLuint crate = texture.loadTexture("crate.bmp");
GLuint crateLoc = glGetUniformLocation(programID, "crateSampler");
glUniform1i(crateLoc, 1);
This is how I draw them:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, grass);
glDrawArrays(GL_TRIANGLES, 0, 6);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, crate);
glDrawArrays(GL_TRIANGLES, 2, 6);
Vertex shader:
#version 330 core
layout(location = 0) in vec3 grassPosition;
layout(location = 1) in vec2 grassUvPosition;
layout(location = 2) in vec3 cratePosition;
layout(location = 3) in vec2 crateUvPosition;
out vec2 grassUV;
out vec2 crateUV;
uniform mat4 MVP;
void main(){
gl_Position = MVP * vec4(grassPosition,1);
gl_Position = MVP * vec4(cratePosition,1);
grassUV = grassUvPosition;
crateUV = crateUvPosition;
}
Fragment shader:
#version 330 core
in vec2 grassUV;
in vec2 crateUV;
out vec3 grassColor;
out vec3 crateColor;
uniform sampler2D grassSampler;
uniform sampler2D crateSampler;
void main(){
crateColor = texture(grassSampler, grassUV).rgb;
grassColor = texture(crateSampler, crateUV).rgb;
}
Can anyone see what I am doing wrong?
EDIT:
I am trying to render 2 different textures on 2 different VAOs

You're kinda doing everything wrong; it's hard to pick out one thing.
Your shaders look like they're trying to take two positions and two texture coordinates, presumably generate two triangles, then sample from two textures and write colors to two different images.
That's not how it works. Unless you use a geometry shader (and please do not take that as an endorsement), your call to glDrawArrays(GL_TRIANGLES, 0, 6); will render exactly 2 triangles, no matter what your VS or FS does.
A vertex has only one position. Writing to gl_Position twice will simply overwrite the previous value, just like writing to any variable twice in C++ would. And the number of triangles to be rendered is defined by the number of vertices. A vertex shader cannot create vertices. It can't even destroy them (though, through gl_CullDistance, it can potentially cull whole primitives).
It is not clear what you mean by "I just need 2 textures rendered at the same time." Or more to the point, what "at the same time" refers to. I don't know what your code ought to be trying to do.
Given the data your vertex shader expects, it looks like you have two separate sets of triangles, with their own positions and texture coordinates. You want to render one set of triangles with one texture, then render another set with a different texture.
So... do that. Instead of having your VAOs send 2 positions and 2 texture coordinates, send just one. Your VS should also take one position/texcoord, and your FS should similarly take a single texture and write to a single output. The difference will be determined by what VAO is currently active and which texture is bound to texture unit 0 at the time you issue the render call.
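As a sketch, the render loop for that approach could look like the following (grassVAO and crateVAO are hypothetical VAO handles, each holding one quad's single position/UV attributes; this illustrates the idea rather than being a drop-in fix):

```cpp
// Both draws use texture unit 0; only the bound VAO and texture change.
glActiveTexture(GL_TEXTURE0);

glBindVertexArray(grassVAO);          // VAO with the grass quad's position/UV
glBindTexture(GL_TEXTURE_2D, grass);  // grass texture on unit 0
glDrawArrays(GL_TRIANGLES, 0, 6);

glBindVertexArray(crateVAO);          // VAO with the crate quad's position/UV
glBindTexture(GL_TEXTURE_2D, crate);  // crate texture on unit 0
glDrawArrays(GL_TRIANGLES, 0, 6);
```

The single sampler uniform stays set to 0 for both draws, so it never needs to change between them.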
If you truly intend to write to different output images, the way your FS suggests, then change FBOs between rendering as well.
If however, your goal is to have the same triangle use two textures with two mappings, writing separate results to two images, you can do that too. The difference is that you only provide a single position, and textures must be bound to both texture units 0 and 1 when you issue your rendering command.
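For that second case, the fragment shader would sample both units at one shared coordinate and write to two color outputs, roughly like this (a sketch; the varying and output names are illustrative, and the second output only matters if an FBO with two color attachments and a matching glDrawBuffers setup is bound):

```glsl
#version 330 core
in vec2 UV;                            // one shared texture coordinate
layout(location = 0) out vec3 color0;  // written to the first draw buffer
layout(location = 1) out vec3 color1;  // written to the second draw buffer
uniform sampler2D grassSampler;        // texture unit 0
uniform sampler2D crateSampler;        // texture unit 1
void main() {
    color0 = texture(grassSampler, UV).rgb;
    color1 = texture(crateSampler, UV).rgb;
}
```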

Related

Fragment Shader seems to fail after combining GL_TEXTURE_BUFFER and GL_TEXTURE_1D

Currently, I'm trying to implement a fragment shader, which mixes colors of different fluid particles by combining the percentage of the fluids' phases inside the particle. So for example, if fluid 1 possesses 15% of the particle and fluid 2 possesses 85%, the resulting color should reflect that proportion. Therefore, I have a buffer texture containing the percentage reflected as a float value in [0,1] per particle and per phase and a texture containing the fluid colors.
The buffer texture currently contains the percentages for each particle in a sequential list. That is, for example:
| Particle 1 percentage 1 | Particle 1 percentage 2 | Particle 2 percentage 1 | Particle 2 percentage 2 | ...
I already tested the correctness of the textures by assigning them to the particles directly or by assigning the volFrac to the red channel of the final color. I also tried several GLSL debuggers to analyze the problem, but none of the popular options worked on my machine.
#version 330
uniform float radius;
uniform mat4 projection_matrix;
uniform uint nFluids;
uniform sampler1D colorSampler;
uniform samplerBuffer volumeFractionSampler;
in block
{
flat vec3 mv_pos;
flat float pIndex;
}
In;
out vec4 out_color;
void main(void)
{
vec3 fluidColor = vec3(0.0, 0.0, 0.0);
for (int fluidModelIndex = 0; fluidModelIndex < int(nFluids); fluidModelIndex++)
{
float volFrac = texelFetch(volumeFractionSampler, int(nFluids * In.pIndex) + fluidModelIndex).x;
vec3 phaseColor = texture(colorSampler, float(fluidModelIndex)/(int(nFluids) - 1)).xyz;
fluidColor = volFrac * phaseColor;
}
out_color = vec4(fluidColor, 1.0);
}
And also a short snippet of the texture initialization
//Texture Initialisation and Manipulation here
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_1D, m_textureMap);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, nFluids, 0, GL_RGB, GL_FLOAT, color_map);
//Creation and Initialisation for Buffer Texture containing the volume Fractions
glBindBuffer(GL_TEXTURE_BUFFER, m_texBuffer);
glBufferData(GL_TEXTURE_BUFFER, nFluids * nParticles * sizeof(float), m_volumeFractions.data(), GL_STATIC_DRAW);
glBindBuffer(GL_TEXTURE_BUFFER, 0);
glBindTexture(GL_TEXTURE_BUFFER, m_bufferTexture);
glTexBuffer(GL_TEXTURE_BUFFER, GL_R32F, m_texBuffer);
The problem now is that if I multiply the information of the buffer texture with the information of the texture, the particles that should be rendered disappear completely, without any warnings or other error messages. So the particles disappear if I use the statement:
fluidColor = volFrac * phaseColor;
Does anybody know, why this is the case or how I can further debug this problem?
Does anybody know, why this is the case
Yes. You seem to use the same texture unit for both colorSampler and volumeFractionSampler which is simply not allowed as per the spec. Quoting from section 7.11 of the OpenGL 4.6 core profile spec:
It is not allowed to have variables of different sampler types pointing to the same texture image unit within a program object. This situation can only be detected at the next rendering command issued which triggers shader invocations, and an INVALID_OPERATION error will then be generated.
So while you can bind different textures to the different targets of texture unit 0 at the same time, each draw call can only use one particular target per texture unit. If you only use one sampler or the other (and the shader compiler will aggressively optimize a sampler out if it doesn't influence the outputs of your shader), you are in a legal use case, but as soon as you use both, it will not work.
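A sketch of the fix, using the handles from the snippet above (program is the hypothetical shader program handle): give each sampler its own texture unit. Sampler uniforms default to 0, which is exactly how both samplers ended up on the same unit.

```cpp
// Put the 1D color map on unit 0 and the buffer texture on unit 1.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_1D, m_textureMap);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_BUFFER, m_bufferTexture);

// Point each sampler uniform at its unit (the program must be in use).
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "colorSampler"), 0);
glUniform1i(glGetUniformLocation(program, "volumeFractionSampler"), 1);
```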

Reading texels after imageStore()

I'm modifying texels of a texture with imageStore(), and after that I'm reading those texels in another shader as a sampler2D with texture(), but I get the values that were stored in the texture before the imageStore(). With imageLoad() it works fine, but I need filtering and the performance of texture() is better, so is there a way to get the modified data with texture()?
Edit:
First fragment shader(for writing):
#version 450 core
layout (binding = 0, rgba32f) uniform image2D img;
in vec2 vs_uv_out;
void main()
{
imageStore(img, ivec2(vs_uv_out), vec4(0.0f, 0.0f, 1.0f, 1.0f));
}
Second fragment shader(for reading):
#version 450 core
layout (binding = 0) uniform sampler2D tex;
in vec2 vs_uv_out;
out vec4 out_color;
void main()
{
out_color = texture(tex, vs_uv_out);
}
That's how I run the shaders:
glUseProgram(shader_programs[0]);
glBindImageTexture(0, texture, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);
glDrawArrays(GL_TRIANGLES, 0, 6);
glUseProgram(shader_programs[1]);
glBindTextureUnit(0, texture);
glDrawArrays(GL_TRIANGLES, 0, 6);
I made this simple application to test this because the real one is very complex. I first clear the texture with red, but the texels won't appear blue (except when using imageLoad in the second fragment shader).
Oh, that's easy then. Image Load/Store writes use an incoherent memory model, not the synchronous model most of the rest of OpenGL uses. As such, just because you write something with Image Load/Store doesn't mean it's visible to anyone else. You have to explicitly make it visible for reading.
You need a glMemoryBarrier call between the rendering operation that writes the data and the operation that reads it. And since the reading operation is a texture fetch, the correct barrier to use is GL_TEXTURE_FETCH_BARRIER_BIT.
And FYI: your imageLoad was able to read the written data only due to pure luck. Nothing guaranteed that it would be able to read the written data. To ensure such reads, you'd need a memory barrier as well. Though obviously a different one: GL_SHADER_IMAGE_ACCESS_BARRIER_BIT.
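Applied to the two-pass code from the question, the barrier slots in between the passes like this (a sketch of the ordering, not new functionality):

```cpp
glUseProgram(shader_programs[0]);   // pass 1: writes via imageStore()
glBindImageTexture(0, texture, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);
glDrawArrays(GL_TRIANGLES, 0, 6);

// Make the image writes visible to the texture fetches that follow.
glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);

glUseProgram(shader_programs[1]);   // pass 2: reads via texture()
glBindTextureUnit(0, texture);
glDrawArrays(GL_TRIANGLES, 0, 6);
```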
Also, texture takes normalized texture coordinates. imageStore takes integer pixel coordinates. Unless that texture is a rectangle texture (and it's not, since you used sampler2D), it is impossible to pass the exact same coordinate to both imageStore and texture.
Therefore, either your pixels are being written to the wrong location, or your texture is being sampled from the wrong location. Either way, there's a clear miscommunication. Assuming that vs_uv_out really is non-normalized, then you should either use texelFetch or you should normalize it. Fortunately, you're using OpenGL 4.5, so that ought to be fairly simple:
ivec2 size = textureSize(tex, 0); // textureSize on a sampler2D requires a lod argument
vec2 texCoord = vs_uv_out / size;
out_color = texture(tex, texCoord);
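The same normalization is easy to sanity-check on the CPU. A minimal sketch of the math, sampling at texel centers (normalizeTexCoord is an illustrative helper, not part of any GL API):

```cpp
#include <array>

// Convert integer pixel coordinates (as used by imageStore) into the
// normalized [0,1] coordinates expected by texture(), sampling at the
// texel center.
std::array<float, 2> normalizeTexCoord(int px, int py, int width, int height) {
    return { (px + 0.5f) / width, (py + 0.5f) / height };
}
```

For a 2x2 texture, pixel (0,0) maps to (0.25, 0.25) and pixel (1,1) to (0.75, 0.75).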

How to use GLM with OpenGL?

I am trying to render an object using GLM for matrix transformations, but I'm getting this:
EDIT: Forgot to mention that the object I'm trying to render is a simple Torus.
I did a lot of digging around and one thing I noticed is glGetUniformLocation(program, "mvp") returns -1. The docs say it will return -1 if the uniform variable isn't used in the shader, even if it is declared. As you can see below, it has been declared and is being used in the vertex shader. I've checked program to make sure it is valid, and so on.
So my questions are:
Question 1:
Why is glGetUniformLocation(program, "mvp") returning -1 even though it is declared and is being used in the vertex shader?
Question 2: (Which I think may be related to Q1)
Another thing I'm not particularly clear on: my GameObject class has a struct called Mesh with variables GLuint vao (vertex array object) and GLuint vbo[4] (vertex buffer objects). I am using Assimp, and my GameObject class is based on this tutorial. The meshes are rendered in the same way as the tutorial, using:
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, elementCount, GL_UNSIGNED_INT, NULL);
I'm not sure how VAOs and VBOs work. What I've found is that VAOs are used if you want access to the vertex arrays throughout your program, and VBOs are used if you just want to send data to the graphics card and not touch it again (correct me if I'm wrong here). So why does the tutorial mix them? In the constructor for a mesh, it creates and binds a VAO, then doesn't touch it for the rest of the constructor (unless creating and binding VBOs has an effect on the currently bound VAO). It then creates and binds VBOs for the vertex buffer, normal buffer, texture coordinate buffer, and index buffer. To render the object it binds the VAO and calls glDrawElements. What I'm confused about is how/where OpenGL accesses the VBOs, and if it can't with the setup in the tutorial (which I'm pretty sure it can), what needs to change?
Source
void GameObject::render() {
GLuint program = material->shader->program;
glUseProgram(program);
glm::mat4 mvp = Game::camera->mvpMatrix(this->position);
GLuint mvpLoc = glGetUniformLocation(program, "mvp");
printf("MVP Location: %d\n", mvpLoc); // prints -1
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, &mvp[0][0]);
for (unsigned int i = 0; i < meshes.size(); i++) {
meshes.at(i)->render(); // renders element array for each mesh in the GameObject
}
}
Vertex shader (simple unlit red color):
#version 330 core
layout(location = 0) in vec3 position;
uniform mat4 mvp;
out vec3 vertColor;
void main(void) {
gl_Position = mvp * vec4(position, 1);
vertColor = vec3(1, 0, 0);
}
Fragment shader:
#version 330 core
in vec3 vertColor;
out vec3 color;
void main(void) {
color = vertColor;
}
Question 1
You've pretty much answered this one yourself. glGetUniformLocation(program, name) gets the location of the uniform "mvp" in the shader program program and returns -1 if the uniform is not declared (or not used: if you don't use it, it doesn't get compiled in). Your shader does declare and use mvp, which strongly suggests there is an issue with compiling the program. Are you sure you are using this shader in the program?
Question 2
A VBO stores the data values that the GPU will use. These could be colour values, normals, texture coordinates, whatever you like.
A VAO is used to express the layout of your VBOs - think of it like a map, indicating to your program where to find the data in the VBOs.
The example program does touch the VAO whenever it calls glVertexAttribPointer, e.g.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
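To make the recording explicit: glVertexAttribPointer stores, into the currently bound VAO, both the attribute layout and a reference to the buffer currently bound to GL_ARRAY_BUFFER, which is how glDrawElements later finds the data. A typical mesh-constructor sketch (buffer names are illustrative):

```cpp
glBindVertexArray(vao);                        // begin recording state into vao

glBindBuffer(GL_ARRAY_BUFFER, positionVBO);    // source buffer for attribute 0
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVBO); // index buffer is VAO state too

glBindVertexArray(0);                          // vao now remembers all of it
```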
This is not related to your missing uniform.

How can I pass multiple textures to a single shader?

I am using freeglut, GLEW and DevIL to render a textured teapot using a vertex and fragment shader. This is all working fine in OpenGL 2.0 and GLSL 1.2 on Ubuntu 14.04.
Now, I want to apply a bump map to the teapot. My lecturer evidently doesn't brew his own tea, and so doesn't know they're supposed to be smooth. Anyway, I found a nice-looking tutorial on old-school bump mapping that includes a fragment shader that begins:
uniform sampler2D DecalTex; //The texture
uniform sampler2D BumpTex; //The bump-map
What they don't mention is how to pass two textures to the shader in the first place.
Previously I
//OpenGL cpp file
glBindTexture(GL_TEXTURE_2D, textureHandle);
//Vertex shader
gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
//Fragment shader
gl_FragColor = color * texture2D(DecalTex,gl_TexCoord[0].xy);
so now I
//OpenGL cpp file
glBindTexture(GL_TEXTURE_2D, textureHandle);
glBindTexture(GL_TEXTURE_2D, bumpHandle);
//Vertex shader
gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
gl_TexCoord[1] = gl_TextureMatrix[1] * gl_MultiTexCoord1;
//Fragment shader
gl_FragColor = color * texture2D(BumpTex,gl_TexCoord[0].xy);
//no bump logic yet, just testing I can use texture 1 instead of texture 0
but this doesn't work. The texture disappears completely (effectively the teapot is white). I've tried GL_TEXTURE_2D_ARRAY, glActiveTexture, and a few other likely-seeming but fruitless options.
After sifting through the usual mixed bag of references to OpenGL and GLSL new and old, I've come to the conclusion that I probably need glGetUniformLocation. How exactly do I use this in the OpenGL cpp file to pass the already-populated texture handles to the fragment shader?
How to pass an array of textures with different sizes to GLSL?
Passing Multiple Textures from OpenGL to GLSL shader
Multiple textures in GLSL - only one works
(This is homework so please answer with minimal code fragments (if at all). Thanks!)
Failing that, does anyone have a tea cosy mesh?
It is very simple, really. All you need to do is bind each sampler to a texture unit with glUniform1i. So for your code sample, assuming the two uniform samplers:
uniform sampler2D DecalTex; // The texture (we'll bind to texture unit 0)
uniform sampler2D BumpTex; // The bump-map (we'll bind to texture unit 1)
In your initialization code:
// Get the uniform variables location. You've probably already done that before...
decalTexLocation = glGetUniformLocation(shader_program, "DecalTex");
bumpTexLocation = glGetUniformLocation(shader_program, "BumpTex");
// Then bind the uniform samplers to texture units:
glUseProgram(shader_program);
glUniform1i(decalTexLocation, 0);
glUniform1i(bumpTexLocation, 1);
OK, shader uniforms set, now we render. To do so, you will need the usual glBindTexture plus glActiveTexture:
glActiveTexture(GL_TEXTURE0 + 0); // Texture unit 0
glBindTexture(GL_TEXTURE_2D, decalTexHandle);
glActiveTexture(GL_TEXTURE0 + 1); // Texture unit 1
glBindTexture(GL_TEXTURE_2D, bumpHandle);
// Done! Now you render normally.
And in the shader, you will use the textures samplers just like you already do:
vec4 a = texture2D(DecalTex, tc);
vec4 b = texture2D(BumpTex, tc);
Note: for techniques like bump mapping, you only need one set of texture coordinates, since both textures share the same mapping and just contain different data. So you should probably pass the texture coordinates as a single vertex attribute.
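With one shared set of coordinates, the fragment shader simply performs two lookups at the same position, in the style of the question's GLSL 1.2 code (illustrative; `color` is the lighting color from the existing shader, and the actual bump math is left to the tutorial):

```glsl
vec2 tc = gl_TexCoord[0].xy;
vec4 decal = texture2D(DecalTex, tc);  // base color
vec4 bump  = texture2D(BumpTex, tc);   // height/normal data, not a color
gl_FragColor = color * decal;          // feed 'bump' into the lighting math
```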
Instead of using:
glUniform1i(decalTexLocation, 0);
glUniform1i(bumpTexLocation, 1);
in your code, you can have:
layout(binding = 0) uniform sampler2D DecalTex; // the texture (texture unit 0)
layout(binding = 1) uniform sampler2D BumpTex;  // the bump-map (texture unit 1)
in your shader. That also means you don't have to query for the location. Note that the binding layout qualifier on samplers requires GLSL 4.20 (or the ARB_shading_language_420pack extension), so it won't work in the GLSL 1.2 setup described in the question.

OpenGL issue: cannot render geometry on screen

My program was meant to draw a simple textured cube on screen, however, I cannot get it to render anything other than the clear color. This is my draw function:
void testRender() {
glClearColor(.25f, 0.35f, 0.15f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glUniformMatrix4fv(resources.uniforms.m4ModelViewProjection, 1, GL_FALSE, (const GLfloat*)resources.modelviewProjection.modelViewProjection);
glEnableVertexAttribArray(resources.attributes.vTexCoord);
glEnableVertexAttribArray(resources.attributes.vVertex);
//deal with vTexCoord first
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,resources.hiBuffer);
glBindBuffer(GL_ARRAY_BUFFER, resources.htcBuffer);
glVertexAttribPointer(resources.attributes.vTexCoord,2,GL_FLOAT,GL_FALSE,sizeof(GLfloat)*2,(void*)0);
//now the other one
glBindBuffer(GL_ARRAY_BUFFER,resources.hvBuffer);
glVertexAttribPointer(resources.attributes.vVertex,3,GL_FLOAT,GL_FALSE,sizeof(GLfloat)*3,(void*)0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, resources.htextures[0]);
glUniform1i(resources.uniforms.colorMap, 0);
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, (void*)0);
//clean up a bit
};
In addition, here is the vertex shader:
#version 330
in vec3 vVertex;
in vec2 vTexCoord;
uniform mat4 m4ModelViewProjection;
smooth out vec2 vVarryingTexCoord;
void main(void) {
vVarryingTexCoord = vTexCoord;
gl_Position = m4ModelViewProjection * vec4(vVertex, 1.0);
};
and the fragment shader (I have given up on textures for now):
#version 330
uniform sampler2D colorMap;
in vec2 vVarryingTexCoord;
out vec4 vVaryingFragColor;
void main(void) {
vVaryingFragColor = texture(colorMap, vVarryingTexCoord);
vVaryingFragColor = vec4(1.0,1.0,1.0,1.0);
};
The vertex array buffer for the position coordinates makes a simple cube (with all coordinates at a signed 0.25), while the modelview projection is just the inverse camera matrix (moved back by a factor of two) applied to a perspective matrix. However, even without the matrix transformation, I am unable to see anything on screen. Originally, I had two different buffers that needed two different element index lists, but now both buffers (containing the vertex and texture coordinate data) are the same length and in order. The code itself is derived from the Durian Software tutorial and the latest OpenGL SuperBible. The rest of the code is here.
By this point, I have tried nearly everything I can think of. Is this code even remotely close? If so, why can't I get anything to render onscreen?
You're looking pretty good so far.
The only thing that I see right now is that you've got DEPTH_TEST enabled, but you don't clear the depth buffer. Even if the buffer were initialized to a good value, you would be drawing empty scenes on every frame after the first one, because the depth buffer is not being cleared; add GL_DEPTH_BUFFER_BIT to your glClear call.
If that does not help, can you make sure that you have no glGetError() errors? You may have to clean up your unused texturing attributes/uniforms to get the error state clean, but that would be my next step.
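For the glGetError check, it helps to print readable names. A small helper sketch (glErrorName is an illustrative function, not part of the GL API; the hex values are fixed by the OpenGL specification, so the mapping works even without GL headers here):

```cpp
#include <string>

// Map a value returned by glGetError() to its symbolic name.
std::string glErrorName(unsigned int err) {
    switch (err) {
        case 0:      return "GL_NO_ERROR";
        case 0x0500: return "GL_INVALID_ENUM";
        case 0x0501: return "GL_INVALID_VALUE";
        case 0x0502: return "GL_INVALID_OPERATION";
        case 0x0505: return "GL_OUT_OF_MEMORY";
        case 0x0506: return "GL_INVALID_FRAMEBUFFER_OPERATION";
        default:     return "UNKNOWN_ERROR";
    }
}
```

In the render loop you would then drain the error queue with something like `while (unsigned int e = glGetError()) fprintf(stderr, "GL error: %s\n", glErrorName(e).c_str());`.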