When sending two textures to my GLSL shader, only one actually arrives. What is strange is that the first texture I bind is used for both texture slots in my shader. This leads me to believe the way I am passing my textures to OpenGL is wrong; however, I am unable to track down the problem.
Here is the code where I configure the textures for use in my shader.
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo2);
glPushAttrib(GL_VIEWPORT_BIT | GL_ENABLE_BIT);
glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Get uniforms
GLuint pass_3O = glGetUniformLocation(blend_shader, "org");
GLuint pass_3B = glGetUniformLocation(blend_shader, "blur");
// Activate shaders
glUseProgram(blend_shader);
// Bind first texture
glActiveTexture(GL_TEXTURE0 );
glBindTexture(GL_TEXTURE_2D, init_texture);
// Bind the second texture
glActiveTexture(GL_TEXTURE1 );
glBindTexture(GL_TEXTURE_2D, third_texture);
// Assign index to 2d images
glUniform1i(pass_3O, 0);
glUniform1f(pass_3B, 1);
The code above passes in two textures. The first is a 2D image of the first rendering pass of the 3D scene. The second (third_texture) is that same image with two levels of blur applied. This final stage blends them together for a poor man's bloom.
Here is the code where I am drawing both textures to the quad.
// Draw to quad
glBegin(GL_QUADS);
glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 0.0f);
glMultiTexCoord2f(GL_TEXTURE1, 0.0f, 0.0f);
glVertex3f(-w_width/2.0, -w_height/2.0, 0.5f);
glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 1.0f);
glMultiTexCoord2f(GL_TEXTURE1, 0.0f, 1.0f);
glVertex3f(-w_width/2.0, w_height/2.0, 0.5f);
glMultiTexCoord2f(GL_TEXTURE0, 1.0f, 1.0f);
glMultiTexCoord2f(GL_TEXTURE1, 1.0f, 1.0f);
glVertex3f(w_width/2.0, w_height/2.0, 0.5f);
glMultiTexCoord2f(GL_TEXTURE0, 1.0f, 0.0f);
glMultiTexCoord2f(GL_TEXTURE1, 1.0f, 0.0f);
glVertex3f(w_width/2.0, -w_height/2.0, 0.5f);
glEnd();
glFlush();
glPopAttrib();
// Unbind textures
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, 0);
// Disable blend shader
glUseProgram(0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT,0);
And here is the shader I am using to render the final image.
Vert
#version 120
void main()
{
gl_TexCoord[0] = gl_MultiTexCoord0;
gl_TexCoord[1] = gl_MultiTexCoord1;
gl_Position = ftransform();
}
Frag
#version 120
uniform sampler2D org;
uniform sampler2D blur;
void main()
{
vec4 orig_clr = texture2D( org, gl_TexCoord[0].st);
vec4 blur_clr = texture2D( blur, gl_TexCoord[1].st );
//gl_FragColor = orig_clr;
gl_FragColor = blur_clr;
}
If I switch between the last two lines in the fragment shader I get the exact same results. The only way to change which texture gets rendered is to change the order in which I bind them.
For example, the following finally passes me the blurred image, but once again I only get one of the two images.
glActiveTexture(GL_TEXTURE0 );
glBindTexture(GL_TEXTURE_2D, third_texture);
glActiveTexture(GL_TEXTURE1 );
glBindTexture(GL_TEXTURE_2D, init_texture);
Any thoughts on what I am overlooking?
Look at this code:
glUniform1i(pass_3O, 0);
glUniform1f(pass_3B, 1);
You have a small typo here: it should be glUniform1i instead of glUniform1f in the second call. The type must match that of the shader variable, so the mistyped call just results in an error, leaving the uniform initialized at 0, which completely explains your results.
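With both calls using the integer variant, each sampler points at its own texture unit:
glUniform1i(pass_3O, 0); // "org" samples texture unit 0 (init_texture)
glUniform1i(pass_3B, 1); // "blur" samples texture unit 1 (third_texture)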
Related
I have a very simple OpenGL application that renders only one textured quad. This is my code, which works fine (the textured quad appears as expected):
// Bind the test texture
glBindTexture(GL_TEXTURE_2D, mTestTexture);
// Draw the quad
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex3f(x, y + (float)height, 0.0f);
glTexCoord2f(1.0f, 0.0f);
glVertex3f(x + (float)width, y + (float)height, 0.0f);
glTexCoord2f(1.0f, 1.0f);
glVertex3f(x + (float)width, y, 0.0f);
glTexCoord2f(0.0f, 1.0f);
glVertex3f(x, y, 0.0f);
glEnd();
Then I wanted to introduce a simple shader, so I modified my code a little:
// Use shader and point it to the right texture
auto texLocation = glGetUniformLocation(mProgram, "tex");
glUseProgram(mProgram);
glUniform1i(texLocation, mTestTexture);
// Draw the quad
// Same drawing code as before...
Vertex shader:
void main(void)
{
gl_Position = ftransform();
gl_TexCoord[0] = gl_MultiTexCoord0;
}
Fragment shader:
uniform sampler2D tex;
void main()
{
vec4 color = texture2D(tex, gl_TexCoord[0].st);
gl_FragColor = color;
}
Now all I get is a black quad :-(
I already tried and tested a lot of things:
The shaders compile fine (no errors)
The quad is visible (vertex shader seems OK)
If I change the shader to produce a fixed color ("gl_FragColor = vec4(1,0,0,1);") my quad becomes red -> fragment shader is doing something!
glGetError() does not return any errors
My texLocation, mProgram and mTestTexture all seem to be valid IDs
Does anyone have an idea why I won't see my texture when using the shader?
glUniform1i(texLocation, mTestTexture);
^^^^^^^^^^^^ texture object
Texture unit indexes are bound to samplers, not texture objects.
Use texture unit zero instead:
glUniform1i(texLocation, 0);
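Putting it together, bind the texture object to a unit and give the sampler uniform that unit's index:
glActiveTexture(GL_TEXTURE0);               // select texture unit 0
glBindTexture(GL_TEXTURE_2D, mTestTexture); // bind the texture object to that unit
glUseProgram(mProgram);
glUniform1i(texLocation, 0);                // the sampler gets the unit index, not the object id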
I'm starting to understand that the vertex shader handles transformations of my quad, while the fragment shader handles individual pixels. But this vector math is confusing.
What I'm trying to do is render a sprite from a sprite sheet. I can render a whole image just fine, but now I'm actually trying to write my own shader.
I think it's more efficient to have the graphics card do the heavy lifting. That being said:
Currently I draw whole images like so:
In my init step,
void TextureRenderer::initRenderData()
{
// Configure VAO/VBO
game_uint VBO;
game_float vertices[] = {
// Pos // Tex
0.0f, 1.0f, 0.0f, 1.0f,
1.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, 1.0f, 1.0f,
1.0f, 0.0f, 1.0f, 0.0f
};
glGenVertexArrays(1, &this->quadVAO);
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glBindVertexArray(this->quadVAO);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 4 * sizeof(game_float), (GLvoid*)0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
}
Then, when it's time to draw any texture:
void TextureRenderer::DrawTexture(Texture2D &texture, vec2 position, vec2 size, game_float rotate, vec3 color)
{
// Prepare transformations
this->shader->Use();
glm::mat4 model;
model = glm::translate(model, vec3(position, 0.0f)); // First translate (transformations are: scale happens first, then rotation, and finally translation; reversed order)
model = glm::translate(model, vec3(0.5f * size.x, 0.5f * size.y, 0.0f)); // Move origin of rotation to center of quad
model = glm::rotate(model, rotate, vec3(0.0f, 0.0f, 1.0f)); // Then rotate
model = glm::translate(model, vec3(-0.5f * size.x, -0.5f * size.y, 0.0f)); // Move origin back
model = glm::scale(model, vec3(size, 1.0f)); // Last scale
this->shader->SetMatrix4("model", model);
// Render textured quad
this->shader->SetVector3f("spriteColor", color);
glActiveTexture(GL_TEXTURE0);
texture.Bind();
glBindVertexArray(this->quadVAO);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindVertexArray(0);
}
TextureShader.vs:
#version 330 core
layout (location = 0) in vec4 vertex; // <vec2 position, vec2 texCoords>
out vec2 TexCoords;
uniform mat4 model;
uniform mat4 projection;
void main()
{
TexCoords = vertex.zw;
gl_Position = projection * model * vec4(vertex.xy, 0.0, 1.0);
}
Fragment Shader:
#version 330 core
in vec2 TexCoords; //Position
out vec4 color;
uniform sampler2D image;
uniform vec3 spriteColor;
void main()
{
color = vec4(spriteColor, 1.0) * texture(image, TexCoords);
}
Now that all works fine and dandy (assuming a proper OpenGL setup, etc.).
But I'd like to apply this to a sprite-sheet shader, and have the GPU handle the math to draw it.
void SpriteRenderer::drawSprite(Texture2D &texture, vec2 position,game_float spriteHeight,game_float spriteWidth,int frameIndex)
{
shader->Use(); // Use a different shader here.
shader->SetInteger("frameindex", frameIndex);
shader->SetVector2f("position", position);
shader->SetFloat("spriteHeight", spriteHeight);
shader->SetFloat("spriteWidth", spriteWidth);
shader->SetMatrix4("model", model);
shader->SetVector3f("spriteColor", color);
glActiveTexture(GL_TEXTURE0); // Select texture unit 0
texture.Bind(); //Bind my texture.
glBindVertexArray(this->quadVAO); //Bind the fullscreen quad
glDrawArrays(GL_TRIANGLES, 0, 6); //Draw
glBindVertexArray(0); //Unbind the quad.
}
I assume:
Inside the vertex shader, I manipulate the quad to the position it occupies on the canvas, and then in the fragment shader I set the color of the pixels from that specific region of the sheet.
How would that be done?
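For illustration, a minimal sketch of that idea in the vertex shader, assuming the frames are laid out left to right in a single row and adding a hypothetical sheetSize uniform (the sheet's dimensions in pixels) that is not in the original code; the fragment shader can stay exactly as it is:
#version 330 core
layout (location = 0) in vec4 vertex; // <vec2 position, vec2 texCoords>
out vec2 TexCoords;
uniform mat4 model;
uniform mat4 projection;
uniform int frameIndex;      // which frame of the sheet to show
uniform float spriteWidth;   // frame width in pixels
uniform float spriteHeight;  // frame height in pixels
uniform vec2 sheetSize;      // hypothetical: full sheet size in pixels
void main()
{
    // Horizontal extent of the requested frame, normalised to 0..1.
    float u0 = (float(frameIndex) * spriteWidth) / sheetSize.x;
    float u1 = (float(frameIndex + 1) * spriteWidth) / sheetSize.x;
    float v1 = spriteHeight / sheetSize.y;
    // Remap the quad's 0..1 texture coordinates into the frame's sub-rectangle.
    TexCoords = vec2(mix(u0, u1, vertex.z), vertex.w * v1);
    gl_Position = projection * model * vec4(vertex.xy, 0.0, 1.0);
}
The model matrix would still be built on the CPU as in DrawTexture, since it positions the quad on the canvas.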
Or would it be better for me to pre-calculate a VAO array for each sprite in a sprite class? Then each draw call would be:
void SpriteRenderer::drawSprite(Texture2D &texture, vec2 position,Sprite s)
Where the sprite has these vertices stored.
I've seen:
Techniques for drawing spritesheets in OpenGL with shaders
That's somewhat similar, but I'd like to have the GPU handle the math altogether.
I'm trying to show a texture (yes, it is a POT texture) with OpenGL 2.1 and GLSL 120, but I'm not sure how to do it; all I can get is a black quad. I've been following these tutorials: A Textured Cube and OpenGL - Textures, and what I have understood is that I need to:
Specify the texture coordinates to attach to each vertex (in my case there are 6 vertices, a quad without indexing)
Load the texture and bind it to a texture unit (the default is 0)
Call glDrawArrays
Inside the shaders I need to:
Receive the texture coords in an attribute in the vertex shader and pass them to the fragment shader through a varying variable
In the fragment shader, use a sampler to fetch a texel from the texture at the position specified by the varying variable
Is it all correct?
Here is how i create the texture VBO and load the texture:
void Application::onStart(){
unsigned int format;
SDL_Surface* img;
float quadCoords[] = {
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
0.5f, 0.5f, 0.0f,
0.5f, 0.5f, 0.0f,
-0.5f, 0.5f, 0.0f,
-0.5f, -0.5f, 0.0f};
const float texCoords[] = {
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
1.0f, 1.0f,
0.0f, 1.0f,
0.0f, 0.0f};
//shader loading omitted ...
sprogram.bind(); // call glUseProgram(programId)
//set the sampler value to 0 -> use texture unit 0
sprogram.loadValue(sprogram.getUniformLocation(SAMPLER), 0);
//quad
glGenBuffers(1, &quadBuffer);
glBindBuffer(GL_ARRAY_BUFFER, quadBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*18, quadCoords, GL_STATIC_DRAW);
//texture
glGenBuffers(1, &textureBuffer);
glBindBuffer(GL_ARRAY_BUFFER, textureBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*12, texCoords, GL_STATIC_DRAW);
//load texture
img = IMG_Load("resources/images/crate.jpg");
if(img == nullptr)
throw std::runtime_error(SDL_GetError());
glGenTextures(1, &this->texture);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, this->texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img->w, img->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, img->pixels);
SDL_FreeSurface(img);
}
rendering phase:
glClear(GL_COLOR_BUFFER_BIT);
glEnableVertexAttribArray(COORDS);
glBindBuffer(GL_ARRAY_BUFFER, quadBuffer);
glVertexAttribPointer(COORDS, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
glEnableVertexAttribArray(TEX_COORDS);
glBindBuffer(GL_ARRAY_BUFFER, textureBuffer);
glVertexAttribPointer(TEX_COORDS, 2, GL_FLOAT, GL_FALSE, 0, nullptr);
//draw the vertices
glDrawArrays(GL_TRIANGLES, 0, 6);
vertex shader:
#version 120
attribute vec3 coord;
attribute vec2 texCoord;
varying vec2 UV;
void main(){
gl_Position = vec4(coord.x, coord.y, coord.z, 1.0);
UV = texCoord;
}
fragment shader:
#version 120
uniform sampler2D tex;
varying vec2 UV;
void main(){
gl_FragColor.rgb = texture2D(tex, UV).rgb;
gl_FragColor.a = 1.0;
}
I know that the tutorials use out instead of varying, so I tried to "convert" the code. There is also this tutorial: Simple Texture - LightHouse, which explains the built-in gl_MultiTexCoord0 attribute and gl_TexCoord array, but that is almost the same thing I'm doing. I want to know if I'm doing it all right and, if not, I would like to know how to show a simple 2D texture on the screen with OpenGL 2.1 and GLSL 120.
Do you have a particular reason to use OpenGL 2.1 with GLSL 1.20? If not, stick to OpenGL 3.0+, because it's easier to understand, in my opinion.
My guess is you have two big problems.
First, the black quad: if it occupies your whole window, then what you are seeing is just the background color, which means nothing is being drawn at all.
From my testing, OpenGL falls back to a default (fixed-function) pipeline when no program is bound, even if you have already set up vertex array/buffer objects on the GPU; in that case the quad should render white in your window. So that might be your first problem. I don't know whether OpenGL 2.1 has vertex array objects, but OpenGL 3.0 does, and you should definitely make use of them!
Second: you don't use your shader program in the rendering phase.
Call this function before drawing your quad:
glUseProgram(myProgram); // The myProgram variable is your compiled shader program
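Applied to your own rendering phase, that would look roughly like this (sprogram.bind() is your wrapper around glUseProgram):
glClear(GL_COLOR_BUFFER_BIT);
sprogram.bind();                       // bind the shader program before issuing the draw call
glEnableVertexAttribArray(COORDS);
glBindBuffer(GL_ARRAY_BUFFER, quadBuffer);
glVertexAttribPointer(COORDS, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
glEnableVertexAttribArray(TEX_COORDS);
glBindBuffer(GL_ARRAY_BUFFER, textureBuffer);
glVertexAttribPointer(TEX_COORDS, 2, GL_FLOAT, GL_FALSE, 0, nullptr);
glDrawArrays(GL_TRIANGLES, 0, 6);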
If by any chance you would like me to explain how to draw your quad using OpenGL 3.0+, let me know :) It is not far from what you already wrote in your code.
Can someone tell me how to draw a single white pixel at a coordinate, say (100,200)?
I am using GLUT and so far have figured out how to open a blank window. Once I figure out how to draw pixels, I will use that to implement the Bresenham line drawing algorithm. (Yes, I am aware OpenGL can draw lines. I am required to implement this myself).
#include <stdio.h>
#include <GL/glut.h>
static int win(0);
int main(int argc, char* argv[]){
glutInit(&argc,argv);
glutInitDisplayMode(GLUT_RGBA|GLUT_SINGLE);
glutInitWindowSize(500,500);
glutInitWindowPosition(100,100);
//step 2. Open a window named "GLUT DEMO"
win = glutCreateWindow("GLUT DEMO");
glClearColor(0.0,0.0,0.0,0.0); //set background
glClear(GL_COLOR_BUFFER_BIT);
glFlush();
glutMainLoop();
}
glVertex2i(x,y);
Here is the context it needs to work:
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
glutInitWindowPosition(80, 80);
glutInitWindowSize(500,500);
glutCreateWindow("A Simple OpenGL Program");
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode( GL_PROJECTION );
glLoadIdentity();
gluOrtho2D( 0.0, 500.0, 500.0,0.0 );
glBegin(GL_POINTS);
glColor3f(1,1,1);
glVertex2i(100,100);
glEnd();
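Because the window uses a single-buffered display mode (GLUT_SINGLE), you will also want to flush at the end of the display callback so the point actually shows up:
glFlush(); // push the queued commands to the screen in a single-buffered context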
This can be done easily by setting the scissor rectangle and then clearing; the clear will only affect the area inside the scissor rectangle. For example:
glEnable(GL_SCISSOR_TEST);
glScissor(100, 200, 1, 1);
glClearColor(1,1,1,1);
glClear(GL_COLOR_BUFFER_BIT);
// Remember to disable scissor test, or, perhaps reset the scissor rectangle:
glDisable(GL_SCISSOR_TEST);
Using modern OpenGL you can combine glScissor() with a quad that fills the entire screen.
The shaders can be as simple as:
// Vertex Shader
#version 330 core
layout (location = 0) in vec3 aPos;
void main()
{
gl_Position = vec4(aPos.x, aPos.y, aPos.z, 1.0);
}
// Fragment Shader
#version 330 core
out vec4 FragColor;
uniform vec4 inColor;
void main()
{
FragColor = inColor;
}
After doing OpenGL and window initialization with your preferred method (GLFW, GLAD, GLEW, etc.), create a quad (really two triangles) to cover the entire screen like:
float vertices[] = {
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
1.0f, 1.0f, 0.0f,
-1.0f, -1.0f, 0.0f,
1.0f, 1.0f, 0.0f,
-1.0f, 1.0f, 0.0f
};
Then create your buffers, compile your shaders and bind your shader program and VAO.
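A minimal sketch of that buffer setup, assuming an OpenGL 3.3 core context (the names quadVAO and quadVBO are just illustrative):
GLuint quadVAO, quadVBO;
glGenVertexArrays(1, &quadVAO);
glGenBuffers(1, &quadVBO);
glBindVertexArray(quadVAO);
glBindBuffer(GL_ARRAY_BUFFER, quadVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);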
Then use:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Your drawing code would look something like this to draw a green pixel at x = 100, y = 100:
GLint transformLoc = glGetUniformLocation(shaderProgramId, "inColor");
glUniform4f(transformLoc, 0.0f, 1.0f, 0.0f, 1.0f);
glEnable(GL_SCISSOR_TEST);
glScissor(100, 100, 1, 1); // position of pixel
glDrawArrays(GL_TRIANGLES, 0, 6);
glDisable(GL_SCISSOR_TEST);
I wrote a tiny graphics rendering class built on top of OpenGL that has code doing exactly that. Its core function is putPixel(), which receives the x/y coordinates and a color to draw a single pixel on the screen. Feel free to take a look at the code: https://github.com/amengol/MinGL
I just started learning OpenGL 3.1 and I'm trying to add deferred shading to my engine (framework?). I wrote simple shaders for the first stage, the lighting stage, and the deferred stage.
The lighting stage takes the diffuse color from the deferred texture and saves it in the lighting texture. The deferred stage draws the lighting texture. There is a bug in the lighting shader and the scene looks very strange. It looks like this, but it should look like this. Lighting stage vertex shader:
#version 150
in vec4 vertex;
out vec2 position;
void main(void)
{
gl_Position = vertex*2-1;
gl_Position.z = 0.0;
position.xy = vertex.xy;
}
Lighting stage fragment shader:
#version 150
in vec2 position;
uniform sampler2D diffuseTexture;
uniform sampler2D positionTexture;
out vec4 lightingOutput;
void main()
{
vec4 diffuse = texture(diffuseTexture, vec2(position.x, position.y));
vec4 position = texture(positionTexture, position.xy);
vec4 ambient = vec4(0.05, 0.05, 0.05, 1.0) * diffuse;
lightingOutput = diffuse;
}
That's what I render in the lighting stage:
static const GLfloat _vertices[] =
{
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
1.0f, 1.0f, 0.0f,
};
And that's how I render it:
glUseProgram( programID[2] );
glEnableVertexAttribArray(vertexID[1]);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glVertexAttribPointer(
vertexID[1],
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, positionTexture);
glUniform1i(positionTextureID[1], 1);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, diffuseTexture);
glUniform1i(diffuseTextureID[1], 0);
glDrawArrays( GL_TRIANGLES, 0, 6 );
glDisableVertexAttribArray(vertexID[1]);
If you need all the code, it's here: www.dropbox.com/s/hvfe4v4pb1pfxb3/code.zip
How do I fix this strange problem?