I am trying to get my very simple shader working with my current OpenGL setup. I am using a shader manager, and upon loading the shaders, the output says they have all loaded correctly.
Here is my data:
static const GLfloat g_vertex_buffer_data[] = {
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
};
Here is where I set up my buffer:
// This will identify our vertex buffer
GLuint vertexbuffer;
// Generate 1 buffer, put the resulting identifier in vertexbuffer
glGenBuffers(1, &vertexbuffer);
// The following commands will talk about our 'vertexbuffer' buffer
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
// Give our vertices to OpenGL.
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);
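A quick aside on sizeof(g_vertex_buffer_data): it reports the full 9-float array only because the array itself is in scope at the glBufferData call; through a pointer, sizeof would report the pointer size instead. A self-contained sketch (array renamed for illustration):

```cpp
#include <cassert>
#include <cstddef>

static const float tri[] = { // same layout as g_vertex_buffer_data
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     0.0f,  1.0f, 0.0f,
};

// sizeof on the array itself yields the full byte count (9 floats)...
size_t wholeArrayBytes() { return sizeof(tri); }

// ...but through a pointer parameter, sizeof only sees the pointer.
size_t decayedBytes(const float* p) { return sizeof(p); }
```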
Here is my declaration of my shader using a tested and working shader loader:
Shader shader;
shader.loadFromFile(VERTEX_SHADER, resourcePath() + "tri.vert");
shader.loadFromFile(FRAGMENT_SHADER, resourcePath() + "tri.frag");
shader.createAndLinkProgram();
shader.use();
shader.addAttribute("position");
shader.unUse();
The attributes for this shader are then stored in a map. This is the rendering call:
shader.use();
glEnableVertexAttribArray(shader["position"]);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glVertexAttribPointer(
shader["position"], // attribute location; must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
// Draw the triangle !
glDrawArrays(GL_TRIANGLES, 0, 3); // Starting from vertex 0; 3 vertices total -> 1 triangle
glDisableVertexAttribArray(shader["position"]);
shader.unUse();
I then swap the buffers. I tried drawing with the old fixed-pipeline GL_TRIANGLES path and it worked fine.
Here is my vertex shader:
#version 330
layout(location = 0) in vec3 position;
void main()
{
gl_Position.xyz = position;
}
Here is my fragment shader:
#version 330
out vec3 color;
void main()
{
color = vec3(1,0,0);
}
It is supposed to simply draw a red triangle. When I draw using immediate mode, it renders fine. I am running Xcode on Mac OS X 10.7.4.
gl_Position.xyz = position;
And what is the W coordinate? It's kind of important not to leave it undefined. If you want it to be 1.0, you need to set it explicitly:
gl_Position = vec4(position, 1.0);
gl_Position must be provided with x, y, z, and w coordinates.
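To see why w matters: after clipping, the hardware divides x, y, and z by w to produce normalized device coordinates. A small sketch of that divide (names are illustrative, not GL API):

```cpp
#include <cassert>

// After clipping, the rasterizer computes NDC = (x/w, y/w, z/w).
// With w left undefined, all three results are garbage; with w = 1.0
// the divide is a no-op and the coordinates pass through unchanged.
struct Vec3 { float x, y, z; };

Vec3 toNDC(float x, float y, float z, float w) {
    return { x / w, y / w, z / w };
}
```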
In GLSL 1.50 and above, you have to do the matrix manipulation in the shader yourself, i.e. you have to provide the projection, model, and view matrices, typically using a math library like GLM http://glm.g-truc.net/
Also, in your vertex data the z coordinate is 0.0, which coincides with the center of the scene. Try a negative value for the z coordinate and check the results.
I'm trying to show a texture (yes, it is a power-of-two texture) with OpenGL 2.1 and GLSL 120, but I'm not sure how to do it; all I can get is a black quad. I've been following these tutorials: A Textured Cube, OpenGl - Textures, and what I have understood is that I need to:
Specify the texture coordinates to attach to each vertex (in my case there are 6 vertices, a quad without indexing)
Load the texture and bind it to a texture unit (the default is 0)
call glDrawArrays
Inside the shaders i need to:
Receive the texture coords in an attribute in the vertex shader and pass it to the fragment shader through a varying variable
In the fragment shader use a sampler object to sample a pixel, in the position specified by the varying variable, from the texture.
Is it all correct?
Here is how I create the texture VBO and load the texture:
void Application::onStart(){
unsigned int format;
SDL_Surface* img;
float quadCoords[] = {
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
0.5f, 0.5f, 0.0f,
0.5f, 0.5f, 0.0f,
-0.5f, 0.5f, 0.0f,
-0.5f, -0.5f, 0.0f};
const float texCoords[] = {
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
1.0f, 1.0f,
0.0f, 1.0f,
0.0f, 0.0f};
//shader loading omitted ...
sprogram.bind(); // call glUseProgram(programId)
//set the sampler value to 0 -> use texture unit 0
sprogram.loadValue(sprogram.getUniformLocation(SAMPLER), 0);
//quad
glGenBuffers(1, &quadBuffer);
glBindBuffer(GL_ARRAY_BUFFER, quadBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*18, quadCoords, GL_STATIC_DRAW);
//texture
glGenBuffers(1, &textureBuffer);
glBindBuffer(GL_ARRAY_BUFFER, textureBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*12, texCoords, GL_STATIC_DRAW);
//load texture
img = IMG_Load("resources/images/crate.jpg");
if(img == nullptr)
throw std::runtime_error(SDL_GetError());
glGenTextures(1, &this->texture);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, this->texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img->w, img->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, img->pixels);
SDL_FreeSurface(img);
}
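The hard-coded sizes sizeof(float)*18 and sizeof(float)*12 in the glBufferData calls above are just vertexCount × componentsPerVertex; a tiny helper (name hypothetical) makes that explicit:

```cpp
#include <cassert>
#include <cstddef>

// vertexCount * componentsPerVertex * sizeof(float), spelled out:
// 6 vertices * 3 position components = 18 floats,
// 6 vertices * 2 texture components  = 12 floats.
size_t bufferBytes(size_t vertexCount, size_t componentsPerVertex) {
    return vertexCount * componentsPerVertex * sizeof(float);
}
```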
rendering phase:
glClear(GL_COLOR_BUFFER_BIT);
glEnableVertexAttribArray(COORDS);
glBindBuffer(GL_ARRAY_BUFFER, quadBuffer);
glVertexAttribPointer(COORDS, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
glEnableVertexAttribArray(TEX_COORDS);
glBindBuffer(GL_ARRAY_BUFFER, textureBuffer);
glVertexAttribPointer(TEX_COORDS, 2, GL_FLOAT, GL_FALSE, 0, nullptr);
//draw the vertices
glDrawArrays(GL_TRIANGLES, 0, 6);
vertex shader:
#version 120
attribute vec3 coord;
attribute vec2 texCoord;
varying vec2 UV;
void main(){
gl_Position = vec4(coord.x, coord.y, coord.z, 1.0);
UV = texCoord;
}
fragment shader:
#version 120
uniform sampler2D tex;
varying vec2 UV;
void main(){
gl_FragColor.rgb = texture2D(tex, UV).rgb;
gl_FragColor.a = 1.0;
}
I know that the tutorials use out instead of varying, so I tried to "convert" the code. There is also this tutorial: Simple Texture - LightHouse, which explains the built-in gl_MultiTexCoord0 attribute and gl_TexCoord array, but that is almost the same thing I'm doing. I want to know if I'm doing it all right and, if not, how to show a simple 2D texture on the screen with OpenGL 2.1 and GLSL 120.
Do you have a particular reason to use OpenGL 2.1 with GLSL 1.20? If not, stick to OpenGL 3.0+, because it is easier to understand, imho.
My guess is you have two big problems:
First, the black quad: if it occupies your whole app window, then it is the background color, which means nothing is being drawn at all.
I think (by testing this) OpenGL has a default program which will activate even if you have already set up a vertex array/buffer object on the GPU; it should render as a white quad in your window. So that might be your first problem. I don't know if OpenGL 2.1 has vertex array objects, but OpenGL 3.0 does, and you should definitely make use of them!
Second: you don't use your shader program in the rendering phase.
Call this function before drawing your quad:
glUseProgram(myProgram); // The myProgram variable is your compiled shader program
If by any chance you would like me to explain how to draw your quad using OpenGL 3.0+, let me know :) It is not far from what you already wrote in your code.
This question already has an answer here:
Can't set uniform value in OpenGL shader
(1 answer)
Closed 6 years ago.
I have a simple vertex array that draws a quad to the screen.
glViewport(0, 0, screenWidth, screenHeight);
// Load shaders
...
GLfloat vertices[] = {
0.0f, 1.0f,
1.0f, 0.0f,
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f
};
GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glBindVertexArray(vao);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), (GLvoid*)0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
...
glUseProgram(program);
glUniform3f(glGetUniformLocation(program, "spriteColour"), colour.x, colour.y, colour.z);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindVertexArray(0);
Using the below simple vertex and fragment shaders.
#version 330 core
layout (location = 0) in vec2 vertex;
void main()
{
gl_Position = vec4(vertex.xy, 0.0, 1.0);
}
#version 330 core
out vec4 color;
uniform vec3 spriteColour;
void main()
{
color = vec4(spriteColour, 1.0);
}
This works exactly how I expect. It renders a rectangle to the upper right corner of the window.
Now I want to add a simple model and projection matrix. I am using an orthographic projection matrix and my model matrix just scales the quad to 100 x 100.
glm::mat4 projection = glm::ortho(0.0f, (GLfloat)screenWidth, (GLfloat)screenHeight, 0.0f, -1.0f, 1.0f);
glUniformMatrix4fv(glGetUniformLocation(shader, "projection"), 1, GL_FALSE, glm::value_ptr(projection));
...
glm::mat4 model;
model = glm::scale(model, glm::vec3(100.0f, 100.0f, 1.0f));
glUniformMatrix4fv(glGetUniformLocation(program, "model"), 1, GL_FALSE, glm::value_ptr(model));
...
// Update vertex shader
#version 330 core
layout (location = 0) in vec2 vertex;
uniform mat4 model;
uniform mat4 projection;
void main()
{
gl_Position = projection * model * vec4(vertex.xy, 0.0, 1.0);
}
I would expect this to render a 100 x 100 quad at the top left of the screen however I don't see anything.
I can only assume that my transformations are somehow causing the quad to be drawn off screen and clipped? I am pretty new at OpenGL, so I am not entirely sure what is wrong here. I've been over it numerous times and, based on the tutorials I am using (http://learnopengl.com/), it seems correct.
Does anyone have any ideas as to what I've done wrong?
Turns out you need to activate the program with glUseProgram before setting uniforms; glUniform* calls apply to the currently active program.
Can't set uniform value in OpenGL shader
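For completeness, the transforms themselves do keep the quad on screen. A plain-float sketch of the mapping that glm::ortho(0, w, h, 0, -1, 1) performs (screen size of 800×600 assumed, helper name hypothetical):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// What glm::ortho(0, w, h, 0, -1, 1) does to a point the model matrix
// has already scaled into pixel units: x maps [0, w] -> [-1, 1], and
// y maps [0, h] -> [1, -1], so y = 0 is the top of the window.
Vec2 orthoProject(float x, float y, float w, float h) {
    return { 2.0f * x / w - 1.0f, 1.0f - 2.0f * y / h };
}
```

The scaled corner (100, 100) lands at roughly (-0.75, 0.67), well inside clip space, which is consistent with the duplicate's answer: the math was fine, the uniforms simply never reached the program.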
I already have a program that can draw textured objects. I want to draw debug lines, so I tried to copy the same sort of drawing process I use for sprites to draw a line. I made a new fragment and vertex shader because lines aren't going to be textured and having a different debug shader could be useful.
My system continues to work if I try to draw a line, but the line doesn't draw. I tried to write code similar to working code for my sprites, but clearly I've missed something or made a mistake.
Vertex Shader:
#version 330 core
layout (location = 0) in vec2 position;
uniform mat4 uniformView;
uniform mat4 uniformProjection;
void main()
{
gl_Position = uniformProjection * uniformView * vec4(position, 0.0f, 1.0f);
}
Fragment Shader:
#version 330 core
out vec4 color;
uniform vec4 uniformColor;
void main()
{
color = uniformColor;
}
Drawing Code:
void debugDrawLine(glm::vec3 startPoint, glm::vec3 endPoint, glm::vec3 color, Shader debugShader)
{
GLint transformLocation, colorLocation;
GLfloat lineCoordinates[] = { startPoint.x, startPoint.y,
endPoint.x, endPoint.y};
GLuint vertexArray, vertexBuffer;
glLineWidth(5.0f);
glGenVertexArrays(1, &vertexArray);
glGenBuffers(1, &vertexBuffer);
glBindVertexArray(vertexArray);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
debugShader.Use();
//Copies the Vertex data into the buffer
glBufferData(GL_ARRAY_BUFFER, sizeof(lineCoordinates), lineCoordinates, GL_STATIC_DRAW);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
2, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
2*sizeof(GL_FLOAT), // stride
(GLvoid*)0 // array buffer offset
);
//Sends the sprite's color information into the shader
colorLocation = glGetUniformLocation(debugShader.Program, "uniformColor");
glUniform4f(colorLocation, 1.0f, color.x, color.y, color.z);
//Activates Vertex Position Information
glEnableVertexAttribArray(0);
// Draw the line
glDrawArrays(GL_LINES, 0, 2);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
}
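One subtle point in the stride argument above: GL_FLOAT is an enum value (0x1406), not a type, so sizeof(GL_FLOAT) measures an int and only coincidentally equals sizeof(GLfloat) on common platforms. A sketch with assumed stand-ins for the GL definitions (real code gets these from the GL headers):

```cpp
#include <cassert>
#include <cstddef>

// Assumed stand-ins for the GL definitions:
typedef float GLfloat;
#define GL_FLOAT 0x1406

// sizeof(GL_FLOAT) measures the integer constant 0x1406, i.e. sizeof(int),
// which only happens to equal sizeof(GLfloat) on common ABIs.
size_t strideViaEnum() { return 2 * sizeof(GL_FLOAT); } // sizeof(int)!
size_t strideViaType() { return 2 * sizeof(GLfloat); }  // what was meant
```

Writing the stride with the type, 2*sizeof(GLfloat), says what was actually intended.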
Line widths greater than 1 are not allowed in the core profile. Does the line render if you set the width to 1?
The issue is that I can't figure out how to properly draw two objects; my other object isn't being drawn.
Here's the main code:
GLuint VertexArrayID;
glGenVertexArrays(1, &VertexArrayID);
glBindVertexArray(VertexArrayID);
GLuint VertexArrayID2;
glGenVertexArrays(1, &VertexArrayID2);
glBindVertexArray(VertexArrayID2);
GLuint programID = LoadShaders( "SimpleVertexShader.vertexshader", "SimpleFragmentShader.fragmentshader" );
GLuint MatrixID = glGetUniformLocation(programID, "MVP");
GLuint MatrixID2 = glGetUniformLocation(programID, "MVP2");
glm::mat4 Projection = glm::perspective(45.0f, 5.0f / 4.0f, 0.1f, 100.0f);
glm::mat4 View = glm::lookAt(
glm::vec3(4*2,3*2,8*2),
glm::vec3(0,0,0),
glm::vec3(0,1,0)
);
glm::mat4 Model = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, 0.0f));
glm::mat4 MVP = Projection * View * Model;
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
glm::mat4 Model2 = glm::translate(glm::mat4(1.0f), glm::vec3(-5.0f, 0.0f, 0.0f));
glm::mat4 MVP2 = Projection * View * Model2;
glUniformMatrix4fv(MatrixID2, 1, GL_FALSE, &MVP2[0][0]);
static const GLfloat g_vertex_buffer_data[] = {
-1.0f,-1.0f,-1.0f,
-1.0f,-1.0f, 1.0f,
(plenty of floats)
1.0f,-1.0f, 1.0f
};
static const GLfloat g_vertex_buffer_data2[] = {
-1.0f, -1.0f, 3.0f,
(plenty of floats)
0.0f, 1.0f, 2.0f,
};
GLuint vertexbuffer;
glGenBuffers(1, &vertexbuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);
GLuint vertexbuffer2;
glGenBuffers(1, &vertexbuffer2);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer2);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data2), g_vertex_buffer_data2, GL_STATIC_DRAW);
do{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(programID);
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
glUniformMatrix4fv(MatrixID2, 1, GL_FALSE, &MVP2[0][0]);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,0,(void*)0);
glDrawArrays(GL_TRIANGLES, 0, 12*3);
glEnableVertexAttribArray(2);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer2);
glVertexAttribPointer(2,3,GL_FLOAT,GL_FALSE,0,(void*)0);
glDrawArrays(GL_TRIANGLES, 0, 4*3);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(2);
glfwSwapBuffers(window);
glfwPollEvents();
}
And shader:
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 2) in vec3 vertexPosition_modelspace2;
uniform mat4 MVP;
uniform mat4 MVP2;
void main(){
gl_Position = MVP * vec4(vertexPosition_modelspace,1);
gl_Position = MVP2 * vec4(vertexPosition_modelspace2,1);
}
I have noticed that only the last object is being drawn, so the issue is that gl_Position overwrites its value, but how should I fix this?
gl_Position = MVP * vec4(vertexPosition_modelspace,1);
gl_Position = MVP2 * vec4(vertexPosition_modelspace2,1);
That is not how the graphics pipeline works. You cannot draw two objects at the same time: only the last write to gl_Position takes effect, and your first object is completely ignored. In the most basic variant, you want to draw two completely independent objects, and you need two draw calls for that, as you already do in your code.
However, when doing so, you do not need two different vertex attributes. Your shader just processes vertices, which in your case only have the vertexPosition_modelspace attribute, so you can use that attribute for all the objects you want to draw. There is no point in using different attributes for different objects if the attribute means the same thing.
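The "last write wins" behaviour is ordinary assignment semantics, nothing shader-specific; a trivial sketch:

```cpp
#include <cassert>

// Two writes in sequence: only the second survives, exactly like the
// two gl_Position assignments in the shader above.
float lastWriteWins(float first, float second) {
    float position = first;
    position = second; // the first value is discarded here
    return position;
}
```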
Let's have a look at your drawing code:
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,0,(void*)0);
Here, you set up vertex attribute 0 to point to the vertex data of the first buffer, and you enable the attribute array. So the data will now be used as the source for vertexPosition_modelspace.
glDrawArrays(GL_TRIANGLES, 0, 12*3);
Now you draw the object. But as we have already seen, your shader only really uses vertexPosition_modelspace2, for which you did not set a pointer or enable the array. Since that array is disabled, the GL will use the current value of attribute 2 for all vertices. So in the case of triangles, you create triangles whose points are all identical, giving triangles with zero surface area that are invisible anyway, no matter what value attribute 2 currently holds.
glEnableVertexAttribArray(2);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer2);
glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,0,(void*)0);
Now you do a strange thing: you enable the attribute 2 array, but do not set a pointer for it! You should instead re-specify the pointer for attribute 0 to point to your second model.
glDrawArrays(GL_TRIANGLES, 0, 4*3);
Now you draw with both attributes 0 and 2 enabled. Attribute 0 contains the data you want, but it is ignored by the shader. Attribute 2 just points somewhere, and you get undefined behavior: it might crash, it might display strange stuff, or it might display nothing at all.
To make this work, just remove vertexPosition_modelspace2 completely from the shader, and use just one MVP matrix as well.
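With the second attribute and matrix removed, the vertex shader reduces to something like this sketch:

```glsl
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;

uniform mat4 MVP; // set to each object's own matrix before its draw call

void main(){
    gl_Position = MVP * vec4(vertexPosition_modelspace, 1);
}
```

Before each glDrawArrays call, upload that object's matrix into the single MVP uniform with glUniformMatrix4fv.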
When drawing any object, you have to:
Set the MVP uniform matrix for the object
Set the attribute pointer for attribute 0
Enable the attribute array for attribute 0 (or make sure it is already enabled)
Issue the draw call
You can do this with as many objects as you want.
I am building a simple 3D game for practice, and I am having trouble passing normals to my shader when using indexed rendering. For each face of a polygon, at each vertex there would be the same normal value. For a cube with 8 vertices, there would be 6 * 6 = 36 normals (since each surface renders with two triangles). With indexed drawing I can only pass 8, one for each vertex. This does not allow me to pass surface normals, only averaged vertex normals.
How could I pass 36 different normals to the 36 different indexes? Using glDrawArrays is apparently slow so I have elected not to use it.
Here is my shader:
#version 330
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 vertNormal;
smooth out vec4 colour;
uniform vec4 baseColour;
uniform mat4 modelToCameraTrans;
uniform mat3 modelToCameraLight;
uniform vec3 sunPos;
layout(std140) uniform Projection {
mat4 cameraToWindowTrans;
};
void main() {
gl_Position = cameraToWindowTrans * modelToCameraTrans * vec4(position, 1.0f);
vec3 dirToLight = normalize((modelToCameraLight * position) - sunPos);
vec3 camSpaceNorm = normalize(modelToCameraLight * vertNormal);
float angle = clamp(dot(camSpaceNorm, dirToLight), 0.0f, 1.0f);
colour = baseColour * angle * 0.07;
}
And here is the code I am currently using to bind to the VAO:
glGenVertexArrays(1, &vertexArray);
glBindVertexArray(vertexArray);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, polygonBuffer);
// The position input to the shader is index 0
glEnableVertexAttribArray(POSITION_ATTRIB);
glVertexAttribPointer(POSITION_ATTRIB, 3, GL_FLOAT, GL_FALSE, 0, 0);
// Add the vertex normal to the shader
glBindBuffer(GL_ARRAY_BUFFER, vertexNormBuffer);
glEnableVertexAttribArray(VERTEX_NORMAL_ATTRIB);
glVertexAttribPointer(VERTEX_NORMAL_ATTRIB, 3, GL_FLOAT, GL_FALSE, 0, 0);
glBindVertexArray(0);
This is my renderer:
glBindVertexArray(vertexArray);
glDrawElements(GL_TRIANGLES, polygonVertexCount, GL_UNSIGNED_SHORT, 0);
glBindVertexArray(0);
For a cube with 8 vertices, there would be 6 * 6 = 36 normals (since
each surface renders with two triangles).
Correct me if I'm wrong, but I only see 6 normals, one per face. All vertices on that face will use the same normal.
With indexed drawing I can only pass 8, one for each vertex. This does
not allow me to pass surface normals, only averaged vertex normals.
Here's where your reasoning fails. You do not just pass vertex locations to the shader; you pass a whole bunch of vertex attributes that together make a vertex unique.
So if you use the same vertex location multiple times (which you'll often do), but with a different normal each time (actually two triangles will share the same data), you should send all the data for that vertex each time, including duplicates of the location.
That being said, you don't need 36, you need 4*6 = 24 unique vertices. You have to send the vertices of each face separately because the normals differ, and within each face there are 4 distinct positions. You could also see it as 8*3 = 24, because you have 8 positions that each have to be replicated for 3 different normals.
So you'll end up with something like:
GLfloat positions[] = { 0.0f, 0.0f, 0.0f,
0.0f, 0.0f, 0.0f,
0.0f, 0.0f, 0.0f,
1.0f, 0.0f, 0.0f,
...}
GLfloat normals[] = { 1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f,
... }
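Arrays like these can also be generated rather than typed by hand: replicate each of the 8 corners once per adjacent face, so every copy carries that face's normal. A sketch for a unit cube at ±1 (helper names hypothetical):

```cpp
#include <cassert>
#include <vector>

struct V3 { float x, y, z; };

// Expand 8 cube corners into 24 per-face vertices: for each of the 6
// faces, pick the 4 corners lying on that face (corner · normal > 0)
// and emit the corner position paired with the face normal.
void buildCube(std::vector<V3>& positions, std::vector<V3>& normals) {
    const V3 faceNormals[6] = {
        { 1, 0, 0}, {-1, 0, 0}, {0, 1, 0}, {0, -1, 0}, {0, 0, 1}, {0, 0, -1}};
    for (int f = 0; f < 6; ++f) {
        const V3 n = faceNormals[f];
        for (int c = 0; c < 8; ++c) {
            V3 p = { (c & 1) ? 1.f : -1.f,
                     (c & 2) ? 1.f : -1.f,
                     (c & 4) ? 1.f : -1.f };
            if (p.x * n.x + p.y * n.y + p.z * n.z > 0) {
                positions.push_back(p);
                normals.push_back(n);
            }
        }
    }
}
```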
Note that while within normals and positions there is repetition, the combination of both at the same index is unique.
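Indexing still pays off after the duplication, because triangles within one face share identical (position, normal) pairs. A sketch of deduplicating full vertices into an indexed mesh (helper names hypothetical):

```cpp
#include <array>
#include <cassert>
#include <cstdint>
#include <map>
#include <vector>

// Deduplicate full (position, normal) vertices into an indexed mesh.
// Two vertices are merged only if *all* their attributes match.
using Vertex = std::array<float, 6>; // x, y, z, nx, ny, nz

void indexVertices(const std::vector<Vertex>& in,
                   std::vector<Vertex>& unique,
                   std::vector<uint16_t>& indices) {
    std::map<Vertex, uint16_t> seen;
    for (const Vertex& v : in) {
        auto it = seen.find(v);
        if (it == seen.end()) {
            it = seen.emplace(v, (uint16_t)unique.size()).first;
            unique.push_back(v);
        }
        indices.push_back(it->second);
    }
}
```

For one cube face (two triangles, six input vertices with the same normal), this collapses to four unique vertices and six indices, which is exactly the data glDrawElements expects.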