I'm using LWJGL 3 on OSX. The shaders work fine when using a GLSL version below 150, but after porting the code to 330, nothing renders.
My shaders are as simple as possible:
vertex shader:
#version 330 core
in vec3 position;
void main(void) {
gl_Position = vec4(position, 1.0);
}
fragment shader:
#version 330 core
out vec4 outColour;
void main(void) {
outColour = vec4(1.0, 0.0, 0.0, 1.0);
}
I create a simple triangle like this (Scala):
val vertices = Array(
0.0f, 0.5f, 0.0f,
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f
)
val vertexBuffer = BufferUtils.createFloatBuffer(vertices.length)
vertexBuffer.put(vertices)
vertexBuffer.flip()
val buffer = GL15.glGenBuffers()
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, buffer)
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, vertexBuffer, GL15.GL_STATIC_DRAW)
and I draw it like this:
GL20.glUseProgram(shader)
GL20.glEnableVertexAttribArray(0)
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, buffer)
GL20.glBindAttribLocation(shader, 0, "position")
GL20.glVertexAttribPointer(0, 3, GL11.GL_FLOAT, false, 0, 0)
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, 9)
GL20.glDisableVertexAttribArray(0)
GL20.glUseProgram(0)
The shaders compile fine and the program runs but I just get a blank screen! Is there anything obviously wrong with my code?
Vertex Array Objects (VAOs) are required for rendering in a Core context. In Compatibility contexts they're optional.
However, you can just generate one at startup and leave it bound if you're feeling lazy :)
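For example, here is a minimal sketch of that (plain C-style GL calls; with LWJGL 3 the same functions live in the GL30 class), run once during initialization:
// Core profile: a VAO must be bound before you set attribute pointers or draw.
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao); // leave it bound for the lifetime of the app if one VAO is enough
Your existing glBindBuffer/glVertexAttribPointer/glDrawArrays calls will then record into and draw from that VAO.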
I'm trying to show a texture (yes, it is a POT texture) with OpenGL 2.1 and GLSL 120, but I'm not sure how to do it; all I can get is a black quad. I've been following these tutorials: A Textured Cube, OpenGL - Textures. What I have understood is that I need to:
Specify the texture coordinates to attach to each vertex (in my case there are 6 vertices, a quad without indexing)
Load the texture and bind it to a texture unit (the default is 0)
Call glDrawArrays
Inside the shaders I need to:
Receive the texture coordinates in an attribute in the vertex shader and pass them to the fragment shader through a varying variable
In the fragment shader, use a sampler to read the texel at the position given by the varying variable.
Is it all correct?
Here is how I create the texture VBO and load the texture:
void Application::onStart(){
unsigned int format;
SDL_Surface* img;
float quadCoords[] = {
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
0.5f, 0.5f, 0.0f,
0.5f, 0.5f, 0.0f,
-0.5f, 0.5f, 0.0f,
-0.5f, -0.5f, 0.0f};
const float texCoords[] = {
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
1.0f, 1.0f,
0.0f, 1.0f,
0.0f, 0.0f};
//shader loading omitted ...
sprogram.bind(); // call glUseProgram(programId)
//set the sampler value to 0 -> use texture unit 0
sprogram.loadValue(sprogram.getUniformLocation(SAMPLER), 0);
//quad
glGenBuffers(1, &quadBuffer);
glBindBuffer(GL_ARRAY_BUFFER, quadBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*18, quadCoords, GL_STATIC_DRAW);
//texture
glGenBuffers(1, &textureBuffer);
glBindBuffer(GL_ARRAY_BUFFER, textureBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*12, texCoords, GL_STATIC_DRAW);
//load texture
img = IMG_Load("resources/images/crate.jpg");
if(img == nullptr)
throw std::runtime_error(SDL_GetError());
glGenTextures(1, &this->texture);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, this->texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img->w, img->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, img->pixels);
SDL_FreeSurface(img);
}
rendering phase:
glClear(GL_COLOR_BUFFER_BIT);
glEnableVertexAttribArray(COORDS);
glBindBuffer(GL_ARRAY_BUFFER, quadBuffer);
glVertexAttribPointer(COORDS, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
glEnableVertexAttribArray(TEX_COORDS);
glBindBuffer(GL_ARRAY_BUFFER, textureBuffer);
glVertexAttribPointer(TEX_COORDS, 2, GL_FLOAT, GL_FALSE, 0, nullptr);
//draw the vertices
glDrawArrays(GL_TRIANGLES, 0, 6);
vertex shader:
#version 120
attribute vec3 coord;
attribute vec2 texCoord;
varying vec2 UV;
void main(){
gl_Position = vec4(coord.x, coord.y, coord.z, 1.0);
UV = texCoord;
}
fragment shader:
#version 120
uniform sampler2D tex;
varying vec2 UV;
void main(){
gl_FragColor.rgb = texture2D(tex, UV).rgb;
gl_FragColor.a = 1.0;
}
I know that the tutorials use out instead of varying, so I tried to "convert" the code. There is also this tutorial: Simple Texture - LightHouse, which explains the built-in gl_MultiTexCoord0 attribute and gl_TexCoord array, but that is almost the same thing I'm doing. I want to know if I'm doing it all right, and if not, I would like to know how to show a simple 2D texture on the screen with OpenGL 2.1 and GLSL 120.
Do you have a particular reason to use OpenGL 2.1 with GLSL 1.20? If not, stick to OpenGL 3.0, because it's easier to understand IMHO.
My guess is you have two big problems:
First of all, the black quad: if it fills your whole window, then it's just the background color, which means nothing is being drawn at all.
I think (from testing this) OpenGL falls back to a default program even when you have already set up a vertex array/buffer object on the GPU; in that case it should render as a white quad in your window. So that might be your first problem. I don't know if OpenGL 2.1 has vertex array/buffer objects, but OpenGL 3.0 does, and you should definitely make use of them!
Second: you don't use your shader program in the rendering phase.
Call this function before drawing your quad:
glUseProgram(myProgram); // The myProgram variable is your compiled shader program
If by any chance you would like me to explain how to draw your quad using OpenGL 3.0+, let me know :) It is not far from what you already wrote in your code.
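For instance, here is a minimal sketch of the rendering phase with the program bound first, reusing the names from your code (sprogram, COORDS, TEX_COORDS, quadBuffer, textureBuffer, this->texture):
glClear(GL_COLOR_BUFFER_BIT);
sprogram.bind(); // glUseProgram(programId) - the program must be current while drawing
glActiveTexture(GL_TEXTURE0); // the sampler uniform was set to unit 0 at startup
glBindTexture(GL_TEXTURE_2D, this->texture);
glEnableVertexAttribArray(COORDS);
glBindBuffer(GL_ARRAY_BUFFER, quadBuffer);
glVertexAttribPointer(COORDS, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
glEnableVertexAttribArray(TEX_COORDS);
glBindBuffer(GL_ARRAY_BUFFER, textureBuffer);
glVertexAttribPointer(TEX_COORDS, 2, GL_FLOAT, GL_FALSE, 0, nullptr);
glDrawArrays(GL_TRIANGLES, 0, 6);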
I'm trying to write code that runs on both OpenGL ES and OpenGL 3.0 (not ES) to draw triangles, changing only the shaders being used.
The code works for the OpenGL ES version, but doesn't work for the OpenGL 3.0 one. Here is what I'm doing:
Vertex shader:
#version 330 core
in vec3 a_v4Position;
void main(){
gl_Position = vec4(a_v4Position, 1.0);
}
Fragment shader:
#version 330 core
out vec3 color;
void main(){
color = vec3(0,1,0);
}
Code to draw the triangle (GLProgram is linked correctly):
const float triangleVertices[] =
{
0.0f, 0.5f, 0.0f,
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
};
positionAttribLocation = glGetAttribLocation(GLProgram, "a_v4Position");
glEnableVertexAttribArray(positionAttribLocation);
glVertexAttribPointer(positionAttribLocation, 3, GL_FLOAT, GL_FALSE, 0, triangleVertices);
glDrawArrays(GL_TRIANGLES, 0, 3);
I've looked at many papers and blogs, and I finally arrived at this code to generate a quad:
The program is stored in gProgram.
vertex shader:
#version 330
layout(location = 0) in vec3 attrib_position;
layout(location = 6) in vec2 attrib_texcoord;
out Varing
{
vec4 pos;
vec2 texcoord;
} VS_StateOutput;
uniform mat4 gtransform;
void main(void)
{
VS_StateOutput.pos = gtransform * vec4(attrib_position,1.0f);
VS_StateOutput.texcoord = attrib_texcoord;
}
fragment shader:
#version 330
uniform sampler2D texUnit;
in Varing
{
vec4 pos;
vec2 texcoord;
} PS_StateInput;
layout(location = 0) out vec4 color0;
void main(void)
{
color0 = texture(texUnit, PS_StateInput.texcoord);
}
Here is my vertex data, stored in gVertexVBO:
float data[] =
{
-1.0f, 1.0f, 0.0f, 0.0f, 0.0f,
1.0f, 1.0f, 0.0f, 1.0f, 0.0f,
1.0f, -1.0f, 0.0f, 1.0f, 1.0f,
-1.0f, -1.0f, 0.0f, 0.0f, 1.0f
};
and the index data, stored in gIndexVBO:
unsigned short idata[] =
{
0, 3, 2, 0, 2, 1
};
In the drawing section, I do this:
PS: gtransform is set to an identity matrix.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(gProgram);
glProgramUniformMatrix4fv(gProgram, gtLoc, 1, GL_FALSE, matrix);
glProgramUniform1uiv(gProgram, texLoc, 1, &colorTex);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, gIndexVBO);
glBindBuffer(GL_ARRAY_BUFFER, gVertexVBO);
char* offset = NULL;
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(float)*5, offset);
glEnableVertexAttribArray(0);
glVertexAttribPointer(6, 2, GL_FLOAT, GL_FALSE, sizeof(float)*5, offset+sizeof(float)*3);
glEnableVertexAttribArray(6);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, NULL);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(6);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
I get nothing but the cleared background color. I think the error is somewhere in this code, but I can't figure out where it is; I'd appreciate any help.
If you need to check the code in the initialization section, I will post it in a reply. Thanks!
You're not writing to the built-in gl_Position output in the vertex shader.
uniform mat4 gtransform;
void main(void)
{
vec4 outPosition = gtransform * vec4(attrib_position,1.0f);
gl_Position = outPosition;
// if you need the position in the fragment shader
VS_StateOutput.pos = outPosition;
VS_StateOutput.texcoord = attrib_texcoord;
}
OpenGL won't know about your VS_StateOutput block and won't have any idea where to put the new vertices. See here for a quick reference (page 7).
This link gives more detail on the built-in variables.
EDIT:
Next I would change the fragment shader main to:
void main(void)
{
color0 = vec4(1.0); //texture(texUnit, PS_StateInput.texcoord);
}
to rule out any texture issues. Sometimes texture problems lead to a completely black texture. If you get a white colour, you then know it's a texture problem and can look there instead.
I just started learning OpenGL 3.1 and I'm trying to add deferred shading to my engine (framework?). I wrote simple shaders for the first stage, the lighting stage, and the deferred stage.
The lighting stage takes the diffuse color from the deferred texture and saves it in the lighting texture. The deferred stage draws the lighting texture. There is a bug in the lighting shader and the scene looks very strange. It looks like this, and it should look like this. Lighting stage vertex shader:
#version 150
in vec4 vertex;
out vec2 position;
void main(void)
{
gl_Position = vertex*2-1;
gl_Position.z = 0.0;
position.xy = vertex.xy;
}
Lighting stage fragment shader:
#version 150
in vec2 position;
uniform sampler2D diffuseTexture;
uniform sampler2D positionTexture;
out vec4 lightingOutput;
void main()
{
vec4 diffuse = texture(diffuseTexture, vec2(position.x, position.y));
vec4 position = texture(positionTexture, position.xy);
vec4 ambient = vec4(0.05, 0.05, 0.05, 1.0) * diffuse;
lightingOutput = diffuse;
}
This is what I render in the lighting stage:
static const GLfloat _vertices[] =
{
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
1.0f, 1.0f, 0.0f,
};
And that's how I render it:
glUseProgram( programID[2] );
glEnableVertexAttribArray(vertexID[1]);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glVertexAttribPointer(
vertexID[1],
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, positionTexture);
glUniform1i(positionTextureID[1], 1);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, diffuseTexture);
glUniform1i(diffuseTextureID[1], 0);
glDrawArrays( GL_TRIANGLES, 0, 6 );
glDisableVertexAttribArray(vertexID[1]);
If you need all the code, it's here: www.dropbox.com/s/hvfe4v4pb1pfxb3/code.zip
How can I fix that strange problem?
I have two images, and with the help of the instructions here:
http://en.wikibooks.org/wiki/OpenGL_Programming/Intermediate/Textures
I was able to store them separately, into two separate textures, and upload them into video memory:
gluBuild2DMipmaps(GL_TEXTURE_2D, 4, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
Now, how would I access these textures from the shaders in order to multiply the two textures?
For example, I found this example about multiplication using shaders:
http://www.opengl.org/wiki/Texture_Combiners
//Vertex shader
#version 110
attribute vec4 InVertex;
attribute vec2 InTexCoord0;
attribute vec2 InTexCoord1;
uniform mat4 ProjectionModelviewMatrix;
varying vec2 TexCoord0;
varying vec2 TexCoord1; //Or just use TexCoord0
//------------------------
void main()
{
gl_Position = ProjectionModelviewMatrix * InVertex;
TexCoord0 = InTexCoord0;
TexCoord1 = InTexCoord1;
}
//------------------------
//Fragment shader
#version 110
uniform sampler2D Texture0;
uniform sampler2D Texture1;
//------------------------
varying vec2 TexCoord0;
varying vec2 TexCoord1; //Or just use TexCoord0
//------------------------
void main()
{
vec4 texel = texture2D(Texture0, TexCoord0);
texel *= texture2D(Texture1, TexCoord1);
gl_FragColor = texel;
}
But how would I set up the vertex data for the textures that I've uploaded, so that I can use these vertex/fragment shaders to accomplish this multiplication?
All I did was call gluBuild2DMipmaps, but now I don't know how to apply vertex/fragment shaders to my texture.
Assume you have a quad where the first three values of each vertex are the vertex coordinates and the last two are the texture coordinates:
-1.0f,-1.0f, 1.0f, 0.0f, 0.0f,
1.0f,-1.0f, 1.0f, 1.0f, 0.0f,
-1.0f, 1.0f, 1.0f, 0.0f, 1.0f,
1.0f,-1.0f, 1.0f, 1.0f, 0.0f,
1.0f, 1.0f, 1.0f, 1.0f, 1.0f,
-1.0f, 1.0f, 1.0f, 0.0f, 1.0f,
You have to submit different uniforms and attributes to your hardware:
First of all (after the MVP matrix and so on), the vertex and texture coordinates:
glEnableVertexAttribArray(VAA_Normal);
glVertexAttribPointer(VAA_Normal, 3, GL_FLOAT, GL_TRUE, 5*sizeof(GLfloat), (const GLvoid*)0);
glEnableVertexAttribArray(VAA_TexCoord);
glVertexAttribPointer(VAA_TexCoord, 2, GL_FLOAT, GL_TRUE, 5*sizeof(GLfloat), (const GLvoid*)(3 * sizeof(GLfloat)));
(VAA_Normal = glGetAttribLocation(aProgram, attribName);)
Last but not least, the important texture:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, aTexture);
Don't forget: it's up to you how you combine the different textures.
Edit: sorry, I forgot:
glUniform1i(glGetUniformLocation(aProgramID, "Texture0"), 0);
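To multiply your two textures, here is a minimal sketch, assuming the two texture objects created with gluBuild2DMipmaps are stored in hypothetical handles textureA and textureB, and that the program uses the Texture0/Texture1 samplers from the fragment shader above:
glUseProgram(aProgramID); // the program must be current before setting its uniforms
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureA); // first image (hypothetical handle)
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, textureB); // second image (hypothetical handle)
glUniform1i(glGetUniformLocation(aProgramID, "Texture0"), 0); // sampler Texture0 -> unit 0
glUniform1i(glGetUniformLocation(aProgramID, "Texture1"), 1); // sampler Texture1 -> unit 1
The fragment shader then samples both units and multiplies the texels exactly as in the code you posted.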