I'm working on adding deferred shading to my game. I have the diffuse textures rendered to one render target and the lighting rendered to another. I know both are fine because I can render each of them straight to the screen with no problems. What I want to do is combine the diffuse map and the light map in a shader to produce the final image. Here is my current fragment shader, which results in a black screen:
#version 110

uniform sampler2D diffuseMap;
uniform sampler2D lightingMap;

void main()
{
    vec4 color = texture(diffuseMap, gl_TexCoord[0].st);
    vec4 lighting = texture(lightingMap, gl_TexCoord[0].st);
    vec4 finalColor = color;
    gl_FragColor = finalColor;
}
Shouldn't this result in the same thing as just straight up drawing the diffuse map?
I set the sampler2D uniforms with this method:
void ShaderProgram::setUniformTexture(const std::string& name, GLint t) {
    GLint var = getUniformLocation(name);
    glUniform1i(var, t);
}

GLint ShaderProgram::getUniformLocation(const std::string& name) {
    if(mUniformValues.find(name) != mUniformValues.end()) {
        return mUniformValues[name];
    }
    GLint var = glGetUniformLocation(mProgram, name.c_str());
    mUniformValues[name] = var;
    return var;
}
EDIT: Some more information. Here is the code where I use the shader. I set the two textures and draw a blank square for the shader to use. I know for sure my render targets are working, as I said before, because I can draw them fine using the same getTextureId as I do here.
graphics->useShader(mLightingCombinedShader);
mLightingCombinedShader->setUniformTexture("diffuseMap", mDiffuse->getTextureId());
mLightingCombinedShader->setUniformTexture("lightingMap", mLightMap->getTextureId());
graphics->drawPrimitive(mScreenRect, 0, 0);
graphics->clearShader();
void GraphicsDevice::useShader(ShaderProgram* p) {
    glUseProgram(p->getId());
}

void GraphicsDevice::clearShader() {
    glUseProgram(0);
}
And the vertex shader:
#version 110

varying vec2 texCoord;

void main()
{
    texCoord = gl_MultiTexCoord0.xy;
    gl_Position = ftransform();
}
In GLSL version 110 you should use:
texture2D(diffuseMap, gl_TexCoord[0].st); // etc.
instead of the texture function, which only exists in GLSL 1.30 and later.
And then to combine the textures, just multiply the colours together, i.e.
gl_FragColor = color * lighting;
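Putting both fixes together, a corrected fragment shader would look something like this (note that the vertex shader posted above writes a varying vec2 texCoord rather than gl_TexCoord[0], so the fragment shader should read that):

#version 110

uniform sampler2D diffuseMap;
uniform sampler2D lightingMap;

varying vec2 texCoord; // written by the vertex shader

void main()
{
    vec4 color = texture2D(diffuseMap, texCoord);
    vec4 lighting = texture2D(lightingMap, texCoord);
    gl_FragColor = color * lighting;
}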
glUniform1i(var, t);
The glUniform functions affect the program that is currently in use. That is, the last program that glUseProgram was called on. If you want to set the uniform for a specific program, you have to use it first.
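Applied to the code above, that means something like the following sketch. Note also that the value passed to glUniform1i for a sampler is the texture unit index (0, 1, ...), not the texture object id, so the render target textures have to be bound to those units:

graphics->useShader(mLightingCombinedShader); // program must be current first

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, mDiffuse->getTextureId());
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, mLightMap->getTextureId());

mLightingCombinedShader->setUniformTexture("diffuseMap", 0);  // texture unit 0
mLightingCombinedShader->setUniformTexture("lightingMap", 1); // texture unit 1

graphics->drawPrimitive(mScreenRect, 0, 0);
graphics->clearShader();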
The problem ended up being that I didn't enable the texture coordinates for the screen rectangle I was drawing.
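For reference, with the fixed-function-style pipeline used here (gl_MultiTexCoord0, ftransform), enabling the texture coordinates for the quad would look roughly like this sketch (quadTexCoords is a hypothetical array; the actual drawPrimitive internals aren't shown in the question):

// Without this enable, gl_MultiTexCoord0 holds a constant value for every
// vertex, so both samplers fetch the same single texel.
GLfloat quadTexCoords[8] = { 0,0,  1,0,  1,1,  0,1 };
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, quadTexCoords);
// ... submit vertices and draw the quad ...
glDisableClientState(GL_TEXTURE_COORD_ARRAY);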
I have an NVidia example that uses an ARB Assembly shader:
!!ARBfp1.0
TEX result.color, fragment.texcoord, texture[0], RECT;
END
Now I would like to translate that into a GLSL shader. This is what I've come up with:
uniform sampler2D tex;

void main(void)
{
    vec4 col = texture2D(tex, gl_TexCoord[0].st);
    gl_FragColor = vec4(col.r, col.g, col.b, col.a);
}
I was hoping to see no change in the resulting rendering, but sadly I only get a black texture.
I've already made sure that the tex sampler is set correctly. Also my GLSL code compiles with no errors. For debugging I tried to make my shader even simpler:
void main(void)
{
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
This gives me a red texture. Thus my basic setup seems to be OK.
Pay attention to the 4th parameter of TEX. It says RECT, so the sampler needs to be of type sampler2DRect:
uniform sampler2DRect tex;

void main(void) {
    gl_FragColor = texture2DRect(tex, gl_TexCoord[0].st);
}
UPDATE: It turns out this was due to a bug on the C side of things, causing some of the matrices to become malformed. The shaders are all fine. So if adding uniforms causes weird things to happen, my advice would be to use a debugger to check the values of ALL uniforms and make sure they are all being set correctly.
So I am trying to render depth to a cube map to use as a shadow map, but when I add and use a uniform in the fragment shader, everything becomes white, as if the shader isn't being used. No warnings or errors are generated when compiling/linking the shader.
The shader program I am using to render the depth map (setting the depth simply to the fragment z position as a test) is as follows:
//vertex shader
#version 430

layout(location = 0) in vec4 vertexPositionModel;

uniform mat4 modelToWorldMatrix;

void main() {
    gl_Position = modelToWorldMatrix * vertexPositionModel;
}
//geometry shader
#version 430

layout (triangles) in;
layout (triangle_strip, max_vertices=18) out;

out vec4 fragPositionWorld;

uniform mat4 projectionMatrices[6];

void main() {
    for (int face = 0; face < 6; face++) {
        gl_Layer = face;
        for (int i = 0; i < 3; i++) {
            fragPositionWorld = gl_in[i].gl_Position;
            gl_Position = projectionMatrices[face] * fragPositionWorld;
            EmitVertex();
        }
        EndPrimitive();
    }
}
//Fragment shader
#version 430

in vec4 fragPositionWorld;

void main() {
    gl_FragDepth = abs(fragPositionWorld.z);
}
The main shader samples from the cubemap and simply renders the depth as greyscale colour:
vec3 lightDirection = fragPositionWorld - pointLight.position;
float closestDepth = texture(shadowMap, lightDirection).r;
finalColour = vec4(vec3(closestDepth), 1.0);
The scene is a small cube in a larger cubic room, and it renders as expected: dark near z = 0, with the cube projected back onto the wall (the depth map is rendered from the centre of the room):
Good:
I can move the small cube around and the projection projects correctly onto all the sides of the cubemap. All good so far.
The problem comes when I add a uniform to the fragment shader, e.g.:
#version 430

in vec4 fragPositionWorld;

uniform vec3 lightPos;

void main() {
    gl_FragDepth = min(lightPos.y, 0.5);
}
Everything renders as white, the same as if the shader had failed to compile:
Bad:
gDEBugger reports that the uniform is set correctly (0, 4, 0). Regardless of what lightPos is, gl_FragDepth should be set to a value no greater than 0.5 and appear as a shade of grey (which is what happens if I set gl_FragDepth = 0.5 directly). So I can only conclude that the fragment shader is not being used for some reason and a default one is being used instead. Unfortunately I have no idea why.
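Following the advice in the update above, one way to check the uniforms from the C side is a dump like this sketch (program is assumed to be the linked depth-map program; glGetUniformfv is only valid for float-typed uniforms):

#include <cstdio>

void dumpUniforms(GLuint program) {
    GLint count = 0;
    glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &count);
    for (GLint i = 0; i < count; ++i) {
        char name[256];
        GLsizei length; GLint size; GLenum type;
        glGetActiveUniform(program, i, sizeof(name), &length, &size, &type, name);
        GLint loc = glGetUniformLocation(program, name);
        GLfloat v[16] = { 0.0f };        // large enough for a mat4
        glGetUniformfv(program, loc, v); // float-typed uniforms only
        printf("%s (location %d): %f %f %f %f ...\n", name, loc, v[0], v[1], v[2], v[3]);
    }
}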
I have a simple compositing system which is supposed to render different textures and a background texture into an FBO. It also renders some primitives.
I'm rendering with one simple GLSL shader for the textures and another for the primitives. I also wait for each draw to finish by calling glFinish after each glDrawArrays call.
So basically:
tex shader (background tex)
tex shader (tex 1)
primitive shader
tex shader (tex 2)
tex shader (tex 3)
When I only do this once, it works. But if I do another render pass directly after the first one finishes, some textures just aren't rendered.
The primitive, however, is always rendered.
This doesn't always happen, but the more textures I draw, the more often it occurs.
I'm therefore assuming this is a timing problem.
I've tried to troubleshoot it for the last two days and I just can't find the cause.
I'm 100% sure that the textures are always valid (I downloaded them using glGetTexImage to verify).
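For completeness, the verification mentioned above looks roughly like this (width and height are assumptions; str.texName is the texture from the drawing code below):

unsigned char *pixels = (unsigned char *)malloc(width * height * 4);
glBindTexture(GL_TEXTURE_2D, str.texName);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// ... inspect 'pixels' or write them to a file to confirm the contents ...
free(pixels);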
Here are my texture shaders.
Vertex shader:
#version 150

uniform mat4 mvp;

in vec2 inPosition;
in vec2 inTexCoord;

out vec2 texCoordV;

void main(void)
{
    texCoordV = inTexCoord;
    gl_Position = mvp * vec4(inPosition, 0.0, 1.0);
}
Fragment shader:
#version 150

uniform sampler2D tex;

in vec2 texCoordV;

out vec4 fragColor;

void main(void)
{
    fragColor = texture(tex, texCoordV);
}
And here's my invocation:
NSRect drawDestRect = NSMakeRect(xPos, yPos, str.texSize.width, str.texSize.height);
NLA_VertexRect rect = NLA_VertexRectFromNSRect(drawDestRect);

int texID = 0;
NLA_VertexRect texCoords = NLA_VertexRectFromNSRect(NSMakeRect(0.0f, 0.0f, 1.0f, 1.0f));
NLA_VertexRectFlipY(&texCoords);

[self.texApplyShader.arguments[@"inTexCoord"] setValue:&texCoords forNumberOfVertices:4];
[self.texApplyShader.arguments[@"inPosition"] setValue:&rect forNumberOfVertices:4];
[self.texApplyShader.arguments[@"tex"] setValue:&texID forNumberOfVertices:1];

GetError();

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, str.texName);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
glFinish();
The setValue:forNumberOfVertices: method is an object-based wrapper around OpenGL's vertex attribute setup functions. It basically does this:
glBindVertexArray(_vertexArrayObject);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBufferObject);
glBufferData(GL_ARRAY_BUFFER, bytesForGLType * numVertices, value, GL_DYNAMIC_DRAW);
glEnableVertexAttribArray((GLuint)self.boundLocation);
glVertexAttribPointer((GLuint)self.boundLocation, numVectorElementsForType, GL_FLOAT, GL_FALSE, 0, 0);
Here are two screenshots of what it should look like (taken after first render pass) and what it actually looks like (taken after second render pass):
https://www.dropbox.com/s/0nmquelzo83ekf6/GLRendering_issues_correct.png?dl=0
https://www.dropbox.com/s/7aztfba5mbeq5sj/GLRendering_issues_wrong.png?dl=0
(in this example, the background texture is just black)
The primitive shader is as simple as it gets:
Vertex:
#version 150

uniform mat4 mvp;
uniform vec4 inColor;

in vec2 inPosition;

out vec4 colorV;

void main (void)
{
    colorV = inColor;
    gl_Position = mvp * vec4(inPosition, 0.0, 1.0);
}
Fragment:
#version 150

in vec4 colorV;

out vec4 fragColor;

void main(void)
{
    fragColor = colorV;
}
Found the issue... I didn't realize that the FBO was already being drawn to the screen after the first render pass. That happens on a different thread and wasn't locked properly.
Apparently the context was switched while the compositing was still taking place, which explains why different issues appeared at random depending on when the second thread switched the context.
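For anyone hitting the same thing on macOS, here is a sketch of the locking that was missing, assuming both threads share one NSOpenGLContext reachable as self.openGLContext:

// Guard every block of GL calls so the compositing thread and the
// display thread can't interleave on the shared context.
CGLLockContext([self.openGLContext CGLContextObj]);
[self.openGLContext makeCurrentContext];
// ... render pass ...
CGLUnlockContext([self.openGLContext CGLContextObj]);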
I'm making a 2D side-scroller game and I am currently implementing lights. The lights are just a light gradient texture rendered on top of the terrain, multiplied to brighten up the area. However, I don't know how to do ambient lighting, nor do I really understand it. The following picture sums up what I have; the bottom part is what I want.
I am open to answers involving shaders, as I know how to use them.
I ended up creating an FBO texture the size of the screen, clearing it with the colour of the ambience and drawing all nearby lights into it. Then I passed it through a shader I made, which takes two textures as uniforms: the texture to draw and the light FBO itself. The shader multiplies the texture being drawn with the FBO, and it came out nicely.
ambience.frag
uniform sampler2D texture1;
uniform sampler2D texture2;

varying vec2 texCoord;

void main(void) {
    vec4 color1 = texture2D(texture1, gl_TexCoord[0].st);
    vec4 color2 = texture2D(texture2, texCoord);
    gl_FragColor = color1 * vec4(color2.rgb, 1.0);
}
ambience.vs
varying vec2 texCoord;

uniform vec2 screen;
uniform vec2 camera;

void main() {
    gl_Position = ftransform();
    gl_TexCoord[0] = gl_MultiTexCoord0;
    vec2 temp = vec2(gl_Vertex.x, gl_Vertex.y) - camera;
    texCoord = temp / screen;
}
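The host-side pass described above would look roughly like this sketch (lightFBO and lightTexture are hypothetical names; the FBO's colour texture is what gets bound as texture2):

glBindFramebuffer(GL_FRAMEBUFFER, lightFBO);
glClearColor(0.1f, 0.1f, 0.2f, 1.0f);   // the ambient colour
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);            // additive: lights brighten the ambience
// ... draw each nearby light gradient quad ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// then render the scene with ambience.frag, with the scene texture bound as
// texture1 and lightTexture bound as texture2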
I have an OpenGL program in which I want to texture a sphere with a bitmap of the Earth. I prepared a mesh in Blender and exported it to an OBJ file. The program loads the appropriate mesh data (vertices, UVs and normals) and the bitmap properly; I have checked this by texturing a cube with a bone bitmap.
My program textures the sphere, but incorrectly (or not in the way I expect). Each triangle of the sphere contains a deformed copy of the bitmap. I've checked the bitmap and the UVs and they seem to be OK. I've tried many bitmap sizes (powers of 2, multiples of 2, etc.).
Here's the texture:
Screenshot of my program (it looks as if it ignores my UV coords):
I mapped the UVs in Blender this way:
Code setting up the texture after loading it (apart from the code adding the texture coordinates to a VBO, which I think is OK):
GLuint texID;
glGenTextures(1, &texID);
glBindTexture(GL_TEXTURE_2D, texID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, (GLvoid*)&data[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
Is there any extra code needed to map this texture properly?
[Edit]
Initializing the textures (the code presented earlier is in the LoadTextureBMP_custom() function):
bool Program::InitTextures(string texturePath)
{
    textureID = LoadTextureBMP_custom(texturePath);

    GLuint TBO_ID;
    glGenBuffers(1, &TBO_ID);
    glBindBuffer(GL_ARRAY_BUFFER, TBO_ID);
    glBufferData(GL_ARRAY_BUFFER, uv.size()*sizeof(vec2), &uv[0], GL_STATIC_DRAW);

    return true;
}
My main loop:
bool Program::MainLoop()
{
    bool done = false;
    mat4 projectionMatrix;
    mat4 viewMatrix;
    mat4 modelMatrix;
    mat4 MVP;
    Camera camera;

    shader.SetShader(true);
    while(!done)
    {
        if( (glfwGetKey(GLFW_KEY_ESC)))
            done = true;
        if(!glfwGetWindowParam(GLFW_OPENED))
            done = true;

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // Matrix transformations here
        camera.UpdateCamera();
        modelMatrix = mat4(1.0f);
        viewMatrix = camera.GetViewMatrix();
        projectionMatrix = camera.GetProjectionMatrix();
        MVP = projectionMatrix*viewMatrix*modelMatrix;
        // End of transformations

        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, textureID);

        shader.SetShaderParameters(MVP);
        SetOpenGLScene(width, height);

        glEnableVertexAttribArray(0); // Enable the vertex shader input => vertexPosition_modelspace
        glBindBuffer(GL_ARRAY_BUFFER, VBO_ID);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

        glEnableVertexAttribArray(1);
        glBindBuffer(GL_ARRAY_BUFFER, TBO_ID);
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);

        glDrawArrays(GL_TRIANGLES, 0, vert.size());

        glDisableVertexAttribArray(0);
        glDisableVertexAttribArray(1);

        glfwSwapBuffers();
    }
    shader.SetShader(false);
    return true;
}
VS:
#version 330

layout(location = 0) in vec3 vertexPosition;
layout(location = 1) in vec2 vertexUV;

out vec2 UV;

uniform mat4 MVP;

void main()
{
    vec4 v = vec4(vertexPosition, 1.0f);
    gl_Position = MVP*v;
    UV = vertexUV;
}
FS:
#version 330

in vec2 UV;
out vec4 color;

uniform sampler2D texSampler; // Texture handle

void main()
{
    color = texture(texSampler, UV);
}
I haven't done any professional GL programming, but I've been working with 3D software quite a lot.
- your UVs are most likely bad
- your texture is a bad fit for projecting onto a sphere
- since the UVs look bad, you might want to check your normals as well
- consider an icosphere instead of a regular UV sphere to make more efficient use of polygons
You are currently using a flat texture with flat mapping, which may give you very ugly results, since you will have very low resolution around the "outer" perimeter and most likely a nasty seam artifact where the two projections meet if you, say, rotate the planet.
Note that you don't need any particular UV map; it just needs to be consistent with the geometry, which it doesn't look like it is right now. A spherical mapping will take care of the rest. You could probably get away with a cylindrical map as well, since most Earth textures are in a suitable projection.
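For reference, a standard spherical (equirectangular) mapping for a point n on a unit sphere, which matches the projection most Earth textures use, is something like:

vec2 sphericalUV(vec3 n) // n: normalized direction from the sphere's centre
{
    const float PI = 3.14159265;
    float u = 0.5 + atan(n.z, n.x) / (2.0 * PI);
    float v = 0.5 - asin(n.y) / PI;
    return vec2(u, v);
}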
Finally, I've got the answer. The error was here:
bool Program::InitTextures(string texturePath)
{
    textureID = LoadTextureBMP_custom(texturePath);
    // GLuint TBO_ID; _ERROR_
    glGenBuffers(1, &TBO_ID);
    glBindBuffer(GL_ARRAY_BUFFER, TBO_ID);
    glBufferData(GL_ARRAY_BUFFER, uv.size()*sizeof(vec2), &uv[0], GL_STATIC_DRAW);
    return true;
}
Here is the relevant part of the Program class declaration:
class Program
{
private:
    Shader shader;
    GLuint textureID;
    GLuint VAO_ID;
    GLuint VBO_ID;
    GLuint TBO_ID; // member shadowed by the local variable declared in InitTextures()
    ...
};
I had erroneously declared a local TBO_ID that shadowed the TBO_ID at class scope. The UVs were generated with crummy precision and the seams are horrible, but they weren't the problem.
I have to admit that the information I supplied was too sparse to make helping possible. I should have posted all the code of the Program class. Thanks to everybody who tried.