I need to render an OpenGL scene to a texture in order to then manipulate that texture in a shader. I've solved this by using Framebuffer Objects, which I think I understand fairly well by now. At many points in my effect pipeline, I need to render a fullscreen quad and texture it with the dynamically rendered texture, which is where my problem is.
This is what my scene looks like: https://www.mathematik.uni-marburg.de/~thomak/planet.jpg
I render this to a texture and map that texture to a fullscreen quad. However, the resulting image is distorted in this way: https://www.mathematik.uni-marburg.de/~thomak/planettexture.jpg
Here is the code that renders the quad and sets the texture coordinates:
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glBegin(GL_QUADS);
glTexCoord2i(0, 0);
glVertex3i(-1, -1, -1);
glTexCoord2i(0, 1);
glVertex3i( 1, -1, -1);
glTexCoord2i(1, 1);
glVertex3i( 1, 1, -1);
glTexCoord2i(1, 0);
glVertex3i(-1, 1, -1);
glEnd();
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
And the shader code is here:
sampler2D BlitSamp = sampler_state
{
MinFilter = LINEAR;
MagFilter = LINEAR;
MipFilter = LINEAR;
AddressU = Clamp;
AddressV = Clamp;
};
float4 AlphaClearPS(float2 texcoords : TEXCOORD0) : COLOR
{
return float4(tex2D(BlitSamp, texcoords).rgb, 1.0f);
}
Where BlitSamp is the texture I rendered to and then passed to the shader. What could be going on here?
It's possible that your tex-coords are off. Your code, my comments:
glTexCoord2i(0, 0); //Bottom-Left
glVertex3i(-1, -1, -1); //Bottom-Left
glTexCoord2i(0, 1); //Top-Left
glVertex3i( 1, -1, -1); //Bottom-Right???
glTexCoord2i(1, 1); //Top-Right
glVertex3i( 1, 1, -1); //Top-Right
glTexCoord2i(1, 0); //Bottom-Right
glVertex3i(-1, 1, -1); //Top-Left???
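In other words, your second and fourth vertices don't sit at the corners their texture coordinates name, so the image comes out transposed. A minimal fix (a sketch, assuming you want the texture upright and unmirrored) is to walk texture coordinates and vertices in the same counter-clockwise order:
glTexCoord2i(0, 0); glVertex3i(-1, -1, -1); // Bottom-Left
glTexCoord2i(1, 0); glVertex3i( 1, -1, -1); // Bottom-Right
glTexCoord2i(1, 1); glVertex3i( 1,  1, -1); // Top-Right
glTexCoord2i(0, 1); glVertex3i(-1,  1, -1); // Top-Left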
Your code to render the quad looks fine, so that points to a mismatch between the size of the quad and the size of the viewport.
Could you have swapped the width and height when you created the render texture, by any chance?
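If you allocate that texture yourself, it's worth double-checking the call; glTexImage2D takes width before height, and swapping them on a non-square viewport produces exactly this kind of stretching (a sketch of the allocation, not your actual code):
// width comes before height; a swapped pair distorts a non-square render target
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);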
I have this code:
glColor3f(1, 0, 0);// red quad
glBegin(GL_QUADS);
glVertex3f(-1, 0, -0.1);
glVertex3f(1, 0, -0.1);
glVertex3f(1, 1, -0.1);
glVertex3f(-1, 1, -0.1);
glEnd();
glColor3f(0, 1, 0); //green quad
glBegin(GL_QUADS);
glVertex3f(-1, 0, -0.2);
glVertex3f(1, 0, -0.2);
glVertex3f(1, 1, -0.2);
glVertex3f(-1, 1, -0.2);
glEnd();
glutSwapBuffers();
Using the default projection matrix, the quad that appears is my green one.
If we're looking toward negative z (from 1 to -1), shouldn't the green quad be behind the red quad?
All matrices in compatibility-profile OpenGL start off as identity matrices; they don't apply any transformations.
In Normalized Device Coordinates, +Z points into the window, so you're looking down +Z: with identity matrices your green quad at z = -0.2 ends up closer to the viewer than the red one at z = -0.1. Matrices and shaders can, of course, change this.
Also make sure that depth testing is enabled and that you created your window with a depth buffer.
If the red quad lies outside the frustum's near and far planes, it will not be visible at all, because it gets clipped away.
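Since your snippet ends in glutSwapBuffers, here is a minimal GLUT-side checklist (a sketch, assuming your window setup lives elsewhere in your code):
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH); // request a depth buffer with the window
glutCreateWindow("quads");
glEnable(GL_DEPTH_TEST);                                   // enable the test itself
// and in the display function, clear depth along with color each frame:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);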
I've injected a DLL into a game process to create an overlay interface, but the problem is that alpha values are being "cropped" (not rendered at all).
I've tested several alpha values and it seems to fail if alpha is below 0.3.
To illustrate what happens, the image that I'm trying to render is:
and this is the game rendering the image:
What exactly is happening here? Is it the current state of OpenGL? I'm new to the API, and I have no idea why this happens.
More information:
The texture is being created from a buffer with:
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, this->width, this->height, GL_BGRA_EXT, GL_UNSIGNED_BYTE, this->buffer);
I receive this buffer from Awesomium, and the values are right... I've checked the alpha values.
The rendering is done using this function (I've tried calling the game's own texture-rendering function too, but the same problem happens):
void DrawTextureExt(int texture, float x, float y, float width, float height)
{
glPushAttrib(GL_ALL_ATTRIB_BITS);
{
glPushMatrix();
{
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// glBlendFunc(GL_ONE, GL_ONE); << tried this too.. ugly results
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glTranslatef(x, y, 0.0);
glRotatef(0, 0.0, 0.0, 1.0);
glTranslatef(-x, -y, 0.0);
glBegin(GL_QUADS);
{
glTexCoord2f(0, 0); glVertex2f(x, y);
glTexCoord2f(0, 1); glVertex2f(x, y + height);
glTexCoord2f(1, 1); glVertex2f(x + width, y + height);
glTexCoord2f(1, 0); glVertex2f(x + width, y);
}
glEnd();
glBindTexture(GL_TEXTURE_2D, 0);
}
glPopMatrix();
}
glPopAttrib();
}
Sounds to me like the program you're hooking into uses alpha testing at the end of its rendering function (which would make sense) and leaves alpha testing enabled when it finishes, which you are then running into. Try disabling the alpha test first thing in your hook: glDisable(GL_ALPHA_TEST).
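In your DrawTextureExt that would land right after the glPushAttrib, next to the blending setup (a sketch of the suggested change, untested against your particular game):
glPushAttrib(GL_ALL_ATTRIB_BITS);
glDisable(GL_ALPHA_TEST); // the game may have left alpha testing on; turn it off for the overlay
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// ... draw the textured quad exactly as before ...
glPopAttrib();            // restores the game's alpha-test state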
I'm working on a little OpenTK based 2D graphics library and I'm trying to apply a shader to a Surface object:
public void ApplyTo(Surface surface)
{
using (Surface pong = new Surface())
{
pong.Create(surface.Width, surface.Height);
pong.Clear(0, 0, 0, 0);
GL.Viewport(0, 0, surface.Width, surface.Height);
GL.LoadIdentity();
GL.Ortho(0, 1.0, 1.0, 0.0, 0.0, 4.0);
GL.UseProgram(0);
surface.BindTexture();
pong.BindFramebuffer();
GL.Begin(PrimitiveType.Quads);
GL.TexCoord2(0.0f, 1.0f);
GL.Vertex3(0, 0, 0);
GL.TexCoord2(0.0f, 0.0f);
GL.Vertex3(0, 1, 0);
GL.TexCoord2(1.0f, 0.0f);
GL.Vertex3(1, 1, 0);
GL.TexCoord2(1.0f, 1.0f);
GL.Vertex3(1, 0, 0);
GL.End();
Use(); // calls GL.UseProgram()
pong.BindTexture();
surface.BindFramebuffer();
GL.Begin(PrimitiveType.Quads);
GL.TexCoord2(0.0f, 1.0f);
GL.Vertex3(0, 0, 0);
GL.TexCoord2(0.0f, 0.0f);
GL.Vertex3(0, 1, 0);
GL.TexCoord2(1.0f, 0.0f);
GL.Vertex3(1, 1, 0);
GL.TexCoord2(1.0f, 1.0f);
GL.Vertex3(1, 0, 0);
GL.End();
GL.BindTexture(TextureTarget.Texture2D, 0);
GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);
}
}
This somehow clears the surface instead of applying the shader. My suspicion is that I'm doing something wrong with the viewport / projection, but what? I hope the provided code is sufficient.
For fullscreen rendering on a quad, I use this projection:
OpenTK.Matrix4 ortho = OpenTK.Matrix4.CreateOrthographicOffCenter(-1, 1, -1, 1, 1, -1);
There's a division by zero somewhere when your vertices lie exactly on the near plane; you may also want to move them a bit further along the z axis.
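In fixed-function terms that matrix is equivalent to the following (a sketch; it assumes the quad's vertices span [-1, 1]):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// near = 1, far = -1, so z = 0 lies strictly inside the depth range,
// unlike GL.Ortho(0, 1, 1, 0, 0, 4) above, whose near plane sits exactly at the quad's z = 0
glOrtho(-1.0, 1.0, -1.0, 1.0, 1.0, -1.0);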
I'm fairly experienced with OpenGL and GLSL. For my engine I wanted to implement deferred lighting, knowing it was not going to be a trivial task. After a few hours I was able to get things mostly working. Here is a screenshot of all the buffers I've rendered:
The upper left is the normals, the upper right is the albedo, the lower left is the position, and the lower right is the final render. (There is only one light being rendered right now.)
I use various shaders to render everything into framebuffers. I had previously used a forward-rendering lighting shader, so, hoping to get the same results, I reused the data from that vertex shader to render the different buffers. The problem: unlike in my forward renderer, the light source moves and changes based on the position of my camera. Here is the code for the vertex shaders (the fragment shaders just write out the values they receive from the vertex stage):
Position shader:
varying vec4 pos;
void main(void)
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
pos = gl_ModelViewMatrix * gl_Vertex;
}
Normal shader:
varying vec3 normal;
void main(void)
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
normal = normalize(gl_NormalMatrix * gl_Normal);
}
For the albedo I just use OpenGL's fixed-function pipeline and bind the textures.
Here is the final light shader, which is rendered as a quad over the screen:
uniform sampler2D positionMap;
uniform sampler2D normalMap;
uniform sampler2D albedoMap;
varying vec2 texcoord;
uniform mat4 matrix;
void main()
{
vec3 position = vec3(texture2D(positionMap,texcoord));
vec3 normal = vec3(texture2D(normalMap,texcoord));
vec3 L = gl_LightSource[0].position.xyz - position;
float l = length(L)/5.0; // take the distance before normalizing; the length of a normalized vector is always 1
L = normalize(L);
float att = 1.0/(l*l+l);
//render sun light
vec4 diffuselight = max(dot(normal,L), 0.0)*vec4(att,att,att,att);
diffuselight = clamp(diffuselight, 0.0, 1.0)*2.0;
vec4 amb = vec4(.2,.2,.2,0);
vec4 texture = texture2D(albedoMap,texcoord);
gl_FragColor = ((diffuselight)+amb)*texture;
}
This references a lot of functions defined elsewhere, but I think you can get the general idea from the pictures and the code. This is the main rendering function:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//render skybox
glLoadIdentity();
renderSkybox();
//skybox.renderObject();
glLoadIdentity();
renderViewModel();
renderCamera();
glMatrixMode(GL_MODELVIEW);
GLfloat position[] = {-Lighting.x,-Lighting.y,-Lighting.z,1};
glLightfv(GL_LIGHT0, GL_POSITION, position);
glDisable(GL_LIGHTING);
glm::mat4 modelView,projection,final;
glGetFloatv(GL_MODELVIEW_MATRIX, &modelView[0][0]);
glGetFloatv(GL_PROJECTION_MATRIX, &projection[0][0]);
final=modelView*projection;
Lighting.setupDepthImage();
glLoadIdentity();
for (int i = 0; i < objects.size(); i++)
{
objects[i].renderObjectForDepth();
}
Lighting.finishDepthImage();
//render the 3 buffers
//normal buffer
glBindFramebuffer(GL_FRAMEBUFFER, Lighting.Normal.frameBuffer);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
for (int i = 0; i < objects.size(); i++)
{
objects[i].renderObjectWithProgram(Lighting.normalShader);
}
//albedo
glBindFramebuffer(GL_FRAMEBUFFER, Lighting.Albedo.frameBuffer);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
for (int i = 0; i < objects.size(); i++)
{
objects[i].renderObjectWithProgram(0);
}
//position
glBindFramebuffer(GL_FRAMEBUFFER, Lighting.Position.frameBuffer);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
for (int i = 0; i < objects.size(); i++)
{
objects[i].renderObjectWithProgram(Lighting.positionShader);
}
//go back to rendering directly to the screen
glBindFramebuffer(GL_FRAMEBUFFER, 0);
renderCamera();
glTranslatef(-test.position.x, test.position.y, -test.position.z);
test.updateParticle(1);
//render the buffers for debugging
renderViewModel();
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 1280, 800, 0, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
//render the full screen quad for the sun
glUseProgram(Lighting.sunShader);
glUniform1i(glGetUniformLocation(Lighting.sunShader,"normalMap"),0);
glUniform1i(glGetUniformLocation(Lighting.sunShader,"albedoMap"),1);
glUniform1i(glGetUniformLocation(Lighting.sunShader,"positionMap"),2);
glUniformMatrix4fv(glGetUniformLocation(Lighting.sunShader, "matrix"), 1, GL_FALSE, &final[0][0]);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, Lighting.Normal.texture);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, Lighting.Albedo.texture);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, Lighting.Position.texture);
glBindFramebuffer(GL_FRAMEBUFFER, Lighting.debugFinal.frameBuffer);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBegin(GL_QUADS);
glTexCoord2f(0, 1);
glVertex2f(0, 0);
glTexCoord2f(1, 1);
glVertex2f(1280, 0);
glTexCoord2f(1, 0);
glVertex2f(1280, 800);
glTexCoord2f(0, 0);
glVertex2f(0, 800);
glEnd();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, 0);
glUseProgram(0);
//normals
glBindTexture(GL_TEXTURE_2D,Lighting.Normal.texture);
glBegin(GL_QUADS);
glTexCoord2f(0, 1);
glVertex2f(0, 0);
glTexCoord2f(1, 1);
glVertex2f(640, 0);
glTexCoord2f(1, 0);
glVertex2f(640, 400);
glTexCoord2f(0, 0);
glVertex2f(0, 400);
glEnd();
//albedo
glBindTexture(GL_TEXTURE_2D,Lighting.Albedo.texture);
glBegin(GL_QUADS);
glTexCoord2f(0, 1);
glVertex2f(640, 0);
glTexCoord2f(1, 1);
glVertex2f(1280, 0);
glTexCoord2f(1, 0);
glVertex2f(1280, 400);
glTexCoord2f(0, 0);
glVertex2f(640, 400);
glEnd();
//position
glBindTexture(GL_TEXTURE_2D,Lighting.Position.texture);
glBegin(GL_QUADS);
glTexCoord2f(0, 1);
glVertex2f(0, 400);
glTexCoord2f(1, 1);
glVertex2f(640, 400);
glTexCoord2f(1, 0);
glVertex2f(640, 800);
glTexCoord2f(0, 0);
glVertex2f(0, 800);
glEnd();
//final image
glBindTexture(GL_TEXTURE_2D,Lighting.debugFinal.texture);
glBegin(GL_QUADS);
glTexCoord2f(0, 1);
glVertex2f(640, 400);
glTexCoord2f(1, 1);
glVertex2f(1280, 400);
glTexCoord2f(1, 0);
glVertex2f(1280, 800);
glTexCoord2f(0, 0);
glVertex2f(640, 800);
glEnd();
View3D();
SDL_GL_SwapWindow(window);
glLoadIdentity();
There are a few unrelated things in here; just ignore them. As you saw, I get the light's position through GLSL's built-in gl_LightSource state. I think that because I am in an orthographic view, something is screwing with the light's position. Could this be the problem, or is there something else, perhaps in the calculation of the normals, etc.?
People will probably not find this useful, but I have solved my problem. I was using the regular OpenGL lights in the shader. When I set the position I used a w value of 1, which makes it a positional (point) light rather than the directional light (w = 0) a sun should be; glLightfv also transforms the position by the current modelview matrix, which is presumably what gave the light its camera-dependent moving behavior.
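For the record, the directional version just sets w to 0 in the setup code from the question (a sketch, using the same Lighting.x/y/z values):
// w = 0 marks a direction rather than a position; it is still transformed
// by the modelview matrix that is current when glLightfv is called
GLfloat sunDir[] = { -Lighting.x, -Lighting.y, -Lighting.z, 0.0f };
glLightfv(GL_LIGHT0, GL_POSITION, sunDir);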
As a side note, I also changed the position to be reconstructed from the depth buffer, along with a few other things, to improve the G-buffer.
I am rendering a chess board using two different textures: one for the black squares and one for the white squares. However, instead of each square getting its own texture, they all take on the last texture I bound with glBindTexture(GL_TEXTURE_2D, id).
This is my approach:
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
// square 0, 0 ( front left )
glBindTexture(GL_TEXTURE_2D, textureBlackSquare->texID);
glNormal3f(0, 1, 0);
glTexCoord2f(0, 0); glVertex3f(-8.0, 0.5, 8.0);
glTexCoord2f(1, 0); glVertex3f(-6.0, 0.5, 8.0);
glTexCoord2f(1, 1); glVertex3f(-6.0, 0.5, 6.0);
glTexCoord2f(0, 1); glVertex3f(-8.0, 0.5, 6.0);
glEnd();
glBegin(GL_QUADS);
// square 1, 0
glBindTexture(GL_TEXTURE_2D, textureWhiteSquare->texID);
glTexCoord2f(0, 0); glVertex3f(-6.0, 0.5, 8.0);
glTexCoord2f(1, 0); glVertex3f(-4.0, 0.5, 8.0);
glTexCoord2f(1, 1); glVertex3f(-4.0, 0.5, 6.0);
glTexCoord2f(0, 1); glVertex3f(-6.0, 0.5, 6.0);
glEnd();
When I run this code, both quads have the white texture bound. How do I get each quad to have its own texture?
You cannot call glBindTexture in the middle of a glBegin/glEnd pair; only the vertex-specification functions are allowed between begin and end.
Also, why don't you just make a single texture containing the whole 8x8 checkerboard, and then render one quad to draw the entire board?
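If you do keep the two textures, moving the binds outside the begin/end pairs fixes the posted code (a minimal rearrangement of it):
glEnable(GL_TEXTURE_2D);
// bind BEFORE glBegin; binds inside begin/end raise GL_INVALID_OPERATION and are ignored
glBindTexture(GL_TEXTURE_2D, textureBlackSquare->texID);
glBegin(GL_QUADS);
// square 0, 0 ( front left )
glNormal3f(0, 1, 0);
glTexCoord2f(0, 0); glVertex3f(-8.0, 0.5, 8.0);
glTexCoord2f(1, 0); glVertex3f(-6.0, 0.5, 8.0);
glTexCoord2f(1, 1); glVertex3f(-6.0, 0.5, 6.0);
glTexCoord2f(0, 1); glVertex3f(-8.0, 0.5, 6.0);
glEnd();
glBindTexture(GL_TEXTURE_2D, textureWhiteSquare->texID);
glBegin(GL_QUADS);
// square 1, 0
glTexCoord2f(0, 0); glVertex3f(-6.0, 0.5, 8.0);
glTexCoord2f(1, 0); glVertex3f(-4.0, 0.5, 8.0);
glTexCoord2f(1, 1); glVertex3f(-4.0, 0.5, 6.0);
glTexCoord2f(0, 1); glVertex3f(-6.0, 0.5, 6.0);
glEnd();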
From the documentation:
GL_INVALID_OPERATION is generated if glBindTexture is executed between the execution of glBegin and the corresponding execution of glEnd.
You forgot to check for errors, and thus missed that your program is invalid.
You can't bind a texture within a glBegin/glEnd block. You should also avoid switching textures where possible, since a texture switch is among the most expensive things you can ask a GPU to do (it invalidates the texel-fetch caches).
Instead, sort your scene objects by the texture they use and group them accordingly: first render all the checkerboard quads that use the first texture (say, white), and after that all the quads that use the second one (black), as in the sketch below.
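A minimal sketch of that grouping (emitSquareVertices and the two square lists are hypothetical helpers, not from the question; the helper must issue only glTexCoord/glVertex calls, since it runs inside begin/end):
// one bind per texture instead of one per square
glBindTexture(GL_TEXTURE_2D, textureWhiteSquare->texID);
glBegin(GL_QUADS);
for (int i = 0; i < 32; ++i)
    emitSquareVertices(whiteSquares[i]);
glEnd();
glBindTexture(GL_TEXTURE_2D, textureBlackSquare->texID);
glBegin(GL_QUADS);
for (int i = 0; i < 32; ++i)
    emitSquareVertices(blackSquares[i]);
glEnd();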