I am confused about shadow mapping. Here's what I've understood so far (the following steps are not working :) )
Here is how I try to get it working (please don't be confused by the code, it is only approximate because I actually write in Java):
1. Create empty depthTexture (mine is 1024x1024) with parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE) and
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, null)
2. Create FBO and attach that texture to it
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthTexture, 0)
3. Set up a new projection and view matrix for the light camera
I used the same kind of matrices as for my main camera, just with different coordinates, because the scene looks better that way (I tried it out, so I don't think that's the problem... I guess).
4. Create a new little shader that only computes gl_Position for the FBO pass, and modify the main shader with the fancy stuff (bias matrix * light-camera matrix * vertex, sampler2DShadow and more)
5. Of course, upload all the uniforms and set everything up.
And now the RENDER LOOP
1. Of course, upload all the uniforms and set everything up (again).
2. Bind the FBO, bind the little shader, glViewport(0, 0, 1024, 1024), set the color mask to false, clear the depth buffer, glCullFace(GL_FRONT), glBindTexture(GL_TEXTURE_2D, 0)
3. Render everything using only the vertex position attribute (which, so far, achieves nothing)
4. Unbind the FBO, rebind the main shader (the one with the fancy stuff), restore glViewport, re-enable the color mask, glCullFace(GL_BACK)
5. Render the scene normally, without ever asking how the main shader is supposed to get the sampler2DShadow, since we never bind it, never set its uniform, and never touch it
6. Watch countless glitches, bugs and a general pixel mess
Actually, I did try to set the depthTexture sampler uniform, but I only got a black screen. And even if I don't render to the FBO at all, I get the same picture when I do that.
Can someone explain what I am missing?
It feels like the shader uses the diffuse texture twice: once as the diffuse texture and once as the depthTexture, but I don't know how to hand it the actual depthTexture.
Here is a good link to read: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-16-shadow-mapping/
and here: http://www.paulsprojects.net/tutorials/smt/smt.html (although the second link uses old fixed-function OpenGL)
in general:
create one depth texture
attach this texture to FBO
bind FBO, setup proper viewport
render scene from light pos, save only depth values to your texture
unbind FBO and set up the final scene viewport and camera position
render scene normally using shadow test (sampler2DShadow)
you can render your depth map always (in render loop) or only when light pos changes.
I do not know why you are rendering your scene normally twice... are you using a Z-prepass or something? Just try the basic version first, I think.
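Regarding the original question of how the main shader is supposed to get the sampler2DShadow: the depth texture has to be bound to a texture unit during the final pass, and the shadow sampler uniform has to be set to that unit. A minimal sketch, assuming placeholder names mainProgram, depthTexture, diffuseTexture and sampler uniforms called shadowMap and diffuseMap (none of these names come from the code above):

// after unbinding the FBO, switch back to the main shader
glUseProgram(mainProgram);

// bind the FBO's depth texture to texture unit 1
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, depthTexture);

// tell the sampler2DShadow uniform which unit to read from
glUniform1i(glGetUniformLocation(mainProgram, "shadowMap"), 1);

// keep unit 0 for the diffuse texture
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, diffuseTexture);
glUniform1i(glGetUniformLocation(mainProgram, "diffuseMap"), 0);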
my old code with shadow maps:
void RenderShadowMap() {
currDepth->Bind();
glClear(GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gLightCam.SetProjectionMatrix();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gLightCam.SetViewMatrix();
glColorMask(false, false, false, false);
glUseProgram(0); // draw without any shaders... just default depth
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(offFactor, offUnits);
SimpleScene(false); // floor does not cast shadow so do not render it
glDisable(GL_POLYGON_OFFSET_FILL);
glColorMask(true, true, true, true);
}
render scene:
// compose shadow matrix:
MATRIX4X4 bias(0.5f, 0.0f, 0.0f, 0.0f,
0.0f, 0.5f, 0.0f, 0.0f,
0.0f, 0.0f, 0.5f, 0.0f,
0.5f, 0.5f, 0.5f, 1.0f);
MATRIX4X4 *invCam = gSphericalCam.GetInvViewMatrix();
MATRIX4X4 smMat = (*gLightCam.GetViewProjMatrix()) * (*invCam);
gShaderProgramManager->GetProgram("shadow")->Use();
gShaderProgramManager->GetProgram("shadow")->SetMatrix("shadowMat", &smMat);
gShaderProgramManager->GetProgram("shadow")->SetBool("useShadow", currCam != &gLightCam && useShadow);
gShaderProgramManager->GetProgram("shadow")->SetFloat("shadowMapSize", (float)currDepth->GetWidth());
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
glColor3f(1.0f, 1.0f, 1.0f);
gTextureManager->Bind("default");
glActiveTexture(GL_TEXTURE1);
currDepth->BindDepthAsTexture();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE_ARB);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL);
SimpleScene();
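To make the shader side of this concrete, here is a minimal sketch of the shadow lookup the main fragment shader might perform, written as a C++ string constant for consistency with the host code. The names shadowMap and shadowCoord are placeholders, and shadow2DProj is the legacy GLSL built-in (with GLSL 1.30+ you would use textureProj on a sampler2DShadow instead); this is only an illustration, not the shader used above:

// Hypothetical legacy-GLSL snippet doing the depth comparison.
const char* shadowLookupSrc = R"(
uniform sampler2DShadow shadowMap;   // the depth texture, bound to some texture unit
varying vec4 shadowCoord;            // bias * lightProjection * lightView * model * vertex

float shadowFactor()
{
    // with GL_COMPARE_R_TO_TEXTURE set, this returns 1.0 when lit, 0.0 when shadowed
    return shadow2DProj(shadowMap, shadowCoord).r;
}
)";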
I want to make a program that shows the earth with a space texture as the background.
The earth is a 3D object with an earth texture (.bmp).
The space with the stars is a texture (.bmp).
I have summarized what I have to do:
Create a new Model Matrix
Position it at the same place where the camera is
Disable depth test before drawing
Reverse culling
This is the Load function:
void load(){
//Load The Shader
Shader simpleShader("src/shader.vert", "src/shader.frag");
g_simpleShader = simpleShader.program;
// Create the VAO where we store all geometry (stored in g_Vao)
g_Vao = gl_createAndBindVAO();
//Create vertex buffer for positions, colors, and indices, and bind them to shader
gl_createAndBindAttribute(&(shapes[0].mesh.positions[0]), shapes[0].mesh.positions.size() * sizeof(float), g_simpleShader, "a_vertex", 3);
gl_createIndexBuffer(&(shapes[0].mesh.indices[0]), shapes[0].mesh.indices.size() * sizeof(unsigned int));
gl_createAndBindAttribute(uvs, uvs_size, g_simpleShader, "a_uv", 2);
gl_createAndBindAttribute(normal, normal_size, g_simpleShader, "a_normal", 2);
//Unbind Everything
gl_unbindVAO();
//Store Number of Triangles (use in draw())
g_NumTriangles = shapes[0].mesh.indices.size() / 3;
//Paths of the earth and space textures
Image* image = loadBMP("assets/earthmap1k.bmp");
Image* space = loadBMP("assets/milkyway.bmp");
//Generate Textures
glGenTextures(1, &texture_id);
glGenTextures(1, &texture_id2);
//Bind Textures
glBindTexture(GL_TEXTURE_2D, texture_id);
glBindTexture(GL_TEXTURE_2D, texture_id2);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
//We assign their corresponding data
glTexImage2D(GL_TEXTURE_2D,1,GL_RGB,image->width, image->height,GL_RGB,GL_UNSIGNED_BYTE,image->pixels);
glTexImage2D(GL_TEXTURE_2D,1,GL_RGB,space->width, space->height,GL_RGB,GL_UNSIGNED_BYTE,space->pixels);
}
This is the Draw function:
void draw(){
//1. Enable/Disable
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDisable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);
//2. Shader Activation
glUseProgram(g_simpleShader);
//3. Get All Uniform Locations
//Space:
GLuint model_loc2 = glGetUniformLocation (g_simpleShader, "u_model");
GLuint u_texture2 = glGetUniformLocation(g_simpleShader, "u_texture2");
GLuint u_light_dir2 = glGetUniformLocation(g_simpleShader,"u_light_dir2");
//Earth
GLuint model_loc = glGetUniformLocation(g_simpleShader, "u_model");
GLuint projection_loc = glGetUniformLocation(g_simpleShader, "u_projection");
GLuint view_loc = glGetUniformLocation(g_simpleShader, "u_view");
GLuint u_texture = glGetUniformLocation(g_simpleShader, "u_texture");
GLuint u_light_dir = glGetUniformLocation(g_simpleShader, "u_light_dir");
//4. Get Values From All Uniforms
mat4 model_matrix2 = translate(mat4(1.0f), vec3(1.0f,-3.0f,1.0f));
mat4 model_matrix = translate(mat4(1.0f), vec3(0.0f, -0.35f, 0.0f));
mat4 projection_matrix = perspective(60.0f,1.0f,0.1f,50.0f);
mat4 view_matrix = lookAt(vec3(1.0f, -3.0f, 1.0f), vec3(0.0f, 0.0f, 0.0f), vec3(0.0f, 1.0f, 0.0f));
//5. Upload Uniforms To Shader
glUniformMatrix4fv(model_loc2, 1, GL_FALSE, glm::value_ptr(model_matrix2));
glUniformMatrix4fv(model_loc, 1, GL_FALSE, glm::value_ptr(model_matrix));
glUniformMatrix4fv(projection_loc, 1, GL_FALSE, glm::value_ptr(projection_matrix));
glUniformMatrix4fv(view_loc, 1, GL_FALSE, glm::value_ptr(view_matrix));
glUniform1i(u_texture, 0);
glUniform3f(u_light_dir, g_light_dir.x, g_light_dir.y, g_light_dir.z);
glUniform1i(u_texture2, 1);
glUniform3f(u_light_dir2, g_light_dir.x, g_light_dir.y, g_light_dir.z);
//6. Activate Texture Unit 0 and Bind our Texture Object
glActiveTexture(GL_TEXTURE0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture_id);
glBindTexture(GL_TEXTURE_2D, texture_id2);
//7. Bind VAO
gl_bindVAO(g_Vao);
//8. Draw Elements
glDrawElements(GL_TRIANGLES, 3 * g_NumTriangles, GL_UNSIGNED_INT, 0);
}
Also I have two Fragment Shaders:
The first one returns this:
fragColor = vec4(final_color, 1.0);
The second one returns this:
fragColor = vec4(texture_color.xyz, 1.0);
Also the Vertex Shader returns the position of the vertex:
gl_Position = u_projection * u_view * u_model * vec4( a_vertex , 1.0 );
When I compile and run it, it only shows the earth, while it should show the earth with the space as the background. I have reviewed the code several times but I cannot figure out what is wrong.
Expected result:
My result:
If I see it right, then among other things you are binding the textures incorrectly:
glActiveTexture(GL_TEXTURE0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture_id);
glBindTexture(GL_TEXTURE_2D, texture_id2);
should be:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture_id);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture_id2);
but I prefer that the active texture unit is left set to 0 at the end ...
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture_id2);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture_id);
That will save you a lot of trouble when you start combining this with single-texture-unit code... I also hope you are properly unbinding the used texture units afterwards, for the same reasons; a minimal sketch of that follows.
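For example, one way to do that after the draw calls (a sketch, just resetting both units and leaving unit 0 active):

// unbind the second texture from unit 1
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, 0);
// unbind unit 0 last so it remains the active unit
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, 0);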
You also have an ugly seam at the 0/360 degree edge of longitude. This might be caused by a wrongly computed normal used for lighting, by a texture that is not seamless, or simply by forgetting to duplicate the edge points with correct texture coordinates for the last patch. See:
Applying map of the earth texture a Sphere
You can also add atmosphere,bump map, clouds to your planet:
Bump-map a sphere with a texture map
Andrea is right...
Set the matrices to identity and render a (+/-)1.0 rectangle at z = 0.0 (with aspect-ratio correction), with depth test, face culling and depth writes disabled... That way you will avoid jitter and flickering due to floating-point errors.
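In the fixed pipeline that could look roughly like the sketch below (texture_id2 stands in for the space texture from the question; with shaders, the equivalent is uploading identity matrices and drawing the same quad, and the aspect-ratio correction would be applied to the quad or its texture coordinates):

// fixed-pipeline sketch: identity matrices, fullscreen quad at z = 0,
// no depth test/write, no culling
glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity();
glMatrixMode(GL_MODELVIEW);  glPushMatrix(); glLoadIdentity();

glDisable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE);
glDisable(GL_CULL_FACE);

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture_id2);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, 0.0f);
glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, 0.0f);
glEnd();

glDepthMask(GL_TRUE);
glEnable(GL_DEPTH_TEST);

glMatrixMode(GL_MODELVIEW);  glPopMatrix();
glMatrixMode(GL_PROJECTION); glPopMatrix();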
A skybox is better, but there are also other options to enhance this; see:
Is it possible to make realistic n-body solar system simulation in matter of size and mass?
and all the sub-links in there, especially the ones about stars. You can combine a skybox and a stellar catalog together, and much, much more...
Hi, I am trying to display two objects using OpenGL: 1) a rotating cube with a mix of two textures (a wooden crate pattern and a smiley) in the foreground, and 2) a rectangular plate with just one texture (dark grey wood) as a background. When I comment out the part of the code governing the display of the rectangular plate, the rotating cube displays both textures (wooden crate and smiley). Otherwise, the cube displays only the wooden crate texture while the dark grey wood texture is displayed on the rectangular plate, i.e. the smiley texture disappears from the rotating cube. Please see the images: 1) http://oi68.tinypic.com/2la4r3c.jpg (with the rectangular plate portion of the code commented out) and 2) http://i67.tinypic.com/9u9rpf.jpg (without the rectangular plate portion of the code commented out). The relevant portion of the code is pasted below.
// Rotating Cube ===================================================
// Texture of wooden crate
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture1);
glUniform1i(glGetUniformLocation(ourShader_box.Program, "ourTexture1"), 0);
// Texture of a smiley
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture2);
glUniform1i(glGetUniformLocation(ourShader_box.Program, "ourTexture2"), 1);
// lets use the box shader for the cube
ourShader_box.Use();
// transformations for the rotating cube ---------------------------------
glm::mat4 model_box, model1, model2;
glm::mat4 view_box;
glm::mat4 perspective;
perspective = glm::perspective(45.0f, (GLfloat)width_screen/(GLfloat)height_screen, 0.1f, 200.0f);
model1 = glm::rotate(model_box, (GLfloat)glfwGetTime()*1.0f, glm::vec3(0.5f, 1.0f, 0.0f));
model2 = glm::rotate(model_box, (GLfloat)glfwGetTime()*1.0f, glm::vec3(0.0f, 1.0f, 0.5f));
model_box = model1 * model2;
view_box= glm::translate(view_box, glm::vec3(1.0f, 0.0f, -3.0f));
GLint modelLoc_box = glGetUniformLocation(ourShader_box.Program, "model");
GLint viewLoc_box = glGetUniformLocation(ourShader_box.Program, "view");
GLint projLoc_box = glGetUniformLocation(ourShader_box.Program, "perspective");
glUniformMatrix4fv(modelLoc_box, 1, GL_FALSE, glm::value_ptr(model_box));
glUniformMatrix4fv(viewLoc_box, 1, GL_FALSE, glm::value_ptr(view_box));
glUniformMatrix4fv(projLoc_box, 1, GL_FALSE, glm::value_ptr(perspective));
// --------------------------------------------------------------------
// Draw calls
glBindVertexArray(VAO_box);
glDrawArrays(GL_TRIANGLES, 0, 36);
glBindVertexArray(0);
// Rectangular Plate =====================================================
// Background Shader
ourShader_bg.Use();
// Texture of dark grey wood
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, texture_wood);
glUniform1i(glGetUniformLocation(ourShader_bg.Program, "ourTexture3"), 2);
// Transformations -------------------------------------------
glm::mat4 model_bg;
glm::mat4 view_bg;
GLint modelLoc_bg = glGetUniformLocation(ourShader_bg.Program, "model");
GLint viewLoc_bg= glGetUniformLocation(ourShader_bg.Program, "view");
GLint projLoc_bg = glGetUniformLocation(ourShader_bg.Program, "perspective");
glUniformMatrix4fv(modelLoc_bg, 1, GL_FALSE, glm::value_ptr(model_bg));
glUniformMatrix4fv(viewLoc_bg, 1, GL_FALSE, glm::value_ptr(view_bg));
glUniformMatrix4fv(projLoc_bg, 1, GL_FALSE, glm::value_ptr(perspective));
// -----------------------------------------------------------
// Draw calls
glBindVertexArray(VAO_bg);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindVertexArray(0);
// =================================================================
I have two questions regarding this code.
Why is the smiley disappearing?
Is this how multiple objects are supposed to be rendered? I know OpenGL does not care about objects, it only cares about vertices, but in this case these are separate, disjoint objects. So, should I be organizing them as two VBOs bound to a single VAO, or as separate VBOs, each bound to its own VAO? Or is either way fine, depending on the coder's choice and the elegance of the code?
You are using the same shader, the same matrices, and you have the same geometry type for the two objects (triangles), so why set the shader twice?
Did you try the following (a minimal sketch is given after this list)?
Set shader
Bind buffer #1
Bind texture #1
Draw object #1
Bind buffer #2
Bind texture #2
Draw object #2
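In terms of the names used in the question (keeping the two programs it already has), that ordering could look roughly like the sketch below. One detail worth noting: glUniform* calls affect whichever program is currently active, so each Use() has to come before the uniform calls for that program:

// Object #1: the rotating cube
ourShader_box.Use();                       // activate its shader first
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture1);
glUniform1i(glGetUniformLocation(ourShader_box.Program, "ourTexture1"), 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture2);
glUniform1i(glGetUniformLocation(ourShader_box.Program, "ourTexture2"), 1);
// ... upload model/view/perspective for the cube as in the question ...
glBindVertexArray(VAO_box);
glDrawArrays(GL_TRIANGLES, 0, 36);

// Object #2: the background plate
ourShader_bg.Use();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture_wood);
glUniform1i(glGetUniformLocation(ourShader_bg.Program, "ourTexture3"), 0);
// ... upload model/view/perspective for the plate as in the question ...
glBindVertexArray(VAO_bg);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindVertexArray(0);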
I've just implemented deferred rendering and am having trouble getting my skybox working. I try rendering my skybox at the very end of my rendering loop and all I get is a black screen. Here's the rendering loop:
//binds the fbo
gBuffer.Bind();
//the shader that writes info to gbuffer
geometryPass.Bind();
glDepthMask(GL_TRUE);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glDisable(GL_BLEND);
//draw geometry
geometryPass.SetUniform("model", transform.GetModel());
geometryPass.SetUniform("mvp", camera.GetViewProjection() * transform.GetModel());
mesh3.Draw();
geometryPass.SetUniform("model", transform2.GetModel());
geometryPass.SetUniform("mvp", camera.GetViewProjection() * transform2.GetModel());
sphere.Draw();
glDepthMask(GL_FALSE);
glDisable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClear(GL_COLOR_BUFFER_BIT);
//shader that calculates lighting
pointLightPass.Bind();
pointLightPass.SetUniform("cameraPos", camera.GetTransform().GetPosition());
for (int i = 0; i < 2; i++)
{
pointLightPass.SetUniformPointLight("light", pointLights[i]);
pointLightPass.SetUniform("mvp", glm::mat4(1.0f));
//skybox.GetCubeMap()->Bind(9);
quad.Draw();
}
//draw skybox
glEnable(GL_DEPTH_TEST);
skybox.Render(camera);
window.Update();
window.SwapBuffers();
The following is the skybox's render function
glCullFace(GL_FRONT);
glDepthFunc(GL_LEQUAL);
m_transform.SetPosition(camera.GetTransform().GetPosition());
m_shader->Bind();
m_shader->SetUniform("mvp", camera.GetViewProjection() * m_transform.GetModel());
m_shader->SetUniform("cubeMap", 0);
m_cubeMap->Bind(0);
m_cubeMesh->Draw();
glDepthFunc(GL_LESS);
glCullFace(GL_BACK);
And here is the skybox's vertex shader:
layout (location = 0) in vec3 position;
out vec3 TexCoord;
uniform mat4 mvp;
void main()
{
vec4 pos = mvp * vec4(position, 1.0);
gl_Position = pos.xyww;
TexCoord = position;
}
The skybox's fragment shader just sets the output color to texture(cubeMap, TexCoord).
As you can see from the vertex shader, I'm setting the position's z component to w so that the skybox will always have a depth of 1.0. I am also setting the depth function to GL_LEQUAL, so the skybox should fail the depth test wherever other geometry has already been drawn. Shouldn't this draw the skybox only in places where other objects weren't already drawn? Why does it result in a black screen?
I know I have set up the skybox correctly because if I just draw the skybox by itself it shows up just fine.
I can briefly see for a split second the geometry that should be drawn before the skybox is drawn on top of everything.
Since you're using double buffering, seeing different things must be due to a different frame being drawn. The depth buffer in the default framebuffer isn't being cleared, which I believe is the cause of the temporal instability at least.
In your case, you want the default depth buffer to be the same as the GBuffer when you draw the skybox. A quick way to achieve this is with glBlitFramebuffer, also avoiding the need to clear it:
glBindFramebuffer(GL_READ_FRAMEBUFFER, gbuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(..., GL_DEPTH_BUFFER_BIT, ...);
Now to explain the black screen when the skybox fills the screen. Without the depth test, of course the skybox just draws. With the depth test, the skybox still draws on the first frame, but shortly after that the second frame clears only the colour buffer. The depth buffer still contains stale skybox values, so the skybox does not get re-drawn for this frame and you're left with black...
However your geometry pass draws without depth testing enabled, so this should still be visible even if the skybox isn't. Also this would only happen with GL_LESS and you have GL_LEQUAL. And you have glDepthMask false, which means nothing should write to the default depth buffer in your code. This points to the depth buffer containing other values, perhaps uninitialized, but in my experience it's initially zero. Also this still happens when the skybox doesn't fill the screen, drawn as a cube away from the camera, which blows away that argument. Now, perhaps if the geometry failed to draw in the second frame that would explain it. For that matter blatant driver bugs would too, but I'm not seeing any problems in the given code.
TL;DR: Many unexplained things, so I tried it myself and I can't reproduce your problem...
Here's a quick example based on your code and it works fine for me...
(green sphere is the geometry, red cube is the skybox)
With gl_Position = pos:
Note the yellow from additive blending even if the skybox is drawn over the top. I would have thought you'd be seeing this too.
With gl_Position = pos.xyww:
Now for the code...
//I haven't enabled back face culling, but that shouldn't affect anything
//binds the fbo
fbo.bind();
//the shader that writes info to gbuffer
//geometryPass.Bind(); //fixed pipeline for now
glDepthMask(GL_TRUE);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glDisable(GL_BLEND);
glColor3f(0,1,0);
fly.uploadCamera(); //glLoadMatrixf
sphere.draw();
glDepthMask(GL_FALSE);
glDisable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
fbo.unbind(); //glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClear(GL_COLOR_BUFFER_BIT);
//shader that calculates lighting
drawtex.use();
//pointLightPass.SetUniform("cameraPos", camera.GetTransform().GetPosition());
drawtex.set("tex", *(Texture2D*)fbo.colour[0]);
for (int i = 0; i < 2; i++)
{
//pointLightPass.SetUniformPointLight("light", pointLights[i]);
//pointLightPass.SetUniform("mvp", glm::mat4(1.0f));
//skybox.GetCubeMap()->Bind(9);
drawtex.set("modelviewMat", mat44::identity());
quad.draw();
}
drawtex.unuse();
//draw skybox
glEnable(GL_DEPTH_TEST);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, fbo.size.x, fbo.size.y, 0, 0, fbo.size.x, fbo.size.y, GL_DEPTH_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
//glCullFace(GL_FRONT);
glDepthFunc(GL_LEQUAL);
//m_transform.SetPosition(camera.GetTransform().GetPosition());
skybox.use();
skybox.set("mvp", fly.camera.getProjection() * fly.camera.getInverse() * mat44::translate(1,0,0));
//m_shader->SetUniform("mvp", camera.GetViewProjection() * m_transform.GetModel());
//m_shader->SetUniform("cubeMap", 0);
//m_cubeMap->Bind(0);
cube.draw();
skybox.unuse();
glDepthFunc(GL_LESS);
//glCullFace(GL_BACK);
//window.Update();
//window.SwapBuffers();
So I have a very complicated R^4 -> R^4 function which I need to evaluate for a lot of input glm::vec4s, in real time, so I want to do it on the GPU, for all vec4s in parallel.
What I figured is that I would create a GL_RGBA32F texture, 1920x1 in size (1920 is enough for my purposes), copy my input data into the texture, then draw a line so that the rasterizer invokes a fragment for each of my vec4s. Then I would either write the results back into the texture using image load/store, or render to a 1920x1 framebuffer and read the results from there.
The problem is that for some reason OpenGL can't read my GL_RGBA32F texture.
Here is my code:
Setting up the texture (currently loaded with dummy data):
glm::vec4 texturedata[1920];
for (unsigned int i = 0; i < 1920; i++)
{
texturedata[i] = glm::vec4(1.0f, 1.0f, 1.0f, 1.0f);
}
glGenTextures(1, &datatexture);
glBindTexture(GL_TEXTURE_2D, datatexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 1920, 1, 0, GL_RGBA, GL_FLOAT, texturedata);
Before each rendering:
glUseProgram(mprogram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, datatexture);
glBindVertexArray(rasterizertriggervao);
glUniform1i(glGetUniformLocation(mprogram, "datatexture"), 0);
glDrawArrays(GL_LINES, 0, 2);
The rasterizertriggervao is 2 floats: -1, 1, and the vertex shader draws a nice line through the middle of my screen from that.
Fragment shader:
layout(binding = 0) uniform sampler2D datatexture;
out vec4 x;
void main()
{
x = vec4( (texture(datatexture, vec2(gl_FragCoord.x/1920.0, 0.0))).x, 0.0f, 0.0f, 1.0f );
}
So this should draw a nice red line in the middle of my screen. Instead, it draws a black one. The rasterizer does invoke all 1920x1 fragments, and the texture is correctly copied to the GPU (I have Nvidia Nsight installed, which allows me to debug on the GPU and check the contents of textures directly, and I checked: the texture is full of 1.0f).
However for some reason the sampling doesn't work.
I know that there are better ways to do GPGPU but this thing has to fit into a much bigger program nicely, and this is the way I need it to work, through textures :)
You do not seem to set the texture filter modes for your texture. GL's defaults are (unfortunately) to use mip-mapping. But your texture is not mipmap-complete, so sampling from it will not work. You should add glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST) and probably also glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST).
As Andon M. Coleman already pointed out in the comments, you are not sampling the texture at the correct location. You should use vec2(gl_FragCoord.x/1920.0 + 0.5/1920.0, 0.5) in your case. I also agree with Andon M. Coleman's suggestion to directly use texelFetch(), since you can directly use the integer value gl_FragCoord.x.
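Putting both suggestions together, the texture setup might look like the sketch below (the texelFetch variant of the lookup is shown as a comment; datatexture and texturedata are the names from the question):

// non-mipmapped filtering so the 1920x1 GL_RGBA32F texture is complete and can be sampled
glGenTextures(1, &datatexture);
glBindTexture(GL_TEXTURE_2D, datatexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 1920, 1, 0, GL_RGBA, GL_FLOAT, texturedata);

// In the fragment shader, the per-fragment texel can then be fetched directly, e.g.:
//   vec4 v = texelFetch(datatexture, ivec2(int(gl_FragCoord.x), 0), 0);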
I'm rendering a quad with an outline. I do this by rendering a quad of the required size in black, then rendering the "inner" quad smaller than the first. I also want this outlined quad to be transparent.
My current attempt renders as so:
However the problem I have is that because I'm essentially rendering two quads on top of each other, the black "main" quad blends through to the smaller "inner" quad. I don't want this to happen as I want the blending to only blend the background and the "inner" quad colors.
Other than rendering separate quads/triangles for the outline, is there a way (ideally without shaders, as I'm using the fixed pipeline) to make the blending ignore/not render the "main" quad when blending the "inner" quad?
Relevant code:
Blend mode:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glEnable( GL_BLEND );
Rendering the box:
struct vertex
{
GLfloat x,y;
};
void draw_box()
{
glPushMatrix();
glTranslatef( -3.0f, 1.12f, 1.0f );
// Draw a huge black quad
const vertex vertices2[] =
{
{ -1.0f, -1.0f },
{ 0.0f, 0.0f },
{ -1.0f, 0.0f },
{ -1.0f, -1.0f },
{ 0.0f, 0.0f },
{ 0.0f, -1.0f },
};
glVertexPointer(2, GL_FLOAT, 0, vertices2);
glColor4f(0.0f, 0.0f, 0.0f, 0.5f);
glDrawArrays(GL_TRIANGLES, 0, 6);
// Draw a smaller grey quad
const GLfloat space = 0.04f;
const vertex vertices3[] =
{
{ -1.0f+space, -1.0f+space },
{ 0.0f-space, 0.0f-space },
{ -1.0f+space, 0.0f-space },
{ -1.0f+space, -1.0f+space },
{ 0.0f-space, 0.0f-space },
{ 0.0f-space, -1.0f+space },
};
glVertexPointer(2, GL_FLOAT, 0, vertices3);
glColor4f(0.5f, 0.5f, 0.5f, 0.5f);
glDrawArrays(GL_TRIANGLES, 0, 6);
glPopMatrix();
}
The best way to do this is probably to split up the border into four trapezoidal quads, but since you explicitly mentioned you didn't want to go down that path, there are a couple of other options you could pursue.
The next best method is probably to use the depth buffer: glEnable(GL_DEPTH_TEST). Draw the inside quad first (with blending enabled). Then draw the outer quad, but using coordinates such that its depth is greater than the first quad's (it is "farther away"). If depth testing is enabled when the border quad is drawn, it will fail for the pixels the inner quad already covered, since those pixels have a smaller depth value. Note that with this method, if you are writing to the depth buffer when rendering the things behind these quads, you may run into issues with parts of the background being considered closer than the new quads. You can work around this for the inner quad by configuring its depth test to always pass: glDepthFunc(GL_ALWAYS). This won't help for the outer quad, however; in that case the stencil method described further below might be helpful. First, a sketch of the basic depth-buffer ordering:
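This sketch reuses the vertex arrays from draw_box() in the question (the z offset of -0.01 is an arbitrary illustrative value; any offset that puts the border farther from the camera works):

glEnable(GL_DEPTH_TEST);

// inner quad first, at its normal depth (writes to the depth buffer)
glVertexPointer(2, GL_FLOAT, 0, vertices3);
glColor4f(0.5f, 0.5f, 0.5f, 0.5f);
glDrawArrays(GL_TRIANGLES, 0, 6);

// border quad second, pushed slightly farther away, so it fails the depth test
// wherever the inner quad has already written depth
glPushMatrix();
glTranslatef(0.0f, 0.0f, -0.01f);
glVertexPointer(2, GL_FLOAT, 0, vertices2);
glColor4f(0.0f, 0.0f, 0.0f, 0.5f);
glDrawArrays(GL_TRIANGLES, 0, 6);
glPopMatrix();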
You can use the stencil buffer in a similar manner to the depth buffer. When rendering the inside quad, you set the stencil value (or one bit of it at least) to a specific value:
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, GLuint(-1));
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
Then when you draw the outer quad, configure the stencil test to succeed only if a fragment does not have that value:
glStencilFunc(GL_NOT_EQUAL, 1, 1);
Essentially, this is the same thing the version above was doing with the depth buffer, but uses the stencil buffer instead. You may want to take a look at the documentation for glStencilFunc and glStencilOp.
There may be other ways to achieve this behavior, but all the other solutions I can think of involve custom shaders, which you indicated are not acceptable. Personally, I'd just split up the border into 4 quads, as sketched below. Sure, it's 3 more quads than the other methods, but it doesn't require any extra OpenGL features or state changes.
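For completeness, a rough sketch of that four-piece border, reusing the vertex struct and the same "space" offset as in the question (axis-aligned strips rather than trapezoids, since the border here is rectangular; nothing overlaps, so no blending tricks are needed):

const GLfloat s = 0.04f;
const vertex border[] =
{
    // bottom strip
    { -1.0f, -1.0f }, { 0.0f, -1.0f }, { 0.0f, -1.0f + s }, { -1.0f, -1.0f + s },
    // top strip
    { -1.0f, 0.0f - s }, { 0.0f, 0.0f - s }, { 0.0f, 0.0f }, { -1.0f, 0.0f },
    // left strip
    { -1.0f, -1.0f + s }, { -1.0f + s, -1.0f + s }, { -1.0f + s, 0.0f - s }, { -1.0f, 0.0f - s },
    // right strip
    { 0.0f - s, -1.0f + s }, { 0.0f, -1.0f + s }, { 0.0f, 0.0f - s }, { 0.0f - s, 0.0f - s },
};
glVertexPointer(2, GL_FLOAT, 0, border);
glColor4f(0.0f, 0.0f, 0.0f, 0.5f);
glDrawArrays(GL_QUADS, 0, 16);
// then draw the inner quad (vertices3) exactly as in the original draw_box()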