I need to flip my texture vertically when copying it into another texture. I know of three simple ways to do it:
1. Blit from one FBO into another using a full-screen quad (and flip in the fragment shader).
2. Blit using glBlitFramebuffer.
3. Use glCopyImageSubData.
I need to perform this copy between two textures which aren't attached to any FBO, so I am trying to avoid the first two solutions. I am trying the third one.
Doing it like this:
glCopyImageSubData(srcTex, GL_TEXTURE_2D, 0, 0, 0, 0, targetTex, GL_TEXTURE_2D, 0, 0, width, 0, height, 0, 1);
It doesn't work. The copy returns garbage. Is this method supposed to be able to flip when reading? Is there an alternative, FBO-unrelated method (GPU side only)?
Btw:
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, height, width, 0);
doesn't work either.
Rendering a textured quad to a PBO by drawing the inverted quad would work.
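A minimal sketch of that flipped-quad fragment shader, assuming the source texture is bound to unit 0 and the quad passes its texture coordinates through as v_uv (names are mine, not from the original):
#version 330 core
uniform sampler2D u_src; // source texture, assumed bound to unit 0
in vec2 v_uv;            // quad texture coordinates in [0,1]
out vec4 o_color;
void main()
{
    // Flip vertically by inverting the y coordinate of the lookup
    o_color = texture(u_src, vec2(v_uv.x, 1.0 - v_uv.y));
}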
Or you could go with a simple fragment shader doing an imageLoad + imageStore, inverting the y coordinate, with two bound images:
glBindImageTexture(0, copyFrom, 0, GL_FALSE, 0, GL_READ_ONLY, GL_RGBA32UI);
glBindImageTexture(1, copyTo, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32UI);
the shader would look something like:
layout(binding = 0, rgba32ui) readonly uniform uimage2D input_buffer;
layout(binding = 1, rgba32ui) writeonly uniform uimage2D output_buffer;
uniform float u_texHeight;
void main(void)
{
    uvec4 color = imageLoad(input_buffer, ivec2(gl_FragCoord.xy));
    imageStore(output_buffer, ivec2(int(gl_FragCoord.x), int(u_texHeight) - int(gl_FragCoord.y) - 1), color);
}
You'll have to tweak it a little, but I know it works; I've used it before.
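One caveat (an assumption on my part about how you consume the result): writes done through imageStore are incoherent, so before sampling copyTo in a later pass you should issue a barrier after the draw call that runs this shader:
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT | GL_TEXTURE_FETCH_BARRIER_BIT);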
Hope this helps
I want to apply two textures on the same object (actually just a 2D rectangle) in order to blend them. I thought I would achieve that by simply calling glDrawElements with the first texture, then binding the other texture and calling glDrawElements a second time. Like this:
//create vertex buffer, frame buffer, depth buffer, texture sampler, build and bind model
//...
glEnable(GL_BLEND);
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ZERO);
glBlendEquation(GL_FUNC_ADD);
// Clear the screen
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Bind our texture in Texture Unit 0
GLuint textureID;
//create or load texture
//...
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID);
// Set our sampler to use Texture Unit 0
glUniform1i(textureSampler, 0);
// Draw the triangles !
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, (void*)0);
//second draw call
GLuint textureID2;
//create or load texture
//...
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID2);
// Set our sampler to use Texture Unit 0
glUniform1i(textureSampler, 0);
// Draw the triangles !
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, (void*)0);
Unfortunately, the 2nd texture is not drawn at all and I only see the first texture. If I call glClear between the two draw calls, it correctly draws the 2nd texture.
Any pointers? How can I force OpenGL to draw on the second call?
As an alternative to the approach you have followed so far, I would suggest using two texture samplers within your GLSL shader and performing the blending there. This way you are done with just one draw call, reducing CPU/GPU interaction. To do so, just define two texture samplers in your shader like
layout(binding = 0) uniform sampler2D texture_0;
layout(binding = 1) uniform sampler2D texture_1;
Alternatively, you can use a sampler array:
layout(binding = 0) uniform sampler2DArray textures;
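The blending itself then happens in the fragment shader. A minimal sketch with the two-sampler variant, assuming a texCoords varying and a plain alpha blend (adjust to whatever blend function you actually want):
#version 420
layout(binding = 0) uniform sampler2D texture_0;
layout(binding = 1) uniform sampler2D texture_1;
in vec2 texCoords;
out vec4 fragColor;
void main()
{
    vec4 c0 = texture(texture_0, texCoords);
    vec4 c1 = texture(texture_1, texCoords);
    // Blend the second texture over the first using its alpha
    fragColor = mix(c0, c1, c1.a);
}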
In your application, setup the textures and samplers using
enum Sampler_Unit { BASE_COLOR_S = GL_TEXTURE0 + 0, NORMAL_S = GL_TEXTURE0 + 1 };

glActiveTexture(Sampler_Unit::BASE_COLOR_S);
glBindTexture(GL_TEXTURE_2D, textureBuffer1);
glTexStorage2D( ....)

glActiveTexture(Sampler_Unit::NORMAL_S);
glBindTexture(GL_TEXTURE_2D, textureBuffer2);
glTexStorage2D( ....)
Thanks to @tkausl for the tip.
I had depth testing enabled during the initialization phase.
// Enable depth test
glEnable(GL_DEPTH_TEST);
// Accept fragment if it closer to the camera than the former one
glDepthFunc(GL_LESS);
The option needs to be disabled in my case for the blend operation to work.
//make sure to disable depth test
glDisable(GL_DEPTH_TEST);
I have the following rendering flow:
1. Clear the MSAA x8 FBO color attachment (RGBA) with {0.0f, 0.0f, 0.0f, 0.0f}.
2. Issue a single draw call (draw a rectangular shape).
3. Bind the resolve FBO (just a single RGBA color attachment).
4. Blit the MSAA FBO into the resolve FBO.
What I noticed is that when I clear the MSAA FBO to all zeros, AA still has some aliasing artifacts along the diagonal edges.
But if I clear with {0.0f, 0.0f, 0.0f, 1.0f}, anti-aliasing looks OK.
To verify that the problem is with the alpha channel, I attached a color texture of GL_RGB format instead of GL_RGBA to the resolve FBO (same as resolving into the screen framebuffer, which doesn't show this issue), and in this case the problem doesn't appear.
So my question is, why does it happen when the MSAA RT is cleared with zero alpha?
And how can I solve it, other than clearing alpha to one? Why do I need it cleared to zero? Because this is an intermediate pass whose results are used later in alpha compositing.
Here are some of the important parts of the code. I omit the pure OpenGL init logic for the texture and FBO setup, as it's abstracted behind a C++ API which is well tested in production. Both FBOs use texture attachments of RGBA format.
Render loop:
const float clearColorZero[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
glClearNamedFramebufferfv(msaaFBO, GL_COLOR, 0, clearColorZero);
glBindFramebuffer(GL_FRAMEBUFFER, msaaFBO);
glViewport(0, 0, width, height);
// Bind shader program and mesh:
BindProgram(mainProg);
BindMesh(mesh);
glProgramUniformMatrix4fv(mainProg, 0, 1, GL_FALSE, glm::value_ptr(matrixMVP));
Draw(mesh); // calls glDrawArrays
//Resolve MSAA
glBlitNamedFramebuffer(
    msaaFBO,
    resolveFBO,
    0, 0, width, height,
    0, 0, width, height,
    GL_COLOR_BUFFER_BIT, GL_NEAREST);
Simple fragment shader:
#version 450 core
out vec4 o_color;
void main()
{
    o_color = vec4(1.0, 228.0/255.0, 123.0/255.0, 1.0);
}
Blit results with MSAA FBO alpha cleared to zero:
Blit results with MSAA FBO alpha cleared to one:
PS: The same happens if I resolve the MSAA manually inside a shader:
#version 450 core
#define NUM_MSAA_SAMPLES 8
layout(binding = 0) uniform sampler2DMS colorMap;
out vec4 o_color;
void main()
{
    vec4 color = texelFetch(colorMap, ivec2(gl_FragCoord.xy), 0);
    for (int i = 1; i < NUM_MSAA_SAMPLES; ++i)
    {
        vec4 samplePoint = texelFetch(colorMap, ivec2(gl_FragCoord.xy), i);
        color += samplePoint;
    }
    color /= float(NUM_MSAA_SAMPLES);
    o_color = color;
}
Here are magnified versions of the screenshots:
MSAA FBO cleared to zero:
MSAA FBO cleared to one:
I also can't see any changes even if I disable blending with glDisable(GL_BLEND).
I'm trying to access a DepthComponent texture in my GLSL shader, version 400.
The program does a two pass rendering. In the first pass I render all the geometry and colors to a Framebuffer on which I have a ColorAttachment and DepthAttachment. The DepthAttachment is bound like this:
(Note: I'm using C# with OpenTK, which is strongly typed, in my code examples.)
GL.FramebufferTexture2D(FramebufferTarget.Framebuffer, FramebufferAttachment.DepthAttachment, TextureTarget.Texture2D, depthTexture.ID, 0);
The depth Texture has an internal pixel format of DepthComponent32f, pixel format of DepthComponent and Float as pixel type. All the other properties have default values.
The second pass renders the framebuffers color image onto the screen using the following shader:
#version 400
uniform sampler2D finalImage;
in vec2 texCoords;
out vec4 fragColor;
void main() {
    fragColor = vec4(texture2D(finalImage, texCoords.xy).rgb, 1.0);
}
But now I want to read the depth texture (DepthComponent) instead of the color texture (RGBA).
I tried a lot of things, like disabling TextureCompareMode, using a shadow2DSampler with shadow2DProj(sampler, vec4(texCoords.xy, 0.0, 1.0)), or just textureProj(sampler, vec3(texCoords.xy, 0.0)). But it returns only 1 or 0, depending on which configuration I use.
To be sure that my depth texture is OK, I've read the pixels back to a float array like this:
GL.ReadPixels(0, 0, depthTexture.Width, depthTexture.Height, PixelFormat.DepthComponent, PixelType.Float, float_array);
Everything seems to be correct; it's showing me 1.0 for empty space and values between 0.99 and 1.0 for visible objects.
Edit
Here is a code example of what my process looks like:
Init code
depthTexture = new GLEXTexture2D(width, height);
depthTexture.TextureCompareMode = TextureCompareMode.None;
depthTexture.CreateMutable(PixelInternalFormat.DepthComponent32f, PixelFormat.DepthComponent, PixelType.Float);
***CreateMutable Function***
ReserveTextureID();
GLEX.glBeginTexture2D(ID);
GL.TexImage2D(TextureTarget.Texture2D, 0, pInternalFormat, width, height, 0, pFormat, pType, IntPtr.Zero);
ApplyOptions();
MarkReserved(true);
GLEX.glEndTexture2D();
(Framebuffer attachment mentioned above)
Render pass 1
GL.BindFramebuffer(FramebufferTarget.Framebuffer, drawBuffer.ID);
GL.Viewport(0, 0, depthTexture.Width, depthTexture.Height);
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit | ClearBufferMask.StencilBufferBit);
GL.Enable(EnableCap.DepthTest);
GL.ClearColor(Color.Gray);
GL.UseProgram(geometryPassShader.ID);
geometry_shaderUniformMVPM.SetValueMat4(false, geometryImageMVMatrix * geometryImageProjMatrix);
testRectangle.Render(PrimitiveType.QuadStrip);
GL.UseProgram(0);
GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);
Render pass 2
GL.Viewport(0, 0, depthTexture.Width, depthTexture.Height);
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit | ClearBufferMask.StencilBufferBit);
GL.ClearColor(Color.White);
GL.UseProgram(finalImageShader.ID);
GL.ActiveTexture(TextureUnit.Texture0);
depthTexture.Bind();
final_shaderUniformMVPM.SetValueMat4(false, finalImageMatrix);
screenQuad.Render(PrimitiveType.Quads);
GL.UseProgram(0);
GL.BindTexture(TextureTarget.Texture2D, 0);
A few hours later I found the solution.
The problem was the MinFilter. As the Khronos documentation on glTexParameter says:
The initial value of GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR.
I changed the MinFilter of my depth texture to GL_NEAREST (GL_LINEAR is also legal) and now the depth values in the GLSL shader are right (after linearization, of course).
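For reference, the linearization is the usual inversion of the perspective depth mapping. A sketch of a second-pass shader reading the depth texture (depthImage, zNear, and zFar are assumed names; the near/far values must match your projection):
#version 400
uniform sampler2D depthImage; // the depth attachment, compare mode off
uniform float zNear; // assumed: near plane of your projection
uniform float zFar;  // assumed: far plane of your projection
in vec2 texCoords;
out vec4 fragColor;

float linearizeDepth(float depth)
{
    float zNdc = depth * 2.0 - 1.0; // map [0,1] depth back to NDC
    return (2.0 * zNear * zFar) / (zFar + zNear - zNdc * (zFar - zNear));
}

void main()
{
    float d = texture(depthImage, texCoords).r; // depth is in the red channel
    fragColor = vec4(vec3(linearizeDepth(d) / zFar), 1.0); // normalized for display
}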
Additional Info:
There are some extensions for the MagFilter like LINEAR_DETAIL_ALPHA_SGIS. I've tried some of these; the depth value correctness was not affected.
So I have a very complicated R^4 -> R^4 function which I need to calculate for a lot of input glm::vec4s, in real time, so I want to do it on the GPU, for all vec4s in parallel.
What I figured is that I would create a GL_RGBA32F texture, 1920x1 resolution (1920 is enough for my purposes), copy my input data onto the texture, then draw a line so that the rasterizer calls a fragment for each of my vec4s, and then either write the results back to the texture using imageLoad/imageStore or render them to a 1920x1 framebuffer and read them from there.
The problem is that for some reason OpenGL can't read my GL_RGBA32F texture.
Here is my code:
Setting up the texture (currently loaded with dummy data):
glm::vec4 texturedata[1920];
for (unsigned int i = 0; i < 1920; i++)
{
texturedata[i] = glm::vec4(1.0f, 1.0f, 1.0f, 1.0f);
}
glGenTextures(1, &datatexture);
glBindTexture(GL_TEXTURE_2D, datatexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 1920, 1, 0, GL_RGBA, GL_FLOAT, texturedata);
Before each rendering:
glUseProgram(mprogram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, datatexture);
glBindVertexArray(rasterizertriggervao);
glUniform1i(glGetUniformLocation(mprogram, "datatexture"), 0);
glDrawArrays(GL_LINES, 0, 2);
The rasterizertriggervao holds two floats, -1 and 1, and the vertex shader draws a nice line through the middle of my screen from that.
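(For completeness, the vertex shader is assumed to look something like this; it was not in the original post:)
#version 330 core
layout(location = 0) in float x;
void main()
{
    // Maps the two input floats (-1 and 1) to a horizontal line at y = 0
    gl_Position = vec4(x, 0.0, 0.0, 1.0);
}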
Fragment shader:
layout(binding = 0) uniform sampler2D datatexture;
out vec4 x;
void main()
{
    x = vec4(texture(datatexture, vec2(gl_FragCoord.x/1920.0, 0.0)).x, 0.0, 0.0, 1.0);
}
So this should draw a nice red line in the middle of my screen. Instead it draws a black one. The rasterizer calls all 1920x1 fragments, and the texture is correctly copied to the GPU (I have Nvidia Nsight installed, which allows me to debug the GPU and check the contents of textures directly, and I checked: the texture is full of 1.0f).
However for some reason the sampling doesn't work.
I know that there are better ways to do GPGPU but this thing has to fit into a much bigger program nicely, and this is the way I need it to work, through textures :)
You seem not to be setting the texture filter modes for your texture. GL's defaults are (unfortunately) to use mip-mapping, but your texture is not mipmap-complete, so sampling from it will not work. You should add glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST) and probably also glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST).
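That is, right after your glTexImage2D call:
// Disable mip-mapping so the single-level texture is sampling-complete
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);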
As Andon M. Coleman already pointed out in the comments, you are not sampling the texture at the correct location. You should use vec2(gl_FragCoord.x/1920.0 + 0.5/1920.0, 0.5) in your case. I also agree with Andon M. Coleman's suggestion to directly use texelFetch(), since you can directly use the integer value of gl_FragCoord.x.
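A sketch of the texelFetch() variant of your fragment shader (same 1920x1 texture on unit 0 assumed):
layout(binding = 0) uniform sampler2D datatexture;
out vec4 x;
void main()
{
    // texelFetch takes integer texel coordinates and does no filtering,
    // so no normalization or half-texel offset is needed
    x = vec4(texelFetch(datatexture, ivec2(int(gl_FragCoord.x), 0), 0).x, 0.0, 0.0, 1.0);
}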
I am trying to blend a 3D texture with a 2D one to make a terrain. The 3D texture has moss, sand, snow and the like, interpolated to enhance the illusion of heights. The 2D texture currently only has an orange line across meant to be a "road". This is my fragment shader:
#version 420
uniform sampler3D mainTexture;
uniform sampler2D roadTexture;
void main() {
    vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
    vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
    // Yes, I am aware I am only returning the 2D texture value.
    // However, this is for testing purposes only.
    // Doing gl_FragColor = diffuse3D + diffuse2D;
    // or any other operation returns the 3D texture only.
    gl_FragColor = diffuse2D;
}
And this is my drawing call:
void Terrain::Draw() {
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(glm::vec3), &v[0].x);
    glEnableClientState(GL_NORMAL_ARRAY);
    glNormalPointer(GL_FLOAT, sizeof(glm::vec3), &n[0].x);

    s.enable(); // simple glUseProgram call within my Shader object

    glClientActiveTexture(GL_TEXTURE0);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnable(GL_TEXTURE_3D);
    glBindTexture(GL_TEXTURE_3D, id_texture);
    s.setSampler("mainTexture", 0); // Calls to glGetUniformLocation and glUniform1i
    glTexCoordPointer(3, GL_FLOAT, sizeof(glm::vec3), &t[0].x);

    glClientActiveTexture(GL_TEXTURE1);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, id_texture_road);
    s.setSampler("roadTexture", 1); // Same as above
    glTexCoordPointer(2, GL_FLOAT, sizeof(glm::vec2), &t2[0].x);

    glPushMatrix();
    glScalef(scalex, scaley, scalez);
    glDrawElements(GL_TRIANGLES, sizei, GL_UNSIGNED_INT, index);
    glPopMatrix();

    s.disable(); // glUseProgram(0)

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisable(GL_TEXTURE_3D);
    glDisable(GL_TEXTURE_2D);
}
Here is the code for my setSampler() method:
void Shader::setSampler(std::string name, GLint value)
{
    GLint loc = glGetUniformLocation(program, name.c_str());
    if (loc >= 0) // location 0 is valid; glGetUniformLocation returns -1 on failure
    {
        glUniform1i(loc, value);
    }
}
The result is a solid black color on the whole terrain. I have sadly been unable to find information on sampler3D, but the diffuse3D variable in my fragment shader does compute to the correct texture, and my texture coordinates for the 2D texture are being correctly sent to the fragment shader (I know this because I used them to color the terrain for testing and got a smooth gradient from green to red, what you would expect using only the first two coordinates). I also checked the values passed to my setSampler() method, and I do get the 0 and 1 values, and the 1 and 2 locations corresponding to them.
All of the help I can find on this issue is in the vicinity of the advice provided here, which I have already implemented.
Can anybody assist?
EDIT: So, just for kicks, I swapped my texture units so the 2D texture became unit 0 and the 3D became unit 1. Now only the 2D texture is rendered. But my texture units are passed correctly (at least in appearance) to the shader. Any clues?
vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
gl_FragColor = diffuse2D;
Let's pretend that this wasn't using shaders. Let's pretend you were just writing a function in C++ that returns a value.
int FuncName(int val1, int val2)
{
    int test1 = Compute(val1);
    int test2 = Compute(val2);
    return test2;
}
What will this function return? Obviously, it returns Compute(val2), completely ignoring the value of test1. It won't magically combine test1 and test2. They're separate values, and therefore, they remain separate unless you explicitly combine them.
Just like your fragment shader.
Shaders aren't magic; they're programming. They only do what you tell them to. So if you say, "get a value from a texture and then don't do anything with it", it will dutifully do exactly that. Though odds are good that the compiler will optimize out the texture fetch entirely.
If you want a "blend" of two textures, you must blend them. You must fetch from each texture, then use both values to compute a new color.
How exactly you do that depends entirely on you. Maybe your 2D texture has some alpha that represents how much of the 2D texture to show. I don't know; you didn't describe what your texture looks like or how exactly you plan to show the road in some places and not in others.
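For example, if the road texture's alpha marked where the road should appear, the combination might look like this (a sketch under that assumption, not necessarily the blend you want):
vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
// Show the road where its alpha is high, the terrain elsewhere
gl_FragColor = mix(diffuse3D, diffuse2D, diffuse2D.a);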
The reason you get a black color is simply that you don't set the proper uniform variables.
#version 420
uniform sampler3D mainTexture;
uniform sampler2D roadTexture;
void main() {
    vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
    vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
    gl_FragColor = diffuse2D;
}
What this shader is doing is looking up the value of 'roadTexture' and displaying it. Unfortunately, it has no clue which texture unit 'roadTexture' is currently bound to, and thus will access texture unit 0, where your 3D texture is bound - so you're trying to access a 3D texture with 2D texcoords, which may well return all black. You'll need to query the uniform locations of your textures with glGetUniformLocation and then set them to the correct texture units (0 and 1, respectively) with glUniform1i.
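In code, that lookup-and-set might look roughly like this (the sampler names come from your shader; program is an assumed handle, and this sketch bypasses your setSampler abstraction):
// Once, after linking the program:
GLint mainTexLoc = glGetUniformLocation(program, "mainTexture");
GLint roadTexLoc = glGetUniformLocation(program, "roadTexture");

// While the program is bound:
glUseProgram(program);
glUniform1i(mainTexLoc, 0); // sampler3D mainTexture -> texture unit 0
glUniform1i(roadTexLoc, 1); // sampler2D roadTexture -> texture unit 1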
EDIT: Also, you're using deprecated functionality, so your shader version directive should be changed to #version 420 compatibility - the default is core.
You need to call glEnableClientState(GL_TEXTURE_COORD_ARRAY); again after you have made the second texture unit active with glClientActiveTexture(GL_TEXTURE1);
From http://www.opengl.org/sdk/docs/man2/xhtml/glEnableClientState.xml:
enabling and disabling GL_TEXTURE_COORD_ARRAY affects the active client texture unit.
Just solved this problem. Apparently you still need glActiveTexture() in addition to glClientActiveTexture(). This is the code that worked, for anyone who gets the same problem:
glClientActiveTexture(GL_TEXTURE0);
glActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_3D, id_texture);
s.setSampler("mainTexture",0); // Calls to glGetUniformLocation and glUniform1i
glTexCoordPointer(3, GL_FLOAT, sizeof(glm::vec3), &t[0].x);
glClientActiveTexture(GL_TEXTURE1);
glActiveTexture(GL_TEXTURE1);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_2D, id_texture_road);
s.setSampler("roadTexture",1); // Same as above
glTexCoordPointer(2, GL_FLOAT, sizeof(glm::vec2), &t2[0].x);
// Drawing Calls
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glClientActiveTexture(GL_TEXTURE0);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glActiveTexture(GL_TEXTURE0);
Thanks for reading.