I create a terrain grid with 1024x1024 points, generate a normal map from it, and use that map in my vertex/fragment shaders. I'm getting strange results when I draw normals sampled from the normal map compared to normals passed in from the vertex shader. I tried to improve quality by increasing the texture resolution up to 8192, but that doesn't remove the strange aliasing effect.
The same happens when I derive the normal from neighbouring pixels in a heightmap.
How can I fix this?
Normal map texture creation:
uint16_t channels = 3;
GLuint Texture = m_frameBuffers;
float* pData = new float[width * height * channels];
glGenTextures(1, &Texture);
glBindTexture(GL_TEXTURE_2D, Texture);
for (uint16_t y = 0; y < height; ++y)
{
    for (uint16_t x = 0; x < width; ++x)
    {
        uint32_t index = y * width + x;
        pData[index * channels + 0] = (normals[index].x) * 0.5 + 0.5;
        pData[index * channels + 1] = (normals[index].y) * 0.5 + 0.5;
        pData[index * channels + 2] = (normals[index].z) * 0.5 + 0.5;
    }
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_FLOAT, pData);
glGenerateMipmap(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_R, GL_REPEAT);
delete[] pData;
++m_frameBuffers;
return Texture;
Fragment shader:
vec3 normal = texture(normalMap, VertexIn.texcoord).rgb;
normal = normalize(normal * 2.0 - 1.0);
// w = 0.0: transform the normal as a direction, not a position
normal = normalize(vec3(normalmatrix * vec4(normal, 0.0)));
Edit: I uploaded better pictures to compare. They show the same area of a 1024x1024 grid; the first uses vertex normals, the second uses a normal map with RGB16F.
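For reference, only the allocation call changes between the GL_RGB code above and the RGB16F variant mentioned in the edit. With an unsized GL_RGB internal format, drivers typically store 8 bits per channel, so the float normals are quantized on upload; a minimal sketch of the sized 16-bit float allocation (same width, height and pData as above):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, width, height, 0, GL_RGB, GL_FLOAT, pData);
glGenerateMipmap(GL_TEXTURE_2D); // rebuild the mip chain for the new allocation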
Related
I am trying to load a simple model in OpenGL. Currently, I have a problem with the textures. The texture definitely shows up, but it is messed up and it also does not cover the whole model (part of the model is black). I have tried several things to find the source of the problem. I passed in a uniform red texture and it renders correctly. I used the texture coordinates as the R and G values in the fragment shader and got a red-green model, so I assume the texture coordinates are fine as well.
The texture is messed up, and part of it is black. This is just a simple Minecraft character.
The model's texture, which is a PNG.
Here's how I am creating the texture:
imageData = stbi_load(path.c_str(), &width, &height, &colorFormat, 0);
glGenTextures(1, &texID);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texID);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
My fragment shader:
void main() {
    vec4 l = normalize(lightSourcePosition - positionEyeCoord);
    vec4 normalFrag = normalize(normalEyeCoord);
    if (hasTexture == 1) {
        vec4 textureKD = vec4(texture(textureSampler, texCoord).rgba);
        vec4 diffuse = vec4(lightIntensity, 0.0) * textureKD * max(0.0, dot(l, normalFrag));
        vec4 ambient = vec4(ambientIntensity, 0.0) * textureKD;
        gl_FragColor = ambient + diffuse;
    }
    else {
        vec4 diffuse = vec4(lightIntensity * KD, 0.0) * max(0.0, dot(l, normalFrag));
        vec4 ambient = vec4(ambientIntensity * KA, 0.0);
        gl_FragColor = ambient + diffuse;
    }
}
Most likely, your character has been modeled using DirectX conventions for texturing. In DirectX, the texture coordinate origin is the top left of the image, whereas it is the bottom left corner in OpenGL.
There are a couple of things you can do (choose one):
When you load the model, replace all texture coordinates (u, v) by (u, 1 - v).
When you load the texture, flip the image vertically (see the sketch below).
In your shader, use vec2(texCoord.x, 1 - texCoord.y) for your texture coordinates.
I advise against using the third option for anything other than quick testing.
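If stb_image is used for loading, as in the code above, the second option is a one-liner; a minimal sketch:
stbi_set_flip_vertically_on_load(1); // make row 0 the bottom row, as OpenGL expects
imageData = stbi_load(path.c_str(), &width, &height, &colorFormat, 0);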
I am currently working on rendering two different video streams at the same time to two different OpenGL textures. I use an implementation of QAbstractVideoSurface to prepare each frame of the video and then pass it to my OpenGL draw method. Each frame arrives YUV-encoded, so I use GLSL to recover the RGB values.
The problem is the following: every time I try to draw more than one of these videos, the first one plays correctly but the others show scrambled color channels.
My vertex shader:
#version 420
attribute vec2 position;
attribute vec2 texcoord;
uniform mat4 modelViewProjectionMatrix;
varying vec2 v_texcoord;
void main()
{
    gl_Position = modelViewProjectionMatrix * vec4(position, 0, 1);
    v_texcoord = texcoord.xy;
}
My fragment shader:
#version 420
varying vec2 v_texcoord;
uniform sampler2D s_texture_y;
uniform sampler2D s_texture_u;
uniform sampler2D s_texture_v;
uniform float s_texture_alpha;
void main(void)
{
    highp float y = texture2D(s_texture_y, v_texcoord).r;
    highp float u = texture2D(s_texture_u, v_texcoord).r - 0.5;
    highp float v = texture2D(s_texture_v, v_texcoord).r - 0.5;
    highp float r = y + 1.402 * v;
    highp float g = y - 0.344 * u - 0.714 * v;
    highp float b = y + 1.772 * u;
    gl_FragColor = vec4(r, g, b, s_texture_alpha);
}
The result looks like the following picture:
Sometimes it gets it right, sometimes it's even worse, but the first video plays correctly every time.
After fooling around a bit with the channels, I found out that sometimes the u variable gets the same value as v in the fragment shader. My texture binding is as follows:
uniformSamplers[0] = functions.glGetUniformLocation(program, "s_texture_y");
uniformSamplers[1] = functions.glGetUniformLocation(program, "s_texture_u");
uniformSamplers[2] = functions.glGetUniformLocation(program, "s_texture_v");
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, textRef);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, image_width, image_height, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, (GLvoid*)(image));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
functions.glUniform1i(uniformSamplers[0], 0);
glActiveTexture(GL_TEXTURE0 + 1);
glBindTexture(GL_TEXTURE_2D, textRefU);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, image_width / 2, image_height / 2, 0, GL_LUMINANCE,
GL_UNSIGNED_BYTE, (GLvoid*)(image + image_width * image_height));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
functions.glUniform1i(uniformSamplers[1], 1);
glActiveTexture(GL_TEXTURE0 + 2);
glBindTexture(GL_TEXTURE_2D, textRefV);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, image_width / 2, image_height / 2, 0, GL_LUMINANCE,
GL_UNSIGNED_BYTE, (GLvoid*)(image + image_height * image_width + (image_width / 2) * (image_height / 2)));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
functions.glUniform1i(uniformSamplers[2], 2);
The problem was quite lame: I focused on the shaders and spent several days trying to find the problem there, yet the problem was in the texture reference initialization. All three references had the same value, so I overwrote the same texture each time. With all three set to different values, it works perfectly.
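A minimal sketch of the fix, assuming textRef, textRefU and textRefV are the handles used in the binding code above:
GLuint texIDs[3];
glGenTextures(3, texIDs); // one call hands out three distinct texture names
textRef  = texIDs[0]; // Y plane
textRefU = texIDs[1]; // U plane
textRefV = texIDs[2]; // V plane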
I'm trying to display some text using OpenGL with the FreeType library. It works, yet the text doesn't look smooth. The FreeType documentation says that some antialiasing happens to the glyph bitmap during loading, but it doesn't look that way in my case.
This is what I'm doing:
FT_Init_FreeType(&m_fontLibrary);
FT_New_Face(m_fontLibrary, "src/VezusLight.OTF", 0, &m_BFont);
FT_Set_Pixel_Sizes(m_BFont, 0, 80);
m_glyph = m_BFont->glyph;
GLuint tex;
glActiveTexture(GL_TEXTURE1);
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glUseProgram(m_textPipeline);
glUniform1i(m_texLocation, 1);
glUseProgram(0);
and then rendering:
glActiveTexture(GL_TEXTURE1);
glEnableVertexAttribArray(m_coordTex);
glBindBuffer(GL_ARRAY_BUFFER, m_VBO);
const char *p;
float x = x_i, y = y_i;
const char* result = text.c_str();
for (p = result; *p; p++)
{
    if (FT_Load_Char(m_BFont, *p, FT_LOAD_RENDER))
        continue;
    glTexImage2D(
        GL_TEXTURE_2D,
        0,
        GL_ALPHA,
        m_glyph->bitmap.width,
        m_glyph->bitmap.rows,
        0,
        GL_ALPHA,
        GL_UNSIGNED_BYTE,
        m_glyph->bitmap.buffer
    );
    float x2 = x - 1024 + m_glyph->bitmap_left;
    float y2 = y - 600 - m_glyph->bitmap_top;
    float w = m_glyph->bitmap.width;
    float h = m_glyph->bitmap.rows;
    GLfloat box[4][4] = {
        { x2,     -y2 - h, 0, 1 },
        { x2 + w, -y2 - h, 1, 1 },
        { x2,     -y2,     0, 0 },
        { x2 + w, -y2,     1, 0 },
    };
    glBufferData(GL_ARRAY_BUFFER, 16 * sizeof(GLfloat), box, GL_DYNAMIC_DRAW);
    glVertexAttribPointer(m_coordTex, 4, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), NULL);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    x += (m_glyph->advance.x >> 6);
    y += (m_glyph->advance.y >> 6);
}
glDisableVertexAttribArray(m_coordTex);
Result looks like this:
Can anyone spot a problem in my code?
Two issues with your code.
The first one is a buffer overflow: the texture coordinates in your box structure are vec2, however you tell glVertexAttribPointer they form a vec4 (the stride of 4*sizeof(float) is what matters, and the mismatched size parameter makes OpenGL read out of bounds, two elements past the end of the box array).
That your texture looks pixelated stems from the fact that texture coordinates 0 and 1 do not lie on pixel centers but on the edges of the texture. Either use texelFetch in the fragment shader to address pixels by their integer coordinates, or remap the texture extents to the range [0…1] properly, as explained in https://stackoverflow.com/a/5879551/524368 (a sketch of that remap follows).
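A minimal sketch of the remap on the CPU side, assuming the w and h glyph dimensions from the loop above: each texture coordinate is inset by half a texel, so 0 and 1 land on texel centers rather than texel edges.
float du = 0.5f / w; // half a texel horizontally
float dv = 0.5f / h; // half a texel vertically
GLfloat box[4][4] = {
    { x2,     -y2 - h, 0.0f + du, 1.0f - dv },
    { x2 + w, -y2 - h, 1.0f - du, 1.0f - dv },
    { x2,     -y2,     0.0f + du, 0.0f + dv },
    { x2 + w, -y2,     1.0f - du, 0.0f + dv },
};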
I think that for transparent, smooth, anti-aliased glyphs, you must also enable blending in OpenGL and may need to disable depth testing.
Something like this:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
//and if it didn't work, then disable depth testing by uncommenting this:
//glDisable(GL_DEPTH_TEST);
Hope it helps!
I'm trying to sample a depth texture in a compute shader and copy it into another texture.
The problem is that I don't get correct values when I read from the depth texture:
I've checked with gDEBugger that the initial values of the depth texture are correct, and they are. So it's the imageLoad GLSL function that retrieves wrong values.
This is my GLSL compute shader:
layout (binding = 0, r32f) readonly uniform image2D depthBuffer;
layout (binding = 1, rgba8) writeonly uniform image2D colorBuffer;
// we use 16 * 16 thread groups
layout (local_size_x = 16, local_size_y = 16) in;
void main()
{
    ivec2 position = ivec2(gl_GlobalInvocationID.xy);
    // sample the depth texture
    vec4 depthSample = imageLoad(depthBuffer, position);
    // linearize the depth value
    float f = 1000.0;
    float n = 0.1;
    float z = (2.0 * n) / (f + n - depthSample.r * (f - n));
    // even if I call memoryBarrier(), barrier() or memoryBarrierShared() here, I still get the same bug
    // finally, write a grayscale image of the depth values
    imageStore(colorBuffer, position, vec4(z, z, z, 1.0));
}
and this is how I'm creating the depth texture and the color texture:
// generate the depth texture
glGenTextures(1, &_depthTexture);
glBindTexture(GL_TEXTURE_2D, _depthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, wDimensions.x, wDimensions.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
// generate the color texture
glGenTextures(1, &_colorTexture);
glBindTexture(GL_TEXTURE_2D, _colorTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, wDimensions.x, wDimensions.y, 0, GL_RGBA, GL_FLOAT, NULL);
I fill the depth texture with depth values (bind it to a frame buffer and render the scene) and then I call my compute shader this way:
_computeShader.use();
// try to synchronize with the previous pass
glMemoryBarrier(GL_ALL_BARRIER_BITS);
// even if i call glFinish() here, the result is the same
glBindImageTexture(0, _depthTexture, 0, GL_FALSE, 0, GL_READ_ONLY, GL_R32F);
glBindImageTexture(1, _colorTexture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA8);
glDispatchCompute((wDimensions.x + WORK_GROUP_SIZE - 1) / WORK_GROUP_SIZE,
(wDimensions.y + WORK_GROUP_SIZE - 1) / WORK_GROUP_SIZE, 1); // we divide the compute into groups of 16 threads
// try to synchronize with the next pass
glMemoryBarrier(GL_ALL_BARRIER_BITS);
with:
wDimensions = size of the context (and of the framebuffer)
WORK_GROUP_SIZE = 16
Do you have any idea why I don't get valid depth values?
EDIT:
This is what the color texture looks like when I render a sphere:
and it seems that glClear(GL_DEPTH_BUFFER_BIT) doesn't do anything:
Even if I call it just before the glDispatchCompute() I still have the same image...
How can this be possible?
Actually, I discovered that you cannot send a depth texture as an image to a compute shader, even with the readonly keyword.
So I've replaced:
glBindImageTexture(0, _depthTexture, 0, GL_FALSE, 0, GL_READ_ONLY, GL_R32F);
by:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _depthTexture);
and in my compute shader:
layout (binding=0, r32f) readonly uniform image2D depthBuffer;
by:
layout (binding = 0) uniform sampler2D depthBuffer;
and to sample it I just write:
ivec2 position = ivec2(gl_GlobalInvocationID.xy);
vec2 screenNormalized = vec2(position) / vec2(ctxSize); // ctxSize is the size of the depth and color textures
vec4 depthSample = texture(depthBuffer, screenNormalized); // texture() replaces the deprecated texture2D() in modern GLSL
and it works very well like this.
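For completeness, a minimal host-side sketch of that change; the glGetUniformLocation call and the computeProgram handle are assumptions (with the binding = 0 layout qualifier the glUniform1i is redundant anyway), and the rest mirrors the dispatch code from the question:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _depthTexture);
glUniform1i(glGetUniformLocation(computeProgram, "depthBuffer"), 0); // computeProgram: hypothetical program handle
glBindImageTexture(1, _colorTexture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA8);
glDispatchCompute((wDimensions.x + WORK_GROUP_SIZE - 1) / WORK_GROUP_SIZE,
                  (wDimensions.y + WORK_GROUP_SIZE - 1) / WORK_GROUP_SIZE, 1);
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT); // make the image writes visible to later passes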
I am trying to create a normal map in OpenGL that I can load into the shader and change dynamically, though currently I am stuck on how to create the texture.
I currently have this:
glActiveTexture(GL_TEXTURE7);
glGenTextures(1, &normals);
glBindTexture(GL_TEXTURE_2D, normals);
texels = new Vector3f*[256];
for (int i = 0; i < 256; ++i) {
    texels[i] = new Vector3f[256];
}
this->setup_normals();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, 3, 256, 256, 0, GL_RGB, GL_FLOAT, texels);
...
void setup_normals() {
    for (int i = 0; i < 256; ++i) {
        for (int j = 0; j < 256; ++j) {
            texels[i][j][0] = 0.0f;
            texels[i][j][1] = 1.0f;
            texels[i][j][2] = 0.0f;
        }
    }
}
where Vector3f is: typedef float Vector3f[3];
and texels is: Vector3f** texels;
When I draw this texture to a screen quad using an orthographic matrix (which works for textures loaded from file), I get a corrupted image with black streaks.
I am unsure why it does not appear fully green and also what is causing the black streaks within it. Any help appreciated.
Your array needs to be contiguous since glTexImage2D() doesn't take any sort of stride or row mapping parameters:
texels = new Vector3f[256*256];
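A minimal sketch of the full change under that fix (texels becomes a Vector3f* rather than a Vector3f**; the row-major indexing is an assumption, any consistent layout works as long as setup_normals() and the upload agree):
Vector3f* texels = new Vector3f[256 * 256]; // one contiguous block: 256 rows of 256 texels

void setup_normals() {
    for (int i = 0; i < 256; ++i) {
        for (int j = 0; j < 256; ++j) {
            texels[i * 256 + j][0] = 0.0f; // row i, column j
            texels[i * 256 + j][1] = 1.0f;
            texels[i * 256 + j][2] = 0.0f;
        }
    }
}

// GL_RGB as the internal format instead of the legacy '3'
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0, GL_RGB, GL_FLOAT, texels);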