GLSL texture layout - opengl

How does the layout of the matrix (row-major vs column-major) affect the GLSL texture that it creates? Does access to that texture change in the shader? In the case of a column-major matrix, should I first convert the matrix to row-major and then upload it to the GPU as a texture?

See The OpenGL Shading Language 4.6, 5.4.2 Vector and Matrix Constructors, page 101:
To initialize a matrix by specifying vectors or scalars, the components are assigned to the matrix elements in column-major order.
mat4(float, float, float, float, // first column
float, float, float, float, // second column
float, float, float, float, // third column
float, float, float, float); // fourth column
This means that if you store the matrices (mat4) in the rows of a two-dimensional 4×N RGBA texture, like this:

         x=0          x=1          x=2          x=3
mat4 m0  m0[0].xyzw   m0[1].xyzw   m0[2].xyzw   m0[3].xyzw
mat4 m1  m1[0].xyzw   m1[1].xyzw   m1[2].xyzw   m1[3].xyzw
mat4 m2  m2[0].xyzw   m2[1].xyzw   m2[2].xyzw   m2[3].xyzw
mat4 m3  .....
glm::mat4 matArray[N]; // or std::vector<glm::mat4> matArray( N ); then pass matArray.data()
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 4, N, 0, GL_RGBA, GL_FLOAT, matArray);
then you can read the matrices from the texture in the shader like this:
uniform sampler2D matSampler;

void main()
{
    mat4 m0 = mat4(
        texelFetch( matSampler, ivec2(0, 0), 0),
        texelFetch( matSampler, ivec2(1, 0), 0),
        texelFetch( matSampler, ivec2(2, 0), 0),
        texelFetch( matSampler, ivec2(3, 0), 0) );
    mat4 m1 = mat4(
        texelFetch( matSampler, ivec2(0, 1), 0),
        texelFetch( matSampler, ivec2(1, 1), 0),
        texelFetch( matSampler, ivec2(2, 1), 0),
        texelFetch( matSampler, ivec2(3, 1), 0) );
    .....
}
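Putting the upload side together, here is a minimal sketch of mine (not from the question) that creates and fills such a texture, assuming GLM and GLEW; glm::mat4 is tightly packed column-major, so each column lands in one RGBA texel and each matrix fills one row:

#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <vector>

GLuint createMatrixTexture(const std::vector<glm::mat4> &matArray)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // One matrix per row: 4 RGBA32F texels hold the 4 columns of one mat4.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F,
                 4, (GLsizei)matArray.size(), 0,
                 GL_RGBA, GL_FLOAT, glm::value_ptr(matArray[0]));
    // texelFetch does no filtering, but NEAREST avoids surprises if the
    // texture is ever sampled with texture() as well.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    return tex;
}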

Related

OpenGL - Apply transformation to polygon in 3D space

I am trying to rotate a quad in a 3D space. The following code shows the vertex shader utilized to draw the quad:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aColor;
out vec3 ourColor;
uniform mat4 transform;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
    gl_Position = transform * (projection * view * model * vec4(aPos, 1.0f));
    ourColor = aColor;
}
The quad is displayed when transform is not multiplied with projection*view*model*vec4(aPos, 1.0f), but it is not displayed when it is multiplied as above.
The code for transformation:
trans = glm::rotate(trans, (float)(glfwGetTime()), glm::vec3(0.0, 0.0, 1.0));
float scaleAmount = sin(j * 0.3);
j = j + 0.035;
trans = glm::scale(trans, glm::vec3(scaleAmount, scaleAmount, scaleAmount));
unsigned int transformLoc = glGetUniformLocation(shaderProgram, "transform");
glUniformMatrix4fv(transformLoc, 1, GL_FALSE, glm::value_ptr(trans));
glBindVertexArray(VAO);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
I have set the uniform present in the vertex shader as well. Why is it not rotating and scaling, or even appearing, when I multiply transform with (projection*view*model*vec4(aPos,1.0f))?
Edit: I figured out that the problem is with scaling, since the code works with rotation only. The code does not work with scaling only.
Let's think only in 2D.
The quad is defined in "world" coordinates. To rotate it around some point, first move the quad so that the point sits at the origin, then rotate and scale it, and then move it back. Doing this with matrices is the same as transform * model, where transform is something like
transform = moveback * scale * rotate * movetopoint
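A minimal GLM sketch of that composition (pivot, angle, and scaleAmount are placeholders of mine; note that successive GLM calls post-multiply, so reading top to bottom matches the formula above):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec3 pivot(0.5f, 0.5f, 0.0f);   // assumed pivot: the quad's center
float angle = 1.0f;                  // placeholder rotation angle, in radians
float scaleAmount = 0.5f;            // placeholder scale factor
glm::mat4 transform(1.0f);
// transform = moveback * scale * rotate * movetopoint:
transform = glm::translate(transform, pivot);                   // moveback
transform = glm::scale(transform, glm::vec3(scaleAmount));      // scale
transform = glm::rotate(transform, angle, glm::vec3(0, 0, 1));  // rotate
transform = glm::translate(transform, -pivot);                  // movetopoint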
If scaleAmount == 0.0:
glm::mat4 trans( 1.0f );
float scaleAmount = 0.0f;
trans=glm::scale(trans,glm::vec3(scaleAmount,scaleAmount,scaleAmount));
then trans would be
{{0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 1}}
A zero scale collapses every vertex onto the origin, so nothing is drawn. Since sin(0.0) == 0.0, it has to be ensured that j is not equal to 0.0 when evaluating sin(j*0.3).
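One simple way to avoid the degenerate case is to keep the scale factor away from zero, for example by remapping the sine into a strictly positive range (a sketch of mine, not the questioner's code):

// Remap sin(...) from [-1, 1] into [0.25, 1.0] so the scale never hits zero
// (a zero scale collapses the quad; a negative scale would mirror it).
float scaleAmount = 0.625f + 0.375f * sin(j * 0.3);
trans = glm::scale(trans, glm::vec3(scaleAmount));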

Issues with shaders in Qt/OpenGL

How can I use different color outputs within a fragment shader?
Say, my vshader looks like this:
#version 330
uniform mat4 mvpmatrix;
layout(location=0) in vec4 position;
layout(location=1) in vec2 texcoord;
out vec2 out_texcoord;
void main()
{
    gl_Position = mvpmatrix * position;
    out_texcoord = texcoord;
}
// fshader
#version 330
uniform sampler2D texture;
in vec2 out_texcoord;
out vec4 out_color;
out vec4 out_color2;
void main()
{
    out_color = texture2D(texture, out_texcoord);
    // out_color2 = vec3(1.0, 1.0, 1.0, 1.0);
}
Accessing them like so:
m_program->enableAttributeArray(0); // position
m_program->setAttributeBuffer(0, GL_FLOAT, 0, 3, sizeof(Data));
m_program->enableAttributeArray(1); // texture
m_program->setAttributeBuffer(1, GL_FLOAT, sizeof(QVector3D), 2, sizeof(Data));
So far, everything uses the default output of the fragment shader, which is the texture color. But how can I access different fragment outputs? Do I have to use layouts there as well? And, it's probably a dumb question... but are the layout locations of the vertex and fragment shaders bound to each other? So if I'm enabling my buffer on AttributeArray(1), am I forced to use layout location 1 in BOTH shaders?
You can bind another attribute location for sending color information to your fragment shader at any time, but let me show you another trick :)
I use two attribute locations: one for the position of the vertex and one for the color of the vertex.
glBindAttribLocation(program_, 0, "vs_in_pos");
glBindAttribLocation(program_, 1, "vs_in_col");
This is my mesh definition, where Vertex contains two 3D vectors:
Vertex vertices[] = {
{glm::vec3(-1, -1, 1), glm::vec3(1, 0, 0)},
{glm::vec3(1, -1, 1), glm::vec3(1, 0, 0)},
{glm::vec3(-1, 1, 1), glm::vec3(1, 0, 0)},
{glm::vec3(1, 1, 1), glm::vec3(1, 0, 0)},
{glm::vec3(-1, -1, -1), glm::vec3(0, 1, 0)},
{glm::vec3(1, -1, -1), glm::vec3(0, 1, 0)},
{glm::vec3(-1, 1, -1), glm::vec3(0, 1, 0)},
{glm::vec3(1, 1, -1), glm::vec3(0, 1, 0)},
};
GLushort indices[] = {
// Front
0, 1, 2, 2, 1, 3,
// Back
4, 6, 5, 6, 7, 5,
// Top
2, 3, 7, 2, 7, 6,
// Bottom
0, 5, 1, 0, 4, 5,
// Left
0, 2, 4, 4, 2, 6,
// Right
1, 5, 3, 5, 7, 3
};
This will represent a cube. I will mix this pre-defined color with a calculated value, which means the color of the cube will change with its position. Set up a 3D vector for the RGB values and a uniform so it can be used in the fragment shader:
loc_col_ = glGetUniformLocation(program_, "color");
Now in my render function I place the cubes in a 2D circle, moving and rotating them:
for (int i = 0; i < num_of_cubes_; ++i) {
    double fi = 2 * PI * (i / (double) num_of_cubes_);
    glm::mat4 position = glm::translate<float>(cubes_radius_ * cos(fi), cubes_radius_ * sin(fi), 0);
    glm::mat4 crackle = glm::translate<float>(0, 0.1 * (sin(2 * PI * (SDL_GetTicks() / 500.0) + i)), 0);
    glm::mat4 rotate = glm::rotate<float>(360 * (SDL_GetTicks() / 16000.0), 0, 0, 1);
    world_ = position * crackle * rotate;
    glm::vec3 color = glm::vec3((1 + cos(fi)) * 0.5, (1 + sin(fi)) * 0.5, 1 - ((1 + cos(fi)) * 0.5));
    glUniformMatrix4fv(loc_world_, 1, GL_FALSE, &(world_[0][0]));
    glUniform3fv(loc_col_, 1, &(color[0]));
    glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, 0);
}
You can see here that I send not only the world matrix but also the color vector.
Linear interpolation in the fragment shader is achieved by the mix() function:
#version 130
in vec3 vs_out_col;
in vec3 vs_out_pos;
out vec4 fs_out_col;
uniform vec3 color;
void main() {
    fs_out_col = vec4(mix(color, vs_out_col, 0.5), 1);
}
color is the value passed in during rendering, while vs_out_col comes from the vertex shader, having arrived there through attribute "channel" 1.
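For reference, mix(a, b, t) computes the component-wise linear interpolation a * (1 - t) + b * t; GLM offers the same function on the CPU, so a quick sanity check (my own snippet) could be:

#include <glm/glm.hpp>

// mix(a, b, t) = a * (1 - t) + b * t, applied per component.
glm::vec3 a(1, 0, 0), b(0, 1, 0);
glm::vec3 halfway = glm::mix(a, b, 0.5f); // (0.5, 0.5, 0.0)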
I hope this makes sense.
Layout locations in vertex and fragment shaders are independent. Qt may be misleading with enableAttributeArray, because in OpenGL this function is called glEnableVertexAttribArray ("vertex" is the keyword here). You can pass per-vertex data only into the vertex shader, and then pass it on to the fragment shader using in/out variables (with interpolation).
If you want to use multiple outputs from the fragment shader, you have to declare them with layout locations and bind output buffers.
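A minimal sketch of that setup (my own illustration, not from the question; it assumes a framebuffer object and two color textures have already been created):

// In the fragment shader, declare one output per attachment:
//   layout(location = 0) out vec4 out_color;
//   layout(location = 1) out vec4 out_color2;
// On the host side, route those locations to the FBO's color attachments:
#include <GL/glew.h>

void setupTwoColorOutputs(GLuint fbo, GLuint colorTex0, GLuint colorTex1)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex0, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                           GL_TEXTURE_2D, colorTex1, 0);
    // Fragment output location 0 writes to bufs[0], location 1 to bufs[1].
    const GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, bufs);
}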

OpenGL vertex shader: weird matrix translation

I'm trying to move a triangle based on time using a matrix. But it does some weird stuff:
What it should do:
move on the x-axis
What it does:
The top point of the triangle stays fixed, and the other points seem to move around it in a circular motion while scaling on the x and z axes (I'm still in 2D, so there is no depth).
My C++ Code:
...
GLfloat timeValue = glfwGetTime();
GLfloat offset = (sin(timeValue * 4) / 2);
GLfloat matrix[16] = {
1, 0, 0, offset,
0, 1, 0, 0,
0, 0, 1, 0,
0, 0, 0, 1
};
GLuint uniform_m_transform = glGetUniformLocation(shader_program, "m_transform");
glUniformMatrix4fv(uniform_m_transform, 1, GL_FALSE, matrix);
...
My vertex shader:
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 color;
out vec3 ourColor;
uniform mat4 m_transform;
void main()
{
    ourColor = color;
    gl_Position = m_transform * vec4(position, 1.0);
}
I don't know what I did wrong; according to the tutorial, the matrix element I've set to offset should control the x-translation.
Do you know what my mistake is?
You are providing a row-major matrix, so you need to tell OpenGL to transpose it:
glUniformMatrix4fv(uniform_m_transform, 1, GL_TRUE, matrix);
Reference: glUniform, check the transpose parameter.
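Alternatively, keep GL_FALSE and write the array in the column-major order OpenGL expects by default; the translation then goes into the last four elements:

// Column-major layout: each group of four floats is one column of the matrix,
// so the translation (offset, 0, 0) sits in the fourth column.
GLfloat matrix[16] = {
    1, 0, 0, 0,        // column 0
    0, 1, 0, 0,        // column 1
    0, 0, 1, 0,        // column 2
    offset, 0, 0, 1    // column 3: translation
};
glUniformMatrix4fv(uniform_m_transform, 1, GL_FALSE, matrix);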

Diffuse lighting error on parallel surfaces

As a test, I created a simple quad. Here are its attributes:
Vertex vertices[] =
{
// Positions Normals
{vec3(-1,-1, 0), vec3(-1,-1, 1)}, // v0
{vec3( 1,-1, 0), vec3( 1,-1, 1)}, // v1
{vec3(-1, 1, 0), vec3(-1, 1, 1)}, // v2
{vec3( 1, 1, 0), vec3( 1, 1, 1)}, // v3
};
And I put it in my world space at (0.0, 0.0, -9.5). Then I put my point light position at (0.0, 0.0, -8.0). My camera is at the origin (0.0, 0.0, 0.0). When I run my program, this works as expected:
But then, when I replace this quad with 9 scaled-down quads and put them all at -9.5 on Z (in other words, they are all coplanar), my diffuse lighting gets a little weird:
It looks like the corners are showing too much lighting, breaking the nice diffuse circle that we see on a regular quad.
Here is my shader program:
precision mediump int;
precision mediump float;
varying vec3 v_position;
varying vec3 v_normal;
#if defined(VERTEX)
uniform mat4 u_mvpMatrix;
uniform mat4 u_mvMatrix;
uniform mat3 u_normalMatrix;
attribute vec4 a_position;
attribute vec3 a_normal;
void main()
{
    vec4 position = u_mvMatrix * a_position;
    v_position = position.xyz / position.w;
    v_normal = normalize(u_normalMatrix * a_normal);
    gl_Position = u_mvpMatrix * a_position;
}
#endif // VERTEX
#if defined(FRAGMENT)
uniform vec3 u_pointLightPosition;
void main()"
{
vec3 viewDir = normalize(-v_position);
vec3 normal = normalize(v_normal);
vec3 lightPosition = u_pointLightPosition - v_position;
vec3 pointLightDir = normalize(lightPosition);
float distance = length(lightPosition);
float pointLightAttenuation = 1.0 / (1.0 + (0.25 * distance * distance));
float diffuseTerm = max(dot(pointLightDir, normal), 0.15);
gl_FragColor = vec4(diffuseTerm * pointLightAttenuation);
}
#endif // FRAGMENT
My uniforms are uploaded as follows (I'm using GLM):
const mat4 &view_matrix = getViewMatrix();
mat4 mv_matrix = view_matrix * getModelMatrix();
mat4 mvp_matrix = getProjectionMatrix() * mv_matrix;
mat3 normal_matrix = inverseTranspose(mat3(mv_matrix));
vec3 pointLightPos = vec3(view_matrix * vec4(getPointLightPos(), 1.0f));
glUniformMatrix4fv( mvpMatrixUniformID, 1, GL_FALSE, (GLfloat*)&mvp_matrix);
glUniformMatrix4fv( mvMatrixUniformID, 1, GL_FALSE, (GLfloat*)&mv_matrix);
glUniformMatrix3fv(normalMatrixUniformID, 1, GL_FALSE, (GLfloat*)&normal_matrix);
glUniform3f(pointLightPosUniformID, pointLightPos.x, pointLightPos.y, pointLightPos.z);
Am I doing anything wrong?
Thanks!
Without going too much into your code, I think everything is working just fine. I see a very similar result with a quick Blender setup:
The issue is that the interpolation of the normals doesn't produce a spherical bump.
It ends up being a patch like this (I simply subdivided a smooth-shaded cube)...
If you want a more spherical bump, you could generate the normals implicitly in a fragment shader (for example as is done here (bottom image)), use a normal map, or use more tessellated geometry such as an actual sphere.

Palette swap using fragment shaders

I'm trying to work out how I can achieve a palette swap using fragment shaders (looking at this post: https://gamedev.stackexchange.com/questions/43294/creating-a-retro-style-palette-swapping-effect-in-opengl). I am new to OpenGL, so I'd be glad if someone could explain my issue to me.
Here is code snippet which I am trying to reproduce:
http://www.opengl.org/wiki/Common_Mistakes#Paletted_textures
I set up an OpenGL environment so that I can create a window, load textures and shaders, and render my single square, which is mapped to the corners of the window (when I resize the window, the image gets stretched too).
I am using a vertex shader to convert coordinates from screen space to texture space, so my texture is stretched too:
attribute vec2 position;
varying vec2 texcoord;
void main()
{
    gl_Position = vec4(position, 0.0, 1.0);
    texcoord = position * vec2(0.5) + vec2(0.5);
}
The fragment shader is
uniform float fade_factor;
uniform sampler2D textures[2];
varying vec2 texcoord;
void main()
{
    vec4 index = texture2D(textures[0], texcoord);
    vec4 texel = texture2D(textures[1], index.xy);
    gl_FragColor = texel;
}
textures[0] is the indexed texture (the one I'm trying to colorize).
Every pixel has a color value of (0, 0, 0, 255), (1, 0, 0, 255), (2, 0, 0, 255) ... (8, 0, 0, 255), 9 colors in total, which is why it looks almost black. I want to encode my colors using the value stored in the red channel.
textures[1] is the table of colors (9x1 pixels, each pixel a unique color; zoomed to 90x10 for posting).
So, as you can see from the fragment shader excerpt, I want to read an index value from the first texture, for example (5, 0, 0, 255), and then look up the actual color from the pixel stored at (x=5, y=0) in the second texture. Same as written in the wiki.
But instead of a painted image I get:
Actually, I see that I can't access pixels from the second texture if I explicitly set the X coordinate, like vec2(1, 0), vec2(2, 0), vec2(4, 0) or vec2(8, 0), but I do get colors when I use vec2(0.1, 0) or vec2(0.7, 0). I guess that happens because texture coordinates are normalized, so my 9x1 pixels map to (0,0)->(1,1). But how can I "disable" that feature and simply load my palette texture so that I can just ask "give me the color value of the pixel stored at (x, y), please"?
Every pixel has color value of (0, 0, 0, 255), (1, 0, 0, 255), (2, 0, 0, 255) ... (8, 0, 0, 255)
Wrong. Every pixel has the color values: (0, 0, 0, 1), (0.00392, 0, 0, 1), (0.00784, 0, 0, 1) ... (0.0313, 0, 0, 1).
Unless you're using integer or float textures (and you're not), your colors are stored as normalized floating point values. So what you think is "255" is really just "1.0" when you fetch it from the shader.
The correct way to handle this is to first transform the normalized values back into their non-normalized form, which is done by multiplying by 255. Then convert them into texture coordinates by dividing by the palette texture's width minus 1. Also, your palette texture should not be 2D:
#version 330 // Always include a version.

uniform float fade_factor;
uniform sampler2D palattedTexture;
uniform sampler1D palette;

in vec2 texcoord;
layout(location = 0) out vec4 outColor;

void main()
{
    float paletteIndex = texture(palattedTexture, texcoord).r * 255.0;
    outColor = texture(palette, paletteIndex / float(textureSize(palette, 0) - 1));
}
The above code is written for GLSL 3.30. If you're using earlier versions, translate it accordingly.
Also, you shouldn't be using an RGBA texture for your paletted texture. It's just one channel, so use either GL_LUMINANCE (on older GL) or GL_R8.
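To round this out, here is a sketch of mine (not from the answer) for creating the two textures on the host side, following the suggestions above: a GL_R8 index texture and a 9-entry 1D palette, both with NEAREST filtering so index values are never blended. The parameter names are placeholders:

#include <GL/glew.h>

void createPaletteTextures(int width, int height,
                           const unsigned char *indexPixels,   // width*height bytes, values 0..8
                           const unsigned char *paletteColors, // 9 RGBA byte quadruples
                           GLuint &indexTex, GLuint &paletteTex)
{
    // Index texture: one byte per pixel, read as .r in the shader.
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows of GL_R8 data may not be 4-byte aligned
    glGenTextures(1, &indexTex);
    glBindTexture(GL_TEXTURE_2D, indexTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0,
                 GL_RED, GL_UNSIGNED_BYTE, indexPixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    // Palette: a 9x1 1D texture holding the actual colors.
    glGenTextures(1, &paletteTex);
    glBindTexture(GL_TEXTURE_1D, paletteTex);
    glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA8, 9, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, paletteColors);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
}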