I'm trying to move a triangle over time using a matrix, but it does something strange:
What it should do:
move on the x-axis
What it does:
The top vertex of the triangle stays fixed while the other vertices seem to circle around it and scale along the x and z axes (I'm still in 2D, so there is no depth).
My C++ Code:
...
GLfloat timeValue = glfwGetTime();
GLfloat offset = (sin(timeValue * 4) / 2);
GLfloat matrix[16] = {
1, 0, 0, offset,
0, 1, 0, 0,
0, 0, 1, 0,
0, 0, 0, 1
};
GLuint uniform_m_transform = glGetUniformLocation(shader_program, "m_transform");
glUniformMatrix4fv(uniform_m_transform, 1, GL_FALSE, matrix);
...
My vertex shader:
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 color;
out vec3 ourColor;
uniform mat4 m_transform;
void main()
{
ourColor = color;
gl_Position = m_transform * vec4(position, 1.0);
}
I don't know what I did wrong; according to the tutorial, the matrix element I've set to offset should change the x translation.
Do you know what my mistake is?
You are providing a row-major matrix, so you need to tell OpenGL to transpose it:
glUniformMatrix4fv(uniform_m_transform, 1, GL_TRUE, matrix);
Reference: glUniform, check the transpose parameter.
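Alternatively, you can leave the transpose flag as GL_FALSE and write the matrix in column-major order, which is the layout OpenGL expects by default. A minimal sketch with the same translation:
GLfloat matrix[16] = {
    1, 0, 0, 0,       // column 0
    0, 1, 0, 0,       // column 1
    0, 0, 1, 0,       // column 2
    offset, 0, 0, 1   // column 3: the translation
};
glUniformMatrix4fv(uniform_m_transform, 1, GL_FALSE, matrix);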
What I want to achieve is to render many small quads with the OpenGL function glDrawArraysInstanced, with the same spacing between them. For example, please refer to the following image:
The code is as follows:
void OpenGLShowVideo::displayBySmallMatrix()
{
    // Now use QOpenGLExtraFunctions instead of QOpenGLFunctions as we want to
    // do more than what GL(ES) 2.0 offers.
    QOpenGLExtraFunctions *f = QOpenGLContext::currentContext()->extraFunctions();
    f->glClearColor(9.f/255.0f, 14.f/255.0f, 15.f/255.0f, 1);
    glClear(GL_COLOR_BUFFER_BIT);
    f->glViewport(0, 0, this->width(), this->height());
    m_displayByMatrixProgram->bind();
    f->glActiveTexture(GL_TEXTURE0 + m_acRenderToScreenTexUnit);
    f->glBindTexture(GL_TEXTURE_2D, m_renderWithMaskFbo->texture());
    if (m_uniformsDirty) {
        m_uniformsDirty = false;
        m_displayByMatrixProgram->setUniformValue(m_samplerLoc, m_acRenderToScreenTexUnit);
        m_proj.setToIdentity();
        m_proj.perspective(INIT_VERTICAL_ANGLE, float(this->width()) / float(this->height()), m_fNearPlane, m_fFarPlane);
        m_displayByMatrixProgram->setUniformValue(m_projMatrixLoc, m_proj);
        QMatrix4x4 camera;
        camera.lookAt(m_eye, m_eye + m_target, QVector3D(0, 1, 0));
        m_displayByMatrixProgram->setUniformValue(m_camMatrixLoc, camera);
        m_world.setToIdentity();
        float fOffsetZ = m_fVerticalAngle / INIT_VERTICAL_ANGLE;
        m_world.translate(m_fMatrixOffsetX, m_fMatrixOffsetY, fOffsetZ);
        m_proj.scale(MATRIX_INIT_SCALE_X, MATRIX_INIT_SCALE_Y, 1.0f);
        m_world.rotate(180, 1, 0, 0);
        QMatrix4x4 wm = m_world;
        m_displayByMatrixProgram->setUniformValue(m_worldMatrixLoc, wm);
        QMatrix4x4 mm;
        mm.setToIdentity();
        m_displayByMatrixProgram->setUniformValue(m_myMatrixLoc, mm);
        m_displayByMatrixProgram->setUniformValue(m_lightPosLoc, QVector3D(0, 0, 70));
        QSize tmpSize = QSize(m_viewPortWidth, m_viewPortHeight);
        m_displayByMatrixProgram->setUniformValue(m_resolutionLoc, tmpSize);
        int whRatioVal = m_viewPortWidth / m_viewPortHeight;
        m_displayByMatrixProgram->setUniformValue(m_whRatioLoc, whRatioVal);
    }
    m_geometries->bindBufferForArraysInstancedDraw();
    f->glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, m_viewPortWidth * m_viewPortHeight);
}
And the vertex shader code is as follows:
#version 330
layout(location = 0) in vec4 vertex;
out vec3 color;
uniform mat4 mvp_matrix;
uniform mat4 projMatrix;
uniform mat4 camMatrix;
uniform mat4 worldMatrix;
uniform mat4 myMatrix;
uniform vec2 viewResolution;
uniform int whRatio;
uniform sampler2D sampler;
void main() {
    int posX = gl_InstanceID % int(viewResolution.x);
    int posY = gl_InstanceID / int(viewResolution.y);
    if( posY % whRatio < whRatio) {
        posY = gl_InstanceID / int(viewResolution.x);
    }
    ivec2 pos = ivec2(posX, posY);
    vec2 t = vec2( pos.x * 3.0, pos.y * 3.0 );
    mat4 wm = mat4(1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, t.x, t.y, 1, 1) * worldMatrix;
    color = texelFetch(sampler,pos,0).rgb;
    gl_Position = projMatrix * camMatrix * wm * vertex;
}
And the fragment shader is as follows:
#version 330 core
in vec3 color;
out vec4 fragColor;
void main() {
fragColor = vec4(color, 1.0);
}
However, when I move the camera away from the screen (by changing the "m_eye" parameter value in camera.lookAt(m_eye, m_eye + m_target, QVector3D(0, 1, 0))), I get something like this:
The spacing between the quads becomes uneven, and the size of the quads varies as well. But when I move the camera closer to the screen, it looks much better.
I think what you're seeing there is the result of rounding the coordinates to the nearest integer pixel coordinate.
To get something that looks more even, you want to use some form of anti-aliasing. The options that spring to mind are:
Enable some sort of full screen anti-aliasing like MSAA. This is simple to enable, but can have a significant performance cost (a minimal Qt sketch follows this list).
Put your pattern in a texture, and tile that texture over a single quad. Texture filtering and mip maps should take care of the anti-aliasing for you, and it will probably be faster to render that way as well because you only need a single quad.
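For the first option, since you are already using Qt, the setup could look roughly like this (a sketch; the sample count of 4 is just an example, and the format has to be set before the OpenGL window or widget is created):
// Request a multisampled default framebuffer before creating the OpenGL surface.
QSurfaceFormat fmt = QSurfaceFormat::defaultFormat();
fmt.setSamples(4);                      // 4x MSAA; higher counts cost more
QSurfaceFormat::setDefaultFormat(fmt);

// Later, with a current context (desktop GL; multisampling is usually on by default):
f->glEnable(GL_MULTISAMPLE);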
I am trying to rotate a quad in a 3D space. The following code shows the vertex shader utilized to draw the quad:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aColor;
out vec3 ourColor;
uniform mat4 transform;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
gl_Position = transform*(projection*view*model*vec4(aPos, 1.0f));
ourColor = aColor;
}
The quad is displayed when transform is not multiplied with projection*view*model*vec4(aPos,1.0f), but it is not displayed when it is multiplied as above.
The code for transformation:
trans=glm::rotate(trans,(float)(glfwGetTime()),glm::vec3(0.0,0.0,1.0));
float scaleAmount = sin(j*0.3);
j = j + 0.035;
trans=glm::scale(trans,glm::vec3(scaleAmount,scaleAmount,scaleAmount));
unsigned int transformLoc = glGetUniformLocation(shaderProgram, "transform");
glUniformMatrix4fv(transformLoc, 1, GL_FALSE, glm::value_ptr(trans));
glBindVertexArray(VAO);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
I have set the uniform in the vertex shader as well. Why is it not rotating and scaling, or even appearing, when I multiply transform with (projection*view*model*vec4(aPos,1.0f))?
Edit: I figured out that the problem is with scaling, since the code works with rotation only. The code does not work with scaling only.
Let's think only in 2D.
The quad is defined in "world" coordinates. To rotate it around some point, first translate the quad so that the point lands on the origin, then rotate and scale it, and then translate it back. Doing this with matrices is the same as transform * model, where transform is something like
transform = moveback * scale * rotate * movetopoint
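With GLM, that composition could look roughly like the sketch below; point is whichever point you want to rotate around (e.g. the quad's center), and angle/scaleAmount are your existing values:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// GLM post-multiplies, so building the matrix in this order gives
// moveback * scale * rotate * movetopoint, applied right-to-left to each vertex.
glm::mat4 transform(1.0f);
transform = glm::translate(transform, point);                            // move back
transform = glm::scale(transform, glm::vec3(scaleAmount));               // scale
transform = glm::rotate(transform, angle, glm::vec3(0.0f, 0.0f, 1.0f));  // rotate around Z
transform = glm::translate(transform, -point);                           // move the point to the origin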
If scaleAmount == 0.0:
glm::mat4 trans( 1.0f );
float scaleAmount = 0.0f;
trans=glm::scale(trans,glm::vec3(scaleAmount,scaleAmount,scaleAmount));
then trans would be
{{0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 1}}
Since sin(0.0) == 0.0, you have to make sure that j is not 0.0 when you evaluate sin(j*0.3); otherwise the quad is scaled to zero size and nothing is drawn.
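One possible fix (a sketch, assuming you want the quad to stay visible the whole time) is to bias the scale factor so that it never reaches zero:
// Keep scaleAmount strictly positive so the quad never collapses to a point.
float scaleAmount = 0.55f + 0.45f * sin(j * 0.3f);   // stays within [0.1, 1.0]
j = j + 0.035f;
trans = glm::scale(trans, glm::vec3(scaleAmount));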
Right now I am working on creating a heightmap-based terrain grid, similar to the Lighthouse 3D Terrain Tutorial, except that I am using VBOs and EBOs. All has been going well until I tried to texture my grid. Currently I am applying one texture that spans the entire grid. Using Windows 7's sample Jellyfish picture, I end up with this:
For those familiar with the picture, you can see that it is being repeated several times throughout the terrain grid. This led me to believe that my UV coordinates were being corrupted. However, if I use a function that always returns 0 to determine the height at each grid vertex, I end up with this:
Now I am thoroughly confused, and I can't seem to find any other resources to help me.
My code is as follows:
generate_terrain() function:
QImage terrainImage;
terrainImage.load(imagePath.data());
int width = terrainImage.width();
int height = terrainImage.height();
float uStep = 1.0f / width;
float vStep = 1.0f / height;
grid = new std::vector<float>;
indices = new std::vector<unsigned short>;
for (int i = 0; i <= height-1; ++i) {
    for (int j = 0; j <= width-1; ++j) {
        QVector3D vertex1{j, heightFunction(terrainImage.pixel(j, i)), i};
        QVector3D vertex2{j, heightFunction(terrainImage.pixel(j, i+1)), i+1};
        QVector3D vertex3{j+1, heightFunction(terrainImage.pixel(j+1, i+1)), i+1};
        QVector3D edge1 = vertex2 - vertex1;
        QVector3D edge2 = vertex3 - vertex1;
        QVector3D normal = QVector3D::crossProduct(edge1, edge2);
        normal.normalize();
        grid->push_back(vertex1.x());
        grid->push_back(vertex1.y());
        grid->push_back(vertex1.z());
        grid->push_back(normal.x());
        grid->push_back(normal.y());
        grid->push_back(normal.z());
        grid->push_back(j * uStep);
        grid->push_back(i * vStep);
    }
}
for (int i = 0; i < height-1; ++i) {
    for (int j = 0; j < width-1; ++j) {
        indices->push_back(i * width + j);
        indices->push_back((i+1) * width + j);
        indices->push_back((i+1) * width + (j+1));
        indices->push_back((i+1) * width + (j+1));
        indices->push_back(i * width + (j+1));
        indices->push_back(i * width + j);
    }
}
vertices = grid->size()/8;
indexCount = indices->size();
Texture Loading:
f->glGenTextures(1, &textureId);
f->glBindTexture(GL_TEXTURE_2D, textureId);
QImage texture;
texture.load(texturePath.data());
QImage glTexture = QGLWidget::convertToGLFormat(texture);
f->glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, glTexture.width(), glTexture.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, glTexture.bits());
f->glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
f->glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
Drawing:
f->glActiveTexture(GL_TEXTURE0);
f->glBindTexture(GL_TEXTURE_2D, textureId);
program->setUniformValue(textureUniform.data(), 0);
f->glBindBuffer(GL_ARRAY_BUFFER, vbo.bufferId());
f->glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8*sizeof(float), 0);
f->glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 8*sizeof(float), (void *) (sizeof(float) * 3));
f->glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8*sizeof(float), (void *) (sizeof(float) * 6));
f->glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo.bufferId());
f->glEnableVertexAttribArray(0);
f->glEnableVertexAttribArray(1);
f->glEnableVertexAttribArray(2);
f->glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);
f->glDisableVertexAttribArray(2);
f->glDisableVertexAttribArray(1);
f->glDisableVertexAttribArray(0);
Shaders:
Vertex:
attribute vec3 vertex_modelspace;
attribute vec3 normal_in;
attribute vec2 uv_in;
uniform mat4 mvp;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform vec3 lightPosition;
varying vec2 uv;
varying vec3 normal;
varying vec3 fragPos;
void main(void)
{
gl_Position = projection * view * model * vec4(vertex_modelspace, 1);
uv = uv_in;
normal = normal_in;
fragPos = vec3(model * vec4(vertex_modelspace, 1));
}
Fragment:
varying vec2 uv;
varying vec3 normal;
varying vec3 fragPos;
uniform sampler2D texture;
uniform vec3 lightPosition;
void main(void)
{
vec3 lightColor = vec3(0.6, 0.6, 0.6);
float ambientStrength = 0.2;
vec3 ambient = ambientStrength * lightColor;
vec3 norm = normalize(normal);
vec3 lightDirection = normalize(lightPosition - fragPos);
float diff = max(dot(norm, lightDirection), 0.0);
vec3 diffuse = diff * lightColor;
vec3 color = texture2D(texture, uv).rgb;
vec3 result = (ambient + diffuse) * color;
gl_FragColor = vec4(result, 1.0);
}
I am completely stuck, so any suggestions are welcome :)
P.S. I am also trying to get my lighting to look better, so any tips on that would be welcome as well.
Your code is assuming values for the attribute locations, which are the values used as the first argument to glVertexAttribPointer() and glEnableVertexAttribArray(). For example here:
f->glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8*sizeof(float), 0);
f->glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 8*sizeof(float), (void *) (sizeof(float) * 3));
f->glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8*sizeof(float), (void *) (sizeof(float) * 6));
you're assuming that the positions have location 0, the normals location 1, and the texture coordinates location 2.
This is not guaranteed by anything you have currently in your code. The order of the attribute declarations in the GLSL code does not define the location assignment. For example from the OpenGL 3.2 spec:
When a program is linked, any active attributes without a binding specified through BindAttribLocation will automatically be bound to vertex attributes by the GL.
Note that this does not specify how the automatic assignment of the locations is done. This means that it's implementation dependent.
To fix this, there are two approaches:
You can call glBindAttribLocation() for all your attributes before the shader program is linked.
You can query the automatically assigned locations by calling glGetAttribLocation() after the program is linked.
In newer OpenGL versions (GLSL 3.30 and later, which is the version matching OpenGL 3.3), you also have the option to specify the location directly in the GLSL code, using qualifiers of the form layout(location=...).
None of these options has any major advantages over the others. Just use the one that works best based on your preferences and software architecture.
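A sketch of the first two approaches, using the attribute names from your shader (programId stands for the raw GL program object; with Qt's QOpenGLShaderProgram you can get it via programId(), or use its bindAttributeLocation()/attributeLocation() wrappers instead):
// Approach 1: choose the locations yourself, before the program is linked.
f->glBindAttribLocation(programId, 0, "vertex_modelspace");
f->glBindAttribLocation(programId, 1, "normal_in");
f->glBindAttribLocation(programId, 2, "uv_in");
// ...link the program after these calls.

// Approach 2: after linking, ask GL which locations it assigned.
GLint posLoc    = f->glGetAttribLocation(programId, "vertex_modelspace");
GLint normalLoc = f->glGetAttribLocation(programId, "normal_in");
GLint uvLoc     = f->glGetAttribLocation(programId, "uv_in");
// ...and pass these values to glVertexAttribPointer()/glEnableVertexAttribArray().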
I am doing some basic OpenGL with Qt in order to try and get some simple geometry to render. I've created a cube between (-1,-1,-1) and (1,1,1) and rotated it so that the overall shape is recognisable, and I can see it if I use an orthographic projection matrix:
QMatrix4x4 orthographicMatrix(float top, float bottom, float left, float right, float near, float far)
{
return QMatrix4x4((2.0f)/(right-left), 0, 0, -(right+left)/(right-left),
0, (2.0f)/(top-bottom), 0, -(top+bottom)/(top-bottom),
0, 0, -(2.0f)/(far-near), -(far+near)/(far-near),
0, 0, 0, 1);
}
...
QMatrix4x4 Projection = orthographicMatrix(2, -2, 2, -2, 0.01f, 2);
The formula for generating orthographic/projection matrices is taken from https://solarianprogrammer.com/2013/05/22/opengl-101-matrices-projection-view-model/
This matrix allows me to see the cube if I translate the camera to 1 on Z:
However, if I use a perspective matrix (again using the formula from the linked page):
QMatrix4x4 perspectiveMatrix(float fov, float aspectRatio, float near, float far)
{
float top = near * qTan((M_PI/180.0f) * (fov/2.0f));
float bottom = -top;
float right = top * aspectRatio;
float left = -right;
return QMatrix4x4((2.0f-near)/(right-left), 0, (right+left)/(right-left), 0,
0, (2.0f-near)/(top-bottom), (top+bottom)/(top-bottom), 0,
0, 0, -(far+near)/(far-near), -(2.0f-far-near)/(far-near),
0, 0, -1, 0);
}
...
QMatrix4x4 Projection = perspectiveMatrix(60.0f, (float)width()/(float)height(), 0.01f, 2);
I get no geometry displaying at all from the same camera position:
I've checked the code many times and can't work out why nothing shows up. I gather it must be something to do with the perspective matrix but my code seems to follow exactly what is specified on the Solarian Programmer page. What could be wrong here?
For reference, my shaders are:
// Vertex
#version 330 core
in vec3 vertexPosition_modelspace;
uniform mat4 MVP;
void main()
{
vec4 v = vec4(vertexPosition_modelspace,1);
gl_Position = MVP * v;
}
// Fragment
#version 330 core
out vec3 color;
float remap(float inp)
{
while (inp>200.0) inp-=200.0;
return inp/200.0;
}
void main()
{
color = vec3(remap(gl_FragCoord.x), remap(gl_FragCoord.y), 0);
}
It seems you made some mistakes when implementing the perspective matrix from the reference.
For example, the first entry is (2.0f-near)/(right-left) in your code while it is (2.0f * near)/(right-left) on the website. Similar errors are in other entries of the matrix as well, e.g. -(2.0f-far-near)/(far-near) should be -(2.0f * far * near)/(far-near).
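For reference, this is roughly what the function looks like with those terms corrected (a sketch based on the standard OpenGL perspective matrix, following your row-major QMatrix4x4 constructor; untested):
QMatrix4x4 perspectiveMatrix(float fov, float aspectRatio, float near, float far)
{
    float top = near * qTan((M_PI/180.0f) * (fov/2.0f));
    float bottom = -top;
    float right = top * aspectRatio;
    float left = -right;
    // Every "2.0f - near" becomes "2.0f * near", and the last column uses 2 * far * near.
    return QMatrix4x4((2.0f*near)/(right-left), 0, (right+left)/(right-left), 0,
                      0, (2.0f*near)/(top-bottom), (top+bottom)/(top-bottom), 0,
                      0, 0, -(far+near)/(far-near), -(2.0f*far*near)/(far-near),
                      0, 0, -1, 0);
}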
I'm having an issue drawing multiple point lights in my scene. I am working on a simple maze-style game in OpenGL, where the maze is randomly generated. Each "room" in the maze is represented by a Room struct, like so:
struct Room
{
int x, z;
bool already_visited, n, s, e, w;
GLuint vertex_buffer, texture, uv_buffer, normal_buffer;
std::vector<glm::vec3>vertices, normals;
std::vector<glm::vec2>uvs;
glm::vec3 light_pos; //Stores the position of a light in the room
};
Each room has a light in it; the position of this light is stored in light_pos. This light is used in a simple per-vertex shader, like so:
layout(location = 0) in vec3 pos;
layout(location = 1) in vec2 uv_coords;
layout(location = 2) in vec3 normal;
uniform mat4 mvpMatrix;
uniform mat4 mvMatrix;
uniform vec3 lightpos;
out vec2 vs_uv;
out vec3 vs_normal;
out vec3 color;
void main()
{
gl_Position = mvpMatrix * vec4(pos,1.0);
vs_normal = normal;
vs_uv = uv_coords;
vec3 lightVector = normalize(lightpos - pos);
float diffuse = clamp(dot(normal,lightVector),0.0,1.0);
color = vec3(diffuse,diffuse,diffuse);
}
My fragment shader looks like this (ignore the "vs_normal", it is unused for now):
in vec2 vs_uv;
in vec3 vs_normal;
in vec3 color;
uniform sampler2D tex;
out vec4 frag_color;
void main()
{
frag_color = vec4(color,1.0) * texture(tex,vs_uv).rgba;
}
And my drawing code looks like this:
mat4 mvMatrix = view_matrix*model_matrix;
mat4 mvpMatrix = projection_matrix * mvMatrix;
glBindVertexArray(vertexBufferObjectID);
glUseProgram(shaderProgram);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
for (int x = 0; x < NUM_ROOMS_X; x++)
{
    for (int z = 0; z < NUM_ROOMS_Z; z++)
    {
        //int x = int(std::round(position.x / ROOM_SIZE_X_MAX));
        //int z = int(std::round(position.z / ROOM_SIZE_Z_MAX));
        Room rm = room_array[x][z];
        glBindBuffer(GL_ARRAY_BUFFER, rm.vertex_buffer);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*) 0);
        glBindBuffer(GL_ARRAY_BUFFER, rm.uv_buffer);
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*) 0);
        glBindBuffer(GL_ARRAY_BUFFER, rm.normal_buffer);
        glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 0, (void*) 0);
        glUniformMatrix4fv(mvpMatrixID, 1, GL_FALSE, &mvpMatrix[0][0]);
        glUniformMatrix4fv(mvMatrixID, 1, GL_FALSE, &mvMatrix[0][0]);
        glUniform3fv(light_ID, 3, &rm.light_pos[0]); //Here is where I'm setting the new light position. It looks like this is ignored
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, rm.texture);
        glUniform1i(texture_ID, 0);
        glDrawArrays(GL_QUADS, 0, rm.vertices.size());
    }
}
glUseProgram(0);
glBindVertexArray(0);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(2);
glBindTexture(GL_TEXTURE_2D, 0);
However, here is what the result looks like (I've modified my drawing code to draw a box where each light is located, and I've circled the room at position (0,0)):
http://imgur.com/w4uPMD6
As you can see, it looks like only the light at position (0,0) affects any of the rooms on the map, the other lights are simply ignored. I know that the lights are positioned correctly, because the boxes I use to show the positions are correct. I think even though I'm setting the new light_pos, it isn't going through for some reason. Any ideas?
One thing that you are doing, which is not very common, is to pass the light position as a vertex attribute. Optimally, you should pass it to the shader as a uniform variable, just as you do with the matrices. But I doubt that is the problem here.
I believe your problem is that you are doing the light calculations in different spaces. The vertices of the surfaces that you draw are in object/model space, while, I'm guessing, your light is located at a point defined in world space. Try multiplying your light position by the inverse of the model matrix you are applying to the vertices. I'm not familiar with GLM, but I figure there must be an inverse() function in it:
glm::vec3 light_pos_object_space = glm::vec3(glm::inverse(model_matrix) * glm::vec4(rm.light_pos, 1.0f));
glVertexAttrib3fv(light_ID, &light_pos_object_space[0]);
Figured out my problem. I was calling this function:
glUniform3fv(light_ID, 3, &rm.light_pos[0]);
When I should have been calling this:
glUniform3fv(light_ID, 1, &rm.light_pos[0]);
The second argument of glUniform3fv is the number of array elements to upload; since lightpos is a single vec3 and not an array, passing 3 generates GL_INVALID_OPERATION and the call is ignored, so the light position was never actually set.