OpenGL - Getting camera to move - C++

I'm having trouble with my OpenGL game: I can't get the camera to move.
I am unable to use GLFW, GLUT, or gluLookAt(). Here is my code; what's wrong?
P.S. Everything except the camera movement works, meaning the game plays perfectly; I just can't move the camera.
My Camera Code:
#include "SpriteRenderer.h"
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
class Camera
{
private:
Shader shader;
GLfloat angle = -90.f;
glm::vec3 cameraFront = glm::vec3(0.0f, 0.0f, -1.0f),
cameraPosition = glm::vec3(0.0f, 0.0f, 0.1f),
cameraUp = glm::vec3(0.0f, 1.0f, 0.0f);
glm::mat4 viewMatrix;
// recompute the view matrix from the camera variables
void updateMatrix()
{
viewMatrix = glm::lookAt(cameraPosition, cameraPosition + cameraFront, cameraUp);
}
// default constructor
void defaultNew()
{
cameraPosition = glm::vec3(0.0f, 0.0f, 0.1f);
cameraFront = glm::vec3(0.0f, 0.0f, -1.0f);
cameraUp = glm::vec3(0.0f, 1.0f, 0.0f);
updateMatrix();
}
public:
Camera() { defaultNew(); }
Camera(Shader &shader) { this->shader = shader; defaultNew(); }
glm::mat4 GetViewMatrix() const
{
// if your view matrix is always up-to-date, just return it directly
return viewMatrix;
}
// get functions
glm::vec3 GetCameraPosition() const { return cameraPosition; }
// .. same for Front and Up
// set functions
// call updateMatrix every time you update a variable
void SetCameraPosition(glm::vec3 pos)
{
cameraPosition = pos;
updateMatrix();
}
// .. same for Front and Up
// no need to use this-> all the time
virtual void Draw()
{
this->shader.Use();
this->shader.SetMatrix4("view", viewMatrix);
}
};
My Shader Code:
Shader &Use(){ glUseProgram(this->ID); return *this; }
void SetMatrix4(const GLchar *name, const glm::mat4 &matrix, GLboolean useShader = false)
{ if (useShader)this->Use(); glUniformMatrix4fv(glGetUniformLocation(this->ID, name), 1, GL_FALSE, glm::value_ptr(matrix)); }
My Game Code:
Camera *View;
projection2 = glm::perspective(glm::radians(44.0f), (float)this->Width / (float)this->Width, 0.1f, 100.0f);
AssetController::LoadShader("../src/Shaders/Light.vert", "../src/Shaders/Light.frag", "light");
AssetController::GetShader("light").SetMatrix4("projection", projection2);
View = new Camera(AssetController::GetShader("light"));
(...)
GLfloat velocity = playerSpeed * deltaTime;
glm::vec3 camPosition;
// Update Players Position
if (movingLeft)
{
if (Player->Position.x >= 0)
{
Player->Position.x -= velocity;
if (Ball->Stuck)
Ball->Position.x -= velocity;
camPosition = View->GetCameraPosition();
camPosition.x -= velocity / 2;
View->SetCameraPosition(camPosition);
}
}
else if (movingRight)
{
if (Player->Position.x <= this->Width - Player->Size.x)
{
Player->Position.x += velocity;
if (Ball->Stuck)
Ball->Position.x += velocity;
camPosition = View->GetCameraPosition();
camPosition.x += velocity / 2;
View->SetCameraPosition(camPosition);
}
}
(...)
GameOver->Draw(*Renderer);
View->Draw();
My Shaders:
.vert:
#version 440 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec2 aTexCoord;
out vec2 TexCoord;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
gl_Position = projection * view * model * vec4(aPos, 1.0f);
TexCoord = vec2(aTexCoord.x, aTexCoord.y);
}
.frag:
#version 440 core
out vec4 FragColor;
in vec2 TexCoord;
// texture samplers
uniform sampler2D texture1;
uniform sampler2D texture2;
void main()
{
// linearly interpolate between both textures (80% container, 20% awesomeface)
FragColor = mix(texture(texture1, TexCoord), texture(texture2, TexCoord), 0.2);
}

The problem is that you only update the local position variable cameraPosition, and not the view matrix, which is what is actually passed to OpenGL during rendering.
It is also a bad habit to make the camera variables and matrix public, as they can be modified incorrectly or out of sync (as you are doing). Instead, you could write a pair of get/set functions:
class Camera
{
private:
Shader shader;
GLfloat angle = -90.f;
glm::vec3 cameraFront = glm::vec3(0.0f, 0.0f, -1.0f),
cameraPosition = glm::vec3(0.0f, 0.0f, 0.1f),
cameraUp = glm::vec3(0.0f, 1.0f, 0.0f);
glm::mat4 viewMatrix;
// recompute the view matrix from the camera variables
void updateMatrix()
{
viewMatrix = glm::lookAt(cameraPosition, cameraPosition + cameraFront, cameraUp);
}
// default constructor
void defaultNew()
{
cameraPosition = glm::vec3(0.0f, 0.0f, 0.1f);
cameraFront = glm::vec3(0.0f, 0.0f, -1.0f);
cameraUp = glm::vec3(0.0f, 1.0f, 0.0f);
updateMatrix();
}
public:
Camera() {
defaultNew();
}
Camera(Shader &shader) {
this->shader = shader;
defaultNew();
}
glm::mat4 GetViewMatrix() const
{
// if your view matrix is always up-to-date, just return it directly
return viewMatrix;
}
// get functions
glm::vec3 GetCameraPosition() const { return cameraPosition; }
// .. same for Front and Up
// set functions
// call updateMatrix every time you update a variable
void SetCameraPosition(glm::vec3 p)
{
cameraPosition = p;
updateMatrix();
}
// .. same for Front and Up
// no need to use this-> all the time
virtual void Draw()
{
shader.Use();
shader.SetMatrix4("view", viewMatrix);
}
};
And then when you update the camera position, simply use these functions instead of the exposed variables:
View->SetCameraPosition(View->GetCameraPosition() + glm::vec3(velocity / 2.0f, 0.0f, 0.0f)); // negate when moving left
This will make sure that the draw calls always use the updated view matrix instead of the initial one (which was the case before and the source of your troubles).
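The same "setters keep the cached matrix in sync" pattern can be sketched in isolation. This is a minimal stand-in, not the real class: it uses a translation-only substitute for the glm::lookAt view matrix so it runs without GLM or OpenGL.

```cpp
#include <cassert>

// Minimal sketch of the pattern: every setter recomputes the cached
// view data, so whatever Draw() reads is never stale.
struct Vec3 { float x, y, z; };

class CameraSketch {
    Vec3 position{0.f, 0.f, 0.f};
    // Cached "view translation": a camera at +p sees the world shifted by -p.
    // (Stand-in for the real cached glm::lookAt view matrix.)
    Vec3 viewTranslation{0.f, 0.f, 0.f};
    void updateMatrix() { viewTranslation = {-position.x, -position.y, -position.z}; }
public:
    Vec3 GetCameraPosition() const { return position; }
    Vec3 GetViewTranslation() const { return viewTranslation; }
    void SetCameraPosition(Vec3 p) { position = p; updateMatrix(); } // never skip the update
};
```

If the setter forgot to call updateMatrix(), the cached view data would stay at its initial value forever, which is exactly the bug described above.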

Related

Directional Light Shadow Mapping Issues

So I've been trying to re-implement shadow mapping in my engine using directional lights, but I have to throw shade on my progress so far (see what I did there?).
I had it working in a previous commit a while back, but I refactored my engine and am now trying to redo some of the shadow mapping. I wouldn't say I'm the best at drawing shadows, so I thought I'd try to get some help.
Basically my issue seems to stem from the calculation of the light space matrix (it seems a lot of people have the same issue). Initially I had a hardcoded projection matrix and a simple view matrix for the light, like this:
void ZLight::UpdateLightspaceMatrix()
{
// …
if (type == ZLightType::Directional) {
auto lightDir = glm::normalize(glm::eulerAngles(Orientation()));
glm::mat4 lightV = glm::lookAt(lightDir, glm::vec3(0.f), WORLD_UP);
glm::mat4 lightP = glm::ortho(-50.f, 50.f, -50.f, 50.f, -100.f, 100.f);
lightspaceMatrix_ = lightP * lightV;
}
// …
}
This then gets passed unmodified as a shader uniform, by which I multiply the vertex world-space positions. A few months ago this was working, but with the recent refactor I did on the engine it no longer shows anything. The output to the shadow map looks like this
And my scene isn't showing any shadows, at least not where it matters
Aside from this, after hours of scouring posts and articles about how to implement a dynamic frustum for the light that encompasses the scene's contents at any given time, I also implemented a simple solution: transform the camera's frustum into light space by taking an NDC cube, transforming it with the inverse camera VP matrix, and computing a bounding box from the result, which gets passed to glm::ortho to make the light's projection matrix
void ZLight::UpdateLightspaceMatrix()
{
static std::vector <glm::vec4> ndcCube = {
glm::vec4{ -1.0f, -1.0f, -1.0f, 1.0f },
glm::vec4{ 1.0f, -1.0f, -1.0f, 1.0f },
glm::vec4{ -1.0f, 1.0f, -1.0f, 1.0f },
glm::vec4{ 1.0f, 1.0f, -1.0f, 1.0f },
glm::vec4{ -1.0f, -1.0f, 1.0f, 1.0f },
glm::vec4{ 1.0f, -1.0f, 1.0f, 1.0f },
glm::vec4{ -1.0f, 1.0f, 1.0f, 1.0f },
glm::vec4{ 1.0f, 1.0f, 1.0f, 1.0f }
};
if (type == ZLightType::Directional) {
auto activeCamera = Scene()->ActiveCamera();
auto lightDir = normalize(glm::eulerAngles(Orientation()));
glm::mat4 lightV = glm::lookAt(lightDir, glm::vec3(0.f), WORLD_UP);
lightspaceRegion_ = ZAABBox();
for (const auto& corner : ndcCube) {
auto invVPMatrix = glm::inverse(activeCamera->ProjectionMatrix() * activeCamera->ViewMatrix());
auto transformedCorner = lightV * invVPMatrix * corner;
transformedCorner /= transformedCorner.w;
lightspaceRegion_.minimum.x = glm::min(lightspaceRegion_.minimum.x, transformedCorner.x);
lightspaceRegion_.minimum.y = glm::min(lightspaceRegion_.minimum.y, transformedCorner.y);
lightspaceRegion_.minimum.z = glm::min(lightspaceRegion_.minimum.z, transformedCorner.z);
lightspaceRegion_.maximum.x = glm::max(lightspaceRegion_.maximum.x, transformedCorner.x);
lightspaceRegion_.maximum.y = glm::max(lightspaceRegion_.maximum.y, transformedCorner.y);
lightspaceRegion_.maximum.z = glm::max(lightspaceRegion_.maximum.z, transformedCorner.z);
}
glm::mat4 lightP = glm::ortho(lightspaceRegion_.minimum.x, lightspaceRegion_.maximum.x,
lightspaceRegion_.minimum.y, lightspaceRegion_.maximum.y,
-lightspaceRegion_.maximum.z, -lightspaceRegion_.minimum.z);
lightspaceMatrix_ = lightP * lightV;
}
}
What results is the same output in my scene (no shadows anywhere) and the following shadow map
I've checked the light space matrix calculations over and over, and tried tweaking values dozens of times, including all manner of lightV matrices using the glm::lookAt function, but I never get the desired output.
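(Aside: the bounding-box step in UpdateLightspaceMatrix above can at least be checked in isolation. Below is a hedged CPU sketch of just that step, with a plain 4x4 array standing in for the combined lightV * inverse(VP) matrix and hypothetical names, so it runs without GLM.)

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cfloat>

// Transform the 8 NDC cube corners by a 4x4 matrix, divide by w,
// and accumulate an axis-aligned bounding box of the results.
struct Vec4 { float x, y, z, w; };
struct AABB { float minX = FLT_MAX, minY = FLT_MAX, minZ = FLT_MAX,
                    maxX = -FLT_MAX, maxY = -FLT_MAX, maxZ = -FLT_MAX; };

// mat is column-major like GLM: mat[col][row].
Vec4 transform(const float mat[4][4], const Vec4& v) {
    Vec4 r{};
    r.x = mat[0][0]*v.x + mat[1][0]*v.y + mat[2][0]*v.z + mat[3][0]*v.w;
    r.y = mat[0][1]*v.x + mat[1][1]*v.y + mat[2][1]*v.z + mat[3][1]*v.w;
    r.z = mat[0][2]*v.x + mat[1][2]*v.y + mat[2][2]*v.z + mat[3][2]*v.w;
    r.w = mat[0][3]*v.x + mat[1][3]*v.y + mat[2][3]*v.z + mat[3][3]*v.w;
    return r;
}

AABB lightSpaceBounds(const float mat[4][4]) {
    static const std::array<Vec4, 8> ndcCube = {{
        {-1,-1,-1,1}, {1,-1,-1,1}, {-1,1,-1,1}, {1,1,-1,1},
        {-1,-1, 1,1}, {1,-1, 1,1}, {-1,1, 1,1}, {1,1, 1,1}
    }};
    AABB box;
    for (const Vec4& corner : ndcCube) {
        Vec4 p = transform(mat, corner);
        p.x /= p.w; p.y /= p.w; p.z /= p.w;   // homogeneous divide
        box.minX = std::min(box.minX, p.x); box.maxX = std::max(box.maxX, p.x);
        box.minY = std::min(box.minY, p.y); box.maxY = std::max(box.maxY, p.y);
        box.minZ = std::min(box.minZ, p.z); box.maxZ = std::max(box.maxZ, p.z);
    }
    return box;
}
```

With the identity matrix this should reproduce the NDC cube itself, which makes a convenient sanity check before plugging in a real inverse VP matrix.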
For more reference, here's my shadow vertex shader
#version 450 core
#include "Shaders/common.glsl" //! #include "../common.glsl"
layout (location = 0) in vec3 position;
layout (location = 5) in ivec4 boneIDs;
layout (location = 6) in vec4 boneWeights;
layout (location = 7) in mat4 instanceM;
uniform mat4 P_lightSpace;
uniform mat4 M;
uniform mat4 Bones[MAX_BONES];
uniform bool rigged = false;
uniform bool instanced = false;
void main()
{
vec4 pos = vec4(position, 1.0);
if (rigged) {
mat4 boneTransform = Bones[boneIDs[0]] * boneWeights[0];
boneTransform += Bones[boneIDs[1]] * boneWeights[1];
boneTransform += Bones[boneIDs[2]] * boneWeights[2];
boneTransform += Bones[boneIDs[3]] * boneWeights[3];
pos = boneTransform * pos;
}
gl_Position = P_lightSpace * (instanced ? instanceM : M) * pos;
}
my soft shadow implementation
float PCFShadow(VertexOutput vout, sampler2D shadowMap) {
vec3 projCoords = vout.FragPosLightSpace.xyz / vout.FragPosLightSpace.w;
if (projCoords.z > 1.0)
return 0.0;
projCoords = projCoords * 0.5 + 0.5;
// PCF
float shadow = 0.0;
float bias = max(0.05 * (1.0 - dot(vout.FragNormal, vout.FragPosLightSpace.xyz - vout.FragPos.xzy)), 0.005);
for (int i = 0; i < 4; ++i) {
float z = texture(shadowMap, projCoords.xy + poissonDisk[i]).r;
shadow += z < (projCoords.z - bias) ? 1.0 : 0.0;
}
return shadow / 4;
}
...
...
float shadow = PCFShadow(vout, shadowSampler0);
vec3 color = (ambient + (1.0 - shadow) * (diffuse + specular)) + materials[materialIndex].emission;
FragColor = vec4(color, albd.a);
and my camera view and projection matrix getters
glm::mat4 ZCamera::ProjectionMatrix()
{
glm::mat4 projectionMatrix(1.f);
auto scene = Scene();
if (!scene) return projectionMatrix;
if (cameraType_ == ZCameraType::Orthographic)
{
float zoomInverse_ = 1.f / (2.f * zoom_);
glm::vec2 resolution = scene->Domain()->Resolution();
float left = -((float)resolution.x * zoomInverse_);
float right = -left;
float bottom = -((float)resolution.y * zoomInverse_);
float top = -bottom;
projectionMatrix = glm::ortho(left, right, bottom, top, -farClippingPlane_, farClippingPlane_);
}
else
{
projectionMatrix = glm::perspective(glm::radians(zoom_),
(float)scene->Domain()->Aspect(),
nearClippingPlane_, farClippingPlane_);
}
return projectionMatrix;
}
glm::mat4 ZCamera::ViewMatrix()
{
return glm::lookAt(Position(), Position() + Front(), Up());
}
Been trying all kinds of small changes, but I still don't get correct shadows, and I don't know what I'm doing wrong here. The closest I've gotten is by scaling the lightspaceRegion_ bounds by a factor of 10 in the light space matrix calculations (only in X and Y), but the shadows are still nowhere near correct.
The camera near and far clipping planes are set to reasonable values (0.01 and 100.0, respectively), the camera zoom is 45.0 degrees, and scene->Domain()->Aspect() just returns the width/height aspect ratio of the framebuffer's resolution. My shadow map resolution is set to 2048x2048.
Any help here would be much appreciated. Let me know if I left out any important code or info.

Shadow map - some shadows get cut off

I'm working on a voxel engine, and my shadow map has some strange behavior: when I turn my directional light to a certain angle, some shadows are cut off.
Here is a pic of how it looks, and my shadow map.
This is how I render my shadows:
glDisable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glViewport(0, 0, SHADOW_WIDTH_, SHADOW_HEIGHT_);
glBindFramebuffer(GL_FRAMEBUFFER, depthMapFBO_);
glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
glClear(GL_DEPTH_BUFFER_BIT);
glm::mat4 lightProjection, lightView;
float near_plane = 1.0f, far_plane = 200.0f;
//lightProjection = glm::perspective(glm::radians(45.0f), (GLfloat)SHADOW_WIDTH_ / (GLfloat)SHADOW_HEIGHT_, near_plane, far_plane); // note that if you use a perspective projection matrix you'll have to change the light position as the current light position isn't enough to reflect the whole scene
lightProjection = glm::ortho<float>(-100.0f, 100.0f, -100.0f, 100.0f, near_plane, far_plane);
glm::vec3 lightPos;
lightPos.x = LightControl_->GetLightPosition(LightControl_->GetDirectionalLightNames()[0]->ToString()).X;
lightPos.y = LightControl_->GetLightPosition(LightControl_->GetDirectionalLightNames()[0]->ToString()).Y;
lightPos.z = LightControl_->GetLightPosition(LightControl_->GetDirectionalLightNames()[0]->ToString()).Z;
glm::vec3 target;
target.x = LightControl_->GetLightDirection(LightControl_->GetDirectionalLightNames()[0]->ToString()).X;
target.y = LightControl_->GetLightDirection(LightControl_->GetDirectionalLightNames()[0]->ToString()).Y;
target.z = LightControl_->GetLightDirection(LightControl_->GetDirectionalLightNames()[0]->ToString()).Z;
lightView = glm::lookAt(lightPos, target+lightPos, glm::vec3(0.0, 1.0, 0.0));
*lightSpaceMatrix = lightProjection * lightView;
// render scene from light's point of view
shaderProgram.Use();
glm::mat4 model = glm::mat4(1.0f);
glUniformMatrix4fv(shaderProgram.GetUniform("lightSpaceMatrix"), 1, GL_FALSE, glm::value_ptr(*lightSpaceMatrix));
glUniformMatrix4fv(shaderProgram.GetUniform("u_model"), 1, GL_FALSE, glm::value_ptr(model));
glViewport(0, 0, SHADOW_WIDTH_, SHADOW_HEIGHT_);
glBindFramebuffer(GL_FRAMEBUFFER, depthMapFBO_);
glClear(GL_DEPTH_BUFFER_BIT);
auto tmp = GraphicControl_->GetVoxelBuffer("test");
tmp.ClearVoxelRenderList();
tmp.VoxelBufferData(chunksForRendering);
RenderVoxelBuffer(tmp,shaderProgram);
glBindFramebuffer(GL_FRAMEBUFFER, GraphicControl_->GetDefaultFrameBuffer()->getFBO());
glViewport(0, 0, 1920, 1080);
glEnable(GL_CULL_FACE);
shadow map vert:
#version 450 core
layout(location = 0) in uint all_voxel_data;
out VsOut {
mat4 lightSpaceMatrix;
mat4 model;
vec3 u_chunk_location;
} vs_out;
uniform mat4 lightSpaceMatrix;
uniform mat4 model;
uniform vec3 u_chunk_location;
const int AllVoxelDataMask[6] = {
0xF8000000, // x-Position
0x07C00000, // y-Position
0x003E0000, // z-Position
0x0001F800, // culled Faces
0x00000700, // Render Options
0x000000FF // color Index for Palette
};
vec4 DecodePosition(uint encoded_position)
{
float x = float((encoded_position & AllVoxelDataMask[0]) >> 27) + u_chunk_location.x * 32;
float y = float((encoded_position & AllVoxelDataMask[1]) >> 22) + u_chunk_location.y * 32;
float z = float((encoded_position & AllVoxelDataMask[2]) >> 17) + u_chunk_location.z * 32;
return vec4(x, y, z, 1.0);
}
void main()
{
vs_out.lightSpaceMatrix=lightSpaceMatrix;
vs_out.u_chunk_location=u_chunk_location;
vs_out.model=model;
gl_Position = DecodePosition(all_voxel_data);
}
shadow map geom:
#version 450 core
layout(points) in;
layout(triangle_strip, max_vertices = 36) out;
#define halfVoxelSize 0.5
// 2-------6
// /| /|
// 3-------7 |
// | 0-----|-4
// |/ |/
// 1-------5
const vec4 VoxelVertices[8] = {
vec4(-halfVoxelSize, -halfVoxelSize, -halfVoxelSize, 0.0f), // Back-Bot-Left
vec4(-halfVoxelSize, -halfVoxelSize, halfVoxelSize, 0.0f), // Front-Bot-Left
vec4(-halfVoxelSize, halfVoxelSize, -halfVoxelSize, 0.0f), // Back-Top-Left
vec4(-halfVoxelSize, halfVoxelSize, halfVoxelSize, 0.0f), // Front-Top-Left
vec4(halfVoxelSize, -halfVoxelSize, -halfVoxelSize, 0.0f), // Back-Bot-Right
vec4(halfVoxelSize, -halfVoxelSize, halfVoxelSize, 0.0f), // Front-Bot-Right
vec4(halfVoxelSize, halfVoxelSize, -halfVoxelSize, 0.0f), // Back-Top-Right
vec4(halfVoxelSize, halfVoxelSize, halfVoxelSize, 0.0f) // Front-Top-Right
};
const vec3 VoxelNormals[6] = {
vec3(-1.0f, 0.0f, 0.0f), // Left-X-Axis
vec3( 1.0f, 0.0f, 0.0f), // Right-X-Axis
vec3( 0.0f, -1.0f, 0.0f), // Bot-Y-Axis
vec3( 0.0f, 1.0f, 0.0f), // Top-Y-Axis
vec3( 0.0f, 0.0f, -1.0f), // Back-Z-Axis
vec3( 0.0f, 0.0f, 1.0f) // Front-Z-Axis
};
in VsOut {
mat4 lightSpaceMatrix;
mat4 model;
vec3 u_chunk_location;
} gs_in[];
out GsOut {
mat4 lightSpaceMatrix;
mat4 model;
vec3 u_chunk_location;
} gs_out;
void AddTriangle(vec4 a, vec4 b, vec4 c)
{
gl_Position = gs_in[0].lightSpaceMatrix * a;
EmitVertex();
gl_Position = gs_in[0].lightSpaceMatrix * b;
EmitVertex();
gl_Position = gs_in[0].lightSpaceMatrix * c;
EmitVertex();
EndPrimitive();
}
void AddQuad(vec4 a, vec4 b, vec4 c, vec4 d, vec3 normal)
{
vec4 center = gl_in[0].gl_Position; // Access the first voxel, which is also our only voxel, since gl_Points is used.
gs_out.model=gs_in[0].model;
if (dot(-(gs_in[0].lightSpaceMatrix * center), (gs_in[0].lightSpaceMatrix * vec4(normal, 0.0f))) <= 0.0)
return;
// a-------d
// | \ |
// | \ |
// b-------c
AddTriangle(center + a, center + b, center + c);
AddTriangle(center + c, center + d, center + a);
}
void main()
{
// 0-------3
// | |
// | |
// 1-------2
AddQuad(VoxelVertices[3], VoxelVertices[1], VoxelVertices[5], VoxelVertices[7], VoxelNormals[5]); // Front-Surface
AddQuad(VoxelVertices[2], VoxelVertices[0], VoxelVertices[1], VoxelVertices[3], VoxelNormals[0]); // Left-Surface
AddQuad(VoxelVertices[7], VoxelVertices[5], VoxelVertices[4], VoxelVertices[6], VoxelNormals[1]); // Right-Surface
AddQuad(VoxelVertices[2], VoxelVertices[3], VoxelVertices[7], VoxelVertices[6], VoxelNormals[3]); // Top-Surface
AddQuad(VoxelVertices[1], VoxelVertices[0], VoxelVertices[4], VoxelVertices[5], VoxelNormals[2]); // Bot-Surface
AddQuad(VoxelVertices[6], VoxelVertices[4], VoxelVertices[0], VoxelVertices[2], VoxelNormals[4]); // Back-Surface
}
shadow map frag
#version 450 core
out vec4 outColor;
void main()
{
//gl_FragDepth = gl_FragCoord.z;
}
Does anyone know what's wrong?
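(As a side check, the bit-unpacking in DecodePosition can be mirrored on the CPU to verify the 5-bit-per-axis packing. The encode helper below is hypothetical, written from the masks in the shader; only the x/y/z position fields are covered.)

```cpp
#include <cassert>
#include <cstdint>

// Same masks as AllVoxelDataMask[0..2] in the shader.
constexpr uint32_t kMaskX = 0xF8000000; // bits 27..31
constexpr uint32_t kMaskY = 0x07C00000; // bits 22..26
constexpr uint32_t kMaskZ = 0x003E0000; // bits 17..21

// Hypothetical encoder: pack three 5-bit coordinates (0..31) into one uint.
uint32_t encodePosition(uint32_t x, uint32_t y, uint32_t z) {
    return (x << 27) | (y << 22) | (z << 17);
}

// CPU mirror of the shader's DecodePosition, before the chunk offset is added.
void decodePosition(uint32_t v, uint32_t& x, uint32_t& y, uint32_t& z) {
    x = (v & kMaskX) >> 27;
    y = (v & kMaskY) >> 22;
    z = (v & kMaskZ) >> 17;
}
```

A round trip through encode/decode should return the original coordinates; if it doesn't, the masks and shifts in the shader disagree.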

OpenGL Projection Matrix showing Orthographic

I got an orthographic camera working; however, I wanted to try to implement a perspective camera so I can do some parallax effects later down the line. I am having some issues with the implementation: it seems like the depth is not working correctly. I am rotating a 2D image along the x-axis to simulate it lying somewhat down, so I can see the projection matrix working. It is still showing as orthographic, though.
Here is some of my code:
CameraPersp::CameraPersp() :
_camPos(0.0f,0.0f,0.0f), _modelMatrix(1.0f), _viewMatrix(1.0f), _projectionMatrix(1.0f)
A function called init sets up the matrix variables:
void CameraPersp::init(int screenWidth, int screenHeight)
{
_screenHeight = screenHeight;
_screenWidth = screenWidth;
_modelMatrix = glm::translate(_modelMatrix, glm::vec3(0.0f, 0.0f, 0.0f));
_modelMatrix = glm::rotate(_modelMatrix, glm::radians(-55.0f), glm::vec3(1.0f, 0.0f, 0.0f));
_viewMatrix = glm::translate(_viewMatrix, glm::vec3(0.0f, 0.0f, -3.0f));
_projectionMatrix = glm::perspective(glm::radians(45.0f), static_cast<float>(_screenWidth) / _screenHeight, 0.1f, 100.0f);
}
Initializing a texture to be loaded with x, y, z, width, height, src:
_sprites.back()->init(-0.5f, -0.5f, 0.0f, 1.0f, 1.0f, "src/content/sprites/DungeonCrawlStoneSoupFull/monster/deep_elf_death_mage.png");
Sending in the matrices to the vertexShader:
GLint mLocation = _colorProgram.getUniformLocation("M");
glm::mat4 mMatrix = _camera.getMMatrix();
//glUniformMatrix4fv(mLocation, 1, GL_FALSE, &(mMatrix[0][0]));
glUniformMatrix4fv(mLocation, 1, GL_FALSE, glm::value_ptr(mMatrix));
GLint vLocation = _colorProgram.getUniformLocation("V");
glm::mat4 vMatrix = _camera.getVMatrix();
//glUniformMatrix4fv(vLocation, 1, GL_FALSE, &(vMatrix[0][0]));
glUniformMatrix4fv(vLocation, 1, GL_FALSE, glm::value_ptr(vMatrix));
GLint pLocation = _colorProgram.getUniformLocation("P");
glm::mat4 pMatrix = _camera.getPMatrix();
//glUniformMatrix4fv(pLocation, 1, GL_FALSE, &(pMatrix[0][0]));
glUniformMatrix4fv(pLocation, 1, GL_FALSE, glm::value_ptr(pMatrix));
Here is my vertex shader:
#version 460
//The vertex shader operates on each vertex
//input data from VBO. Each vertex is 2 floats
in vec3 vertexPosition;
in vec4 vertexColor;
in vec2 vertexUV;
out vec3 fragPosition;
out vec4 fragColor;
out vec2 fragUV;
//uniform mat4 MVP;
uniform mat4 M;
uniform mat4 V;
uniform mat4 P;
void main() {
//Set the x,y position on the screen
//gl_Position.xy = vertexPosition;
gl_Position = M * V * P * vec4(vertexPosition, 1.0);
//the z position is zero since we are 2d
//gl_Position.z = 0.0;
//indicate that the coordinates are nomalized
gl_Position.w = 1.0;
fragPosition = vertexPosition;
fragColor = vertexColor;
// opengl needs to flip the coordinates
fragUV = vec2(vertexUV.x, 1.0 - vertexUV.y);
}
I can see the image "squish" a little because it is still rendering the perspective as orthographic. If I remove the rotation on the x-axis, it is no longer squished because it isn't lying down at all. Any thoughts on what I am doing wrong? I can supply more info upon request, but I think I've included most of the meat of things.
Picture:
You shouldn't modify gl_Position.w:
gl_Position = M * V * P * vec4(vertexPosition, 1.0); // note: the conventional order is P * V * M
//indicate that the coordinates are nomalized <- not true
gl_Position.w = 1.0; // Now the perspective divisor is lost, and the projection isn't correct
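To see why overwriting w flattens the projection: after a perspective projection, w carries the view-space depth, and the hardware divides x, y, z by it. A minimal stand-in (not real GL, just the divide) shows the effect:

```cpp
#include <cassert>

// Toy model of the perspective divide: for a point at view-space depth z
// (negative in front of the camera), a perspective matrix effectively
// produces clip.w = -z, and the hardware computes x/w. Forcing w = 1.0
// skips the divide, so depth no longer shrinks distant points, i.e. the
// result looks orthographic.
float projectedX(float x, float z, bool forceWToOne) {
    float clipX = x;
    float clipW = forceWToOne ? 1.0f : -z;  // -z: depth ends up in w
    return clipX / clipW;                   // the perspective divide
}
```

With the divide intact, a point twice as far away lands at half the screen offset; with w forced to 1, near and far points project identically.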

Opengl - Triangle won't render?

I'm attempting to render a triangle using VBOs in OpenGL via C++.
First, I identify my variables:
CBuint _vao;
CBuint _vbo;
CBuint _ebo;
struct Vertex
{
CBfloat position[3];
};
Then, I set the positions for each vertex to form a geometrical triangle:
Vertex v[3];
v[0].position[0] = 0.5f;
v[0].position[1] = 0.5f;
v[0].position[2] = 0.0f;
v[1].position[0] = 0.5f;
v[1].position[1] = -0.5f;
v[1].position[2] = 0.0f;
v[2].position[0] = -0.5f;
v[2].position[1] = -0.5f;
v[2].position[2] = 0.0f;
Simple enough, right?
Then, I declare my indices for the EBO/IBO:
unsigned short i[] =
{
0, 1, 2
};
Now that I have all the attribute data needed for buffering, I bind the VAO as well as the VBOs:
// Generate vertex elements
glGenVertexArrays(1, &_vao);
glGenBuffers(1, &_vbo);
glGenBuffers(1, &_ebo);
// VAO
glBindVertexArray(_vao);
// VBO
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * 9, &v, GL_STATIC_DRAW);
// EBO
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned short) * 3, &i, GL_STATIC_DRAW);
// Location 0 - Positions
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), BUFFER_OFFSET(0));
glBindVertexArray(0);
Next, I render them:
glBindVertexArray(_vao);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, 0);
glBindVertexArray(0);
I then use the vertex shader:
#version 330 core
// Vertex attributes
layout(location = 0) in vec3 position;
// Node parses
out vec3 Fragpos;
// Uniforms
uniform mat4 model;
uniform mat4 projection;
uniform mat4 view;
// Vertex loop
void main()
{
gl_Position = projection * view * model * vec4(position, 1.0f);
Fragpos = vec3(model * vec4(position, 1.0f));
}
This simply calculates the model position as well as the camera view. The fragment shader is, again, quite simplistic:
#version 330 core
// Node parses
out vec4 color;
in vec3 Fragpos;
// Camera data
uniform vec3 view_loc;
// Global uniforms
uniform struct GLOBAL
{
float ambient_coefficient;
vec3 ambient_colour;
} _global;
// Main loop
void main(void)
{
color = vec4(1.0, 1.0, 1.0, 1.0);
}
The uniforms work perfectly fine, as I've been using this shader code in previous projects. So what could the problem be? The triangle simply does not render, and I can't think of anything that's causing this. Any ideas?
Edit: Just to narrow things down, I also use these variables to handle the model matrix that are being parsed to and from the vertex shader:
CBuint _u_model;
mat4 _model;
vec3 _position;
vec4 _rotation;
vec3 _scale;
Then inside the constructor, I initialize the variables like so:
_model = mat4::identity();
_position = vec3(0.0f, 0.0f, 0.0f);
_rotation = vec4(0.0f, 1.0f, 0.0f, 0.0f);
_scale = vec3(1.0f, 1.0f, 1.0f);
_u_model = glGetUniformLocation(shader->_program, "model");
And finally, I update the model matrix using this formula:
_model = translate(_position) *
rotate(_rotation.data[0], _rotation.data[1], _rotation.data[2], _rotation.data[3]) *
scale(_scale);
Edit 2: This is the camera class I use for the MVP:
class Camera : public object
{
private:
CBbool _director;
CBfloat _fov;
CBfloat _near;
CBfloat _far;
CBfloat _speed;
Math::vec3 _front;
Math::vec3 _up;
Math::mat4 _projection;
Math::mat4 _view;
CBuint _u_projection;
CBuint _u_view;
public:
Camera(Shader* shader, Math::vec3 pos, float fov, float n, float f, bool dir) : _speed(5.0f)
{
_model = Math::mat4::identity();
_projection = Math::mat4::identity();
_view = Math::mat4::identity();
_position = pos;
_fov = fov;
_near = n;
_far = f;
_director = dir;
_front = vec3(0.0f, 0.0f, -1.0f);
_up = vec3(0.0f, 1.0f, 0.0f);
_u_projection = glGetUniformLocation(shader->_program, "projection");
_u_view = glGetUniformLocation(shader->_program, "view");
_u_model = glGetUniformLocation(shader->_program, "view_loc");
}
~Camera() {}
inline CBbool isDirector() { return _director; }
inline void forward(double delta) { _position.data[2] -= _speed * (float)delta; }
inline void back(double delta) { _position.data[2] += _speed * (float)delta; }
inline void left(double delta) { _position.data[0] -= _speed * (float)delta; }
inline void right(double delta) { _position.data[0] += _speed * (float)delta; }
inline void up(double delta) { _position.data[1] += _speed * (float)delta; }
inline void down(double delta) { _position.data[1] -= _speed * (float)delta; }
virtual void update(double delta)
{
_view = Math::lookat(_position, _position + _front, _up);
_projection = Math::perspective(_fov, 900.0f / 600.0f, _near, _far);
}
virtual void render()
{
glUniformMatrix4fv(_u_view, 1, GL_FALSE, _view);
glUniformMatrix4fv(_u_projection, 1, GL_FALSE, _projection);
glUniform3f(_u_model, _position.data[0], _position.data[1], _position.data[2]);
}
};
As Amadeus mentioned, I simply had to use gl_Position = vec4(position, 1.0f); for it to render. No idea why, but now's the time to find out! Thanks for your time.

OpenGL Matrices and Shaders Confusion

I've been following a tutorial on modern OpenGL with the GLM library.
I'm on a segment where we introduce matrices for transforming models, positioning the camera, and adding perspective.
I've got a triangle:
const GLfloat vertexBufferData[] = {
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
};
I've got my shaders:
GLuint programID = loadShaders("testVertexShader.glsl",
"testFragmentShader.glsl");
I've got a model matrix that does no transformations:
glm::mat4 modelMatrix = glm::mat4(1.0f); /* Identity matrix */
I've got a camera matrix:
glm::mat4 cameraMatrix = glm::lookAt(
glm::vec3(4.0f, 4.0f, 3.0f), /*Camera position*/
glm::vec3(0.0f, 0.0f, 0.0f), /*Camera target*/
glm::vec3(0.0f, 1.0f, 0.0f) /*Up vector*/
);
And I've got a projection matrix:
glm::mat4 projectionMatrix = glm::perspective(
90.0f, /*FOV in degrees*/
4.0f / 3.0f, /*Aspect ratio*/
0.1f, /*Near clipping distance*/
100.0f /*Far clipping distance*/
);
Then I multiply all the matrices together to get the final matrix for the triangle I want to draw:
glm::mat4 finalMatrix = projectionMatrix
* cameraMatrix
* modelMatrix;
Then I send the matrix to GLSL (I think?):
GLuint matrixID = glGetUniformLocation(programID, "MVP");
glUniformMatrix4fv(matrixID, 1, GL_FALSE, &finalMatrix[0][0]);
Then I do shader stuff I don't understand very well:
/*vertex shader*/
#version 330 core
in vec3 vertexPosition_modelspace;
uniform mat4 MVP;
void main(){
vec4 v = vec4(vertexPosition_modelspace, 1);
gl_Position = MVP * v;
}
/*fragment shader*/
#version 330 core
out vec3 color;
void main(){
color = vec3(1, 1, 0);
}
Everything compiles and runs, but I see no triangle. I've moved the triangle and camera around, thinking maybe the camera was pointed the wrong way, but with no success. I was able to successfully get a triangle on the screen before we introduced matrices, but now, no triangle. The triangle should be at origin, and the camera is a few units away from origin, looking at origin.
Turns out, you need to send the matrix to the shader after you've bound the shader.
In other words, call glUniformMatrix4fv() after glUseProgram().
Lots of things could be your problem - try outputting a vec4 color instead, with alpha explicitly set to 1. One thing I often do as a sanity check is to have the vertex shader ignore all inputs, and to just output vertices directly, e.g. something like:
void main(){
if (gl_VertexID == 0) {
gl_Position = vec4(-1, -1, 0, 1);
} else if (gl_VertexID == 1) {
gl_Position = vec4(1, -1, 0, 1);
} else if (gl_VertexID == 2) {
gl_Position = vec4(0, 1, 0, 1);
}
}
If that works, then you can try adding your vertex position input back in. If that works, you can add your camera or projection matrices back in, etc.
More generally, remove things until something works and you understand why it works, then add parts back in until something breaks and you understand what caused it. Quite often I've been off by a sign, or in the order of multiplication.