How do I implement two translations and a scale operation in GLSL? - opengl

I am trying to implement the following transformation.
My original world-space coordinates are (2D) x=1586266800 and y=11812
I want:
the bottom left corner of the OpenGL image to represent coordinates (1586266800, 11800)
the top right corner of the OpenGL image to represent coordinates (1586267400, 11900)
In order to do that I plan to combine three transformation matrices:
Translate to the origin of coordinates x=1586266800 and y=11800
Scale to have a width of 600 and a height of 100
Translate again by -1.0f in x and y, so that the scaled range [0, 2] maps onto NDC [-1, 1], with the data origin at the bottom-left corner.
I use the following transformation matrices:
Translation Matrix:
| 1 0 0 tx |
| 0 1 0 ty |
| 0 0 1 tz |
| 0 0 0 1 |
Scale Matrix:
| sx 0 0 0 |
| 0 sy 0 0 |
| 0 0 sz 0 |
| 0 0 0 1 |
In Octave I can implement the transformation as follows, multiplying three matrices:
>> candle
candle =
1586266800
11812
0
1
>> translation1
translation1 =
1 0 0 -1586266800
0 1 0 -11800
0 0 1 0
0 0 0 1
>> scale
scale =
0.00333333333333333 0 0 0
0 0.02 0 0
0 0 1 0
0 0 0 1
(where `0.0033333 = 2/600` and `0.02 = 2/100`)
>> translation2
translation2 =
1 0 0 -1
0 1 0 -1
0 0 1 0
0 0 0 1
>> translation2*scale*translation1*candle
ans =
-1
-0.759999999999991
0
1
This maps the point to the right place in the [-1.0, 1.0] OpenGL clip space.
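The Octave pipeline above can be cross-checked in NumPy; this is just a sanity-check sketch, with the matrices and the candle point taken verbatim from the listing:

```python
import numpy as np

# Point and matrices copied from the Octave session above
candle = np.array([1586266800.0, 11812.0, 0.0, 1.0])

translation1 = np.array([[1.0, 0, 0, -1586266800],
                         [0, 1.0, 0, -11800],
                         [0, 0, 1.0, 0],
                         [0, 0, 0, 1.0]])

scale = np.diag([2 / 600, 2 / 100, 1.0, 1.0])  # window is 600 wide, 100 high

translation2 = np.array([[1.0, 0, 0, -1],
                         [0, 1.0, 0, -1],
                         [0, 0, 1.0, 0],
                         [0, 0, 0, 1.0]])

ndc = translation2 @ scale @ translation1 @ candle
print(ndc)  # approximately [-1, -0.76, 0, 1], as in the Octave session
```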
Now I am trying to replicate that in my Geometry shader, which receives the original world-space coordinates from the vertex shader.
I tried this:
#version 330 core
layout (points) in;
layout (line_strip, max_vertices = 12) out;
in uint gs_in_y[];
in uint gs_in_x[];
uniform uint xOrigin;
uniform uint xScaleWidth;
uniform uint yOrigin;
uniform uint yScaleWidth;
void main()
{
// TRANSLATION MATRIX
// [ 1 0 0 tx ]
// [ 0 1 0 ty ]
// [ 0 0 1 tz ]
// [ 0 0 0 1 ]
// mat3 m = mat3(
// 1.1, 2.1, 3.1, // first column (not row!)
// 1.2, 2.2, 3.2, // second column
// 1.3, 2.3, 3.3 // third column
// );
mat4 translation = mat4(
1.0f, 0, 0, -xOrigin,
0, 1.0f, 0, -yOrigin,
0, 0, 1.0f, 0,
0, 0, 0, 1.0f
);
// SCALE MATRIX
// [ sx 0 0 0 ]
// [ 0 sy 0 0 ]
// [ 0 0 sz 0 ]
// [ 0 0 0 1 ]
mat4 scale = mat4(
2.0/xScaleWidth, 0, 0, 0,
0, 2.0f/yScaleWidth, 0, 0,
0, 0, 1.0f, 0,
0, 0, 0, 1.0f
);
// FINAL TRANSLATION
mat4 translationGl = mat4(
1.0f, 0, 0, -1.0f,
0, 1.0f, 0, -1.0f,
0, 0, 1.0f, 0,
0, 0, 0, 1.0f
);
gl_Position = translationGl * scale * translation * vec4(gs_in_x[0], gs_in_y[0], 0.0, 1.0);
EmitVertex();
gl_Position = translationGl * scale * translation * vec4(gs_in_x[0]+30, gs_in_y[0], 0.0, 1.0);
EmitVertex();
EndPrimitive();
}
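A likely culprit in the shader above: the GLSL mat4 constructor takes its arguments in column-major order, so a matrix typed out row by row (as in the comments) ends up transposed, with the translation in the bottom row instead of the last column; negating the uint uniforms without first converting them to float is a second hazard. The transposition effect can be reproduced in NumPy (a diagnostic sketch, not the original shader):

```python
import numpy as np

x_origin, y_origin = 1586266800.0, 11800.0
candle = np.array([1586266800.0, 11812.0, 0.0, 1.0])

def translate(tx, ty):
    """Translation matrix with the offsets in the last column (math layout)."""
    m = np.eye(4)
    m[0, 3], m[1, 3] = tx, ty
    return m

scale = np.diag([2 / 600, 2 / 100, 1.0, 1.0])  # diagonal, so transposing it is a no-op

# Intended composition (what the Octave session computes):
good = translate(-1, -1) @ scale @ translate(-x_origin, -y_origin) @ candle

# What the shader actually builds: each row-by-row mat4() literal is read
# column by column by GLSL, i.e. the transpose of the intended matrix:
bad = translate(-1, -1).T @ scale @ translate(-x_origin, -y_origin).T @ candle

print(good)  # the expected NDC point, approximately [-1, -0.76, 0, 1]
print(bad)   # garbage: the translation leaks into the w component
```

In GLSL the fix would be to feed the constructor one column at a time, e.g. `mat4(vec4(1,0,0,0), vec4(0,1,0,0), vec4(0,0,1,0), vec4(-float(xOrigin), -float(yOrigin), 0.0, 1.0))`, or to keep the row-major literal and wrap it in `transpose()`.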

Related

opengl not drawing some triangles [closed]

In OpenGL, I was attempting to write a program that draws multiple rectangles across the screen using triangles. Instead of writing down all of the vertices by hand, I wrote a nested for loop to generate them. However, instead of drawing all the triangles, this program only outputs the last two triangles as a rectangle (see the pictures below). I'm sure this way of generating triangles is hilariously bad and inefficient, but that's not my main gripe with the output of this code.
Below is the nested for loop that adds the vertices to the array (be warned, this code is absolutely disgusting):
float initTri1[] = { -1.5f, 0.5f, 0.0f, -1.5f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f };
float initTri2[] = { -1.5f, 0.5f, 0.0f, -1.0f, 0.0f, 0.0f, -1.0f, 0.5f, 0.0f };
float vertices[18*4];
float increment =0.0f;
// draws an amount of rectangles equal to the number after 18 in vertices
for (int j = 0; j < sizeof(vertices)/sizeof(vertices[0])/18; j++)
{
increment += 0.5f;
// draws triangle with hypotenuse on right side
for (int i = 0; i < 9; i++)
{
// keeps the y and z values the same as the initial triangle
if ((i + 1) % 3 != 1)
{
vertices[i+j*9] = initTri1[i];
}
// shifts the initial triangle's x vertices by the accumulated increment
else
{
vertices[i+j*9] = initTri1[i] + increment;
}
}
// sometimes draws the triangle with the hypotenuse on the left side
for (int i = 9; i < 18; i++)
{
// keeps the y and z values the same as the initial triangle
if ((i + 1) % 3 != 1)
{
vertices[i+j*9] = initTri2[i - 9];
}
// shifts the initial triangle's x vertices by the accumulated increment
else
{
vertices[i+j*9] = initTri2[i - 9] + increment;
}
}
}
Below are images of the outcome of generating: 2 triangles, 4 triangles, and 8 triangles respectively.
I wasn't able to follow the OP's index computations. It was easier to just try the code out. The result looks wrong:
vertices[0]: -1, 0.5, 0
vertices[3]: -1, 0, 0
vertices[6]: -0.5, 0, 0
vertices[9]: -0.5, 0.5, 0
vertices[12]: -0.5, 0, 0
vertices[15]: 0, 0, 0
vertices[18]: 0, 0.5, 0
vertices[21]: 0, 0, 0
vertices[24]: 0.5, 0, 0
vertices[27]: 0.5, 0.5, 0
vertices[30]: 0.5, 0, 0
vertices[33]: 1, 0, 0
vertices[36]: 0.5, 0.5, 0
vertices[39]: 1, 0, 0
vertices[42]: 1, 0.5, 0
vertices[45]: 0, 8.40779e-45, 0
vertices[48]: 8.82286e-39, 0, 8.82332e-39
vertices[51]: 0, 5.87998e-39, 0
vertices[54]: 5.87998e-39, 0, 8.82286e-39
vertices[57]: 0, -4.13785e+09, 4.58841e-41
vertices[60]: 1.4013e-45, 0, 2.8026e-45
vertices[63]: 0, 8.82195e-39, 0
vertices[66]: 5.88135e-39, 0, 0
vertices[69]: 0, 0, 0
Demo on coliru
So, I just rewrote the loops instead of tediously debugging it. (That appeared the lesser evil to me.)
#include <iostream>
#include <iterator> // for std::size
int main()
{
float initTri1[] = { -1.5f, 0.5f, 0.0f, -1.5f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f };
const size_t nTri1 = std::size(initTri1);
float initTri2[] = { -1.5f, 0.5f, 0.0f, -1.0f, 0.0f, 0.0f, -1.0f, 0.5f, 0.0f };
const size_t nTri2 = std::size(initTri2);
const size_t nRect = 4;
const size_t nVtcs = (nTri1 + nTri2) * nRect;
float vertices[nVtcs];
float increment = 0.0f;
for (size_t j = 0, k = 0; j < nRect; ++j) {
for (size_t i = 2; i < nTri1; i += 3) {
vertices[k++] = initTri1[i - 2] + increment;
vertices[k++] = initTri1[i - 1];
vertices[k++] = initTri1[i - 0];
}
for (size_t i = 2; i < nTri2; i += 3) {
vertices[k++] = initTri2[i - 2] + increment;
vertices[k++] = initTri2[i - 1];
vertices[k++] = initTri2[i - 0];
}
increment += 0.5f;
}
for (size_t k = 0; k < nVtcs; ++k) {
if (k % (nTri1 + nTri2) == 0) std::cout << '\n';
if (k % 3 == 0) {
std::cout << "vertices[" << k << "]: ";
}
std::cout << vertices[k];
std::cout << (k % 3 < 2 ? ", " : "\n");
}
}
Output:
vertices[0]: -1.5, 0.5, 0
vertices[3]: -1.5, 0, 0
vertices[6]: -1, 0, 0
vertices[9]: -1.5, 0.5, 0
vertices[12]: -1, 0, 0
vertices[15]: -1, 0.5, 0
vertices[18]: -1, 0.5, 0
vertices[21]: -1, 0, 0
vertices[24]: -0.5, 0, 0
vertices[27]: -1, 0.5, 0
vertices[30]: -0.5, 0, 0
vertices[33]: -0.5, 0.5, 0
vertices[36]: -0.5, 0.5, 0
vertices[39]: -0.5, 0, 0
vertices[42]: 0, 0, 0
vertices[45]: -0.5, 0.5, 0
vertices[48]: 0, 0, 0
vertices[51]: 0, 0.5, 0
vertices[54]: 0, 0.5, 0
vertices[57]: 0, 0, 0
vertices[60]: 0.5, 0, 0
vertices[63]: 0, 0.5, 0
vertices[66]: 0.5, 0, 0
vertices[69]: 0.5, 0.5, 0
Demo on coliru
The moral of the story:
Simpler code is faster to write.
Simpler code is running sooner.
Simpler code is maintenance friendly.
Profile it, and you may be surprised to find that the simpler code is even faster.
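For what it's worth, the root cause in the original loops appears to be the stride: each rectangle occupies 18 floats, but both inner loops index with vertices[i + j*9]. Rectangle j therefore starts only 9 floats after rectangle j-1, overwriting half of it, and the tail of the array is never written at all, which matches the garbage values starting at vertices[45] in the dump above. A sketch of the diagnosis, modeling only the index arithmetic (assuming the two inner loops together cover i = 0..17):

```python
def written_offsets(n_rects):
    """Offsets the original loops write into `vertices` for n_rects rectangles."""
    offsets = set()
    for j in range(n_rects):
        for i in range(18):           # both inner loops combined: i = 0..17
            offsets.add(i + j * 9)    # buggy stride: should be j * 18
    return offsets

n_rects = 4
covered = written_offsets(n_rects)
needed = 18 * n_rects                  # 72 floats for 4 rectangles
untouched = [k for k in range(needed) if k not in covered]
print(max(covered))   # 44: the highest slot ever written
print(untouched)      # 45..71: left uninitialized, hence the garbage in the dump
```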

Putting data onto different textures in WebGL and getting it out

I am trying to output more than one buffer from a shader - the general goal is to use it for GPGPU purposes. I've looked at this answer and got closer to the goal with this:
document.addEventListener("DOMContentLoaded", function() {
function main() {
const gl = document.querySelector('canvas').getContext('webgl2');
if (!gl) {
return alert("need WebGL2");
}
gl.canvas.width = 2;
gl.canvas.height = 2;
const vs = `
#version 300 es
in vec2 position;
void main(void) {
gl_Position = vec4(0.0, 0.0, 0.0, 1.0);
}
`;
const fs = `
#version 300 es
precision mediump float;
layout(location = 0) out vec4 outColor0;
layout(location = 1) out vec4 outColor1;
layout(location = 2) out vec4 outColor2;
layout(location = 3) out vec4 outColor3;
layout(location = 4) out vec4 outColor4;
layout(location = 5) out vec4 outColor5;
void main() {
// simplified for question purposes
outColor0 = vec4(1, 0, 0, 1);
outColor1 = vec4(0, 1, 0, 1);
outColor2 = vec4(0, 0, 1, 1);
outColor3 = vec4(1, 1, 0, 1);
outColor4 = vec4(1, 0, 1, 1);
outColor5 = vec4(0, 1, 1, 1);
}
`
const program = twgl.createProgram(gl, [vs, fs]);
const textures = [];
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
for (let i = 0; i < 6; ++i) {
const tex = gl.createTexture();
textures.push(tex);
gl.bindTexture(gl.TEXTURE_2D, tex);
const width = 2;
const height = 2;
const level = 0;
gl.texImage2D(gl.TEXTURE_2D, level, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
// attach texture to framebuffer
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0 + i, gl.TEXTURE_2D, tex, level);
}
gl.viewport(0, 0, 2, 2);
// tell it we want to draw to all 6 attachments
gl.drawBuffers([
gl.COLOR_ATTACHMENT0,
gl.COLOR_ATTACHMENT1,
gl.COLOR_ATTACHMENT2,
gl.COLOR_ATTACHMENT3,
gl.COLOR_ATTACHMENT4,
gl.COLOR_ATTACHMENT5,
]);
// draw a single point
gl.useProgram(program);
{
const offset = 0;
const count = 1
gl.drawArrays(gl.TRIANGLE, 0, 4);
}
for (var l = 0; l < 6; l++) {
var pixels = new Uint8Array(gl.canvas.width * gl.canvas.height * 4);
gl.readBuffer(gl.COLOR_ATTACHMENT0 + l);
gl.readPixels(0, 0, gl.canvas.width, gl.canvas.height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
console.log(pixels.join(' '));
}
}
main();
})
However, the result is that only one pixel in each buffer gets set, so the output is:
0 0 0 0 255 0 0 255 0 0 0 0 0 0 0 0
0 0 0 0 0 255 0 255 0 0 0 0 0 0 0 0
0 0 0 0 0 0 255 255 0 0 0 0 0 0 0 0
0 0 0 0 255 255 0 255 0 0 0 0 0 0 0 0
0 0 0 0 255 0 255 255 0 0 0 0 0 0 0 0
0 0 0 0 0 255 255 255 0 0 0 0 0 0 0 0
rather than what I was hoping/expecting:
255 0 0 255 255 0 0 255 255 0 0 255 255 0 0 255
etc.
I was expecting that
outColor0 = vec4(1, 0, 0, 1);
is the equivalent to
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
but clearly I am wrong.
So how do I get to the desired outcome - to be able to set each pixel on each of the buffers?
The code does not provide any vertex data even though it asks to draw 4 vertices. Further, it passes gl.TRIANGLE, which doesn't exist; it's gl.TRIANGLES, with an S at the end. gl.TRIANGLE evaluates to undefined, which gets coerced to 0, which matches gl.POINTS.
In the JavaScript console
> const gl = document.createElement('canvas').getContext('webgl2');
< undefined
> gl.TRIANGLE
< undefined
> gl.TRIANGLES
< 4
> gl.POINTS
< 0
To put it another way all the gl.CONSTANTS are just integer values. Instead of
gl.drawArrays(gl.TRIANGLES, offset, count)
you can just do this
gl.drawArrays(4, offset, count)
because gl.TRIANGLES = 4.
But you didn't use gl.TRIANGLES; you used gl.TRIANGLE (no S), so you effectively did this
gl.drawArrays(undefined, offset, count)
that was interpreted as
gl.drawArrays(0, offset, count)
0 = gl.POINTS so that's the same as
gl.drawArrays(gl.POINTS, offset, count)
The code then draws a single 1-pixel point 4 times at the same location, because you called it with a count of 4
gl.drawArrays(gl.POINTS, 0, 4)
Nothing in your vertex shader changes each iteration so every iteration is going to do exactly the same thing. In this case it's going to draw a 1x1 pixel POINT at clip space position 0,0,0,1 which will end up being the bottom left pixel of the 2x2 pixels.
In any case you probably want to provide vertices, but as a simple test: if I add
gl_PointSize = 2.0;
to the vertex shader and change the draw call to
gl.drawArrays(gl.POINTS, 0, 1); // draw 1 point
Then it produces the results you expect. It draws a single 2x2 pixel POINT at clip space position 0,0,0,1
function main() {
const gl = document.querySelector('canvas').getContext('webgl2');
if (!gl) {
return alert("need WebGL2");
}
gl.canvas.width = 2;
gl.canvas.height = 2;
const vs = `
#version 300 es
in vec2 position;
void main(void) {
gl_Position = vec4(0.0, 0.0, 0.0, 1.0);
gl_PointSize = 2.0;
}
`;
const fs = `
#version 300 es
precision mediump float;
layout(location = 0) out vec4 outColor0;
layout(location = 1) out vec4 outColor1;
layout(location = 2) out vec4 outColor2;
layout(location = 3) out vec4 outColor3;
layout(location = 4) out vec4 outColor4;
layout(location = 5) out vec4 outColor5;
void main() {
// simplified for question purposes
outColor0 = vec4(1, 0, 0, 1);
outColor1 = vec4(0, 1, 0, 1);
outColor2 = vec4(0, 0, 1, 1);
outColor3 = vec4(1, 1, 0, 1);
outColor4 = vec4(1, 0, 1, 1);
outColor5 = vec4(0, 1, 1, 1);
}
`
const program = twgl.createProgram(gl, [vs, fs]);
const textures = [];
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
for (let i = 0; i < 6; ++i) {
const tex = gl.createTexture();
textures.push(tex);
gl.bindTexture(gl.TEXTURE_2D, tex);
const width = 2;
const height = 2;
const level = 0;
gl.texImage2D(gl.TEXTURE_2D, level, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
// attach texture to framebuffer
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0 + i, gl.TEXTURE_2D, tex, level);
}
gl.viewport(0, 0, 2, 2);
// tell it we want to draw to all 6 attachments
gl.drawBuffers([
gl.COLOR_ATTACHMENT0,
gl.COLOR_ATTACHMENT1,
gl.COLOR_ATTACHMENT2,
gl.COLOR_ATTACHMENT3,
gl.COLOR_ATTACHMENT4,
gl.COLOR_ATTACHMENT5,
]);
// draw a single point
gl.useProgram(program); {
const offset = 0;
const count = 1
gl.drawArrays(gl.POINTS, 0, 1);
}
for (var l = 0; l < 6; l++) {
var pixels = new Uint8Array(gl.canvas.width * gl.canvas.height * 4);
gl.readBuffer(gl.COLOR_ATTACHMENT0 + l);
gl.readPixels(0, 0, gl.canvas.width, gl.canvas.height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
console.log(pixels.join(' '));
}
}
main();
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas></canvas>
You can try using webgl-lint, which, run against your original code, will at least complain:
Uncaught Error: https://greggman.github.io/webgl-lint/webgl-lint.js:2942: error in drawArrays(/UNKNOWN WebGL ENUM/ undefined, 0, 4): argument 0 is undefined
with WebGLProgram("unnamed") as current program
with the default vertex array bound
function main() {
const gl = document.querySelector('canvas').getContext('webgl2');
if (!gl) {
return alert("need WebGL2");
}
gl.canvas.width = 2;
gl.canvas.height = 2;
const vs = `
#version 300 es
in vec2 position;
void main(void) {
gl_Position = vec4(0.0, 0.0, 0.0, 1.0);
}
`;
const fs = `
#version 300 es
precision mediump float;
layout(location = 0) out vec4 outColor0;
layout(location = 1) out vec4 outColor1;
layout(location = 2) out vec4 outColor2;
layout(location = 3) out vec4 outColor3;
layout(location = 4) out vec4 outColor4;
layout(location = 5) out vec4 outColor5;
void main() {
// simplified for question purposes
outColor0 = vec4(1, 0, 0, 1);
outColor1 = vec4(0, 1, 0, 1);
outColor2 = vec4(0, 0, 1, 1);
outColor3 = vec4(1, 1, 0, 1);
outColor4 = vec4(1, 0, 1, 1);
outColor5 = vec4(0, 1, 1, 1);
}
`
const program = twgl.createProgram(gl, [vs, fs]);
const textures = [];
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
for (let i = 0; i < 6; ++i) {
const tex = gl.createTexture();
textures.push(tex);
gl.bindTexture(gl.TEXTURE_2D, tex);
const width = 2;
const height = 2;
const level = 0;
gl.texImage2D(gl.TEXTURE_2D, level, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
// attach texture to framebuffer
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0 + i, gl.TEXTURE_2D, tex, level);
}
gl.viewport(0, 0, 2, 2);
// tell it we want to draw to all 6 attachments
gl.drawBuffers([
gl.COLOR_ATTACHMENT0,
gl.COLOR_ATTACHMENT1,
gl.COLOR_ATTACHMENT2,
gl.COLOR_ATTACHMENT3,
gl.COLOR_ATTACHMENT4,
gl.COLOR_ATTACHMENT5,
]);
// draw a single point
gl.useProgram(program); {
const offset = 0;
const count = 1
gl.drawArrays(gl.TRIANGLE, 0, 4);
}
for (var l = 0; l < 6; l++) {
var pixels = new Uint8Array(gl.canvas.width * gl.canvas.height * 4);
gl.readBuffer(gl.COLOR_ATTACHMENT0 + l);
gl.readPixels(0, 0, gl.canvas.width, gl.canvas.height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
console.log(pixels.join(' '));
}
}
main();
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas></canvas>
<script src="https://greggman.github.io/webgl-lint/webgl-lint.js" crossorigin="anonymous"></script>

OpenGL move object and keep transformation

I have an object which is transformed (rotated 45° about the Y axis).
The target is to move (translate) the object on the x and y axis and keep the transformation effect as it is.
It's very hard to explain, so I made a picture:
I know the concept of the camera in OpenGL: you can't really move the camera; in fact, everything moves around it. Does anyone know how to achieve this?
My code:
//set mvp
matrixProj = new PerspectiveProjectionMatrix(fovy, aspect, near, far);
matrixView = new ModelMatrix();
matrixView.LookAtTarget(new Vertex3f(0, 0, 2), new Vertex3f(0, 0, 0), new Vertex3f(0, 1, 0));
matrixModel = new ModelMatrix();
matrixModel.SetIdentity();
matrixModel.RotateY(45);
matrixModel.Translate(-2, -2, 0);
Matrix4x4 mvp = matrixProj * matrixView * matrixModel;
Gl.UniformMatrix4(Gl.GetUniformLocation(shaderProgram, "MVP"), 1, false, mvp.ToArray());
//draw quad
Gl.Begin(PrimitiveType.Quads);
Gl.Vertex3(-2, 2, 0);
Gl.Vertex3(2, 2, 0);
Gl.Vertex3(2, -2, 0);
Gl.Vertex3(-2, -2, 0);
Gl.End();
You have to change the order of the instructions. A rotation around the object's own axis is performed by multiplying the object's translation matrix by the rotation matrix.
This means you have to apply the translation first and then the rotation:
matrixModel = new ModelMatrix();
matrixModel.SetIdentity();
matrixModel.Translate(-2, -2, 0);
matrixModel.RotateY(45);
Note, the translation matrix looks like this:
Matrix4x4 translate;
translate[0] : ( 1, 0, 0, 0 )
translate[1] : ( 0, 1, 0, 0 )
translate[2] : ( 0, 0, 1, 0 )
translate[3] : ( tx, ty, tz, 1 )
And the rotation matrix around Y-Axis looks like this:
Matrix4x4 rotate;
float angle;
rotate[0] : ( cos(angle), 0, sin(angle), 0 )
rotate[1] : ( 0, 1, 0, 0 )
rotate[2] : ( -sin(angle), 0, cos(angle), 0 )
rotate[3] : ( 0, 0, 0, 1 )
A matrix multiplication works like this:
Matrix4x4 A, B, C;
// C = A * B
for ( int k = 0; k < 4; ++ k )
for ( int l = 0; l < 4; ++ l )
C[k][l] = A[0][l] * B[k][0] + A[1][l] * B[k][1] + A[2][l] * B[k][2] + A[3][l] * B[k][3];
The result of translate * rotate is this:
model[0] : ( cos(angle), 0, sin(angle), 0 )
model[1] : ( 0, 1, 0, 0 )
model[2] : ( -sin(angle), 0, cos(angle), 0 )
model[3] : ( tx, ty, tz, 1 )
Note, the result of rotate * translate would be:
model[0] : ( cos(angle), 0, sin(angle), 0 )
model[1] : ( 0, 1, 0, 0 )
model[2] : ( -sin(angle), 0, cos(angle), 0 )
model[3] : ( cos(angle)*tx - sin(angle)*tz, ty, sin(angle)*tx + cos(angle)*tz, 1 )
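These results can be verified numerically; the helper below mirrors the multiplication loop quoted above (row-major layout, translation in the fourth row), with a nonzero tz so that every term shows up:

```python
import numpy as np

def mat_mul(A, B):
    """C = A * B exactly as in the multiplication loop above."""
    C = np.zeros((4, 4))
    for k in range(4):
        for l in range(4):
            C[k, l] = sum(A[i, l] * B[k, i] for i in range(4))
    return C

angle = np.radians(45.0)
c, s = np.cos(angle), np.sin(angle)
tx, ty, tz = -2.0, -2.0, 3.0   # tz nonzero so every term is visible

translate = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0],
                      [0, 0, 1, 0],
                      [tx, ty, tz, 1]], dtype=float)
rotate = np.array([[c, 0, s, 0],
                   [0, 1, 0, 0],
                   [-s, 0, c, 0],
                   [0, 0, 0, 1]], dtype=float)

m1 = mat_mul(translate, rotate)   # translate * rotate
m2 = mat_mul(rotate, translate)   # rotate * translate
print(m1[3])  # (tx, ty, tz, 1): the translation row is untouched
print(m2[3])  # (c*tx - s*tz, ty, s*tx + c*tz, 1): the translation gets rotated
```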
Extension to the answer:
A perspective projection matrix looks like this:
r = right, l = left, b = bottom, t = top, n = near, f = far
2*n/(r-l) 0 0 0
0 2*n/(t-b) 0 0
(r+l)/(r-l) (t+b)/(t-b) -(f+n)/(f-n) -1
0 0 -2*f*n/(f-n) 0
where:
a = w / h
ta = tan( fov_y / 2 )
2*n / (r-l) = 1 / (ta*a) ---> 1/(r-l) = 1/(ta*a) * 1/(2*n)
2*n / (t-b) = 1 / ta ---> 1/(t-b) = 1/ta * 1/(2*n)
If you want to displace the field of view by an offset (x, y), then you have to do it like this:
x_disp = 1/(ta*a) * x/(2*n)
y_disp = 1/ta * y/(2*n)
1/(ta*a) 0 0 0
0 1/ta 0 0
x_disp y_disp -(f+n)/(f-n) -1
0 0 -2*f*n/(f-n) 0
Set up the perspective projection matrix like this:
float x = ...;
float y = ...;
matrixProj = new PerspectiveProjectionMatrix(fovy, aspect, near, far);
matrixProj[2][0] = x * matrixProj[0][0] / (2.0 * near);
matrixProj[2][1] = y * matrixProj[1][1] / (2.0 * near);
With glFrustum, a pixel offset can be applied like this:
float x_pixel = .....;
float y_pixel = .....;
float x_displ = (right - left) * x_pixel / width_pixel;
float y_displ = (top - bottom) * y_pixel / height_pixel;
glFrustum( left + x_displ, right + x_displ, bottom + y_displ, top + y_displ, near, far);

OpenGL lighting changing based on look direction

AKA What am I doing wrong?
I've been messing around with OpenGL and I'm just trying to work on lighting a cube right now. I'm not sure if I'm understanding what I'm supposed to do correctly because when I move the camera around, the lighting on the cube changes.
For example:
Looking at the cube from the top down:
Looking at the cube from the side:
From searching around, all of the answers I've seen say this happens when the normals aren't set correctly, but I think mine are, because when I print out all of the vertices along with their normals, this is the result (grouped by face, in the order they're drawn):
Position: 0 0 0 Normal: -1 0 0
Position: 0 30 0 Normal: -1 0 0
Position: 0 30 30 Normal: -1 0 0
Position: 0 0 30 Normal: -1 0 0
Position: 0 0 0 Normal: 0 1 0
Position: 0 0 30 Normal: 0 1 0
Position: 30 0 30 Normal: 0 1 0
Position: 30 0 0 Normal: 0 1 0
Position: 0 0 0 Normal: 0 0 -1
Position: 30 0 0 Normal: 0 0 -1
Position: 30 30 0 Normal: 0 0 -1
Position: 0 30 0 Normal: 0 0 -1
Position: 0 0 30 Normal: 0 0 1
Position: 0 30 30 Normal: 0 0 1
Position: 30 30 30 Normal: 0 0 1
Position: 30 0 30 Normal: 0 0 1
Position: 0 30 0 Normal: 0 -1 0
Position: 30 30 0 Normal: 0 -1 0
Position: 30 30 30 Normal: 0 -1 0
Position: 0 30 30 Normal: 0 -1 0
Position: 30 0 0 Normal: 1 0 0
Position: 30 0 30 Normal: 1 0 0
Position: 30 30 30 Normal: 1 0 0
Position: 30 30 0 Normal: 1 0 0
Here's also some of the code used for rendering in case the mistake is in there:
RenderEngine::RenderEngine(int width, int height) {
//initializing the window...
glClearDepth(1.f);
glClearColor(217.f / 256.f, 233.f / 256.f, 255.f / 256.f, 1.f);
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glFrontFace(GL_CW);
glEnable(GL_CULL_FACE);
glEnable(GL_LIGHTING);
//glEnable(GL_COLOR_MATERIAL);
GLfloat lightPos[] = { 0.f, -1.0f, 0.0f, 0.f };
GLfloat ambient[] = {0.3f,0.3f,0.3f,1.0f};
GLfloat diffuse[] = {0.7f,0.7f,0.7f,1.0f};
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
glLightfv(GL_LIGHT0, GL_AMBIENT, ambient);
glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuse);
glEnable(GL_LIGHT0);
//more window related things
}
void RenderEngine::beginRender() {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
void RenderEngine::endRender() {
//window stuff
}
void RenderEngine::translatePlayer(const sf::Vector3f& position) {
glTranslatef(-(position.x + 0.5) * 30, -(position.y + 1.75) * 30, -(position.z + 0.5) * 30);
}
void RenderEngine::rotatePlayer(const sf::Vector3f& rotation) {
glRotatef(rotation.x, 1.f, 0.f, 0.f);
glRotatef(rotation.y, 0.f, 1.f, 0.f);
glRotatef(rotation.z, 0.f, 0.f, 1.f);
}
void RenderEngine::renderVertexArray(const std::vector<Vertex>& vertices) {
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), &vertices[0].pos[0]);
glColorPointer(3, GL_FLOAT, sizeof(Vertex), &vertices[0].color[0]);
glNormalPointer(GL_FLOAT, sizeof(Vertex), &vertices[0].normal[0]);
glDrawArrays(GL_QUADS, 0, vertices.size());
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
}
And the vertex object:
struct Vertex {
float pos[3];
float color[3];
float normal[3];
Vertex(float _pos[3], float _color[3], float _normal[3]) :
pos {_pos[0], _pos[1], _pos[2]},
color {_color[0], _color[1], _color[2]},
normal{_normal[0], _normal[1], _normal[2]} {}
Vertex() : pos{0,0,0}, color{0,0,0}, normal{0,0,0} {}
};
Please ignore all the random 30's. I'm aware that those are out of place and should not be done that way, but that's not the issue here.
When you call the following:
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
... then the passed lightPos is transformed by the current model-view matrix and stored in eye coordinates. Thus, your light will move together with the camera. If you want it to be static in the world, you have to execute the above line again after setting the model-view (camera) matrix, each frame.

Eigen perspective projection matrix

I'm trying to create a perspective projection matrix for OpenGL. I know how to do it with a float[16], but for consistency's sake I'd like to use an Eigen matrix.
The formula is:
[ xScale 0 0 0 ]
P = [ 0 yScale 0 0 ]
[ 0 0 -(zFar+zNear)/(zFar-zNear) -2*zNear*zFar/(zFar-zNear) ]
[ 0 0 -1 0 ]
Where:
yScale = cot(fovY/2)
xScale = yScale/aspectRatio
Since OpenGL expects column-major storage while a C array initializer is written out row by row, a float[16] holding this matrix lists the transpose of the formula above:
float P[16] = {
xScale, 0, 0, 0,
0, yScale, 0, 0,
0, 0, -(zFar+zNear)/(zFar-zNear), -1,
0, 0, -2*zNear*zFar/(zFar-zNear), 0
};
So how exactly would I create a matrix like this with Eigen? Would I use an Eigen::Affine3f or a Eigen::Matrix4f? Looking at the documentation, it's not apparent to me how to set individual cell values.
In your case, the simplest is to use the comma initializer syntax. Note that the comma initializer fills the matrix row by row in its mathematical layout, while Eigen's default storage order is column-major, which is exactly what OpenGL expects. So enter the formula as written and pmat.data() will match the float[16] above:
Eigen::Matrix4f pmat;
pmat << xScale, 0, 0, 0,
0, yScale, 0, 0,
0, 0, -(zFar+zNear)/(zFar-zNear), -2*zNear*zFar/(zFar-zNear),
0, 0, -1, 0;
Setting individual cell values can be done with operator(), e.g. pmat(0,0) = xScale;.
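For comparison, here is the same matrix sketched in NumPy, purely to illustrate the layout (not OpenGL code); the names mirror the formula above:

```python
import math
import numpy as np

def perspective(fov_y, aspect, z_near, z_far):
    """The projection formula above in its mathematical (row) layout."""
    y_scale = 1.0 / math.tan(fov_y / 2)   # cot(fovY/2)
    x_scale = y_scale / aspect
    return np.array([
        [x_scale, 0, 0, 0],
        [0, y_scale, 0, 0],
        [0, 0, -(z_far + z_near) / (z_far - z_near),
               -2 * z_near * z_far / (z_far - z_near)],
        [0, 0, -1, 0],
    ])

P = perspective(math.radians(60.0), 16 / 9, 0.1, 100.0)

# OpenGL wants column-major memory, i.e. the transpose of the row layout,
# which is the same reordering the float[16] initializer performs:
gl_memory = P.T.reshape(-1)
print(gl_memory[11])   # -1.0, just as in the float[16] above
```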