how to move the vertex transform to vertex shader [closed] - glsl

Closed 8 years ago. This question needs to be more focused and is not currently accepting answers.
I need your help! I am trying to move the vertex transform from the CPU code to the vertex shader. Here is the C++ code for the vertex transform:
// Calculate the transform matrix of a reflecting surface.
// #point:  a point on the surface
// #normal: the normalized normal of the surface
// #return: the transform matrix
glm::mat4 flatReflect(glm::vec3& point, glm::vec3& normal)
{
    glm::vec4 R = glm::vec4(point, 1);
    glm::vec4 N = glm::vec4(normal, 0);
    GLfloat d = glm::dot(R, N);
    glm::mat4 result;
    result[0][0] = 1 - 2 * N.x * N.x;
    result[0][1] = -2 * N.x * N.y;
    result[0][2] = -2 * N.x * N.z;
    result[0][3] = -2 * N.x * d;
    result[1][0] = result[0][1];
    result[1][1] = 1 - 2 * N.y * N.y;
    result[1][2] = -2 * N.y * N.z;
    result[1][3] = -2 * N.y * d;
    result[2][0] = result[0][2];
    result[2][1] = result[1][2];
    result[2][2] = 1 - 2 * N.z * N.z;
    result[2][3] = -2 * N.z * d;
    result[3][0] = 0;
    result[3][1] = 0;
    result[3][2] = 0;
    result[3][3] = 1;
    return result;
}
Any idea? Thanks in advance!

You can translate that fairly readily. First, to translate these lines:
glm::vec4 R = glm::vec4(point,1);
glm::vec4 N = glm::vec4(normal,0);
GLfloat d = glm::dot(R,N);
These types and functions exist natively in GLSL and need no qualification, so:
vec4 R = vec4(point, 1.0); // An explicit decimal is important; some drivers do
vec4 N = vec4(normal, 0.0); // not perform implicit coercions.
float d = dot(R, N);
Constructing the matrix differs a little; rather than declaring a matrix and setting its elements, you can create and return it all at once, e.g. (for the identity matrix):
return mat4(1.0, 0.0, 0.0, 0.0,
            0.0, 1.0, 0.0, 0.0,
            0.0, 0.0, 1.0, 0.0,
            0.0, 0.0, 0.0, 1.0);
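One thing to keep in mind: GLSL's mat4 constructor consumes its arguments in column-major order, the same convention glm uses. The identity matrix hides this because it is symmetric; a translation makes it visible, e.g. (a sketch, with tx/ty/tz standing in for your offsets):

mat4 T = mat4(1.0, 0.0, 0.0, 0.0,  // column 0
              0.0, 1.0, 0.0, 0.0,  // column 1
              0.0, 0.0, 1.0, 0.0,  // column 2
              tx,  ty,  tz,  1.0); // column 3 holds the translation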
Accessing elements of the vectors and matrices works just as in C++ with glm, so the rest of the translation is mechanical.
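Putting it together, a minimal GLSL sketch of the whole function (a direct transliteration of your C++, which carries over element by element since glm and GLSL both index matrices as [column][row]):

mat4 flatReflect(vec3 point, vec3 normal)
{
    vec4 R = vec4(point, 1.0);
    vec4 N = vec4(normal, 0.0);
    float d = dot(R, N);
    mat4 result = mat4(1.0); // identity; supplies the zeros and the 1 your C++ sets explicitly
    result[0][0] = 1.0 - 2.0 * N.x * N.x;
    result[0][1] = -2.0 * N.x * N.y;
    result[0][2] = -2.0 * N.x * N.z;
    result[0][3] = -2.0 * N.x * d;
    result[1][0] = result[0][1];
    result[1][1] = 1.0 - 2.0 * N.y * N.y;
    result[1][2] = -2.0 * N.y * N.z;
    result[1][3] = -2.0 * N.y * d;
    result[2][0] = result[0][2];
    result[2][1] = result[1][2];
    result[2][2] = 1.0 - 2.0 * N.z * N.z;
    result[2][3] = -2.0 * N.z * d;
    return result;
}

Good luck!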

Related

Tiled: Why do my tiles flip incorrectly when their diagonal-flip bit is set?

I'm trying to use a geometry shader to display a tile map created in Tiled. I'm having some trouble implementing how Tiled handles rotation and flipping, though.
Tiled reserves the highest four bits of a tile ID for specifying how the tile should be oriented (as described here), with each bit specifying which axis to flip the tile across. My geometry shader starts by creating four vertices and then swapping their positions based on how each flag is set:
void createVertices(bool flipDiagonal, bool flipVertical, bool flipHorizontal, out vec4 vertices[4])
{
    vec4 verts[4];
    verts[0] = gl_in[0].gl_Position + vec4(0.0,  0.0, 0.0, 1.0);
    verts[1] = gl_in[0].gl_Position + vec4(1.0,  0.0, 0.0, 1.0);
    verts[2] = gl_in[0].gl_Position + vec4(0.0, -1.0, 0.0, 1.0);
    verts[3] = gl_in[0].gl_Position + vec4(1.0, -1.0, 0.0, 1.0);
    vec4 swap;
    if (flipDiagonal)
    {
        swap = verts[1];
        verts[1] = verts[2];
        verts[2] = swap;
    }
    if (flipHorizontal)
    {
        swap = verts[0];
        verts[0] = verts[1];
        verts[1] = swap;
        swap = verts[2];
        verts[2] = verts[3];
        verts[3] = swap;
    }
    if (flipVertical)
    {
        swap = verts[0];
        verts[0] = verts[2];
        verts[2] = swap;
        swap = verts[1];
        verts[1] = verts[3];
        verts[3] = swap;
    }
    for (int i = 0; i < vertices.length(); i++)
    {
        vertices[i] = u_projection * u_camera * u_transform * verts[i];
    }
}
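(For context, the booleans passed in are extracted from the tile ID using the bit masks from Tiled's documentation. A sketch of that extraction, with tileId as a hypothetical uint input:)

const uint FLIP_HORIZONTAL = 0x80000000u;
const uint FLIP_VERTICAL   = 0x40000000u;
const uint FLIP_DIAGONAL   = 0x20000000u;

bool flipHorizontal = (tileId & FLIP_HORIZONTAL) != 0u;
bool flipVertical   = (tileId & FLIP_VERTICAL)   != 0u;
bool flipDiagonal   = (tileId & FLIP_DIAGONAL)   != 0u;
uint gid = tileId & 0x1FFFFFFFu; // the tile ID itself, with the flag bits cleared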
The problem is that when the diagonal bit is set and either the horizontal bit or the vertical bit is set, it flips in the opposite direction of what I expect it to. Tiles that should be flipped horizontally are flipped vertically, and tiles that should be flipped vertically appear horizontally flipped. Tiles in which all three flags are set render correctly, however.
Here are some images for reference. The one on the left was exported directly from Tiled and is what should be displayed. The one on the right is what my program is rendering, with the incorrect tiles circled:
Now, I could easily work around this with a couple of if-statements, but I want to know why this is happening. I've tried working through this on paper, but to no avail. What am I missing?

Glsl artifacts after fract uv coords [closed]

Closed 2 years ago. This question needs details or clarity and is not currently accepting answers.
I am trying to draw a grid with a fragment shader, and I am running into problems with the UV coordinates. This screenshot shows the first result:
float roundRect(vec2 p, vec2 size, float radius) {
    vec2 d = abs(p) - size;
    return min(max(d.x, d.y), 0.0) + length(max(d, 0.0)) - radius;
}

void main() {
    vec2 uv = gl_FragCoord.xy / u_resolution.xy;
    vec2 f_uv = fract(uv * 22.680);
    float rect = smoothstep(0.040, 0.0, roundRect(f_uv - vec2(0.5), vec2(0.44), 0.040));
    gl_FragColor = vec4(vec3(rect), 1.0);
}
Second:
float roundRect(vec2 p, vec2 size, float radius) {
    vec2 d = abs(p) - size;
    return min(max(d.x, d.y), 0.0) + length(max(d, 0.0)) - radius;
}

void main() {
    vec2 uv = gl_FragCoord.xy / u_resolution.xy;
    vec2 f_uv = fract(uv * 20.680);
    float rect = smoothstep(0.040, 0.0, roundRect(f_uv - vec2(0.5), vec2(0.44), 0.040));
    gl_FragColor = vec4(vec3(rect), 1.0);
}
The only difference between the two screenshots is this line:
vec2 f_uv = fract(uv * x);
How can I fix it?
What you see is aliasing, caused by gridlines that are thinner than the pixels themselves and spaced at non-integer pixel intervals.
To fix that you need to band-limit your function. One way of doing this is as follows:
void main() {
    float scale = 22.680;
    vec2 uv = gl_FragCoord.xy / u_resolution.xy * scale;
    float fw = max(fwidth(uv.x), fwidth(uv.y));
    float rect = smoothstep(fw, -fw, roundRect(fract(uv) - vec2(0.5), vec2(0.44), 0.040));
    gl_FragColor = vec4(vec3(rect), 1.0);
}
The results look as follows:
Note that some lines are still blurrier than others -- the only way around that is to ensure that your scale factor works out to an integer number of pixels.
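One caveat, in case you are targeting WebGL 1 / OpenGL ES 2.0 (which the use of gl_FragColor suggests): there, fwidth comes from the standard-derivatives extension and has to be enabled at the top of the fragment shader; in desktop GLSL and ES 3.0 it is built in.

#ifdef GL_OES_standard_derivatives
#extension GL_OES_standard_derivatives : enable
#endif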

How to rotate a 2D square in OpenGL?

I have a basic square. How can I rotate it?
let vertices = vec![
    x,     y,     uv1.0, uv1.1, layer, // top left
    x + w, y,     uv2.0, uv2.1, layer, // top right
    x + w, y + h, uv3.0, uv3.1, layer, // bottom right
    x,     y + h, uv4.0, uv4.1, layer, // bottom left
];
This is my orthographic projection matrix.
let c_str_vert = CString::new("MVP".as_bytes()).unwrap();
let modelLoc = gl::GetUniformLocation(shaderProgram, c_str_vert.as_ptr());
let model = cgmath::ortho(0.0, SCR_WIDTH as f32, SCR_HEIGHT as f32, 0.0, -1.0, 1.0);
gl::UniformMatrix4fv(modelLoc, 1, gl::FALSE, model.as_ptr());
#version 430 core
layout(location = 0) in vec2 position;
// ...
uniform mat4 MVP;

void main() {
    gl_Position = MVP * vec4(position.x, position.y, 0.0, 1.0);
    // ...
}
I have a lot of squares, but I don't want to rotate them all.
The width and height are 100 px; how can I turn my square to make it look like this?
I know that I can achieve this using transformation matrices. I did it in web development while working with SVG, but I have no idea how to get it working in OpenGL.
I would like to know how I can multiply a matrix and a vector in OpenGL.
I would like to know how I can multiply a matrix and a vector in OpenGL.
You are already multiplying a matrix with a vector:
gl_Position = MVP * vec4(position.x, position.y, 0.0, 1.0);
All you have to do is multiply your MVP matrix with your rotation matrix and then with your vector:
gl_Position = MVP * rotationMatrix * vec4(position.x, position.y, 0.0, 1.0);
The rotation matrix should also be a 4x4 matrix.
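For example, a rotation about the Z axis could be built in the shader like this (a sketch; note that GLSL's mat4 constructor takes its arguments in column-major order, which is why the sin terms may look swapped):

mat4 rotationZ(float angle) {
    float c = cos(angle);
    float s = sin(angle);
    return mat4( c,   s,   0.0, 0.0,  // column 0
                -s,   c,   0.0, 0.0,  // column 1
                 0.0, 0.0, 1.0, 0.0,  // column 2
                 0.0, 0.0, 0.0, 1.0); // column 3
}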
I tried the nalgebra and nalgebra-glm libraries; the cgmath API confused me.
let mut ortho = glm::ortho(0.0, SCR_WIDTH as f32, SCR_HEIGHT as f32, 0.0, -1.0, 1.0);
ortho = glm::translate(&ortho, &glm::vec3(0.0, 50.0, 0.0));
let rad = 30.0 * (PI / 360.0);
ortho = glm::rotate(&ortho, rad, &glm::vec3(0.0, 0.0, 1.0));
ortho = glm::translate(&ortho, &glm::vec3(0.0, -50.0, 0.0) );
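(Side note, reading right to left, in the order the transforms are applied to the vertices: this translates by (0, -50, 0) to move the pivot to the origin, rotates about the Z axis, and translates back, so the square rotates around that pivot rather than around the window origin. Also worth double-checking: 30.0 * (PI / 360.0) is 15 degrees in radians; 30 degrees would be 30.0 * (PI / 180.0).)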
Thank you for all the answers

Applying perspective with GLSL matrix

I am not quite sure what is missing, but I loaded a uniform matrix into a vertex shader and when the matrix was:
GLfloat translation[4][4] = {
    {1.0, 0.0, 0.0, 0.0},
    {0.0, 1.0, 0.0, 0.0},
    {0.0, 0.0, 1.0, 0.0},
    {0.0, 0.2, 0.0, 1.0}};
or so, I seemed to be able to translate vertices just fine, depending on which values I chose to change. However, when swapping this same uniform matrix to apply projection, the image would not appear. I tried several matrices, such as:
GLfloat frustum[4][4] = {
    {(2.0*frusZNear)/(frusRight - frusLeft), 0.0, 0.0, 0.0},
    {0.0, (2.0*frusZNear)/(frusTop - frusBottom), 0.0, 0.0},
    {(frusRight + frusLeft)/(frusRight - frusLeft), (frusTop + frusBottom)/(frusTop - frusBottom), -(frusZFar + frusZNear)/(frusZFar - frusZNear), -1.0},
    {0.0, 0.0, (-2.0*frusZFar*frusZNear)/(frusZFar - frusZNear), 0.0}
};
and values, such as:
const GLfloat frusLeft = -3.0;
const GLfloat frusRight = 3.0;
const GLfloat frusBottom = -3.0;
const GLfloat frusTop = 3.0;
const GLfloat frusZNear = 5.0;
const GLfloat frusZFar = 10.0;
The vertex shader, which seemed to apply translation just fine:
gl_Position = frustum * vPosition;
Any help appreciated.
The code for calculating the perspective/frustum matrix looks correct to me. This sets up a perspective matrix that assumes that your eye point is at the origin, and you're looking down the negative z-axis. The near and far values specify the range of distances along the negative z-axis that are within the view volume.
Therefore, with near/far values of 5.0/10.0, the range of z-values that are within your view volume will be from -5.0 to -10.0.
If your geometry is currently drawn around the origin, use a translation by something like (0.0, 0.0, -7.0) as your view matrix. This needs to be applied before the projection matrix.
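For illustration, that view matrix written out in GLSL's column-major constructor form (a sketch; in your C array layout, uploaded with transpose set to GL_FALSE, the same values would occupy the fourth row, just like your working translation matrix above):

mat4 viewMat = mat4(1.0, 0.0, 0.0, 0.0,   // column 0
                    0.0, 1.0, 0.0, 0.0,   // column 1
                    0.0, 0.0, 1.0, 0.0,   // column 2
                    0.0, 0.0, -7.0, 1.0); // column 3: translate by (0, 0, -7)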
You can either combine the view and projection matrices, or pass them separately into your vertex shader. With a separate view matrix, containing the translation above, your shader code could then look like this:
uniform mat4 viewMat;
...
gl_Position = frustum * viewMat * vPosition;
The first thing I see is that the Z near and far planes are chosen at 5 and 10. If your vertices do not lie between these planes, you will not see anything.
The projection matrix takes everything inside the pyramid-like shape of the frustum and translates and scales it into the unit volume, -1 to 1 in every dimension.
http://www.lighthouse3d.com/tutorials/view-frustum-culling/

OpenGL Asymmetric Frustum for Desktop VR

I am making an OpenGL C++ application that tracks the user's location in relation to the screen and then updates the rendered scene to the perspective of the user. This is known as "desktop VR", or you can think of the screen as a diorama or fish tank. I am rather new to OpenGL and have only defined a very simple scene so far, just a cube, which is initially rendered correctly.
The problem is that when I start moving and want to rerender the cube scene, the projection plane seems translated and I don't see what I think I should. I want this plane fixed. If I were writing a ray tracer, my window would always be fixed, but my eye is allowed to wander. Can someone please explain to me how I can achieve the effect I desire (pinning the viewing window) while having my camera/eye wander at a non-origin coordinate?
All of the examples I find demand that the camera/eye be at the origin, but this is not conceptually convenient for me. Also, because this is a "fish tank", I am setting my d_near to be the xy-plane, where z = 0.
In screen/world space, I assign the center of the screen to (0,0,0) and its 4 corners to:
TL(-44.25, 25, 0)
TR( 44.25, 25, 0)
BR( 44.25,-25, 0)
BL(-44.25,-25, 0)
These values are in cm for a 16x9 display.
I then calculate the position of the user's eye (actually, of a webcam pointed at my face) using POSIT; it is usually somewhere in the range of (+/-40, +/-40, 40-250). My POSIT method is accurate.
I am defining my own matrices for the perspective and viewing transforms and using shaders.
I initialize as follows:
float right = 44.25;
float left = -44.25;
float top = 25.00;
float bottom = -25.00;
vec3 eye = vec3(0.0, 0.0, 100.0);
vec3 view_dir = vec3(0.0, 0.0, -1.0);
vec3 up = vec3(0.0, 1.0, 0.0);
vec3 n = normalize(-view_dir);
vec3 u = normalize(cross(up, n));
vec3 v = normalize(cross(n, u));
float d_x = -(dot(eye, u));
float d_y = -(dot(eye, v));
float d_z = -(dot(eye, n));
float d_near = eye.z;
float d_far = d_near + 50;
// perspective transform matrix
mat4 P = mat4((2.0*d_near)/(right-left), 0, (right+left)/(right-left), 0,
              0, (2.0*d_near)/(top-bottom), (top+bottom)/(top-bottom), 0,
              0, 0, -(d_far+d_near)/(d_far-d_near), -(2.0*d_far*d_near)/(d_far-d_near),
              0, 0, -1.0, 0);

// viewing transform matrix
mat4 V = mat4(u.x, u.y, u.z, d_x,
              v.x, v.y, v.z, d_y,
              n.x, n.y, n.z, d_z,
              0.0, 0.0, 0.0, 1.0);

mat4 MV = C * V;
//MV = V;
From what I gather looking on the web, my view_dir and up are to remain fixed. This means that I need only update d_near and d_far, as well as d_x, d_y, and d_z. I do this in my glutIdleFunc( idle ):
void idle (void) {
    hBuffer->getFrame(hFrame);
    if (hFrame->goodH && hFrame->timeStamp != timeStamp) {
        timeStamp = hFrame->timeStamp;
        std::cout << "(" << hFrame->eye.x << ", "
                  << hFrame->eye.y << ", "
                  << hFrame->eye.z << ") \n";
        eye = vec3(hFrame->eye.x, hFrame->eye.y, hFrame->eye.z);

        d_near = eye.z;
        d_far = eye.z + 50;

        P = mat4((2.0*d_near)/(right-left), 0, (right+left)/(right-left), 0,
                 0, (2.0*d_near)/(top-bottom), (top+bottom)/(top-bottom), 0,
                 0, 0, -(d_far+d_near)/(d_far-d_near), -(2.0*d_far*d_near)/(d_far-d_near),
                 0, 0, -1.0, 0);

        d_x = -(dot(eye, u));
        d_y = -(dot(eye, v));
        d_z = -(dot(eye, n));

        C = mat4(1.0, 0.0, 0.0, eye.x,
                 0.0, 1.0, 0.0, eye.y,
                 0.0, 0.0, 1.0, 0.0,
                 0.0, 0.0, 0.0, 1.0);

        V = mat4(u.x, u.y, u.z, d_x,
                 v.x, v.y, v.z, d_y,
                 n.x, n.y, n.z, d_z,
                 0.0, 0.0, 0.0, 1.0);

        MV = C * V;
        //MV = V;

        glutPostRedisplay();
    }
}
Here is my shader code:
#version 150

uniform mat4 MV;
uniform mat4 P;
in vec4 vPosition;
in vec4 vColor;
out vec4 color;

void main()
{
    gl_Position = P * MV * vPosition;
    color = vColor;
}
Ok, I made some changes to my code, but without success. When I use V in place of MV in the vertex shader, everything looks as I want it to, the perspective is correct and the objects are the right size, however, the scene is translated by the displacement of the camera.
When using C and V to obtain MV, my scene is rendered from the perspective of an observer straight on and the rendered scene fills the window as it should, but the perspective of the eye/camera is lost.
Really, what I want is to translate the 2D pixels, the projection plane, by the appropriate x and y values of the eye/camera, so as to keep the center of the object (whose xy center is (0,0)) in the center of the rendered image. I am guided by the examples in the textbook "Interactive Computer Graphics: A Top-Down Approach with Shader-Based OpenGL (6th Edition)". Using the files paired with the book freely available on the web, I am continuing with the row major approach.
The following images are taken when not using the matrix C to create MV. When I do use C to create MV, all scenes look like the first image below. I desire no translation in z and so I leave that as 0.
Because the projection plane and my camera plane are parallel, the conversion from one to the other coordinate system is simply a translation and inv(T) ~ -T.
Here is my image for the eye at (0,0,50):
Here is my image for the eye at (56,-16,50):
The solution is to update d_x, d_y, and d_z, accounting for the new eye position, but to never change u, v, or n. In addition, one must update the matrix P with new values for left, right, top and bottom, as they relate to the new position of the camera/eye.
I initialize with this:
float screen_right = 44.25;
float screen_left = -44.25;
float screen_top = 25.00;
float screen_bottom = -25.00;
float screen_front = 0.0;
float screen_back = -150;
And now my idle function looks like this, note the calculations for top, bottom, right, and left:
void idle (void) {
    hBuffer->getFrame(&hFrame);
    if (hFrame.goodH && hFrame.timeStamp != timeStamp) {
        timeStamp = hFrame.timeStamp;
        //std::cout << "(" << hFrame.eye.x << ", "
        //          << hFrame.eye.y << ", "
        //          << hFrame.eye.z << ") \n";
        eye = vec3(hFrame.eye.x, hFrame.eye.y, hFrame.eye.z);

        d_near = eye.z;
        d_far = eye.z + abs(screen_back) + 1;

        float top    = screen_top - eye.y;
        float bottom = top - 2*screen_top;
        float right  = screen_right - eye.x;
        float left   = right - 2*screen_right;

        P = mat4((2.0 * d_near)/(right - left), 0.0, (right + left)/(right - left), 0.0,
                 0.0, (2.0 * d_near)/(top - bottom), (top + bottom)/(top - bottom), 0.0,
                 0.0, 0.0, -(d_far + d_near)/(d_far - d_near), -(2.0 * d_far * d_near)/(d_far - d_near),
                 0.0, 0.0, -1.0, 0.0);

        d_x = -(dot(eye, u));
        d_y = -(dot(eye, v));
        d_z = -(dot(eye, n));

        V = mat4(u.x, u.y, u.z, d_x,
                 v.x, v.y, v.z, d_y,
                 n.x, n.y, n.z, d_z,
                 0.0, 0.0, 0.0, 1.0);

        glutPostRedisplay();
    }
}
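To make the frustum update concrete, a worked example with the eye at (56, -16, 50): the code above yields top = 25 - (-16) = 41, bottom = 41 - 50 = -9, right = 44.25 - 56 = -11.75, and left = -11.75 - 88.5 = -100.25, i.e. an off-axis (asymmetric) frustum whose window rectangle still coincides with the physical screen.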
This redefinition of the perspective matrix keeps the rendered image from translating. I still have camera-capture and screen-grabbing synchronization issues, but I am able to create images like the following, updated in real time to track the position of the user:
All of the examples I find demand that the camera/eye be at the origin
This is not a requirement. OpenGL doesn't define a camera; it all boils down to transformations, usually called projection and modelview (model and view transforms combined). The camera, i.e. the view transform, is simply the inverse of the positioning of the camera in world space. So say you built yourself a matrix C for your camera; then you'd premultiply its inverse onto the modelview:
mat4 MV = inv(C) * V
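For a camera that is only translated (no rotation), that inverse is just the opposite translation, e.g. (a sketch in GLSL's column-major notation; with the row-major layout used in your code, the translation would sit in the last column of the constructor instead):

// camera placed at 'eye' in world space (translation only)
mat4 C = mat4(1.0, 0.0, 0.0, 0.0,
              0.0, 1.0, 0.0, 0.0,
              0.0, 0.0, 1.0, 0.0,
              eye.x, eye.y, eye.z, 1.0);   // translation in column 3

// inv(C) simply negates the translation
mat4 invC = mat4(1.0, 0.0, 0.0, 0.0,
                 0.0, 1.0, 0.0, 0.0,
                 0.0, 0.0, 1.0, 0.0,
                 -eye.x, -eye.y, -eye.z, 1.0);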
On a side note, your shader code is wrong:
gl_Position = P * ( (V * vPosition) / vPosition.w );
The division by vPosition.w must go. The homogeneous divide is not to be done in the shader, as it is hardwired into the render pipeline.