OpenGL vertex shader nonlinear transformation [closed] - opengl

When I want to do a linear transformation, I can pass a transform matrix to the vertex shader and do a matrix product:
gl_Position = gl_ModelViewMatrix * gl_Vertex;
But what if I want to do a nonlinear transformation? For example, I want to rotate a cube around the x-axis by an angle theta = 0.1 * x, so that the cube ends up twisted.

Yes, this can be implemented in a vertex shader. It doesn't really matter if you create a rotation matrix first, but for simplicity I will show how to apply the rotation directly:
float theta = 0.1 * gl_Vertex.x;
vec4 twisted_vertex = vec4(
gl_Vertex.x,
cos(theta) * gl_Vertex.y - sin(theta) * gl_Vertex.z,
sin(theta) * gl_Vertex.y + cos(theta) * gl_Vertex.z,
gl_Vertex.w);
You can now perform the usual calculation you already have on the twisted vertex instead of the original one, for example:
gl_Position = gl_ModelViewMatrix * twisted_vertex;
Note that when you apply this to a simple cube (8 corners, 12 triangles), the result might not be what you expect. The transformation is only applied to the vertices (the 8 corners) and not to the edges. If you want to twist a cube, you have to make sure that it is tessellated finely enough along the x-axis.
Edit: Here is how to move along y, twist around x, move along z, and twist around x again. Basically, you can chain these transformations however you want.
vec4 twist(vec4 v, float theta)
{
vec4 twisted_vertex = vec4(
v.x,
cos(theta) * v.y - sin(theta) * v.z,
sin(theta) * v.y + cos(theta) * v.z,
v.w);
return twisted_vertex;
}
//In main
//Move along y, then apply the first twist (theta = 0.1 * x, as in the question)
vec4 v1 = twist(gl_Vertex + vec4(0.0, y_move, 0.0, 0.0), 0.1 * gl_Vertex.x);
//Move along z, then apply the second twist
vec4 v2 = twist(v1 + vec4(0.0, 0.0, z_move, 0.0), 0.1 * v1.x);
You could also create the four transformation matrices and chain them, but this will most probably be slower and cost more operations.

Related

OpenGL screen postprocessing effects [closed]

I've built a nice music visualizer using OpenGL in Java. It already looks pretty neat, but I've thought about adding some post-processing to it. At the moment, it looks like this:
There is already a framebuffer for recording the output, so I have the texture already available. Now I wonder if someone has an idea for some effects. The current Fragment shader looks like this:
#version 440
in vec3 position_FS_in;
in vec2 texCoords_FS_in;
out vec4 out_Color;
//the texture of the last frame; for now it is exactly the same as the output
uniform sampler2D textureSampler;
//available data:
//the average height of the lines seen in the screenshot, ranging from 0 to 1
uniform float mean;
//the array of heights of the lines seen in the screenshot
uniform float music[512];
void main()
{
vec4 texColor = texture(textureSampler, texCoords_FS_in);
//insert post processing here
out_Color = texColor;
}
Most post-processing effects vary over time, so it is common to have a uniform holding the elapsed time. For example, a "wavy" effect might be created by offsetting the texture coordinates with something like sin(elapsedSec * wavyRadsPerSec + (PI * gl_FragCoord.y * 0.5 + 0.5) * wavyCyclesInFrame).
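For instance, applied to the shader above, such a wavy offset could look like the following sketch of a variant main(); elapsedSec, wavyRadsPerSec and wavyCyclesInFrame are assumed uniforms fed by the application and are not part of the original shader:
uniform float elapsedSec;
uniform float wavyRadsPerSec;
uniform float wavyCyclesInFrame;
void main()
{
    const float PI = 3.14159265;
    vec2 uv = texCoords_FS_in;
    //offset x by a sine that moves over time and varies down the screen
    uv.x += 0.01 * sin(elapsedSec * wavyRadsPerSec + uv.y * wavyCyclesInFrame * 2.0 * PI);
    out_Color = texture(textureSampler, uv);
}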
Some "postprocessing" effects can be done very simply, for example, instead of clearing the back buffer with glClear you can blend a nearly-black transparent quad over the whole screen. This will create a persistence effect where the past frames fade to black behind the current one.
A directional blur can be implemented by taking multiple samples at various distances from each point, weighting the closer ones more strongly, and summing them. If you track the motion of each point relative to the camera position and orientation, this can be turned into a motion blur implementation.
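As a rough sketch (not tuned values), a fixed-direction blur inside the main() above could look like this; blurDir and the sample count are illustrative:
const int SAMPLES = 8;
vec2 blurDir = vec2(0.01, 0.0); //direction and length of the blur in texture space
vec4 sum = vec4(0.0);
float weightTotal = 0.0;
for (int i = 0; i < SAMPLES; ++i)
{
    float t = float(i) / float(SAMPLES); //fraction along the blur direction
    float weight = 1.0 - t;              //closer samples weigh more
    sum += texture(textureSampler, texCoords_FS_in + blurDir * t) * weight;
    weightTotal += weight;
}
out_Color = sum / weightTotal;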
Color transformations are simple as well: treat the RGB components as though they were the XYZ of a vector and apply interesting transformations to it. Sepia and "psychedelic" colors can be produced this way.
You might find it helpful to convert the color into something like HSV, transform it in that representation, and convert it back to RGB for the framebuffer write. You could affect hue and saturation, for example fading to black and white or smoothly intensifying the color saturation; a small sketch follows.
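For illustration only, here is a sketch of both ideas inside the main() above; the sepia coefficients are the commonly quoted ones, and the saturation part uses a plain luminance mix (Rec. 601 weights) instead of a full HSV round trip:
vec3 c = texColor.rgb;
//sepia: multiply the RGB vector by a color matrix
//(columns written so that sepia * c gives the usual sepia weighted sums)
mat3 sepia = mat3(0.393, 0.349, 0.272,
                  0.769, 0.686, 0.534,
                  0.189, 0.168, 0.131);
vec3 sepiaColor = sepia * c;
//saturation: mix between the grayscale luminance and the original color
float luma = dot(c, vec3(0.299, 0.587, 0.114));
float saturation = 1.5; //<1 fades toward black and white, >1 intensifies
vec3 saturated = mix(vec3(luma), c, saturation);
out_Color = vec4(saturated, texColor.a);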
A "smearing into the distance" effect can be done by blending the framebuffer onto the framebuffer, by reading from texcoord that is slightly scaled up from gl_FragCoord, like texture(textureSampler, (gl_FragCoord * 1.01).xy).
On that note, you should not need those texture coordinate attributes, you can use gl_FragCoord to find out where you are on the screen, and use (an adjusted copy of) that for your texture call.
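A small sketch of both points, assuming a uniform vec2 resolution holding the framebuffer size in pixels (not declared in the original shader):
uniform vec2 resolution; //framebuffer size in pixels (assumed uniform)
//inside main(): derive the texture coordinate from gl_FragCoord instead of an attribute
vec2 uv = gl_FragCoord.xy / resolution;
//for the smearing feedback effect, read a coordinate slightly scaled up around
//the screen center, so old content drifts toward the center from frame to frame
vec2 smearUV = (uv - 0.5) * 1.01 + 0.5;
vec4 previous = texture(textureSampler, smearUV);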
Have a look at a few shaders on GLSLSandbox for inspiration.
I have done a simple emulation of the trail effect on GLSLSandbox. In the real one, the loop would not exist; it would take one sample at a small offset. The "loop" effect would happen by itself because its input includes the output from the last frame. To emulate having a texture of the last frame, I simply made it so I can calculate what the other pixel is. When doing the trail effect for real, you would read the last-frame texture instead of calling something like pixelAt.
You can use your wave data instead of my faked sine wave: use uv.x, scaled appropriately, to select an index into the array.
GLSL
#ifdef GL_ES
precision mediump float;
#endif
uniform float time;
uniform vec2 mouse;
uniform vec2 resolution;
const float PI = 3.14159265358979323;// lol ya right, but hey, I memorized it
vec4 pixelAt(vec2 uv)
{
vec4 result;
float thickness = 0.05;
float movementSpeed = 0.4;
float wavesInFrame = 5.0;
float waveHeight = 0.3;
float point = (sin(time * movementSpeed +
uv.x * wavesInFrame * 2.0 * PI) *
waveHeight);
const float sharpness = 1.40;
float dist = 1.0 - abs(clamp((point - uv.y) / thickness, -1.0, 1.0));
float val;
float brightness = 0.8;
// All of the threads go the same way so this if is easy
if (sharpness != 1.0)
dist = pow(dist, sharpness);
dist *= brightness;
result = vec4(vec3(0.3, 0.6, 0.3) * dist, 1.0);
return result;
}
void main( void ) {
vec2 fc = gl_FragCoord.xy;
vec2 uv = fc / resolution - 0.5;
vec4 pixel;
pixel = pixelAt(uv);
// I can't really do postprocessing in this shader, so instead of
// doing the texturelookup, I restructured it to be able to compute
// what the other pixel might be. The real code would lookup a texel
// and there would be one sample at a small offset, the feedback
// replaces the loop.
const float e = 64.0, s = 1.0 / e;
for (float i = 0.0; i < e; ++i) {
pixel += pixelAt(uv + (uv * (i*s))) * (0.3-i*s*0.325);
}
pixel /= 1.0;
gl_FragColor = pixel;
}

Texture Warping Shader: Polar to Rectangular Coordinates

I am writing a 2D game using OpenGL and I have planned a shadow casting algorithm which needs a transformation of a texture from Polar Coordinates to Rectangular Coordinates. The desired effect is the following:
From this:
To this:
I know the formulas for converting coordinates between the Polar and Rectangular systems, but I am having problems writing the shader to achieve the desired effect.
My shader receives a texture as an input and should draw the warped texture to the screen. I planned the following (knowing that the fragment shader acts upon one fragment at a time):
Find the coordinates of the current fragment using gl_FragCoord.xy
Determine r and theta that correspond to the point (x, y).
Transform r and theta into texture_x and texture_y (which will be used to sample the texture)
Transfer the sampled pixel to the current fragment
My final result is the same input texture rotated 90 degrees clockwise. I think that I'm missing something in step 3. I might just be getting back the same x and y of the current fragment, because I'm simply applying both the transform and the inverse transform formulas.
How should I proceed to get the expected result?
Here is my shader:
#version 120
uniform sampler2D tex;
void main() {
vec2 fragCoords = gl_FragCoord.xy - vec2(128, 128); //shift the coordinates so that 0, 0 is in the center of the screen (the final texture is 256 * 256)
fragCoords /= vec2(256, 256);
float r = sqrt(pow(fragCoords.x, 2) + pow(fragCoords.y, 2));
float theta = atan(fragCoords.y, fragCoords.x);
if (fragCoords.y/fragCoords.x <= 0.5 && fragCoords.y/fragCoords.x >= -0.5) {
r *= 1/(256*sin(theta));
} else {
r *= 1/(0.5*256*cos(theta));
}
vec2 texCoords = vec2(r, theta);
vec4 texFrag = texture2D(tex, texCoords);
gl_FragColor = texFrag * vec4(1.0, 0.0, 0.0, 1.0);
}
In your shader you first convert to polar coordinates:
float r = sqrt(pow(fragCoords.x, 2) + pow(fragCoords.y, 2));
float theta = atan(fragCoords.y, fragCoords.x);
and then you're converting them back to Cartesian coordinates:
float tX = r * sin(theta);
float tY = r * cos(theta);
You want to stay in polar coordinates, so just plug r and theta into the texture coordinates
vec2 texCoords = vec2(r , theta);
vec4 texFrag = texture2D(tex, texCoords);
However, by the looks of the images you pasted, there is some renormalization step involved, so that (r, theta) covers a rectangular area. If I'm not entirely mistaken, r is scaled by the distance a ray from the bottom center travels until it intersects the rectangular area. If we assume theta = 0 to be straight up, then for the range [-atan(0.5) … atan(0.5)] it is scaled by 1/(height*sin(theta)), and outside that range by 1/(0.5*width*cos(theta)).
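As a starting point (ignoring the renormalization detail for a moment), a sketch of a shader that stays in polar coordinates could look like this; the 256x256 size is taken from the question, and the axis mapping may need to be swapped or rescaled to match the layout you want:
#version 120
uniform sampler2D tex;
void main()
{
    vec2 p = (gl_FragCoord.xy - vec2(128.0)) / 128.0; //center and scale to [-1, 1]
    float r = length(p);                              //0 at the center, ~1 at the edges
    float theta = atan(p.y, p.x);                     //[-pi, pi]
    //use (angle, radius) directly as texture coordinates, remapped into [0, 1]
    vec2 texCoords = vec2(theta / (2.0 * 3.14159265) + 0.5, r);
    gl_FragColor = texture2D(tex, texCoords);
}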

How to set a specific eye point using perspective view with shaders

These days I am reading the Learning Modern 3D Graphics Programming book by Jason L. McKesson. Basically it is a book about OpenGL 3.3, and I am now at chapter 4, which is about orthographic and perspective projection.
At the end of the chapter, under the "Further Study" section, he suggests trying a few things, like implementing a variable eye point (he initially used (0, 0, 0) in camera space for simplicity) and an arbitrary perspective plane location.
He says I am going to need to offset the X, Y camera-space positions of the vertices by E_x and E_y respectively.
I cannot understand this passage: how am I supposed to use a variable eye point by modifying only the X and Y offsets?
Edit: could it be something like this?
#version 330
layout(location = 0) in vec4 position;
layout(location = 1) in vec4 color;
smooth out vec4 theColor;
uniform vec2 offset;
uniform vec2 E;
uniform float zNear;
uniform float zFar;
uniform float frustumScale;
void main()
{
vec4 cameraPos = position + vec4(offset.x, offset.y, 0.0, 0.0);
vec4 clipPos;
clipPos.xy = cameraPos.xy * frustumScale + vec4(E.x, E.y, 0.0, 0.0);
clipPos.z = cameraPos.z * (zNear + zFar) / (zNear - zFar);
clipPos.z += 2 * zNear * zFar / (zNear - zFar);
clipPos.w = cameraPos.z / (-E.z);
gl_Position = clipPos;
theColor = color;
}
Edit2: thanks Boris, your picture helped a lot :) especially because:
it makes clear what you previously stated about thinking of E as the projection plane position rather than the eye point position
it underlines that the size of the projection plane must always be [-1, 1], a passage that I read in the book without fully understanding what it meant
Just out of curiosity, why do you mention multiplying after subtracting? Is it for the same reason the book gives, that is, aspect ratio? Everything logically pushes me toward doing exactly the opposite, that is, first the translation (-2) and then the multiplication (/5). Or maybe by "scaling" the book refers to the reshape function?
Here, we are interested in computing a transformation from Camera Coordinates (CC) to Normalized Device Coordinates (NDC).
Think of E as the position of the projection plane in Camera Coordinates, instead of the position of the eye point according to the projection plane. In Camera Coordinates, the eye point is by definition located at the origin, at least in my interpretation of what "Camera Coordinate" means: a coordinate frame centered from where you look at the scene. (You can mathematically define a perspective transformation centered from anywhere, but this means your input space is not the camera space, imho. This is what the World->Camera transformation is for, as you will see in chapter 6)
Summary:
you are in camera space, hence your eye point is located at (0,0,0)
you are looking toward the negative Z-axis
your projection plane is parallel to the xOy plane, with a size of [-1,1] in both directions
Here is the picture (each tick is 0.5 units):
In this picture, you can see that the projection plane (the bottom side of the gray trapezoid) is centered at (0,0,-1), with a size of [-1,1] in both the X and Y directions.
Now, what is asked is, instead of choosing (0,0,-1) for the center of this plane, to choose an arbitrary position (E.x, E.y, E.z) (assuming E.z is negative). But the plane still has to be parallel to the xOy plane and keep the same size.
You can see that E.xy plays a very different role than E.z, which is why E.xy will be involved in a subtraction, while E.z will be involved in a division. This is easy to see with an example:
assume zNear = -E.z (not necessarily the case, but you can in fact always change frustumScale to have an equivalent perspective satisfying this)
consider the point E (which is the center of the projection plane).
What is its coordinate in NDC space? It is (0,0,-1) by definition. What you've done is subtract E.xy but divide by -E.z.
Your code got this idea, but still some things are wrong:
First, you defined uniform vec2 E; instead of uniform vec3 E; (just a typo, not a big deal)
The line clipPos.xy = ...; is vec2 arithmetic. Hence, you can only multiply by scalar values (i.e., a float) or add/subtract vec2 values. This means vec4(E.x, E.y, 0.0, 0.0) is of the wrong type; you should use E.xy instead, which has the correct type vec2.
You should in fact subtract E.xy instead of adding it. This is easy to see in my example above.
Finally, things are more subtle ;-)
I made a picture to illustrate the modifications:
Each tick is 1 unit in this picture. Top left is your Camera Coordinate Space, with zNear, zFar, and two possible projection planes shown. The blue one is the plane used in the explanation and shader above, and the red one is the plane you now want to use. The colored areas correspond to what should be visible on your final screen, i.e. what should end up inside the cube [-1,1]^3 in NDC space. Hence, if you use the blue projection plane, you want to obtain the space at the top right, and if you use the red projection plane, you want to obtain the space at the bottom. To do this, you can observe that you need to perform the scaling and translation in NDC space, i.e. after the perspective division! (I think what is written in the book is either incorrect, or interprets the question differently.)
Hence, you want to do, in Euclidean coordinates (i.e., not homogeneous coordinates, so without the W coordinate):
clipPosEuclideanRed.xy = clipPosEuclideanBlue.xy * (-E.z) - E.xy;
clipPosEuclideanRed.z = clipPosEuclideanBlue.z;
However, because you are in homogeneous coordinates, these values are in fact:
clipPosEuclidean.xyz = clipPos.xyz / clipPos.w; // with clipPos.w = -cameraPos.z;
Hence, you have to compensate by writing:
clipPosRed.xy = clipPosBlue.xy * (-E.z) - E.xy * (-cameraPos.z);
clipPosRed.z = clipPosBlue.z;
So my solution to this problem would be to add only one line:
void main()
{
vec4 cameraPos = position + vec4(offset.x, offset.y, 0.0, 0.0);
vec4 clipPos;
clipPos.xy = cameraPos.xy * frustumScale;
// only add this line
clipPos.xy = - clipPos.xy * E.z + E.xy * cameraPos.z;
clipPos.z = cameraPos.z * (zNear + zFar) / (zNear - zFar);
clipPos.z += 2 * zNear * zFar / (zNear - zFar);
clipPos.w = -cameraPos.z;
gl_Position = clipPos;
theColor = color;
}

OpenGL, GLM Quaternion Rotations

I'm updating an old OpenGL project and I'm switching all the (deprecated) glMatrix() functions for matrices and quaternions, and I'm having trouble getting the rotation working.
My drawing looks like this:
//these two are supposedly working
mat4 mProjection = perspective(FOV, aspectRatio, near, far);
mat4 mView = lookAt(cameraPosition, cameraCenter, headsUp);
mat4 mModel = mat4(1.0f);
mat4 mMVP = mProjection * mView * mModel;
What I'm trying to do now is to apply rotation to an object around a specific point (like the object's center).
I tried:
mat4 mModelRotation = rotate(mModel, object->RotationY(), vec3(0.0, 1.0, 0.0)); //RotationY being an angle in degrees
mat4 mMVP = mProjection * mView * mModel * mModelRotation;
But this causes the object to rotate around one of its edges, not its center.
I'd like to know how I can apply quaternions to rotate the object around any point I pass as a parameter, for example.
I'm inexperienced with matrices, since I could avoid them by using the glMatrix() functions before, so I don't understand much about the relation between them and spatial positions, and switching to quaternions looks even more complicated.
I've read about the logic of quaternions and, technically, how to use them, but I don't understand where their values come from.
For example:
//axis is a unit vector
local_rotation.w = cosf( fAngle/2)
local_rotation.x = axis.x * sinf( fAngle/2 )
local_rotation.y = axis.y * sinf( fAngle/2 )
local_rotation.z = axis.z * sinf( fAngle/2 )
total = local_rotation * total
I read this, and I have no clue what these values are. Axis is a unit vector... of what? fAngle, I assume, is the angle I want to rotate by, but since quaternions use an arbitrary axis, how do I get the value for each of the X, Y and Z components, and how do I specify it in the quaternion?
So, I'm looking for any practical example/tutorial of a Quaternion, so I can understand what's going on.
The only information I have when I want to rotate an object is the axis I want to rotate around (x, y, OR z, not all of them at once, though they may combine in the final result) and an angle in degrees.
I'm not much of a math person, so any tutorial that doesn't use shortcuts is highly appreciated.
OK, let's say that you have a model ML (a set of points) and a point P around which you want to rotate ML.
All rotations are defined about the origin, so you need to move the set of points ML so that P becomes the origin, rotate all the points, and then move them back.
How to do this? Simple: for each point ML(k) (a point in the set) you do:
ML(k) - P --> this moves the points, using P as the origin
then rotate:
ROT * (ML(k)-P)
and finally, you move it back:
ROT * (ML(k) - P) + P
With quaternions, you replace the matrix multiplication by the sandwich product with q and its conjugate q* (which equals the inverse for a unit quaternion):
q * (ML(k) - P) * q* + P
That should work.
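For illustration only (the question uses GLM, but the same sandwich product applies anywhere), here is a GLSL-style sketch of rotating a vertex around a pivot with a unit quaternion q = (x, y, z, w); the function names are made up:
//rotate v by the unit quaternion q, i.e. q * v * conjugate(q),
//written with the usual cross-product identity
vec3 quatRotate(vec4 q, vec3 v)
{
    return v + 2.0 * cross(q.xyz, cross(q.xyz, v) + q.w * v);
}
//translate to the pivot, rotate, translate back
vec3 rotateAroundPoint(vec3 v, vec3 P, vec4 q)
{
    return quatRotate(q, v - P) + P;
}
With GLM, the same effect comes from composing a translation by P, the quaternion's rotation matrix (mat4_cast), and a translation by -P into the model matrix, so the rotation happens about P instead of the origin.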

Perspective divide in vertex shader?

When using a perspective matrix in a vertex shader, am I supposed to write code to divide by w, or is it done automatically in a later stage?
The reason for my question is that I have seen lots of vertex shaders using:
gl_Position = matrix * pos;
which makes sense if there is a later stage that divides the vector by its w component.
However, I never got it to work until I used the following in my vertex shader:
gl_Position = matrix * pos;
gl_Position = gl_Position / gl_Position.w;
Is the second example the correct one or could some other setting be missing?
Put another way: which of the steps shown in the OpenGL vertex transformation (first image) do I have to write in the vertex shader?
I know for sure that the ModelView and Projection matrices belong there (or a merge of the two). The viewport transform is not part of the vertex shader, but what about the divide by w?
My setup is some simple triangles with coordinates within [-1, 1] for x/y/z.
The Perspective matrix is supposed to project coordinates from z=-1 to -10 onto z=-1, x=[-1,1], y=[-1,1].
-1.0 0.0 0.0 0.0
0.0 -1.0 0.0 0.0
0.0 0.0 -1.2 -2.2
0.0 0.0 1.0 0.0
It was generated by:
x = 2.0f * zNear / (xMax - xMin);
y = 2.0f * zNear / (yMax - yMin);
a = -(xMax + xMin) / (xMax - xMin);
b = -(yMax + yMin) / (yMax - yMin);
c = (zFar + zNear) / (zNear - zFar);
d = -(2.0f * zFar * zNear) / (zNear - zFar);
To make the matrix P:
x, 0, a, 0
0, y, b, 0
0, 0, c, d
0, 0, 1, 0;
Finally, I generate the final matrix as matrix = P * T, where T is a translation by (0, 0, -2).
I have tried to do the math on the CPU and it appears to work, generating the expected results; however, there I also do the divide by w manually.
Update: Solved but need understanding
I negated all components in the matrix (multiply by -1) and now it works.
The example above also had an issue with projecting both positive and negative z-coordinates onto the projection plane, which was also solved by this change.
Any reference or explanation of why this change solved it is welcome.
You should not do the perspective divide yourself in the vertex shader; it will be done automatically later in the pipeline.
If that's not working, can you show some code or describe the problem more? I'm surprised that it's making a difference for you.
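For reference, a minimal sketch of a vertex shader that relies on the fixed-function divide (the attribute and uniform names are illustrative, not taken from the question):
#version 330
layout(location = 0) in vec4 pos;
uniform mat4 matrix; //e.g. projection * view * model
void main()
{
    //write clip-space coordinates only; the stage after the vertex shader
    //performs the divide by w and the viewport transform
    gl_Position = matrix * pos;
}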