Skew an image using OpenGL shaders - C++

To lighten the work-load on my artist, I'm working on making a bare-bones skeletal animation system for images. I've conceptualized how all the parts work. But to make it animate how I want to, I need to be able to skew an image in real time. (If I can't do that, I still know how to make it work, but it'd be a lot prettier if I could skew images to add perspective)
I'm using SFML, which doesn't support image skewing. I've been told I'd need to use OpenGL shaders on the image, but all the advice I got on that was "learn GLSL". So I figured maybe someone over here could help me out a bit.
Does anyone know where I should start with this? Or how to do it?
Basically, I want to be able to deform an image to give it perspective (as shown in the following mockup).
The images on the left are being skewed at the top and bottom. The images on the right are being skewed at the left and right.

An example of how to skew a texture in GLSL would be the following (poorly optimized) shader. Ideally, you would want to precompute your transform matrix in your regular program and pass it in as a uniform, so that you aren't recomputing the transform on every pass through the shader. If you still want to compute the transform in the shader, pass the skew factors in as uniforms instead; otherwise, you'll have to open the shader and edit it every time you want to change the skew factor.
This is for a screen-aligned quad as well.
Vert
attribute vec3 aVertexPosition;
varying vec2 texCoord;

void main(void) {
    // Set regular texture coords
    texCoord = (vec2(aVertexPosition.xy) + 1.0) / 2.0;

    // How much we want to skew each axis by
    float xSkew = 0.0;
    float ySkew = 0.0;

    // Create a transform that will skew our texture coords
    mat3 trans = mat3(
        1.0,        tan(xSkew), 0.0,
        tan(ySkew), 1.0,        0.0,
        0.0,        0.0,        1.0
    );

    // Apply the transform to our tex coords
    texCoord = (trans * vec3(texCoord.xy, 0.0)).xy;

    // Set vertex position
    gl_Position = vec4(aVertexPosition, 1.0);
}
Frag
precision highp float;

uniform sampler2D sceneTexture;
varying vec2 texCoord;

void main(void) {
    gl_FragColor = texture2D(sceneTexture, texCoord);
}
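If you do pass the skew factors in as uniforms, the SFML side could look roughly like this. This is only a sketch: it assumes SFML 2.4+ (where sf::Shader::setUniform exists; older versions use setParameter), that the shader above is changed to declare uniform float xSkew; uniform float ySkew; instead of the local floats, and that "skew.vert"/"skew.frag" and sprite are placeholders for your own files and drawable.
sf::Shader skewShader;
if (!skewShader.loadFromFile("skew.vert", "skew.frag"))
{
    // handle the load error
}

// Update the skew factors from C++ whenever they change,
// without ever editing the shader source again
skewShader.setUniform("xSkew", 0.25f);
skewShader.setUniform("ySkew", 0.0f);

// Apply the shader when drawing
sf::RenderStates states;
states.shader = &skewShader;
window.draw(sprite, states);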

This ended up being significantly simpler than I thought it was. SFML has an sf::VertexArray class that allows drawing custom quads without requiring the use of OpenGL directly.
The code I ended up going with is as follows (for anyone else who runs into this problem):
sf::Texture texture;
texture.loadFromFile("texture.png");
sf::Vector2u size = texture.getSize();
sf::VertexArray box(sf::Quads, 4);
box[0].position = sf::Vector2f(0, 0); // top left corner position
box[1].position = sf::Vector2f(0, 100); // bottom left corner position
box[2].position = sf::Vector2f(100, 100); // bottom right corner position
box[3].position = sf::Vector2f(100, 0); // top right corner position
box[0].texCoords = sf::Vector2f(0,0);
box[1].texCoords = sf::Vector2f(0,size.y-1);
box[2].texCoords = sf::Vector2f(size.x-1,size.y-1);
box[3].texCoords = sf::Vector2f(size.x-1,0);
To draw it, you call the following code wherever you usually tell your window to draw stuff.
window.draw(box, &texture);
If you want to skew the image, you just change the positions of the corners. Works great. With this information, you should be able to create a custom drawable class. You'll have to write a bit of code (set_position, rotate, skew, etc), but you just need to change the position of the corners and draw.
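For reference, a rough sketch of such a drawable class is below. The class name SkewedSprite and the setCorner helper are made up for illustration, and the other helpers (set_position, rotate, skew, ...) are left out.
#include <SFML/Graphics.hpp>
#include <cstddef>

// Wraps a textured quad whose corners can be moved independently,
// which is all that is needed to fake skew/perspective.
class SkewedSprite : public sf::Drawable
{
public:
    explicit SkewedSprite(const sf::Texture& texture)
        : m_texture(&texture), m_quad(sf::Quads, 4)
    {
        sf::Vector2f size(texture.getSize());
        m_quad[0].texCoords = sf::Vector2f(0.f, 0.f);       // top left
        m_quad[1].texCoords = sf::Vector2f(0.f, size.y);    // bottom left
        m_quad[2].texCoords = sf::Vector2f(size.x, size.y); // bottom right
        m_quad[3].texCoords = sf::Vector2f(size.x, 0.f);    // top right
    }

    // 0 = top left, 1 = bottom left, 2 = bottom right, 3 = top right
    void setCorner(std::size_t index, const sf::Vector2f& position)
    {
        m_quad[index].position = position;
    }

private:
    virtual void draw(sf::RenderTarget& target, sf::RenderStates states) const
    {
        states.texture = m_texture;
        target.draw(m_quad, states);
    }

    const sf::Texture* m_texture;
    sf::VertexArray m_quad;
};
Skewing, rotating, and so on then just come down to calling setCorner with new corner positions before handing the object to window.draw.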

Related

OpenGL point sprites not always rendered front to back

I'm working on a game engine with LWJGL3 in which all objects are point sprites. It uses an orthographic camera and I wrote a vertex shader that calculates the 3D position of each sprite (which also causes the fish-eye lens effect). I calculate the distance to the camera and use that value as the depth value for each point sprite. The data for these sprites is stored in chunks of 16x16 in VBOs.
The issue that I'm having is that the sprites are not always rendered front to back. When looking away from the origin, the depth testing works as intended, but when looking in the direction of the origin, sprites are rendered from back to front which causes a big performance drop.
This might seem like depth testing is not enabled, but when I disable depth testing, sprites in the back are drawn on top of the ones in front, so that is not the case.
Here's the full vertex shader:
#version 330 core
#define M_PI 3.1415926535897932384626433832795
uniform mat4 camRotMat; // Virtual camera rotation
uniform vec3 camPos; // Virtual camera position
uniform vec2 fov; // Virtual camera field of view
uniform vec2 screen; // Screen size (pixels)
in vec3 pos;
out vec4 vColor;
void main() {
    // Compute distance and rotated delta position to camera
    float dist = distance(pos, camPos);
    vec3 dXYZ = (camRotMat * vec4(camPos - pos, 0)).xyz;

    // Compute angles of this 3D position relative center of camera FOV
    // Distance is never negative, so negate it manually when behind camera
    vec2 rla = vec2(atan(dXYZ.x, length(dXYZ.yz)),
                    atan(dXYZ.z, length(dXYZ.xy) * sign(-dXYZ.y)));

    // Find sprite size and coordinates of the center on the screen
    float size = screen.y / dist * 2; // Sprites become smaller based on their distance
    vec2 px = -rla / fov * 2;         // Find pixel position on screen of this object

    // Output
    vColor = vec4((1 - (dist * dist) / (64 * 64)) + 0.5); // Brightness
    gl_Position = vec4(px, dist / 1000, 1.0);             // Position on the screen
    gl_PointSize = size;                                  // Sprite size
}
In the first image, you can see how the game normally looks. In the second one, I've disabled alpha-testing, so you can see sprites are rendered front to back. But in the third image, when looking in the direction of the origin, sprites are being drawn back to front.
Edit:
I am almost 100% certain the depth value is set correctly. The size of the sprites is directly linked to the distance, and they resize correctly when moving around. I also set the color to be brighter when the distance is lower, which works as expected.
I also set the following flags (and of course clear the frame and depth buffers):
GL11.glEnable(GL11.GL_DEPTH_TEST);
GL11.glDepthFunc(GL11.GL_LEQUAL);
Edit2:
Here's a gif of what it looks like when you rotate around: https://i.imgur.com/v4iWe9p.gifv
Edit3:
I think I misunderstood how depth testing works. Here is a video of how the sprites are drawn over time: https://www.youtube.com/watch?v=KgORzkM9U2w
That explains the initial problem, so now I just need to find a way to render them in a different order depending on the camera rotation.
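If it helps, one way to get that order is to sort the 16x16 chunks by their squared distance to the camera before issuing the draw calls, nearest first, so that distant sprites fail the depth test early instead of being shaded and overdrawn. A rough sketch in C++ (the Chunk struct and the per-chunk draw call are placeholders; the same idea carries over directly to LWJGL):
#include <algorithm>
#include <vector>

// Placeholder chunk record: world-space center plus whatever the draw call needs.
struct Chunk
{
    float centerX, centerY, centerZ;
    // VBO handle, sprite count, ...
};

// Sort chunks nearest-first relative to the camera position.
void sortChunksFrontToBack(std::vector<Chunk>& chunks, float camX, float camY, float camZ)
{
    std::sort(chunks.begin(), chunks.end(),
              [&](const Chunk& a, const Chunk& b)
              {
                  float ax = a.centerX - camX, ay = a.centerY - camY, az = a.centerZ - camZ;
                  float bx = b.centerX - camX, by = b.centerY - camY, bz = b.centerZ - camZ;
                  return ax * ax + ay * ay + az * az < bx * bx + by * by + bz * bz;
              });
    // ...then issue the per-chunk draw calls in this order.
}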

GLSL vertex shader on 3d model

I am currently coding a simple vertex shader for a model. What I want to achieve is something like this:
I have a model of a dragon, nothing too fancy, and I want the wing vertices to move around a bit, to simulate flying. Now, this is for academic purposes, so it doesn't have to be perfect in any way.
What I'm looking for, precisely, is this: how do I make, for example, only the vertices farther from the center of the model move? Is there any way to compare the position of a vertex to the center of the model and make it move more or less (using a time variable sent from the OpenGL app) depending on the distance to the center?
If not, are there any other ways that would be appropriate and relatively simple to do?
You could try this:
#version 330 core

in vec3 vertex;

// Your usual model-view-projection matrix; the name and setup depend on your app.
uniform mat4 modelViewProjection;

void main() {
    // get the distance from 0,0,0
    float distanceFromCenter = length(vertex);

    // create a simple squiggly wave function
    // you have to change the constant at the end depending on the size of your model
    float distortionAmount = sin(distanceFromCenter / 10.0);

    // the last vector says which axes to distort, and by how much;
    // this example would wiggle on the z-axis
    vec3 distortedPosition = vertex + distortionAmount * vec3(0, 0, 1);

    gl_Position = modelViewProjection * vec4(distortedPosition, 1.0);
}
It might not be perfect, but it should get you started.
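If you also want the wings to animate over time (the question mentions a time variable sent from the OpenGL app), the application can update a float uniform every frame. A minimal sketch, assuming the shader above additionally declares uniform float time and adds it inside the sin() call, and that shaderProgram is your linked program handle:
#include <GL/glew.h> // or whichever OpenGL loader your project already uses

// Call once per frame before drawing the dragon.
void updateWingAnimation(GLuint shaderProgram, float elapsedSeconds)
{
    // "time" is a hypothetical uniform name; it must match the one in the shader.
    GLint timeLocation = glGetUniformLocation(shaderProgram, "time");
    glUseProgram(shaderProgram);
    glUniform1f(timeLocation, elapsedSeconds);
}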

How can I use something like "discard;" to increase performance?

I would like to use shaders to increase my in-game performance.
As you can see, I am cutting out the far-away pixels from rendering, using the discard function.
However, for some reason this is not increasing FPS. I get the exact same 5 FPS that I got without any shaders at all! The low FPS is because I chose to render an absolute ton of trees, but since they are not seen... then why are they causing lag?!
My fragment shader code:
varying vec4 position_in_view_space;
uniform sampler2D color_texture;

void main()
{
    float dist = distance(position_in_view_space,
                          vec4(0.0, 0.0, 0.0, 1.0));
    if (dist < 1000.0)
    {
        gl_FragColor = gl_Color;
        // color near origin
    }
    else
    {
        //kill;
        discard;
        //gl_FragColor = vec4(0.3, 0.3, 0.3, 1.0);
        // color far from origin
    }
}
My vertex shader code:
varying vec4 position_in_view_space;

void main()
{
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_FrontColor = gl_Color;

    // transformation of gl_Vertex from object coordinates to view coordinates
    position_in_view_space = gl_ModelViewMatrix * gl_Vertex;

    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
(I have a very slightly modified fragment shader for the terrain texturing and a lower distance for the trees (the ship in that screenshot is too far from the trees), but it is basically the same idea.)
So, could someone please tell me why discard is not improving performance? If it can't, is there another way to make unwanted vertices not render (or render fewer vertices farther away) to increase performance?
Using discard actually prevents the GPU from using the depth buffer (its early depth test) before invoking the fragment shader, so the fragments of those distant trees still get processed.
It's much simpler to adjust the far plane of the projection matrix so unwanted vertices are outside the render box.
Also consider not issuing the glDraw* call for distant trees in the first place. You can, for example, group the trees per island, check per island whether it is close enough for its trees to be visible, and simply skip the glDraw* call when it is not (a rough sketch of that check follows).
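A sketch of that per-island check in C++ (the Island struct, the draw call, and the cut-off distance are all placeholders):
#include <cmath>
#include <vector>

struct Island
{
    float centerX, centerZ; // island center in world space
    float radius;           // rough bounding radius of the island
    // VBO handles etc. for the island's trees
};

// Skip the draw call entirely for islands whose trees are too far away to be seen.
void drawVisibleTrees(const std::vector<Island>& islands,
                      float camX, float camZ, float treeDrawDistance)
{
    for (const Island& island : islands)
    {
        float dx = island.centerX - camX;
        float dz = island.centerZ - camZ;
        float dist = std::sqrt(dx * dx + dz * dz) - island.radius;

        if (dist < treeDrawDistance)
        {
            // drawIslandTrees(island); // placeholder for the actual glDraw* call
        }
    }
}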

OpenGL: Rendering sprites of variable sizes

I have a particle system to model sand flow simulations. I came across the CUDA SDK particles example, which I modified to handle particles of different sizes. The positions and colors per particle are stored in VBOs, and I added a VBO for the radius attribute. The physics aspect is fine. My problem lies in rendering:
The example implements a vertex shader which uses GL_POINT_SPRITE_ARB (it's an NVIDIA extension) and takes a fixed particle radius. I googled extensively but found no way to make the sprites take variable radii from the particle radius VBO.
The vertex shader looks like this :
#define STRINGIFY(A) #A

// vertex shader
const char *vertexShader = STRINGIFY(
    uniform float pointRadius;  // point size in world space
    uniform float pointScale;   // scale to calculate size in pixels
    uniform float densityScale;
    uniform float densityOffset;
    void main()
    {
        // calculate window-space point size
        vec3 posEye = vec3(gl_ModelViewMatrix * vec4(gl_Vertex.xyz, 1.0));
        float dist = length(posEye);
        gl_PointSize = pointRadius * (pointScale / dist);

        gl_TexCoord[0] = gl_MultiTexCoord0;
        gl_Position = gl_ModelViewProjectionMatrix * vec4(gl_Vertex.xyz, 1.0);

        gl_FrontColor = gl_Color;
    }
);
How do I get variably sized point sprites to render?
NB: This is a workstation app, so using OpenGL ES isn't an option.
I'd also like to ask about the viability of two solutions I came up with:
Using gluSphere(), followed by translation and scaling. But I'm afraid this might take a performance hit from having to render all those extra vertices.
Ditching OpenGL and porting the entire thing to Direct3D 9, which I believe has a built-in attribute that accepts variable sprite radii, but there's a time constraint and I don't know anything about DirectX.
Can I implement two instances of the above vertex shader in the same scene and pass them different pointRadius values?
I googled extensively but found no way to make the sprites take variable radii from the particle radius VBO.
glEnable(GL_PROGRAM_POINT_SIZE)
Add an attribute float pointSize (a per-vertex attribute, not a varying) and bind your radius VBO to it
Set gl_PointSize = pointSize in your vertex shader (a rough sketch follows)
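Putting those steps together, a sketch of both sides might look like the following. It reuses the STRINGIFY macro from above; program and radiusVbo are placeholders for your existing shader program and radius VBO handles, and the attribute keeps the pointSize name used above.
// vertex shader: per-particle radius attribute instead of the fixed pointRadius uniform
const char *variableSizeVertexShader = STRINGIFY(
    uniform float pointScale;   // scale to calculate size in pixels
    attribute float pointSize;  // per-particle radius, sourced from the radius VBO
    void main()
    {
        vec3 posEye = vec3(gl_ModelViewMatrix * vec4(gl_Vertex.xyz, 1.0));
        float dist = length(posEye);
        gl_PointSize = pointSize * (pointScale / dist);
        gl_TexCoord[0] = gl_MultiTexCoord0;
        gl_Position = gl_ModelViewProjectionMatrix * vec4(gl_Vertex.xyz, 1.0);
        gl_FrontColor = gl_Color;
    }
);

// application side, after linking the program:
glEnable(GL_PROGRAM_POINT_SIZE);
GLint sizeAttrib = glGetAttribLocation(program, "pointSize");
glBindBuffer(GL_ARRAY_BUFFER, radiusVbo);
glEnableVertexAttribArray(sizeAttrib);
glVertexAttribPointer(sizeAttrib, 1, GL_FLOAT, GL_FALSE, 0, 0);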

OpenGL GLSL SSAO Implementation

I'm trying to implement Screen Space Ambient Occlusion (SSAO) based on the R5 Demo found here: http://blog.nextrevision.com/?p=76
In fact, I'm trying to adapt their SSAO - Linear shader to fit into my own little engine.
1) I calculate view-space surface normals and linear depth values.
I store them in an RGBA texture using the following shader:
Vertex:
varNormalVS = normalize(vec3(vmtInvTranspMatrix * vertexNormal));
depth = (modelViewMatrix * vertexPosition).z;
depth = (-depth-nearPlane)/(farPlane-nearPlane);
gl_Position = pvmtMatrix * vertexPosition;
Fragment:
gl_FragColor = vec4(varNormalVS.x, varNormalVS.y, varNormalVS.z, depth);
For my linear depth calculation I referred to: http://www.gamerendering.com/2008/09/28/linear-depth-texture/
Is it correct?
The texture seems to be correct, but maybe it is not?
2) The actual SSAO Implementation:
As mentioned above the original can be found here: http://blog.nextrevision.com/?p=76
or faster: on pastebin http://pastebin.com/KaGEYexK
In contrast to the original, I only use two input textures, since one of my textures stores both: normals as RGB and linear depth as alpha.
My second Texture, the random normal texture, looks like this:
http://www.gamerendering.com/wp-content/uploads/noise.png
I use almost exactly the same implementation but my results are wrong.
Before going into detail I want to clear some questions first:
1) The SSAO shader uses projectionMatrix and its inverse matrix.
Since it is a post-processing effect rendered onto a screen-aligned quad via orthographic projection, the projectionMatrix is the orthographic matrix. Correct or wrong?
2) Having a combined normal and depth texture instead of two separate ones.
In my opinion, this is the biggest difference between the R5 implementation and my implementation attempt. I think this should not be a big problem; however, due to the different depth textures, this is the most likely thing to cause problems.
Please note that R5_clipRange looks like this:
vec4 R5_clipRange = vec4(nearPlane, farPlane, nearPlane * farPlane, farPlane - nearPlane);
Original:
float GetDistance (in vec2 texCoord)
{
    //return texture2D(R5_texture0, texCoord).r * R5_clipRange.w;
    const vec4 bitSh = vec4(1.0 / 16777216.0, 1.0 / 65535.0, 1.0 / 256.0, 1.0);
    return dot(texture2D(R5_texture0, texCoord), bitSh) * R5_clipRange.w;
}
I have to admit I do not understand that code snippet. My depth is stored in the alpha channel of my texture, and I thought it should be enough to just do this:
return texture2D(texSampler0, texCoord).a * R5_clipRange.w;
Correct or Wrong?
Your normal texture seems wrong. My guess is that your vmtInvTranspMatrix is a model-view matrix. However, it should be a model-view-projection matrix (note that you need screen-space normals, not view-space normals). The depth calculation is correct.
I've implemented SSAO once and the normal texture looks like this (note there is no blue here):
1) The SSAO shader uses projectionMatrix and its inverse matrix.
Since it is a post-processing effect rendered onto a screen-aligned quad via orthographic projection, the projectionMatrix is the orthographic matrix. Correct or wrong?
If you mean the second pass where you are rendering a quad to compute the actual SSAO, yes. You can avoid the multiplication by the orthographic projection matrix altogether. If you render a screen quad with [x, y] coordinates ranging from -1 to 1, you can use a really simple vertex shader:
in vec2 in_Position;  // quad corner in [-1, 1]
out vec2 texcoord;

const vec2 madd = vec2(0.5, 0.5);

void main(void)
{
    gl_Position = vec4(in_Position, -1.0, 1.0);
    texcoord = in_Position.xy * madd + madd;
}
2) Having a combined normal and depth texture instead of two separate ones.
Nah, that won't cause problems. It's a common practice to do so.