GLSL 1.2 floor() issues in Vertex Shader

I'm trying to calculate texture coordinates based on the coordinates of an incoming vertex in the Vertex Shader. This is a stripped down version of my attempt:
#version 120
varying vec4 color;
uniform sampler2D heightmap;
uniform ivec2 heightmapSize;
void main(void)
{
    vec2 fHeightmapSize = vec2(heightmapSize);
    vec2 pos = gl_Vertex.zx + vec2(0.5f, 0.5f);
    vec2 offset = floor(fHeightmapSize * pos) + vec2(0.5f, 0.5f);
    if (fract(offset.x) > 0.45f && fract(offset.x) < 0.55f
        && fract(offset.y) > 0.45f && fract(offset.y) < 0.55f)
        color = vec4(0.0f, 1.0f, 0.0f, 1.0f);
    else
        color = vec4(1.0f, 0.0f, 0.0f, 1.0f);
    // gl_Position = ...
    // ...
}
gl_Vertex is in [-0.5, 0.5]^2, on the XZ-plane. So what I am basically trying to do is to
first create a float vec2 from the ivec2 heightmapSize, which holds the width and height of the heightmap sampler.
Then I'm converting the vertex coordinates to the interval [0, 1]^2.
Then I'm calculating the offset in texture coordinates by multiplying the vertex position with the heightmap size. The left part (using floor()) should return the number of texels in each direction.
Example: The texture is of size 4x4 and the position is (0.55, 0.5). This means I would go 2 texels to the right and 2 texels upwards -> (2.0, 2.0).
In the right part, I add another (0.5, 0.5) because I want the center of the texel. The (2.0, 2.0) becomes (2.5, 2.5). Note: whatever the coordinates are, the fractional part should be 0.5 in the end.
Now comes the strange part. I'm testing the result by specifying two colors. If the fractional part of the offset is "close" to 0.5, I set the resulting color to green, otherwise red. The image is almost all red.
How is it possible that either floor() does not return a vector of two integral values (or values close to integral, given float accuracy), or that adding vec2(0.5, 0.5) has no effect? Am I missing something else?
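One way to narrow this down (a debugging sketch, not a fix) is to skip the threshold test and write the fractional part straight into the vertex color, so you can see which values actually arrive:
// Hypothetical debug output: if the math above works as intended,
// fract(offset) is (0.5, 0.5) and everything renders as 50% red/green.
color = vec4(fract(offset), 0.0, 1.0);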

Problems with uv when using half pixel translations

I am rendering 2 triangles to make a square, using a single draw call with GL_TRIANGLE_STRIP.
I calculate position and uv in the shader with:
vec2 uv = vec2(gl_VertexID >> 1, gl_VertexID & 1);
vec2 position = uv * 333.0f;
float offset = 150.0f;
mat4 model = mat4(1.0f);
model[3][1] = offset;
gl_Position = projection * model * vec4(position, 0.0f, 1.0f);
projection is a regular orthographic projection that matches screen size.
In the fragment shader I want to draw each first line of pixels with blue and each second line of pixels with red color.
int v = int(uv.y * 333.0f);
if (v % 2 == 0) {
    color = vec4(1.0f, 0.0f, 0.0f, 1.0f);
} else {
    color = vec4(0.0f, 0.0f, 1.0f, 1.0f);
}
This works ok, however if I use offset that will give me a subpixel translation:
offset = 150.5f;
The 2 triangles don't get matching uvs as seen in this picture:
What am I doing wrong?
Attribute interpolation is done only with finite precision. That means that due to round-off errors, even a difference of 1 ulp (unit in the last place, i.e. the least significant digit) can cause the result to be rounded in the other direction. Since the order of operations in the hardware interpolation unit can differ between the two triangles, the values prior to rounding can be slightly different. OpenGL does not provide any guarantees about that.
For example, you might be getting 1.499999 in the upper triangle and 1.500000 in the lower triangle. Consequently, when you add an offset of 0.5, the resulting 1.999999 will be truncated down to 1 by the int() cast, whereas 2.000000 stays at 2.
If pixel-perfect results are important to you, I suggest you calculate the uv coordinates manually from gl_FragCoord.xy. In the case of an axis-aligned rectangle, as in your example, it is straightforward to do.
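A minimal sketch of that suggestion, assuming the quad's lower-left corner (rectOrigin) and size (rectSize) in window pixels are supplied as uniforms (both names are placeholders):
uniform vec2 rectOrigin; // hypothetical uniform: quad's lower-left corner, in pixels
uniform vec2 rectSize;   // hypothetical uniform: quad's width/height, in pixels
// ...
// gl_FragCoord.xy holds the window-space pixel centre, so both triangles
// compute identical uv values for the same fragment.
vec2 uv = (gl_FragCoord.xy - rectOrigin) / rectSize;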

Instanced Quads using SSBO centres

I am creating a GPU particle system. I use a compute shader to advect particles which are stored in an SSBO as:
// Storing x y and z although only looking at 2D for now
float pos[3 * NUM_PARTICLES]; // { x1, y1, z1, ..., xn, yn, zn }
I also have a VBO containing base x,y for the four vertices of a quad as
float vert[4 * 4] = { 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f };
i.e. the quad fills the viewport as my MVP maps [0 1] to the full viewport.
I would like to use glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, NUM_PARTICLES), but rather than using per-instance data stored in VBOs and setting them up with glVertexAttribDivisor, I want to use the data in the SSBO directly for each instance. My initial thoughts were to modify the vertex shader like this:
#version 430 core
// Inputs of the quad (triangle strip) to render
uniform mat4 mvp;
in vec2 inPos;
// Position of particles and hence quad centres
layout(binding = 7) buffer particlePosBlock
{
    float pos[];
};
void main()
{
    // Get quad centre for this instance as a vec2 (x at 3*i, y at 3*i + 1)
    vec2 quadcentre = vec2(pos[gl_InstanceID * 3], pos[gl_InstanceID * 3 + 1]);
    // Scale and shift the vertex positions from the base quad data
    gl_Position = mvp * vec4(inPos, 0.0, 1.0);
}
But how do I modify the gl_Position values to scale and shift an instance based on the SSBO data? I assume some kind of matrix transform? Also, will indexing into the SSBO like this kill performance?
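A sketch of one way to do the shift, without any matrix: add the per-instance centre to the (optionally scaled) base vertex before applying mvp. Here quadSize is a hypothetical uniform controlling particle size, not something from the code above:
uniform float quadSize; // hypothetical size factor for each particle quad
// ...
vec2 quadcentre = vec2(pos[gl_InstanceID * 3], pos[gl_InstanceID * 3 + 1]);
gl_Position = mvp * vec4(inPos * quadSize + quadcentre, 0.0, 1.0);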

Artifacts with rendering texture with horizontal and vertical lines with OpenGL

I created 8x8 pixel bitmap letters to render them with OpenGL, but sometimes, depending on scaling, I get weird artifacts as shown below in the image. Texture filtering is set to nearest pixel. It looks like a rounding issue, but how could there be one if the line is perfectly horizontal?
Left original 8x8, middle scaled to 18x18, right scaled to 54x54.
Vertex data are unsigned bytes in the format (x-offset, y-offset, letter). Here is the full code:
vertex shader:
#version 330 core
layout(location = 0) in uvec3 Data;
uniform float ratio;
uniform float font_size;
out float letter;
void main()
{
    letter = Data.z;
    vec2 position = vec2(float(Data.x) / ratio, Data.y) * font_size - 1.0f;
    position.y = -position.y;
    gl_Position = vec4(position, 0.0f, 1.0f);
}
geometry shader:
#version 330 core
layout (points) in;
layout (triangle_strip, max_vertices = 4) out;
uniform float ratio;
uniform float font_size;
out vec3 texture_coord;
in float letter[];
void main()
{
    // TODO: pre-calculate
    float width = font_size / ratio;
    float height = -font_size;
    texture_coord = vec3(0.0f, 0.0f, letter[0]);
    gl_Position = gl_in[0].gl_Position + vec4(0.0f, height, 0.0f, 0.0f);
    EmitVertex();
    texture_coord = vec3(1.0f, 0.0f, letter[0]);
    gl_Position = gl_in[0].gl_Position + vec4(width, height, 0.0f, 0.0f);
    EmitVertex();
    texture_coord = vec3(0.0f, 1.0f, letter[0]);
    gl_Position = gl_in[0].gl_Position + vec4(0.0f, 0.0f, 0.0f, 0.0f);
    EmitVertex();
    texture_coord = vec3(1.0f, 1.0f, letter[0]);
    gl_Position = gl_in[0].gl_Position + vec4(width, 0.0f, 0.0f, 0.0f);
    EmitVertex();
    EndPrimitive();
}
fragment shader:
#version 330 core
in vec3 texture_coord;
uniform sampler2DArray font_texture_array;
out vec4 output_color;
void main()
{
    output_color = texture(font_texture_array, texture_coord);
}
I had the same problem developing with FreeType and OpenGL. After days of researching and scratching my head, I found the solution. In my case, I had to explicitly call the function glBlendColor. Once I did that, I did not observe any more artifacts.
Here is a snippet:
// Set viewport
glViewport(0, 0, FIXED_WIDTH, FIXED_HEIGHT);
// Enable blending
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBlendColor(1.0f, 1.0f, 1.0f, 1.0f); // Without this I was having artifacts: important to call it explicitly
// Set alignment requirement to 1 byte
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
I figured out the solution after reviewing the source code of this OpenGL-Freetype library on github: opengl-freetype library
Well, when using nearest filtering, you will see such issues if your sample location is very close to the boundary between two texels. And since the tex coords are interpolated separately for each fragment you are drawing, slight numerical inaccuracies will result in jumping between those two texels.
When you draw an 8x8 texture to an 18x18-pixel rectangle, and your rectangle is perfectly aligned to the output pixel raster, you are almost guaranteed to trigger that behavior:
Looking at the texel coordinates will then reveal that for the very bottom output pixel, the texture coords would be interpolated to 1/(2*18) = 1/36. Going one pixel up will add 1/18 = 2/36 to the t coordinate. So for the fifth row from the bottom, it would be 9/36.
So for the 8x8-texel texture you are sampling from, you are actually sampling at the unnormalized texel coordinate (9/36)*8 == 2.0. This is exactly the boundary between the second and third row of your texture. Since the texture coordinates for each fragment are interpolated by a barycentric interpolation between the tex coords assigned to the three vertices forming the triangle, there can be slight inaccuracies. And even the slightest possible inaccuracy representable in floating point format would result in flipping between two texels in this case.
I think your approach is just not good. Scaling bitmap fonts is always problematic (except maybe for integral scale factors). If you want nicely looking scalable texture fonts, I recommend looking into signed distance fields. It is quite a simple and powerful technique, and there are tools available to generate the necessary distance field textures.
If you are looking for a quick hack, you could also just offset your output rectangle slightly. You basically must make sure to keep the offset in [-0.5, 0.5] pixels (so that no different fragments are generated during rasterization), and you must make sure that none of the potential sample locations lie close to an integer, so the offset will depend on the actual scale factor.
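A sketch of that quick hack, with the caveat that the safe nudge depends on your scale factor (the 0.25-pixel value and basePixelPos are assumptions, not something from the code above):
// Nudge the rectangle so no sample location lands on a texel boundary;
// must stay within [-0.5, 0.5] pixels to keep rasterization coverage identical.
vec2 pixelPos = basePixelPos + vec2(0.25, 0.25); // hypothetical pixel-space corner
// ...then convert pixelPos to clip space as before.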

OpenGL Projective Texture Mapping via Shaders

I am trying to implement a simple projective texture mapping approach using shaders in OpenGL 3+. While there are some examples on the web, I am having trouble creating a working example with shaders.
I am actually planning on using two shaders, one which does a normal scene draw, and another for projective texture mapping. I have a function for drawing a scene, void ProjTextureMappingScene::renderScene(GLFWwindow *window), and I am using glUseProgram() to switch between shaders. The normal drawing works fine. However, it is unclear to me how I am supposed to render the projective texture on top of an already textured cube. Do I somehow have to use a stencil buffer or a framebuffer object (the rest of the scene should be unaffected)?
I also don't think that my projective texture mapping shaders are correct, since the second time I render a cube it shows black. Further, I tried to debug by using colors, and only the t component in the shader seems to be non-zero (so the cube appears green). I am overriding the texColor in the fragment shader below just for debugging purposes.
Vertex Shader
#version 330
uniform mat4 TexGenMat;
uniform mat4 InvViewMat;
uniform mat4 P;
uniform mat4 MV;
uniform mat4 N;
layout (location = 0) in vec3 inPosition;
//layout (location = 1) in vec2 inCoord;
layout (location = 2) in vec3 inNormal;
out vec3 vNormal, eyeVec;
out vec2 texCoord;
out vec4 projCoords;
void main()
{
    vNormal = (N * vec4(inNormal, 0.0)).xyz;
    vec4 posEye = MV * vec4(inPosition, 1.0);
    vec4 posWorld = InvViewMat * posEye;
    projCoords = TexGenMat * posWorld;
    // only needed for specular component
    // currently not used
    eyeVec = -posEye.xyz;
    gl_Position = P * MV * vec4(inPosition, 1.0);
}
Fragment Shader
#version 330
uniform sampler2D projMap;
uniform sampler2D gSampler;
uniform vec4 vColor;
in vec3 vNormal, lightDir, eyeVec;
//in vec2 texCoord;
in vec4 projCoords;
out vec4 outputColor;
struct DirectionalLight
{
    vec3 vColor;
    vec3 vDirection;
    float fAmbientIntensity;
};
uniform DirectionalLight sunLight;
void main (void)
{
    // suppress the reverse projection
    if (projCoords.q > 0.0)
    {
        vec2 finalCoords = projCoords.st / projCoords.q;
        vec4 vTexColor = texture(gSampler, finalCoords);
        // only t has non-zero values..why?
        vTexColor = vec4(finalCoords.s, finalCoords.t, finalCoords.r, 1.0);
        //vTexColor = vec4(projCoords.s, projCoords.t, projCoords.r, 1.0);
        float fDiffuseIntensity = max(0.0, dot(normalize(vNormal), -sunLight.vDirection));
        outputColor = vTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
    }
}
Creation of TexGen Matrix
biasMatrix = glm::mat4(0.5f, 0.0f, 0.0f, 0.5f,
                       0.0f, 0.5f, 0.0f, 0.5f,
                       0.0f, 0.0f, 0.5f, 0.5f,
                       0.0f, 0.0f, 0.0f, 1.0f);
// 4:3 perspective with 45 fov
projectorP = glm::perspective(45.0f * zoomFactor, 4.0f / 3.0f, 0.1f, 1000.0f);
projectorOrigin = glm::vec3(-3.0f, 3.0f, 0.0f);
projectorTarget = glm::vec3(0.0f, 0.0f, 0.0f);
projectorV = glm::lookAt(projectorOrigin, // projector origin
projectorTarget, // project on object at origin
glm::vec3(0.0f, 1.0f, 0.0f) // Y axis is up
);
mModel = glm::mat4(1.0f);
...
texGenMatrix = biasMatrix * projectorP * projectorV * mModel;
invViewMatrix = glm::inverse(mModel*mModelView);
Render Cube Again
It is also unclear to me what the modelview matrix of the cube should be. Should it use the view matrix from the slide projector (as it is now) or the normal camera view? Currently the cube is rendered black (or green when debugging) in the middle of the scene view, as it would appear from the slide projector (I made a toggle hotkey so that I can see what the slide projector "sees"). The cube also moves with the view. How do I get the projection onto the cube itself?
mModel = glm::translate(projectorV, projectorOrigin);
// bind projective texture
tTextures[2].bindTexture();
// set all uniforms
...
// bind VBO data and draw
glBindVertexArray(uiVAOSceneObjects);
glDrawArrays(GL_TRIANGLES, 6, 36);
Switch between main scene camera and slide projector camera
if (useMainCam)
{
    mCurrent = glm::mat4(1.0f);
    mModelView = mModelView * mCurrent;
    mProjection = *pipeline->getProjectionMatrix();
}
else
{
    mModelView = projectorV;
    mProjection = projectorP;
}
I have solved the problem. One issue I had is that I confused the matrices in the two camera systems (world and projective texture camera). Now when I set the uniforms for the projective texture mapping part I use the correct matrices for the MVP values - the same ones I use for the world scene.
glUniformMatrix4fv(iPTMProjectionLoc, 1, GL_FALSE, glm::value_ptr(*pipeline->getProjectionMatrix()));
glUniformMatrix4fv(iPTMNormalLoc, 1, GL_FALSE, glm::value_ptr(glm::transpose(glm::inverse(mCurrent))));
glUniformMatrix4fv(iPTMModelViewLoc, 1, GL_FALSE, glm::value_ptr(mCurrent));
glUniformMatrix4fv(iTexGenMatLoc, 1, GL_FALSE, glm::value_ptr(texGenMatrix));
glUniformMatrix4fv(iInvViewMatrix, 1, GL_FALSE, glm::value_ptr(invViewMatrix));
Further, the invViewMatrix is just the inverse of the view matrix, not the modelview (this didn't change the behaviour in my case, since the model was identity, but it is wrong). For my project I only wanted to selectively render a few objects with projective textures. To do this, for each object, I must make sure that the current shader program is the one for projective textures, using glUseProgram(projectiveTextureMappingProgramID). Next, I compute the required matrices for this object:
texGenMatrix = biasMatrix * projectorP * projectorV * mModel;
invViewMatrix = glm::inverse(mView);
Coming back to the shaders, the vertex shader is correct except that I re-added the UV texture coordinates (inCoord) for the current object and stored them in texCoord.
For the fragment shader, I changed the main function to clamp the projective texture so that it doesn't repeat (I couldn't get it to work with the client-side GL_CLAMP_TO_EDGE). I am also using the default object texture and UV coordinates in case the projector does not cover the whole object (I also removed lighting from the projective texture, since it is not needed in my case):
void main (void)
{
    vec2 finalCoords = projCoords.st / projCoords.q;
    vec4 vTexColor = texture(gSampler, texCoord);
    vec4 vProjTexColor = texture(projMap, finalCoords);
    //vec4 vProjTexColor = textureProj(projMap, projCoords);
    float fDiffuseIntensity = max(0.0, dot(normalize(vNormal), -sunLight.vDirection));
    // suppress the reverse projection
    if (projCoords.q > 0.0)
    {
        // clamp the projective texture (for some reason GL_CLAMP did not work...)
        if (finalCoords.s > 0 && finalCoords.t > 0 && finalCoords.s < 1 && finalCoords.t < 1)
            //outputColor = vProjTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
            outputColor = vProjTexColor*vColor;
        else
            outputColor = vTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
    }
    else
    {
        outputColor = vTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
    }
}
If you are stuck and for some reason can not get the shaders to work, you can check out an example in "OpenGL 4.0 Shading Language Cookbook" (textures chapter) - I actually missed this until I got it working by myself.
In addition to all of the above, a great help for debugging whether the algorithm is working correctly is to draw the frustum (as wireframe) of the projective camera. I used a separate shader for frustum drawing. The fragment shader just assigns a solid color, while the vertex shader is listed below with explanations:
#version 330
// input vertex data
layout(location = 0) in vec3 vp;
uniform mat4 P;
uniform mat4 MV;
uniform mat4 invP;
uniform mat4 invMV;
void main()
{
    /* The transformed clip space position c of a
       world space vertex v is obtained by transforming
       v with the product of the projection matrix P
       and the modelview matrix MV:

       c = P * MV * v

       So, if we could solve for v, then we could
       generate vertex positions by plugging in clip
       space positions. For your frustum, one line
       would be between the clip space positions
       (-1,-1,near) and (-1,-1,far),
       the lower left edge of the frustum, for example.

       NB: If you would like to mix normalized device
       coords (x,y) and eye space coords (near,far),
       you need an additional step here. Modify your
       clip position as follows:

       c' = (c.x * c.z, c.y * c.z, c.z, c.z)

       otherwise you would need to supply both the z
       and w for c, which might be inconvenient. Simply
       use c' instead of c below.

       To solve for v, multiply both sides of the equation
       above by (P * MV)^-1. This gives

       (P * MV)^-1 * c = v

       which is equivalent to

       MV^-1 * P^-1 * c = v

       P^-1 is given by

       |(r-l)/(2n)      0           0          (r+l)/(2n) |
       |    0       (t-b)/(2n)      0          (t+b)/(2n) |
       |    0           0           0              -1     |
       |    0           0      -(f-n)/(2fn)    (f+n)/(2fn)|

       where l, r, t, b, n, and f are the parameters in the
       glFrustum() call.

       If you don't want to fool with inverting the
       model matrix, the info you already have can be
       used instead: the forward, right, and up
       vectors, in addition to the eye position.

       First, go from clip space to eye space:

       e = P^-1 * c

       Next, go from eye space to world space:

       v = eyePos - forward*e.z + right*e.x + up*e.y

       assuming x = right, y = up, and -z = forward.
    */
    vec4 fVp = invMV * invP * vec4(vp, 1.0);
    gl_Position = P * MV * fVp;
}
The uniforms are used like this (make sure you use the right matrices):
// projector matrices
glUniformMatrix4fv(iFrustumInvProjectionLoc, 1, GL_FALSE, glm::value_ptr(glm::inverse(projectorP)));
glUniformMatrix4fv(iFrustumInvMVLoc, 1, GL_FALSE, glm::value_ptr(glm::inverse(projectorV)));
// world camera
glUniformMatrix4fv(iFrustumProjectionLoc, 1, GL_FALSE, glm::value_ptr(*pipeline->getProjectionMatrix()));
glUniformMatrix4fv(iFrustumModelViewLoc, 1, GL_FALSE, glm::value_ptr(mModelView));
To get the input vertices needed for the frustum's vertex shader, you can compute the coordinates as follows (then just add them to your vertex array):
glm::vec3 ftl = glm::vec3(-1, +1, pFar); //far top left
glm::vec3 fbr = glm::vec3(+1, -1, pFar); //far bottom right
glm::vec3 fbl = glm::vec3(-1, -1, pFar); //far bottom left
glm::vec3 ftr = glm::vec3(+1, +1, pFar); //far top right
glm::vec3 ntl = glm::vec3(-1, +1, pNear); //near top left
glm::vec3 nbr = glm::vec3(+1, -1, pNear); //near bottom right
glm::vec3 nbl = glm::vec3(-1, -1, pNear); //near bottom left
glm::vec3 ntr = glm::vec3(+1, +1, pNear); //near top right
glm::vec3 frustum_coords[36] = {
    // near
    ntl, nbl, ntr, // 1 triangle
    ntr, nbl, nbr,
    // right
    nbr, ftr, ntr,
    ftr, nbr, fbr,
    // left
    nbl, ftl, ntl,
    ftl, nbl, fbl,
    // far
    ftl, fbl, fbr,
    fbr, ftr, ftl,
    // bottom
    nbl, fbr, fbl,
    fbr, nbl, nbr,
    // top
    ntl, ftr, ftl,
    ftr, ntl, ntr
};
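With those 36 vertices uploaded to a VBO, drawing the wireframe is then just a matter of (a sketch; frustumVAO is an assumed name):
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE); // draw the frustum as wireframe
glBindVertexArray(frustumVAO); // hypothetical VAO holding frustum_coords
glDrawArrays(GL_TRIANGLES, 0, 36);
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL); // restore fill mode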
After all is said and done, it's nice to see how it looks:
As you can see, I applied two projective textures: a biohazard image on Blender's Suzanne monkey head, and a smiley texture on the floor and a small cube. You can also see that the cube is partly covered by the projective texture, while the rest of it appears with its default texture. Finally, you can see the green frustum wireframe for the projector camera - and everything looks correct.

OpenGL point sprites rotation in fragment shader

I'm following this tutorial to learn something more about OpenGL and in particular point sprites. But I'm stuck on one of the exercises at the end of the page:
Try to rotate the point sprites 45 degrees by changing the fragment shader.
There are no hints about this sort of thing in the chapter, nor in the previous ones. And I didn't find any documentation on how to do it. These are my vertex and fragment shaders:
Vertex Shader
#version 140
attribute vec2 coord2d;
varying vec4 f_color;
uniform float offset_x;
uniform float scale_x;
uniform float point_size;
void main(void) {
    gl_Position = vec4((coord2d.x + offset_x) * scale_x, coord2d.y, 0.0, 1.0);
    f_color = vec4(coord2d.xy / 2.0 + 0.5, 1.0, 1.0);
    gl_PointSize = point_size;
}
Fragment Shader
#version 140
varying vec4 f_color;
uniform sampler2D texture;
void main(void) {
    gl_FragColor = texture2D(texture, gl_PointCoord) * f_color;
}
I thought about using a 2x2 matrix in the FS to rotate the gl_PointCoord, but I have no idea how to fill the matrix to accomplish it. Should I pass it directly to the FS as a uniform?
The traditional method is to pass a matrix to the shader, whether vertex or fragment. If you don't know how to fill in a rotation matrix, Google and Wikipedia can help.
The main thing you're going to run into is the simple fact that a 2D rotation is not enough. gl_PointCoord goes from [0, 1]. A pure rotation matrix rotates around the origin, which is the bottom left in point-coord space. So you need more than a pure rotation matrix.
You need a 3x3 matrix, which has part rotation and part translation. This matrix should be generated as follows (using GLM for math stuff):
glm::mat4 currMat(1.0f);
currMat = glm::translate(currMat, glm::vec3(0.5f, 0.5f, 0.0f));
currMat = glm::rotate(currMat, angle, glm::vec3(0.0f, 0.0f, 1.0f));
currMat = glm::translate(currMat, glm::vec3(-0.5f, -0.5f, 0.0f));
You then pass currMat to the shader as a 4x4 matrix. Your shader does this:
vec2 texCoord = (rotMatrix * vec4(gl_PointCoord, 0, 1)).xy;
gl_FragColor = texture2D(texture, texCoord) * f_color;
I'll leave it as an exercise for you as to how to move the translation from the fourth column into the third, and how to pass it as a 3x3 matrix. Of course, in that case, you'll do vec3(gl_PointCoord, 1) for the matrix multiply.
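For completeness, a sketch of that exercise, assuming GLM's experimental GTX 2D-transform extension is available (same translate-rotate-translate order as above; angle is in radians):
#define GLM_ENABLE_EXPERIMENTAL // required for GTX extensions in recent GLM
#include <glm/glm.hpp>
#include <glm/gtx/matrix_transform_2d.hpp>

glm::mat3 rotMatrix(1.0f);
rotMatrix = glm::translate(rotMatrix, glm::vec2(0.5f, 0.5f));
rotMatrix = glm::rotate(rotMatrix, angle); // 2D rotation about the sprite centre
rotMatrix = glm::translate(rotMatrix, glm::vec2(-0.5f, -0.5f));
// Upload with glUniformMatrix3fv; in the shader:
// vec2 texCoord = (rotMatrix * vec3(gl_PointCoord, 1.0)).xy;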
I was stuck on the same problem too, but I found a tutorial that explains how to perform a 2D texture rotation in the fragment shader itself, passing only the rotation value (vRotation).
#version 130
uniform sampler2D tex;
varying float vRotation;
void main(void)
{
    float mid = 0.5;
    vec2 rotated = vec2(cos(vRotation) * (gl_PointCoord.x - mid) + sin(vRotation) * (gl_PointCoord.y - mid) + mid,
                        cos(vRotation) * (gl_PointCoord.y - mid) - sin(vRotation) * (gl_PointCoord.x - mid) + mid);
    vec4 rotatedTexture = texture2D(tex, rotated);
    gl_FragColor = gl_Color * rotatedTexture;
}
This method may be slow, but it shows that you have an alternative: performing the 2D texture rotation inside the fragment shader instead of passing in a matrix.
Note: vRotation should be in radians.
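Since vRotation is a varying, it has to be written by the vertex shader; a minimal counterpart could look like this (the rotation uniform is an assumption):
#version 130
uniform float rotation; // hypothetical uniform, in radians
varying float vRotation;
void main(void)
{
    vRotation = rotation;
    // plus your usual position and gl_PointSize setup, e.g.:
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}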
Cheers,
You're right - a 2x2 rotation matrix will do what you want.
This page: http://www.cg.info.hiroshima-cu.ac.jp/~miyazaki/knowledge/teche31.html shows how to compute the elements. Note that you will be rotating the texture coordinates, not the vertex positions - the result will probably not be what you're expecting - it will rotate around the 0,0 texture coordinate, for example.
You may also need to multiply the point_size by 2 and shrink the gl_PointCoord by 2 to ensure the whole texture fits into the point sprite when it's rotated. But do that as a second change. Note that a straight scale of texture coordinates moves them towards the texture coordinate origin, not the middle of the sprite.
If you use a higher dimension matrix (3x3) then you will be able to combine the offset, scale and rotation into one operation.