I am a complete OpenGL beginner and I inherited a codebase. I want OpenGL to flip a texture vertically, meaning the top row goes to the bottom and so on. I am only doing 2D processing, if that is relevant.
My texture vertices are currently this:
const float texture_vertices[] = {
0.0, 1.0,
0.0, 0.0,
1.0, 0.0,
0.0, 1.0,
1.0, 0.0,
1.0, 1.0,
};
I tried changing the direction of the triangles and reordering them from clockwise to counter-clockwise. I know this is caused by my lack of awareness of how the very basics of OpenGL work, but I would appreciate any help (especially a short explanation of why something is the right way to reorder them).
Maybe I am going about this all wrong and the texture coordinates are not what I am interested in?
You need to flip the 2nd component (v) of the texture coordinates, i.e. swap 0 and 1. Reordering the triangles only changes their winding; it is the texture coordinates that decide which part of the image is attached to which vertex, and since v runs from 0 at one horizontal edge of the image to 1 at the other, replacing v with 1 - v mirrors the texture vertically:
const float texture_vertices[] = {
0.0, 0.0,
0.0, 1.0,
1.0, 1.0,
0.0, 0.0,
1.0, 1.0,
1.0, 0.0,
};
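For what it's worth, the general rule behind the edited array is: a vertical flip replaces every v (the 2nd component of each pair) with 1.0 - v, while u stays the same. A minimal sketch in C, assuming the coordinates live in a writable copy called tex_coords with count floats (both names are made up for illustration):
#include <stddef.h>

/* Flip texture coordinates vertically in place: v becomes 1 - v, u is untouched. */
void flip_v(float *tex_coords, size_t count)
{
    for (size_t i = 1; i < count; i += 2)
        tex_coords[i] = 1.0f - tex_coords[i];
}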
Related
Currently I am creating a mesh of a cube like this:
//bottom vertices of cube
positions.push([ 0.0, 0.0, 0.0 ]);
uvs.push([ 0.0, 0.0 ]);
positions.push([ 1.0, 0.0, 0.0 ]);
uvs.push([ 0.0, 1.0 ]);
positions.push([ 1.0, 0.0, 1.0 ]);
uvs.push([ 1.0, 1.0 ]);
positions.push([ 0.0, 0.0, 1.0 ]);
uvs.push([ 1.0, 0.0 ]);
//upper vertices of cube
positions.push([ 0.0, 1.0, 0.0 ]);
uvs.push([ 0.0, 1.0 ]);
positions.push([ 1.0, 1.0, 0.0 ]);
uvs.push([ 1.0, 1.0 ]);
positions.push([ 1.0, 1.0, 1.0 ]);
uvs.push([ 1.0, 0.0 ]);
positions.push([ 0.0, 1.0, 1.0 ]);
uvs.push([ 1.0, 1.0 ]);
(I use a texture that is 16x16 pixels.)
But I struggle to map those UV positions, because the maximum number of sides that work is 4.
I would know what to do if there were 4 vertices for each side (24 in total), but I don't want to do it like that for performance reasons.
Is there any way of doing this with only 8 vertices per cube?
A minimal cube has six sides.
Each side has TWO TRIANGLES.
So you will have TWELVE tris for a minimal cube.
Don't, repeat, DO NOT bother trying to share the verts. If for some reason you want to, do that later.
Simply ensure that each pair of tris (one square) has the correct UVs; see the sketch after the link below.
https://stackoverflow.com/a/36845398/294884
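For illustration, a sketch in C of what "four vertices per face, each face with its own UVs" looks like, shown for two of the six faces; the UV assignment and the index layout are assumptions, not taken from the question's code:
/* Two faces of the cube (bottom and top), four vertices each.
 * Every face gets its own four vertices so it can carry its own UVs;
 * the remaining four faces follow the same pattern (24 vertices total). */
static const float positions[] = {
    /* bottom face (y = 0) */
    0.0f, 0.0f, 0.0f,
    1.0f, 0.0f, 0.0f,
    1.0f, 0.0f, 1.0f,
    0.0f, 0.0f, 1.0f,
    /* top face (y = 1) */
    0.0f, 1.0f, 0.0f,
    1.0f, 1.0f, 0.0f,
    1.0f, 1.0f, 1.0f,
    0.0f, 1.0f, 1.0f,
};

static const float uvs[] = {
    /* bottom face: full 0..1 range */
    0.0f, 0.0f,
    1.0f, 0.0f,
    1.0f, 1.0f,
    0.0f, 1.0f,
    /* top face: full 0..1 range again; the duplicated vertices are what
     * make a separate UV per face possible */
    0.0f, 0.0f,
    1.0f, 0.0f,
    1.0f, 1.0f,
    0.0f, 1.0f,
};

/* Two triangles per face. */
static const unsigned short indices[] = {
    0, 1, 2,  0, 2, 3,   /* bottom */
    4, 5, 6,  4, 6, 7,   /* top    */
};
The duplicated vertices cost very little; the cube still draws in a single indexed call.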
I want to use Alpha Blending (SRC_ALPHA, ONE_MINUS_SRC_ALPHA), which basically is this:
frag_out.aaaa * frag_out + ((1.0, 1.0, 1.0, 1.0) - frag_out.aaaa) * pixel_color
Let's say the pixel color already on screen is (1.0, 1.0, 1.0, 1.0)
The color I am currently rendering is (1.0, 1.0, 1.0, 0.4)
When I render this, the resulting color on screen has an alpha of 0.76 (even though it was fully opaque before).
Indeed:
(0.4, 0.4, 0.4, 0.4) * (1.0, 1.0, 1.0, 0.4) + (0.6, 0.6, 0.6, 0.6) * (1.0, 1.0, 1.0, 1.0) = (1.0, 1.0, 1.0, 0.76)
So basically the color on the screen was opaque (white), and after I put a transparent sprite on top, the screen became transparent (0.76 alpha) and I am able to look through to the background. It would be perfect if I could blend it like this:
(0.4, 0.4, 0.4, 1.0) * (1.0, 1.0, 1.0, 0.4) + (0.6, 0.6, 0.6, 0.6) * (1.0, 1.0, 1.0, 1.0) = (1.0, 1.0, 1.0, 1.0)
Is it possible to achieve this? If yes, how?
This is perfectly possible, since the blend functions can be set separately for RGB and alpha (glBlendFuncSeparate, core since OpenGL 1.4). For your case this would be
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE);
The last parameter specifying the blend factor for destination alpha might have to be adapted since you didn't specify the behavior for destination alpha < 1. For more details have a look at the function documentation.
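As a sketch, the full state setup this implies (plain C, assuming the default GL_FUNC_ADD blend equation):
glEnable(GL_BLEND);
/* RGB: classic "over" blending; alpha: add source and destination
 * alpha, so an already opaque destination stays opaque (the result is
 * clamped to 1.0). */
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                    GL_ONE, GL_ONE);
With the numbers from the question, the destination alpha becomes 1.0 * 0.4 + 1.0 * 1.0 = 1.4, which is clamped to 1.0, so the framebuffer stays opaque.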
I see that you can pass variables from JavaScript to GLSL, but is it possible to go the other way around? Basically, I have a shader that converts a texture to three colors (red, green, and blue) based on the alpha channel:
if (texture.a == 1.0) {
    gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
}
if (texture.a == 0.0) {
    gl_FragColor = vec4(0.0, 0.0, 1.0, 1.0);
    call0nce = 1;
}
if (texture.a > 0.0 && texture.a < 1.0) {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
I have been using the colors to help visualize things better. What I'm really trying to do is pick a random point with an alpha of 1.0 and a random point with an alpha of 0.0 and output the texture coordinates of both in a way that I can access them from JavaScript. How would I go about this?
You cannot directly pass values from GLSL back to JavaScript. GLSL, at least in WebGL 1.0, can only output pixels to textures, renderbuffers, and the backbuffer.
You can, however, read those pixels back by calling gl.readPixels, so you can indirectly get data from GLSL into JavaScript by writing the values you want into the framebuffer and then reading them back.
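The readback itself is straightforward; here is a sketch in C using the desktop glReadPixels call (WebGL's gl.readPixels takes the same arguments), where w and h are assumed to be the size of the render target and the GL headers are assumed to be included:
#include <stdlib.h>

/* Read back the w x h RGBA pixels the fragment shader wrote. Whatever
 * values you encoded into gl_FragColor (e.g. packed texture
 * coordinates) come back here as bytes. */
void read_back(int w, int h)
{
    unsigned char *pixels = malloc((size_t)w * (size_t)h * 4);
    if (!pixels)
        return;
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    /* pixels[4 * (y * w + x) + 0 .. 3] is the RGBA value at (x, y). */
    free(pixels);
}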
I am not quite sure what is missing, but I loaded a uniform matrix into a vertex shader and when the matrix was:
GLfloat translation[4][4] = {
{1.0, 0.0, 0.0, 0.0},
{0.0, 1.0, 0.0, 0.0},
{0.0, 0.0, 1.0, 0.0},
{0.0, 0.2, 0.0, 1.0}};
or so, I seemed to be able to translate vertices just fine, depending on which values I chose to change. However, when I swapped a projection matrix into this same uniform, the image would not appear. I tried several matrices, such as:
GLfloat frustum[4][4] = {
{((2.0*frusZNear)/(frusRight - frusLeft)), 0.0, 0.0, 0.0},
{0.0, ((2.0*frusZNear)/(frusTop - frusBottom)), 0.0 , 0.0},
{((frusRight + frusLeft)/(frusRight-frusLeft)), ((frusTop + frusBottom) / (frusTop - frusBottom)), (-(frusZFar + frusZNear)/(frusZFar - frusZNear)), (-1.0)},
{0.0, 0.0, ((-2.0*frusZFar*frusZNear)/(frusZFar-frusZNear)), 0.0}
};
and values, such as:
const GLfloat frusLeft = -3.0;
const GLfloat frusRight = 3.0;
const GLfloat frusBottom = -3.0;
const GLfloat frusTop = 3.0;
const GLfloat frusZNear = 5.0;
const GLfloat frusZFar = 10.0;
The vertex shader, which seemed to apply translation just fine:
gl_Position = frustum * vPosition;
Any help appreciated.
The code for calculating the perspective/frustum matrix looks correct to me. This sets up a perspective matrix that assumes that your eye point is at the origin, and you're looking down the negative z-axis. The near and far values specify the range of distances along the negative z-axis that are within the view volume.
Therefore, with near/far values of 5.0/10.0, the range of z-values that are within your view volume will be from -5.0 to -10.0.
If your geometry is currently drawn around the origin, use a translation by something like (0.0, 0.0, -7.0) as your view matrix. This needs to be applied before the projection matrix.
You can either combine the view and projection matrices, or pass them separately into your vertex shader. With a separate view matrix, containing the translation above, your shader code could then look like this:
uniform mat4 viewMat;
...
gl_Position = frustum * viewMat * vPosition;
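On the C side, the separate-matrices variant could look like this sketch; the uniform names frustum and viewMat match the shader above, while program and the frustum array from the question are assumed to be in scope, and the arrays are laid out as in the question (each inner array is one column when transpose is GL_FALSE):
/* View matrix: translate the scene 7 units along the negative z-axis
 * so geometry around the origin falls between zNear = 5 and zFar = 10. */
GLfloat viewMat[4][4] = {
    {1.0, 0.0, 0.0, 0.0},
    {0.0, 1.0, 0.0, 0.0},
    {0.0, 0.0, 1.0, 0.0},
    {0.0, 0.0, -7.0, 1.0}};

GLint frustumLoc = glGetUniformLocation(program, "frustum");
GLint viewLoc = glGetUniformLocation(program, "viewMat");
glUniformMatrix4fv(frustumLoc, 1, GL_FALSE, &frustum[0][0]);
glUniformMatrix4fv(viewLoc, 1, GL_FALSE, &viewMat[0][0]);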
The first thing I see is that the z near and far planes are chosen at 5 and 10. If your vertices do not lie between these planes, you will not see anything.
The projection matrix takes everything inside that pyramid-like shape (the view frustum) and translates and scales it into the unit volume [-1, 1] in every dimension.
http://www.lighthouse3d.com/tutorials/view-frustum-culling/
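To see why the z-range matters with these exact values, here is a small sketch that multiplies the question's frustum matrix with a point; the matrix is stored as in the question, one inner array per column, so the clip-space result is the sum of the columns weighted by the point's components:
/* clip = frustum * v, with the matrix stored column by column. */
void to_clip(const GLfloat m[4][4], const GLfloat v[4], GLfloat out[4])
{
    for (int row = 0; row < 4; ++row)
        out[row] = m[0][row] * v[0] + m[1][row] * v[1]
                 + m[2][row] * v[2] + m[3][row] * v[3];
}
With frusZNear = 5 and frusZFar = 10, a vertex at the origin comes out as (0, 0, -20, 0), i.e. w = 0, so it is clipped and never shows up; a vertex at z = -7 comes out as (0, 0, 1, 7), which satisfies |z| <= w and is therefore inside the view volume, matching the advice in the answers above.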
So, I'm trying to rotate a light around a stationary object in the center of my scene. I'm well aware that I will need to use the rotation matrix in order to make this transformation occur. However, I'm unsure of how to do it in code. I'm new to linear algebra, so any help with explanations along the way would help a lot.
Basically, I'm working with these two right now and I'm not sure how to make the light orbit the object.
mat4 rotation = mat4(
vec4( cos(aTimer), 0.0, sin(aTimer), 0.0),
vec4( 0, 1.0, 0.0, 0.0),
vec4(-sin(aTimer), 0.0, cos(aTimer), 0.0),
vec4( 0.0, 0.0, 0.0, 1.0)
);
and this is how my light is set up :
float lightPosition[4] = {5.0, 5.0, 1.0, 0};
glLightfv(GL_LIGHT0, GL_POSITION, lightPosition);
The aTimer in this code is a constantly incrementing float.
Even though you want the light to rotate around your object, you must not use a rotation matrix for this purpose, but a translation matrix.
The matrix you're handling is the model matrix. It defines the orientation, the position, and the scale of your object.
The matrix you have here is a rotation matrix, so it changes the orientation of the light but not its position, and the position is what you actually want to change.
So there are two problems to fix here:
1. Define your matrix properly. Since you want a (circular) translation, I think this is the matrix you need:
mat4 rotation = mat4(
vec4( 1.0, 0.0, 0.0, 0.0),
vec4( 0.0, 1.0, 0.0, 0.0),
vec4( 0.0, 0.0, 1.0, 0.0),
vec4( cos(aTimer), sin(aTimer), 0.0, 1.0)
);
2. Define a good position vertex for your light. Since it's a single vertex and it's the job of the model matrix (above) to move the light, the 4D light vector should be:
float lightPosition[4] = {0.0f, 0.0f, 0.0f, 1.0f};
// In C, 0.0 is a double; you may get compiler warnings about loss of precision, so use the suffix "f".
The fourth component must be 1, since that is what makes translations possible (with w = 0 the light would be directional and unaffected by translation).
You may find additional information here:
Model matrix in 3D graphics / OpenGL
However, they are using column vectors. Judging from your rotation matrix, I believe you use row vectors, so the translation components are in the last row, not the last column, of the model matrix.
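For the fixed-function path shown in the question (glLightfv), the same effect can be had by applying that translation on the CPU and re-sending the light position every frame; a sketch, where the orbit radius is a made-up scale factor and the usual GL and math headers are assumed:
#include <math.h>

/* Called once per frame; aTimer keeps increasing. This is what the
 * translation matrix above does to the point (0, 0, 0, 1), scaled by an
 * assumed orbit radius. */
void update_light(float aTimer)
{
    const float radius = 5.0f;           /* assumed orbit radius */

    GLfloat lightPosition[4] = {
        radius * cosf(aTimer),           /* x = r * cos(t) */
        radius * sinf(aTimer),           /* y = r * sin(t) */
        0.0f,
        1.0f                             /* w = 1: positional light */
    };
    glLightfv(GL_LIGHT0, GL_POSITION, lightPosition);
}
Note that the fixed-function pipeline transforms GL_POSITION by the modelview matrix that is current when glLightfv is called, so call it with the view transform (and nothing else) on the matrix stack.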