How does data get laid out in an RGBA WebGL texture?

I'm trying to pass a list of integers to the fragment shader and need random access to any of its positions. I can't use a uniform array, since in a WebGL1 fragment shader array indices must be constant expressions, so I'm using the usual technique of passing the data through a texture.
Things seem to work, but calling texture2D to obtain specific pixels is not behaving as I'd expect.
My data looks like this:
this.textureData = new Uint8Array([
    0, 0, 0, 10, 0, 0, 0, 20, 0, 0, 0, 30, 0, 0, 0, 40,
    0, 0, 0, 50, 0, 0, 0, 60, 0, 0, 0, 70, 0, 0, 0, 80,
]);
I then copy that over through a texture:
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_WRAP_S, this.gl.CLAMP_TO_EDGE);
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_WRAP_T, this.gl.CLAMP_TO_EDGE);
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_MIN_FILTER, this.gl.NEAREST);
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_MAG_FILTER, this.gl.NEAREST);
this.gl.texImage2D(
    this.gl.TEXTURE_2D,
    0,
    this.gl.RGBA,
    4, // width: 4 pixels (4 bytes per pixel, so the 32 bytes are 8 pixels as 4x2)
    2, // height
    0,
    this.gl.RGBA,
    this.gl.UNSIGNED_BYTE,
    this.textureData);
So this texture is 4x2 pixels.
When I call texture2D(uTexture, vec2(0,0)); I get a vec4 pixel with the correct values (0,0,0,10).
However, when I call with locations such as (1,0), (2,0), (3,0), (4,0), etc., they all return a pixel with (0,0,0,30).
Same for the second row. If I call with (0,1) I get the first pixel of the second row.
Any number greater than 1 for the X coordinate returns the last pixel of the second row.
I'd expect the coordinates to be:
this.textureData = new Uint8Array([
    // (0,0)       (1,0)       (2,0)       (3,0)
    0, 0, 0, 10, 0, 0, 0, 20, 0, 0, 0, 30, 0, 0, 0, 40,
    // (0,1)       (1,1)       (2,1)       (3,1)
    0, 0, 0, 50, 0, 0, 0, 60, 0, 0, 0, 70, 0, 0, 0, 80,
]);
What am I missing? How can I correctly access the pixels?
Thanks!

Texture coordinates are not integral; they are in the range [0.0, 1.0]. They map the vertices of the geometry to a point in the texture image. The texture coordinates specify which part of the texture is placed on a specific part of the geometry, and together with the texture parameters (see gl.texParameteri) they specify how the geometry is wrapped by the texture. In general, the lower left point of the texture is addressed by the texture coordinate (0.0, 0.0) and the upper right point of the texture is addressed by (1.0, 1.0).
Texture coordinates work the same in OpenGL, OpenGL ES and WebGL. See How do opengl texture coordinates work?
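To read a specific texel in the shader you therefore have to convert its integer position to normalized coordinates yourself. A minimal sketch, assuming the 4x2 texture and the uTexture sampler from the question (the texelAt helper name is made up for illustration):
// texture2D expects normalized coordinates; add 0.5 to address the
// texel center, then divide by the texture size
uniform sampler2D uTexture;
const vec2 texSize = vec2(4.0, 2.0);

vec4 texelAt(vec2 xy) {
    return texture2D(uTexture, (xy + 0.5) / texSize);
}
For example, texelAt(vec2(1.0, 0.0)) returns the second pixel of the first row. Note that with an RGBA/UNSIGNED_BYTE texture the components come back normalized to [0, 1], so a stored byte of 20 reads as 20.0/255.0.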

Related

How to implement flat shading in OpenGL without duplicate vertices?

I am trying to render 3D prisms in LWJGL OpenGL with flat shading. For example, I have a cube indexed as follows:
I only have 8 vertices in the vertex buffer, which I have indexed as above. Is there any way to implement flat normal shading on the cube such as below? I don't want to rewrite my vertex and index buffers to include duplicate vertices if possible.
If you don't need any other attributes (e.g. texture coordinates), then there is an option to create a cube mesh with face normal vectors from only 8 vertices. Use the flat interpolation qualifier for the normal vector.
Vertex shader:
flat out vec3 surfaceNormal;
Fragment shader:
flat in vec3 surfaceNormal;
When the flat qualifier is used, the output of the vertex shader is not interpolated. The value given to the fragment shader is the attribute associated with one particular vertex of the primitive, the provoking vertex.
For a GL_TRIANGLES primitive this is either the last or the first vertex. That can be chosen by glProvokingVertex.
Choose the first vertex:
glProvokingVertex(GL_FIRST_VERTEX_CONVENTION);
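For context, here is a minimal sketch of how the two declarations fit into complete shaders (the attribute locations and the mvp uniform name are assumptions, not from the question):
Vertex shader:
#version 330 core
layout(location = 0) in vec3 inPos;
layout(location = 1) in vec3 inNormal;
uniform mat4 mvp;
flat out vec3 surfaceNormal; // flat: taken from the provoking vertex, not interpolated
void main() {
    surfaceNormal = inNormal;
    gl_Position = mvp * vec4(inPos, 1.0);
}
Fragment shader:
#version 330 core
flat in vec3 surfaceNormal;
out vec4 fragColor;
void main() {
    // simple directional diffuse term, just to visualize the flat normal
    float d = max(dot(normalize(surfaceNormal), normalize(vec3(0.5, 1.0, 0.8))), 0.0);
    fragColor = vec4(vec3(d), 1.0);
}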
For the order of the points of your cube mesh (image in the question)
front back
1 3 7 5
+---+ +---+
| | | |
+---+ +---+
0 2 6 4
you have to setup the following vertex coordinates and normal vectors:
// x y z nx, ny, nz
-1, -1, -1, 0, -1, 0, // 0, nv front
-1, -1, 1, 0, 0, 1, // 1, nv top
1, -1, -1, 0, 0, 0, // 2
1, -1, 1, 1, 0, 0, // 3, nv right
1, 1, -1, 0, 1, 0, // 4, nv back
1, 1, 1, 0, 0, 0, // 5
-1, 1, -1, 0, 0, -1, // 6, nv bottom
-1, 1, 1, -1, 0, 0, // 7, nv left
Define the indices such that the vertices 7, 3, 0, 4, 6 and 1 are the first vertex of both triangles of the left, right, front, back, bottom and top faces of the cube:
0, 2, 3, 0, 3, 1, // front
4, 6, 7, 4, 7, 5, // back
3, 2, 4, 3, 4, 5, // right
7, 6, 0, 7, 0, 1, // left
6, 4, 2, 6, 2, 0, // bottom
1, 3, 5, 1, 5, 7 // top
Draw the 12 triangle primitives (36 indices), e.g.:
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, 0);
For flat shading, it is better to use a geometry shader to compute the normals for each of the primitives. Although you can use the provoking-vertex method when rendering a cube, you cannot use it for certain geometric objects where the number of faces exceeds the number of vertices: e.g. consider the polyhedron obtained by gluing two tetrahedra at their base triangle. Such an object has 6 triangles but only 5 vertices (note that Euler's formula still holds: v-e+f = 5-9+6 = 2), so there are not enough vertices to send the face normals via the vertices. Even if there were, another reason not to use the provoking-vertex method is that it is inconvenient: you would have to find a way to enumerate the vertices such that each vertex uniquely 'represents' a single face, so that you can associate the face normal with it.
In a nutshell, just use a geometry shader; it is much simpler and, more importantly, much more robust. Not to mention that the normal calculations are done on the fly inside the GPU, rather than you having to set them up on the CPU, creating and binding the necessary buffers and defining attributes, which increases both the set-up cost and eats up the memory bandwidth between the CPU and the GPU.
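A minimal sketch of that geometry-shader approach (the worldPos name and the viewProjection uniform are assumptions; the vertex shader just forwards untransformed positions):
Vertex shader:
#version 330 core
layout(location = 0) in vec3 inPos;
out vec3 worldPos;
void main() {
    worldPos = inPos; // forward the position; projection happens in the geometry shader
}
Geometry shader:
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec3 worldPos[];
uniform mat4 viewProjection;

flat out vec3 surfaceNormal;

void main() {
    // the face normal is the cross product of two triangle edges
    vec3 n = normalize(cross(worldPos[1] - worldPos[0],
                             worldPos[2] - worldPos[0]));
    for (int i = 0; i < 3; ++i) {
        surfaceNormal = n;
        gl_Position = viewProjection * vec4(worldPos[i], 1.0);
        EmitVertex();
    }
    EndPrimitive();
}
Since every vertex of the triangle gets the same surfaceNormal here, the flat qualifier is optional, but it documents the intent and avoids needless interpolation.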

Using glDepthFunc(GL_GREATER) would not draw anything

I'm running the following code to draw rectangles using GL_GREATER function,
but instead of getting the color of the farthest rectangle from the camera, I get a white screen.
glClearColor(1, 1, 1, 1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_GREATER);
glOrtho(-1, 1, -1, 1, -1, 1);
glColor3f(1, 0, 0);
glPushMatrix();
glTranslatef(0, 0, -0.5);
glRectf(-1, -1, 1, 1);
glColor3f(0, 1, 0);
glTranslatef(0, 0, 1);
glRectf(-1, -1, 1, 1);
glColor3f(0, 0, 1);
glPopMatrix();
glRectf(-1, -1, 1, 1);
So I'm expecting to see the color of the farthest rectangle on the screen, which is green (which is also weird, because the zNear is -1 and using GL_LESS draws green instead of red - I don't understand why either).
However, using GL_GREATER I get a white screen instead of green.
What am I missing here?
By default the values in the depth buffer are in range [0, 1]. See glDepthRange.
When the depth buffer is cleared, then the depth values are set to 1 by default. See glClearDepth.
If every value in the depth buffer is 1 and the depth test is GL_GREATER, then the depth test will fail in any case, because no depth can be greater than 1.
The value used to clear the depth buffer can be changed with glClearDepth. Set the clear value to 0 instead of 1, before the buffer is cleared:
glClearColor(1, 1, 1, 1);
glClearDepth(0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
If you are flipping the comparison, you also have to flip the depth buffer clear value with glClearDepth. Set it to 0.
As for the GL_LESS result that surprised you: with glOrtho(-1, 1, -1, 1, -1, 1) the near plane lies at -1, so eye-space z is mapped to depth with a sign flip (z = +0.5 becomes depth 0.25, z = -0.5 becomes depth 0.75). The green rectangle therefore has the smallest depth and wins under GL_LESS, while red has the greatest depth and wins under GL_GREATER once the clear depth is 0.

Texel data in case of GL_LUMINANCE when glTexImage2D is called

Usually a texel is an RGBA value. What data does a texel represent in the following code:
const int TEXELS_W = 2, TEXELS_H = 2;
GLubyte texels[] = {
    100, 200, 0, 0,
    200, 250, 0, 0
};
glBindTexture(GL_TEXTURE_2D, textureId);
glTexImage2D(
    GL_TEXTURE_2D,
    0, // mipmap reduction level
    GL_LUMINANCE,
    TEXELS_W,
    TEXELS_H,
    0, // border (must be 0)
    GL_LUMINANCE,
    GL_UNSIGNED_BYTE,
    texels);
GLubyte texels[] = {
    100, 200, 0, 0,
    200, 250, 0, 0
};
OpenGL will only read 4 of these values. Because GL_UNPACK_ALIGNMENT defaults to 4, OpenGL expects each row of pixel data to start on a 4-byte boundary. So the two 0s in each row are just padding, because the person who wrote this code didn't know how to change the alignment.
So OpenGL will read 100, 200 as the first row, then skip to the next 4-byte boundary and read 200, 250 as the second row.
GL_LUMINANCE:
Each element is a single luminance value. The GL converts it to floating point, then assembles it into an RGBA element by replicating the luminance value three times for red, green, and blue and attaching 1 for alpha. Each component is then multiplied by the signed scale factor GL_c_SCALE, added to the signed bias GL_c_BIAS, and clamped to the range [0,1] (see glPixelTransfer).
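For comparison, a sketch of the tightly packed variant: setting GL_UNPACK_ALIGNMENT to 1 tells OpenGL that rows can start at any byte, so the padding zeros can be dropped.
GLubyte texels[] = {
    100, 200,
    200, 250
};
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows are tightly packed, no padding
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 2, 2, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, texels);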

Getting exact pixel from texture

I have a question about textures in OpenGL. I am trying to use them for GPGPU operations, but I am stuck at the beginning. I have created a texture like this (a 4x4 int matrix).
OGLTexImageFloat dataTexImage = new OGLTexImageFloat(4, 4, 4);
dataTexImage.setPixel(0, 0, 0, 0);
dataTexImage.setPixel(0, 1, 0, 10);
dataTexImage.setPixel(0, 2, 0, 5);
dataTexImage.setPixel(0, 3, 0, 15);
dataTexImage.setPixel(1, 0, 0, 10);
dataTexImage.setPixel(1, 1, 0, 0);
dataTexImage.setPixel(1, 2, 0, 2);
dataTexImage.setPixel(1, 3, 0, 1000);
dataTexImage.setPixel(2, 0, 0, 5);
dataTexImage.setPixel(2, 1, 0, 2);
dataTexImage.setPixel(2, 2, 0, 0);
dataTexImage.setPixel(2, 3, 0, 2);
dataTexImage.setPixel(3, 0, 0, 15);
dataTexImage.setPixel(3, 1, 0, 1000);
dataTexImage.setPixel(3, 2, 0, 2);
dataTexImage.setPixel(3, 3, 0, 0);
texture = new OGLTexture2D(gl, dataTexImage);
Now I would like to add the value at matrix position [1,1] to the value of each pixel (matrix entry). As this applies to every pixel, I should probably do it in the fragment shader. But I don't know how to get an exact pixel from the texture (the [1,1] entry of the matrix). Can someone explain how to do this?
If you are trying to add a single constant value (i.e. a value from [1,1]) to the entire image (every pixel of the rendered image), then you should pass that constant value as a separate uniform value into your shader program.
Then in the fragment shader, add this constant value to the current pixel color. The current pixel color comes as an input vec4 from your vertex shader.
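A sketch of what that fragment shader could look like (uTexture, uOffset and vTexCoord are illustrative names, not from the question; uOffset would be filled with the [1,1] value on the CPU side):
uniform sampler2D uTexture;
uniform float uOffset;  // value taken from matrix position [1,1]
varying vec2 vTexCoord;

void main() {
    // alternatively, fetch the [1,1] texel directly in the shader;
    // for a 4x4 texture its center is at (1 + 0.5) / 4:
    // float offset = texture2D(uTexture, vec2(1.5 / 4.0)).r;
    gl_FragColor = texture2D(uTexture, vTexCoord) + vec4(uOffset);
}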

Strange blending when rendering self-transparent texture to the framebuffer

I'm trying to render self-transparent textures to the framebuffer, but I'm not getting what I expected: everything previously rendered to the framebuffer gets ignored, and this texture blends with the colour I cleared my main canvas with.
That's what I would like to get, but without using framebuffers:
package test;

import com.badlogic.gdx.*;
import com.badlogic.gdx.graphics.*;
import com.badlogic.gdx.graphics.g2d.*;

public class GdxTest extends ApplicationAdapter {
    SpriteBatch batch;
    Texture img;

    @Override
    public void create () {
        batch = new SpriteBatch();
        Pixmap pixmap = new Pixmap(1, 1, Pixmap.Format.RGBA8888);
        pixmap.setColor(1, 1, 1, 1);
        pixmap.fillRectangle(0, 0, 1, 1);
        // Generating a simple 1x1 white texture
        img = new Texture(pixmap);
        pixmap.dispose();
    }

    @Override
    public void render () {
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        batch.begin();
        batch.setColor(1, 1, 1, 1);
        batch.draw(img, 0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        batch.setColor(0, 0, 0, 0.5f);
        batch.draw(img, 0, 0, 300, 300);
        batch.end();
    }
}
And it works exactly as it should:
http://i.stack.imgur.com/wpFNg.png
And that's what I get when using a framebuffer (I can't understand why the second rendered texture doesn't blend with the previous one, as it does without a framebuffer):
package test;

import com.badlogic.gdx.*;
import com.badlogic.gdx.graphics.*;
import com.badlogic.gdx.graphics.g2d.*;
import com.badlogic.gdx.graphics.glutils.*;

public class GdxTest extends ApplicationAdapter {
    SpriteBatch batch;
    Texture img;
    FrameBuffer buffer;
    TextureRegion region;

    @Override
    public void create () {
        batch = new SpriteBatch();
        Pixmap pixmap = new Pixmap(1, 1, Pixmap.Format.RGBA8888);
        pixmap.setColor(1, 1, 1, 1);
        pixmap.fillRectangle(0, 0, 1, 1);
        // Generating a simple 1x1 white texture
        img = new Texture(pixmap);
        pixmap.dispose();
        // Generating a framebuffer
        buffer = new FrameBuffer(Pixmap.Format.RGBA8888, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false);
        region = new TextureRegion(buffer.getColorBufferTexture());
        region.flip(false, true);
    }

    @Override
    public void render () {
        // Filling with red shows the problem
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        buffer.begin();
        batch.begin();
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        batch.setColor(1, 1, 1, 1);
        batch.draw(img, 0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        batch.setColor(0, 0, 0, 0.5f);
        batch.draw(img, 0, 0, 300, 300);
        batch.end();
        buffer.end();
        batch.begin();
        batch.setColor(1, 1, 1, 1);
        batch.draw(region, 0, 0);
        batch.end();
    }
}
And an unpredictable result:
http://i.stack.imgur.com/UdDKD.png
So how could I make the framebuffer version work the way the first version does? ;)
The easy answer is to disable blending when rendering to the screen.
But I think it is good to understand why this is happening if you want to use FBO. So let's walk through what's actually going on.
First make sure to understand what the color of the texture and the color of the batch (the vertex color) do: they are multiplied. So when setting the batch color to 0,0,0,0.5 and the texture pixel (texel) is 1,1,1,1, this results in a value of 1*0, 1*0, 1*0, 1*0.5 = 0,0,0,0.5.
Next make sure to understand how blending works. Blending is enabled by default and uses the SRC_ALPHA and ONE_MINUS_SRC_ALPHA functions. This means that the source value (the texel) is multiplied by the source alpha and the destination value (the screen pixel) is multiplied by one minus the source alpha. So if your screen pixel has the value 1,1,1,1 and your texel has the value 0,0,0,0.5, then the screen pixel will be set to: (0.5*0, 0.5*0, 0.5*0, 0.5*0.5) + ((1-0.5)*1, (1-0.5)*1, (1-0.5)*1, (1-0.5)*1), which is (0,0,0,0.25) + (0.5, 0.5, 0.5, 0.5) = (0.5, 0.5, 0.5, 0.75).
So let's see how that works for you in your first code:
You clear the screen with 1, 0, 0, 1, in other words: every pixel of the screen contains the value 1, 0, 0, 1.
Then you render a full rectangle with each texel value 1,1,1,1, every pixel of the screen now contains the value 1, 1, 1, 1.
Then you render a smaller rectangle with each texel value 0,0,0,0.5, every pixel on that part of the screen now contains the value 0.5, 0.5, 0.5, 0.75.
Got a feeling about the issue already? Let's see what happens in your second code:
You clear the screen with 1, 0, 0, 1: every pixel of the screen contains the value 1, 0, 0, 1.
You bind the FBO and clear it with 1, 1, 1, 1: every pixel of the FBO contains the value 1, 1, 1, 1.
You render a full rectangle with each texel value 1,1,1,1 to the FBO: every pixel of the FBO now contains the value 1,1,1,1.
You render a smaller rectangle with each texel value 0,0,0,0.5, every pixel on that part of the FBO now contains the value 0.5, 0.5, 0.5, 0.75.
Then you bind the screen again as the render target of which each pixel still contains the value 1, 0, 0, 1.
Finally you render the FBO texture as a full rectangle to the screen, causing those texels to be blended with the screen pixels. For the smaller rectangle this means blending 0.5, 0.5, 0.5, 0.75 multiplied by 0.75 with 1, 0, 0, 1 multiplied by 1-0.75=0.25, which gives 0.375, 0.375, 0.375, 0.5625 and 0.25, 0, 0, 0.25. So the final color is 0.625, 0.375, 0.375, 0.8125.
Make sure to understand this process, otherwise it can cause some frustratingly weird issues. If you find it hard to follow, take pen and paper and manually calculate the value for each step.
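In code, the "easy answer" from the top is just a small change to the final pass (a sketch using SpriteBatch's blending switch):
batch.begin();
batch.disableBlending(); // copy the FBO result 1:1 instead of blending it again
batch.setColor(1, 1, 1, 1);
batch.draw(region, 0, 0);
batch.enableBlending();
batch.end();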