glTexImage2D() parameters and generating framebuffers - opengl

According to the OpenGL reference pages, glTexImage2D's signature looks like this:
void glTexImage2D( GLenum target,
GLint level,
GLint internalformat,
GLsizei width,
GLsizei height,
GLint border,
GLenum format,
GLenum type,
const GLvoid * data);
As far as I know, the last three parameters tell the function how to interpret the image data given by const GLvoid * data.
Meanwhile, I was studying the topic of framebuffers. In the 'Floating point framebuffers' section of https://learnopengl.com/Advanced-Lighting/HDR, the writer creates a framebuffer's color attachment like this:
glBindTexture(GL_TEXTURE_2D, colorBuffer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGBA, GL_FLOAT, NULL);
My question is, why did he give GL_FLOAT as the parameter for GLenum type? If const GLvoid * data is NULL anyway, is there a need to use GL_FLOAT? I first thought it was something related to GL_RGBA16F, but 16 bits is 2 bytes and floats are 4 bytes, so I guess it's not related at all.
Furthermore, before this tutorial, the writer used to make color attachments like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
In this case, why did he use GL_UNSIGNED_BYTE for the GLenum type parameter?

Whether the last parameter is NULL or not (assuming PBOs aren't employed), the format and the type parameters must always be legal values relative to the internalformat parameter.
It is not strictly necessary that the pixel transfer format and type parameters exactly match the internalformat. But they do have to be compatible with it, in accordance with the rules on pixel transfer compatibility. In the specific cases you cite, the format and type values are all compatible with either of the internalformats used. So you could in theory do this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
That being said, you shouldn't, for two reasons:
You shouldn't write code that you wouldn't want executed for real. Having an implementation manually convert from unsigned normalized bytes into 16-bit floats kills performance. Even if the conversion isn't actually going to happen because the pointer is NULL, you still would never want it to. So write the code that makes sense.
You have to remember all of the rules of pixel transfer compatibility. If you get them wrong and ask for an illegal conversion, you get an OpenGL error and thus broken code. If you're not in the habit of always thinking about what your format and type parameters are, it's really easy to switch to integer textures and get an immediate error because you didn't use one of the _INTEGER formats. Whereas if you're always thinking about the pixel transfer parameters that represent the internalformat you're actually using, you'll never encounter that error.
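As a sketch (reusing the placeholder dimensions from above), keeping the transfer parameters in line with the internalformat looks like this, and switching to an integer internalformat naturally brings the _INTEGER pixel format with it:
// Float color attachment: the transfer parameters describe float data
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGBA, GL_FLOAT, NULL);
// Integer texture: a non-_INTEGER format such as GL_RGBA here would be an error
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32I, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGBA_INTEGER, GL_INT, NULL);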

Related

glTexImage2D null data pointer and memory initialization

Let's assume that a 2D texture is defined using glTexImage2D with a null data pointer. Does passing the null data pointer zero the memory, or may I get some garbage data when a fragment shader samples that texture?
vec3 color = texture(texture0, texCoord).xyz;
The documentation clearly states:
data may be a null pointer. In this case, texture memory is allocated to accommodate a texture of width width and height height. ... The image is undefined if the user tries to apply an uninitialized portion of the texture image to a primitive.
So yes -- you will get garbage. Keep in mind that zeroes are also valid garbage, so don't use that as proof that it "works".
The easiest way to clear a texture to all-zeros is with glClearTexImage:
glClearTexImage(texture, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
For this function data == 0 actually means "all zeroes".
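glClearTexImage requires OpenGL 4.4 (or ARB_clear_texture). If that's not available, one common fallback, sketched here with hypothetical width, height and texture variables, is to upload a zero-filled buffer yourself:
std::vector<GLubyte> zeros(width * height * 4, 0); // RGBA, one byte per channel, all zeroes
glBindTexture(GL_TEXTURE_2D, texture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, zeros.data());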

How to send an array of int to my shader

I'm making a voxel engine and I can render a chunk. I'm using instanced rendering, meaning that I can render the whole chunk with a single draw call. Every block of a chunk has a single int (from 0 to 4095) that defines its block type (0 for air, 1 for dirt, etc.). I want to be able to render my blocks by applying the right texture in my fragment shader. My chunk contains a three-dimensional array:
uint8_t blocks[16][16][16]
The problem is that I can't find a way to send my array of ints to the shader. I tried using a VBO, but it made no sense (I didn't get any result). I also tried to send my array with glUniform1iv(), but I failed.
Is it possible to send an array of ints to a shader with glUniformX()?
To avoid storing that much data, can I pass a byte (uint8_t) instead of an int with glUniformX()?
Is there a good way to send that much data to my shader?
Is instanced drawing a good way to draw the same model with different textures/types of blocks?
For all intents and purposes, data of this type should be treated like texture data. This doesn't mean literally uploading it as texture data, but rather that that's the frame of thinking you should be using when considering how to transfer it.
Or, in more basic terms: don't try to pass this data as uniform data.
If you have access to OpenGL 4.3+ (which is a reasonably safe bet for most hardware no older than 6-8 years), then Shader Storage Buffers are going to be the most laconic solution:
//GLSL:
layout(std430, binding = 0) buffer terrainData
{
    int data[16][16][16];
};

void main() {
    int terrainType = data[voxel.x][voxel.y][voxel.z];
    //Do whatever
}

//HOST:
struct terrain_data {
    int data[16][16][16];
};
//....
terrain_data data = get_terrain_data();
GLuint ssbo;
GLuint binding = 0; //Should be equal to the binding specified in the shader code
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(terrain_data), data.data, GL_DYNAMIC_DRAW); //usage hint: GL_DYNAMIC_DRAW if the data will change
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, binding, ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
At any point after this where you need to update the data, simply bind ssbo, call glBufferData (or your preferred method for updating buffer data), and you're good to go.
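For instance, when only the contents change (not the size), glBufferSubData over the buffer created above is one straightforward option (a sketch):
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, sizeof(terrain_data), data.data); //overwrite the whole block in place
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);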
If you're limited to older hardware, you do have some options, but they quickly get clunky:
You can use Uniform Buffers, which behave very similarly to Shader Storage Buffers (see the sketch after this list), but
Have limited storage space (typically 64 KB, i.e. 65,536 bytes, in most implementations)
Have other restrictions that may or may not be relevant to your use case
You can use textures directly, where you convert the terrain data to floating point values (or use as integers, if the hardware supports integer formats internally), and then convert back inside the shader
Compatible with almost any hardware
But requires extra complexity and calculations in your shader code
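A minimal GLSL sketch of the Uniform Buffer variant (block and function names are placeholders). Since std140 gives scalar array elements a 16-byte stride, it pays to pack four values into each ivec4:
//GLSL:
layout(std140) uniform terrainData //assign the block's binding from the host with glUniformBlockBinding()/glBindBufferBase()
{
    ivec4 packedData[16 * 16 * 16 / 4]; //4096 ints packed as 1024 ivec4s = 16 KB
};

int fetchTerrain(int index)
{
    return packedData[index / 4][index % 4];
}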
I do second the approach laid out in @Xirema's answer, but come to a slightly different recommendation. Since your original data type is just uint8_t, using an SSBO or UBO directly will require you to either waste 3 bytes per element or manually pack 4 elements into a single uint. From @Xirema's answer:
For all intents and purposes, data of this type should be treated like texture data. This doesn't mean literally uploading it as texture data, but rather that that's the frame of thinking you should be using when considering how to transfer it.
I totally agree with that. Hence I recommend the use of a Texture Buffer Object (TBO, a.k.a. "Buffer Texture").
Using glTexBuffer() you can basically re-interpret a buffer object as a texture. In your case, you can just pack the uint8_t[16][16][16] array into a buffer and interpret it as GL_R8UI "texture" format, like this:
//GLSL:
uniform usamplerBuffer terrainData;

void main() {
    uint terrainType = texelFetch(terrainData, voxel.z * (16*16) + voxel.y * 16 + voxel.x).r;
    //Do whatever
}

//HOST:
struct terrain_data {
    uint8_t data[16][16][16];
};
//....
terrain_data data = get_terrain_data();
GLuint tbo;
GLuint tex;
glGenBuffers(1, &tbo);
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
glBufferData(GL_TEXTURE_BUFFER, sizeof(terrain_data), data.data, GL_DYNAMIC_DRAW); //usage hint: GL_DYNAMIC_DRAW if the data will change
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_BUFFER, tex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_R8UI, tbo);
Note that this will not copy the data to some texture object. Accessing the texture means directly accessing the memory of the buffer.
TBOs also have the advantage that they have been available since OpenGL 3.1.
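At draw time the buffer texture is bound like any other texture; a usage sketch (program and the choice of texture unit 0 are assumptions):
glUseProgram(program);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_BUFFER, tex);
glUniform1i(glGetUniformLocation(program, "terrainData"), 0); //point the usamplerBuffer at texture unit 0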

glReadPixels usage with glPixelStore

I looked at multiple tutorials about glReadPixels but I'm confused:
void glReadPixels(GLint x, GLint y, GLsizei width, GLsizei height, GLenum format, GLenum type, GLvoid * data)
The last argument is a void pointer?
I saw tutorials where they declared the argument as a vector, unsigned char, GLubyte, ...
But what does it really mean?
And do you need to call glPixelStoref( , )?
A void* is C/C++ speak for "pointer to block of memory". The purpose of glReadPixels is to take some part of the framebuffer and write that pixel data into your memory. The data parameter is the "your memory" that it writes into.
Exactly what it writes and how much depends on the pixel transfer parameters, format and type. That's why it takes a void*; because it could be writing an array of bytes, an array of ints, an array of floats, etc. It all depends on what those two parameters say.
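For example, reading an RGBA, 8-bits-per-channel region into a std::vector might look like this (a sketch; width and height are assumed to be the size of the region you want back):
std::vector<GLubyte> pixels(width * height * 4); // 4 bytes per pixel for GL_RGBA / GL_UNSIGNED_BYTE
glPixelStorei(GL_PACK_ALIGNMENT, 1);             // the glPixelStore part: make sure returned rows aren't padded
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());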

OpenGL type mismatches

Why are there mismatching types in OpenGL?
For example, if I have a vertex buffer object,
GLuint handle = 0;
glGenBuffers(1, &handle); // this function takes (GLsizei, GLuint*)
Now if I want to know the currently bound buffer
glGetIntegerv( GL_ARRAY_BUFFER_BINDING, reinterpret_cast<GLint *>(&handle ) ); // ouch, type mismatch
Why not have a glGetUnsignedIntegerv, or
have glGenBuffers take a GLint * instead?
That is because the glGetIntegerv function is intended to get any integral type of information back from OpenGL. That includes GLint values (negative ones), and also multi-component values like GL_VIEWPORT:
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
From one point of view - it is simpler to have just one function for getting values back, instead of hundreds for each specific parameter.
From the other point of view - of course it's a bit ugly to cast types.
But I have no idea why they didn't use GLint for buffer IDs.
Anyway - you shouldn't be calling any glGet... functions. They are slow and often require waiting for the GPU to complete previous commands - meaning the CPU will sit idle during that time.
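If the cast bothers you, one common pattern is a tiny wrapper that hides it (a sketch; this helper is not part of any GL header):
GLuint GetBoundObject(GLenum pname)
{
    GLint value = 0;
    glGetIntegerv(pname, &value); // in practice generated object names round-trip through GLint just fine
    return static_cast<GLuint>(value);
}
// GLuint bound = GetBoundObject(GL_ARRAY_BUFFER_BINDING);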

What is the size of GLSL boolean

There is a bool type for shader variables I'd like to use, but I couldn't find what size it has. This matters because when setting up a vertex attribute pointer I specify the type of the data, which can be
GL_BYTE,
GL_UNSIGNED_BYTE,
GL_SHORT,
GL_UNSIGNED_SHORT,
GL_INT,
GL_UNSIGNED_INT,
GL_FLOAT, or
GL_DOUBLE
In C++, bool generally should have the same size as a 4-byte int, but can I assume the same for GLSL, or does it have only 1 byte?
This matters because when setting up vertex attribute pointer I specify the type of data which can be
It's irrelevant, since vertex attributes cannot be booleans. From the GLSL 3.30 specification:
Vertex shader inputs can only be float, floating-point vectors, matrices, signed and unsigned integers and integer vectors. Vertex shader inputs can also form arrays of these types, but not structures.
Booleans are not on that list.
However, if you want to know what the size of a GLSL bool is in terms of uniform blocks, it has the same size as uint: 32-bits.
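As a sketch of what that means in practice (names are hypothetical): a std140 block containing a single bool is filled from the host with a 32-bit value:
//GLSL:
layout(std140) uniform Settings
{
    bool useLighting; //occupies 4 bytes under std140, just like a uint
};
//HOST:
GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
GLuint useLighting = GL_TRUE; //write the bool as a 32-bit integer
glBufferData(GL_UNIFORM_BUFFER, sizeof(useLighting), &useLighting, GL_STATIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo); //block bindings default to 0; set explicitly with glUniformBlockBinding() in real code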