glReadPixels usage with glPixelStore - c++

I looked at multiple tutorials about glReadPixels but I'm confused:
void glReadPixels(GLint x, GLint y, GLsizei width, GLsizei height, GLenum format, GLenum type, GLvoid * data)
The last argument is a void*?
I saw tutorials where they declared the argument as a vector, an unsigned char array, a GLubyte array, ...
But what does it really mean?
And do you need to call glPixelStoref( , )?

A void* is C/C++ speak for "pointer to a block of memory". The purpose of glReadPixels is to take part of the framebuffer and write that pixel data into your memory. The data parameter is the "your memory" it writes into.
Exactly what it writes and how much depends on the pixel transfer parameters, format and type. That's why it takes a void*: it could be writing an array of bytes, an array of ints, an array of floats, etc. It all depends on what those two parameters say.

Related

glTexImage2D() parameters and generating framebuffers

According to the OpenGL reference pages, glTexImage2D's signature looks like this:
void glTexImage2D( GLenum target,
GLint level,
GLint internalformat,
GLsizei width,
GLsizei height,
GLint border,
GLenum format,
GLenum type,
const GLvoid * data);
As far as I know, the last 3 parameters tell the function how to interpret the image data given by const GLvoid * data.
Meanwhile, I was studying the topic of framebuffers. In the section called 'Floating point framebuffers' of https://learnopengl.com/Advanced-Lighting/HDR, the writer creates a framebuffer's color attachment like this:
glBindTexture(GL_TEXTURE_2D, colorBuffer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGBA, GL_FLOAT, NULL);
My question is, why did he give GL_FLOAT as the parameter for GLenum type? If const GLvoid * data is NULL anyway, is there a need to use GL_FLOAT? I first thought it's something related to GL_RGBA16F, but 16 bits is 2 bytes and floats are 4 bytes, so I guess it's not related at all.
Furthermore, before this tutorial, the writer used to make color attachments like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
In this case, why did he use GL_UNSIGNED_BYTE for the GLenum type parameter?
Whether the last parameter is NULL or not (assuming PBOs aren't employed), the format and the type parameters must always be legal values relative to the internalformat parameter.
That said, it is not strictly necessary that the pixel transfer format and type parameters exactly match the internalformat. But they do have to be compatible with it, in accordance with the rules on pixel transfer compatibility. In the specific cases you cite, the format and type values are all compatible with either of the internalformats used. So you could in theory do this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
That being said, you shouldn't, for two reasons:
You shouldn't write code that you wouldn't want executed for real. Having an implementation manually convert from unsigned normalized bytes into 16-bit floats kills performance. Even if it's not actually going to do that conversion because the pointer is NULL, you still would never want it to actually happen. So write the code that makes sense.
You have to remember all of the rules of pixel transfer compatibility. If you get them wrong and ask for an illegal conversion, you get an OpenGL error and thus broken code. If you're not used to always thinking about what your format and type parameters are, it's really easy to switch to integer textures and get an immediate error because you didn't use one of the _INTEGER formats. Whereas if you're always thinking about the pixel transfer parameters that represent the internalformat you're actually using, you'll never encounter that error.

How to send an array of int to my shader

I'm making a voxel engine and I can render a chunk. I'm using instanced rendering, meaning that I can render the whole chunk with a single draw call. Every block of a chunk has a single int (from 0 to 4095) that defines its block type (0 for air, 1 for dirt, etc...). I want to be able to render my blocks by applying the right texture in my fragment shader. My chunk contains a three-dimensional array:
uint8_t blocks[16][16][16]
The problem is that I can't find a way to send my array of ints to the shader. I tried using a VBO, but that made no sense (I didn't get any result). I also tried to send my array with glUniform1iv(), but I failed.
Is it possible to send an array of int to a shader with glUniformX()?
To avoid storing big data, can I send a byte (uint8_t) instead of an int with glUniformX()?
Is there a good way to send that much data to my shader?
Is instanced drawing a good way to draw the same model with different textures/types of blocks?
For all intents and purposes, data of this type should be treated like texture data. This doesn't mean literally uploading it as texture data, but rather that that's the frame of thinking you should be using when considering how to transfer it.
Or, in more basic terms: don't try to pass this data as uniform data.
If you have access to OpenGL 4.3+ (which is a reasonably safe bet for most hardware no older than 6-8 years), then Shader Storage Buffers are going to be the most laconic solution:
//GLSL:
layout(std430, binding = 0) buffer terrainData
{
    int data[16][16][16];
};

void main() {
    int terrainType = data[voxel.x][voxel.y][voxel.z];
    //Do whatever
}
//HOST:
struct terrain_data {
    int data[16][16][16];
};
//....
terrain_data data = get_terrain_data();
GLuint ssbo;
GLuint binding = 0; //Should be equal to the binding specified in the shader code
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(terrain_data), data.data, GL_STATIC_DRAW); //usage hint of your choice
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, binding, ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
At any point after this where you need to update the data, simply bind ssbo again, call glBufferData (or your preferred method for updating buffer data), and you're good to go.
If you're limited to older hardware, you do have some options, but they quickly get clunky:
You can use Uniform Buffers, which behave very similarly to Shader Storage Buffers, but
Have limited storage space (64 KiB in most implementations)
Have other restrictions that may or may not be relevant to your use case
You can use textures directly, where you convert the terrain data to floating point values (or use as integers, if the hardware supports integer formats internally), and then convert back inside the shader
Compatible with almost any hardware
But requires extra complexity and calculations in your shader code
I do second the approach laid out in @Xirema's answer, but come to a slightly different recommendation. Since your original data type is just uint8_t, using an SSBO or UBO directly would require either wasting 3 bytes per element or manually packing 4 elements into a single uint. From @Xirema's answer:
For all intents and purposes, data of this type should be treated like texture data. This doesn't mean literally uploading it as texture data, but rather that that's the frame of thinking you should be using when considering how to transfer it.
I totally agree with that. Hence I recommend the use of a Texture Buffer Object (TBO), a.k.a. "Buffer Texture".
Using glTexBuffer() you can basically re-interpret a buffer object as a texture. In your case, you can just pack the uint8_t[16][16][16] array into a buffer and interpret it as GL_R8UI "texture" format, like this:
//GLSL:
uniform usamplerBuffer terrainData;

void main() {
    uint terrainType = texelFetch(terrainData, voxel.z * (16*16) + voxel.y * 16 + voxel.x).r;
    //Do whatever
}
//HOST:
struct terrain_data {
    uint8_t data[16][16][16];
};
//....
terrain_data data = get_terrain_data();
GLuint tbo;
GLuint tex;
glGenBuffers(1, &tbo);
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
glBufferData(GL_TEXTURE_BUFFER, sizeof(terrain_data), data.data, GL_STATIC_DRAW); //usage hint of your choice
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_BUFFER, tex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_R8UI, tbo);
Note that this will not copy the data to some texture object. Accessing the texture means directly accessing the memory of the buffer.
TBOs also have the advantage that they have been available since OpenGL 3.1.

Writing to a floating point OpenGL texture in CUDA via a surface

I'm writing an OpenGL/CUDA (6.5) interop application. I get a compile time error trying to write a floating point value to an OpenGL texture through a surface reference in my CUDA kernel.
Here I give a high level description of how I set up the interop, but I am successfully reading from my texture in my CUDA kernel, so I believe this is done correctly. I have an OpenGL texture declared with
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB32F_ARB, 512, 512, 0, GL_RGB, GL_FLOAT, NULL);
After creating the texture I call cudaGraphicsGLRegisterImage with cudaGraphicsRegisterFlagsSurfaceLoadStore set. Before running my CUDA kernel, I unbind the texture and call cudaGraphicsMapResources on the cudaGraphicsResource pointers obtained from cudaGraphicsGLRegisterImage. Then I get a cudaArray from cudaGraphicsSubResourceGetMappedArray, create an appropriate resource descriptor for the array, and call cudaCreateSurfaceObject to get a pointer to a cudaSurfaceObject_t. I then call cudaMemcpy with cudaMemcpyHostToDevice to copy the cudaSurfaceObject_t to a buffer on the device allocated by cudaMalloc.
In my CUDA kernel I can read from the surface reference with something like this, and I have verified that this works as expected.
__global__ void cudaKernel(cudaSurfaceObject_t tex) {
    int x = blockIdx.x*blockDim.x + threadIdx.x;
    int y = blockIdx.y*blockDim.y + threadIdx.y;
    float4 sample = surf2Dread<float4>(tex, (int)sizeof(float4)*x, y, cudaBoundaryModeClamp);
    // ...
}
In the kernel I want to modify sample and write it back to the texture. The GPU has compute capability 5.0, so this should be possible. I am trying this
surf2Dwrite<float4>(sample, tex, (int)sizeof(float4)*x, y, cudaBoundaryModeClamp);
But I get the error:
error: no instance of overloaded function "surf2Dwrite" matches the argument list
argument types are: (float4, cudaSurfaceObject_t, int, int, cudaSurfaceBoundaryMode)
I can see in
cuda-6.5/include/surface_functions.h
that there are only prototypes for integral versions of surf2Dwrite that accept a void * for the second argument. I do see prototypes for surf2Dwrite which accept a float4 with a templated surface object. However, I'm not sure how I could declare a templated surface object with OpenGL interop. I haven't been able to find anything else on how to do this. Any help is appreciated. Thanks.
It turns out the answer was pretty simple, though I don't know why it works. Instead of calling
surf2Dwrite<float4>(sample, tex, (int)sizeof(float4)*x, y, cudaBoundaryModeClamp);
I needed to call
surf2Dwrite(sample, tex, (int)sizeof(float4)*x, y, cudaBoundaryModeClamp);
To be honest I'm not sure I fully understand CUDA's use of templating in C++. Anyone have an explanation?
For a complete example of CUDA writing to a surface that's linked to an OpenGL texture, refer to this project:
https://github.com/nvpro-samples/gl_cuda_interop_pingpong_st
From the CUDA documentation, here are the declarations of the surf2Dread template functions:
template<class T>
T surf2Dread(cudaSurfaceObject_t surfObj,
             int x, int y,
             boundaryMode = cudaBoundaryModeTrap);

template<class T>
void surf2Dread(T* data,
                cudaSurfaceObject_t surfObj,
                int x, int y,
                boundaryMode = cudaBoundaryModeTrap);

OpenGL big "structured" buffer

I need to access a big (~2 MB) one-dimensional buffer from a shader. However, I don't know which type of OpenGL buffer I should use. I'm going to store floats (16F) and unsigned integers (16UI). My data will be like a struct:
struct {
    float d;  // 16F
    int a[7]; // or a1,a2,a3,a4,a5,a6,a7; 7 x 16UI
};
I read about buffer textures and other kinds of buffers (Passing a list of values to fragment shader), but those only work for one type of data (float or int), not both. I could use two buffers, but I think that would be neither cache friendly nor easy.

OpenGL type mismatches

Why are there mismatching types in OpenGL?
For example, if I have a vertex buffer object,
GLuint handle = 0;
glGenBuffers(1, &handle); // this function takes (GLsizei, GLuint*)
Now if I want to know the currently bound buffer
glGetIntegerv( GL_ARRAY_BUFFER_BINDING, reinterpret_cast<GLint *>(&handle ) ); // ouch, type mismatch
Why not have a glGetUnsignedIntegerv, or
have glGenBuffers take a GLint * instead?
That is because the glGetIntegerv function is intended to get any integral type of information back from OpenGL. That includes GLint-typed values (which can be negative), and also multi-component values like GL_VIEWPORT:
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
From one point of view, it is simpler to have just one function for getting values back, instead of hundreds for each specific parameter.
From the other point of view, of course it's a bit ugly to cast types.
But I have no idea why they didn't use GLint for buffer ids.
Anyway - you shouldn't be calling any glGet... functions in hot paths. They are slow and often require waiting for the GPU to complete previous commands, meaning the CPU sits idle in that time.