Can OpenGL convert integer pixel data to floating point or UNORM?

glTexImage2D takes internalFormat (which specifies the number of bits and the data type/encoding), format (which specifies the components, but not their size or encoding), and type.
Is it possible, for example, to have OpenGL convert passed pixel data containing 32-bit integers, given with format GL_RGB_INTEGER and type GL_INT, to the internal format GL_RGB32F?
The wiki article https://www.khronos.org/opengl/wiki/Pixel_Transfer#Format_conversion suggests to me it's possible by stating:
Pixels specified by the user must be converted between the user-specified format (with format​ and type​) and the internal representation controlled by the image format of the image.
But I wasn't able to read the data through a floating-point sampler in the shader.
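For concreteness, a minimal sketch of the kind of upload in question (width, height, and intPixels are placeholders, not from the original post):
// Attempted upload: integer pixel transfer into a float internal format.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, width, height, 0,
             GL_RGB_INTEGER, GL_INT, intPixels);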

The _INTEGER pixel transfer formats are only to be used for transferring data to integer image formats. You are filling a floating-point texture, so that doesn't qualify. You should have gotten an OpenGL error.
Indeed, the very article you linked to spells this out:
Also, if "_INTEGER" is specified but the image format is not integral, then the transfer fails.
GL_RGB32F is not an integral image format.
If you remove the _INTEGER part, then the pixel transfer will "work". OpenGL will assume that the integer data are normalized values, and therefore you will get floating-point values in the range [-1, 1]. That is, if you pass a 32-bit integer value of 1, the corresponding floating-point value will be 1/(2^31 - 1), which is a very small number (and thus almost certainly just 0.0).
If you want OpenGL to cast the integer as if by a C cast (float)1... well, there's actually no way to do that. You'll just have to convert the data yourself.
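A minimal sketch of that manual conversion, assuming tightly packed 32-bit int RGB data (the helper name and layout are assumptions, not from the answer):
#include <cstddef>
#include <vector>

void upload_int_data_as_float(GLsizei width, GLsizei height, const GLint* src)
{
    // Convert each integer component to float with a plain value cast (1 -> 1.0f).
    std::vector<GLfloat> converted(static_cast<std::size_t>(width) * height * 3);
    for (std::size_t i = 0; i < converted.size(); ++i)
        converted[i] = static_cast<GLfloat>(src[i]);

    // Upload the already-converted data; format/type now describe float input.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, width, height, 0,
                 GL_RGB, GL_FLOAT, converted.data());
}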

Related

Clearing a double-precision buffer in OpenGL

Is there a fast way to clear an OpenGL buffer with a double-precision data type or set a default value with an API call to avoid using a compute shader?
For half- and single-precision types, glClearBufferData/glClearNamedBufferData can be used, but it appears that there is no internal-format enum for 64-bit types, which makes the switch from single- to double-precision data in scientific-computing applications more cumbersome. Or am I missing an extension?
I am looking for a solution that works with OpenGL 4.6, Nvidia-specific extensions are fine.
At the end of the day, a "double" is just a way of interpreting 64 bits of data. Your goal is to get the right 64 bits into your buffer.
As far as buffer clearing is concerned, the image format and pixel transfer parameters are just an explanation of how to interpret the data you pass. If the internal format of the clearing operation is GL_RG32UI, then each "pixel" in the buffer is 64 bits of data.
Given that, all you need to do is to get the clearing function to take a block of 64 bits and copy it exactly as you provide it. To do this, you have to use the right pixel transfer parameters.
See, pixel transfer operations can perform data conversion, taking the data pointer you pass and converting it to match the internal format. You don't want that; you want a direct copy. So your pixel transfer parameters need to exactly match the internal format. Which is quite easy.
A format of GL_RG_INTEGER represents a two-component pixel that stores integer data, in red-green order. And a type of GL_UNSIGNED_INT means that each component is a 32-bit unsigned integer. This exactly matches the internal format of GL_RG32UI, so the copying algorithm won't mess with the bytes of your data.
So, given some 64-bit double value in C or C++, clearing a buffer to that double ought to be as simple as:
void clear_buffer_to_double(GLuint buffer, double dbl)
{
    // GL_RG_INTEGER + GL_UNSIGNED_INT exactly matches GL_RG32UI, so the 64 bits
    // of the double are copied into every "pixel" of the buffer unchanged.
    glClearNamedBufferData(buffer, GL_RG32UI, GL_RG_INTEGER, GL_UNSIGNED_INT, &dbl);
}

What's the use case of glTexParameterIiv and glTexParameterIuiv?

The OpenGL documentation says very little about these two functions. When would it make sense to use glTexParameterIiv instead of glTexParameteriv, or even glTexParameterfv?
If the values for GL_TEXTURE_BORDER_COLOR are specified with glTexParameterIiv or glTexParameterIuiv, the values are stored unmodified with an internal data type of integer. If specified with glTexParameteriv, they are converted to floating point with the following equation: f = (2c + 1) / (2^b - 1). If specified with glTexParameterfv, they are stored unmodified as floating-point values.
You sort of answered your own question with the snippet you pasted. Traditional textures are fixed-point (unsigned normalized, where values like 255 are converted to 1.0 through normalization), but GL 3.0 introduced integral (signed / unsigned integer) texture types (where integer values stay integers).
If you had an integer texture and wanted to assign a border color (for use with the GL_CLAMP_TO_BORDER wrap mode), you would use one variant of those two functions (depending on whether you want signed or unsigned).
You cannot filter integer textures, but you can still have texture coordinate wrap behavior. Since said textures are integer and glTexParameteriv (...) normalizes the color values it is passed, an extra function had to be created to keep the color data integer.
You will find this same sort of thing with glVertexAttribIPointer (...) and so forth; adding support for integer data (as opposed to simply converting integer data to floating-point) to the GL pipeline required a lot of new commands.
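For instance, a minimal sketch of setting an integer border color on a signed-integer texture (the texture format and the color values here are assumptions):
// Assume an integer texture such as GL_R32I is bound to GL_TEXTURE_2D.
GLint borderColor[4] = { -1, 0, 0, 0 };
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
// Stores the values as integers instead of normalizing them:
glTexParameterIiv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, borderColor);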

What is the need for integer and SNORM textures?

What is the advantage of, or need for, integer and SNORM textures? When do we use the [-1, 1] range for textures?
Are there any examples where we specifically need integer or SNORM textures?
According to https://www.opengl.org/registry/specs/EXT/texture_snorm.txt,
signed normalized integer texture formats are used to represent floating-point values in the range [-1.0, 1.0], with an exact representation of floating-point zero.
It is nice for storing noise, especially Perlin.
Better than float textures because no bits wasted on the exponent.
Unfortunately, as of GL 4.4, none of the SNORM formats are required to be color-renderable, so you generally cannot render to them.
You can use the UNORM formats and remap with * 2 - 1 in the shader for the same effect, although there may be issues with getting an exact 0.
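A minimal sketch of that workaround for 8-bit data (the helper name is an assumption):
// Pack a signed value in [-1, 1] into an 8-bit UNORM byte ([0, 255]).
unsigned char pack_snorm_as_unorm8(float v)
{
    float u = (v + 1.0f) * 0.5f;  // remap [-1, 1] -> [0, 1]
    return static_cast<unsigned char>(u * 255.0f + 0.5f);
}
// After sampling in the shader, undo the remap: value = texel * 2.0 - 1.0.
// Note the exact-zero issue: 0.0 maps to 127.5/255, which is not representable.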

Using GL_RGB10_A2UI internal format in glCopyTexImage1D() OpenGL 3.3

I am using the GL_RGB10_A2UI internal format with glCopyTexImage1D(), but I am getting a GL_INVALID_OPERATION error. Does OpenGL 3.3 support GL_RGB10_A2UI in glCopyTexImage1D()?
GL_RGB10_A2UI is an integral image format; it contains integers, not normalized floating-point values that are stored as integers. Therefore, unless your framebuffer also contains unsigned integer values, this copy operation will fail with the expected error.
Of course, the only way for your framebuffer to have unsigned integers (rather than unsigned normalized integers, which is the usual case) would be to use an FBO. In which case, you could just be rendering directly to this texture, and you wouldn't need to copy from it.
I'm guessing you probably meant to use GL_RGB10_A2, which represents unsigned normalized values.
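If that is the case, a sketch of the call with the normalized format (x, y, and width stand in for whatever region was being copied):
// Copies from a typical unsigned-normalized framebuffer without error.
glCopyTexImage1D(GL_TEXTURE_1D, 0, GL_RGB10_A2, x, y, width, 0);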

What are the differences between CV_8U and CV_32F and what should I worry about when converting between them?

I have some code that is acting up and I suspect it's because I'm operating on the wrong types of data or converting between them poorly.
It is mixing cv::Mat objects of types CV_8U (which is what is created when reading a jpg as grayscale with cv::imread), CV_32F, and CV_32S.
What are the differences between these data types, and what do I need to be sure of when converting between them?
CV_8U is unsigned 8 bits per pixel, i.e. a pixel can have values 0-255; this is the normal range for most image and video formats.
CV_32F is float; the pixel can hold any float value, but for image data it is conventionally kept in the range 0-1.0. This is useful for some sets of calculations, but the data has to be converted back to 8 bits to save or display, by multiplying each pixel by 255.
CV_32S is a signed 32-bit integer value for each pixel; again useful if you are doing integer maths on the pixels, but again it needs converting to 8 bits to save or display. This is trickier, since you need to decide how to map the much larger range of possible values (+/- 2 billion!) into 0-255.
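A minimal sketch of those scaled conversions with cv::Mat::convertTo (the file name and variable names are illustrative):
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>

int main()
{
    // CV_8U image, values 0-255 (the file name is a placeholder).
    cv::Mat gray8 = cv::imread("input.jpg", cv::IMREAD_GRAYSCALE);

    cv::Mat gray32f;
    gray8.convertTo(gray32f, CV_32F, 1.0 / 255.0); // 8-bit -> float, rescaled to [0, 1]

    cv::Mat back8;
    gray32f.convertTo(back8, CV_8U, 255.0);        // float -> 8-bit, rescaled to [0, 255]
}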
Basically they just describe what the individual components are:
CV_8U: 1-byte unsigned integer (unsigned char).
CV_32S: 4-byte signed integer (int).
CV_32F: 4-byte floating point (float).
What you always have to keep in mind is that you cannot just cast them from one into the other (or it probably won't do what you want), especially between differently sized types.
So always make sure you use a real conversion function, like cv::Mat::convertTo, for converting between them. Don't just try to access the elements of, e.g., a cv::Mat of CV_8U type using cv::Mat::at<float> or cv::Mat_<float>.
Or if you just want to convert individual elements and don't want to create a new matrix of the other type, access the elements using the appropriate function (in the example cv::Mat::at<unsigned char>) and convert the result to float.
Likewise, the number of channels makes a difference: a cv::Mat of type CV_8UC3 is different from one of type CV_8UC1 and should (usually) not be accessed with cv::Mat::at<unsigned char>, but with cv::Mat::at<cv::Vec3b>.
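A minimal sketch of type-correct element access (the matrix sizes and contents are illustrative):
#include <opencv2/core.hpp>

int main()
{
    cv::Mat gray(4, 4, CV_8UC1, cv::Scalar(128));
    cv::Mat color(4, 4, CV_8UC3, cv::Scalar(0, 128, 255));

    unsigned char g = gray.at<unsigned char>(0, 0); // matches CV_8UC1
    float gf = static_cast<float>(g) / 255.0f;      // convert the single value yourself

    cv::Vec3b bgr = color.at<cv::Vec3b>(0, 0);      // matches CV_8UC3
    // color.at<float>(0, 0) would reinterpret the raw bytes and give garbage.
}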
EDIT: Seeing Martin's answer, you may already be aware of all this, and his explanation may be more what you were looking for.