Dynamic arrays as textures in GLSL - C++

I work with a C++/OpenSceneGraph/GLSL integration and I need to handle a dynamic array in a shader.
My dynamic array of vec3 data was converted into a 1D texture so it can be passed as a uniform to the fragment shader (I'm using GLSL 1.3), as follows:
osg::ref_ptr<osg::Image> image = new osg::Image;
image->setImage(myVec3Array->size(), 1, 1, GL_RGBA8, GL_RGBA, GL_FLOAT, (unsigned char*) &myVec3Array[0], osg::Image::NO_DELETE);
// Pass the texture to GLSL as uniform
osg::StateSet* ss = scene->getOrCreateStateSet();
ss->addUniform( new osg::Uniform("vertexMap", texture) );
For now, I would like to retrieve my raw array of vec3 in the fragment shader. How can I do this? Does texture2D only return normalized values?

Does texture2D only return normalized values?
No. It returns values depending on the internal format of the texture.
image->setImage(myVec3Array->size(), 1, 1, GL_RGBA8, GL_RGBA, GL_FLOAT, (unsigned char*) &myVec3Array[0], osg::Image::NO_DELETE);
^^^^^^^^
GL_RGBA8 is an unsigned normalized integer format ("UNORM" for short). So the values in the texture are unsigned integers with 8 bits per channel, and [0, 255] is mapped to [0, 1] when sampling the texture.
If you want unnormalized floats, you must use an appropriate float format, like GL_RGBA32F.
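For example, a minimal sketch of the corrected upload, assuming myVec3Array is an osg::Vec3Array of floats and that the GL_RGB32F_ARB constant from ARB_texture_float is available through the OSG GL headers:
osg::ref_ptr<osg::Image> image = new osg::Image;
image->setImage(myVec3Array->size(), 1, 1,
                GL_RGB32F_ARB,   // unnormalized 32-bit float internal format
                GL_RGB, GL_FLOAT,
                (unsigned char*) &myVec3Array->front(),
                osg::Image::NO_DELETE);
The shader then gets the raw values back through an ordinary sampler2D lookup; just make sure the texture uses NEAREST filtering so the floats are not interpolated between texels.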

Related

Framebuffer objects with single channel texture attachment? (Color attachment)

I would like the fragment shader to output a single byte value, rendered to a one-channel texture attached to a framebuffer object. I have only ever used the default fragment shader output of a vec4 for color. Is this possible? I initialize the texture bound to the FBO as a color attachment like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, NULL);
If so, would I change the fragment shader's out variable from:
out vec4 color
to:
out int color?
(I am trying to render a height map)
Well, your render target is not an integer texture; it's a normalized integer format, which counts as a float. So the corresponding output variable from your shader should be a floating-point type.
But you can use a vec4 if you like; any components that are not part of the render target will be discarded.
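A minimal fragment shader sketch for that setup (the output name is arbitrary; the value is clamped to [0, 1] and stored as a byte in the GL_R8 attachment):
#version 130
out float height;  // only the red channel exists in the render target
void main()
{
    height = 0.5;  // example value; stored as 128 in the R8 texture
}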

texture with internal format GL_RGBA8 turns up in fragment shader as float

I have created a texture with this call:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
I render to the texture with a dark grey color:
glColor3ub(42, 42, 42);
glBegin()...
Now I render this texture to the backbuffer using a custom shader. In the fragment shader I access the texture like this:
#version 130
#extension GL_EXT_gpu_shader4 : require
uniform usampler2D text_in;
and get the data using:
uvec4 f0 = texture(text_in, gl_TexCoord[0].st);
I would expect the value in f0[0] to be 42u, but it is 1042852009u, which happens to be the bit pattern of the normalized float:
float x = 42/255.0f;
unsigned i = *reinterpret_cast<unsigned*>(&x);
What am I doing wrong? I would like to work with integer textures, so that in the fragment shader I can compare a pixel value to an exact integer value. I know that the render-to-texture works, because if I render the texture to the backbuffer without the custom shader, I get 42-grey as expected.
An RGBA8 format is an unsigned, normalized format. When you read from/write to it, it is therefore treated as if it were a floating point format, whose values are restricted to the [0, 1] range. Normalized values are basically just compressed floats.
If you want an unsigned integer format, you're looking for GL_RGBA8UI. Note that you must also use GL_RGBA_INTEGER as the pixel transfer format when using glTexImage2D.
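Putting both fixes together, a sketch of the setup this answer describes:
// Allocate an actual unsigned integer texture:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, w, h, 0,
             GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, 0);
// Integer textures cannot be linearly filtered, so use nearest sampling:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
With this format, the usampler2D declaration from the question works and f0[0] reads back 42u.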

Pass texture with float values to shader

I am developing an application in C++ with Qt QML 3D. I need to pass tens of float values (or vectors of float numbers) to a fragment shader. These values are positions and colors of lights, so I need values outside the range 0.0 to 1.0. However, there is no support for passing an array of floats or integers to a shader in QML. My idea is to save the floats to a texture, pass the texture to the shader, and read the values from it.
I tried something like this:
float array[4] = {100.5f, 60.05f, 63.013f, 0.0f};
uchar data[sizeof array];  // sizeof array is already the byte count (4 floats = 16 bytes)
memcpy(data, array, sizeof array);
QImage img(data, 4, 1, QImage::Format_ARGB32);
and pass this QImage to a fragment shader as a sampler2D. But is there a method like memcpy to get values from a texture in GLSL?
texelFetch only returns a vec4 of floats in the range 0.0 to 1.0. So how can I get the original values from the texture in a shader?
I prefer to use the sampler2D type; however, is there any other type supporting direct memory access in GLSL?
This will create a float texture from a float array.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0, GL_RED, GL_FLOAT, &array);
You can access float values from your fragment shader like this:
uniform sampler2D float_texture;
...
float float_texel = texture2D(float_texture, tex_coords.xy).r;
"OpenGL float texture" is a well documented subject.
What exactly is a floating point texture?
The texture accessing functions do directly read from the texture. The problem is that Format_ARGB32 is not 32 bits per channel; it's 32 bits per pixel. From this documentation page, it seems clear that QImage cannot create floating-point images; it only deals in normalized formats. QImage itself is a class intended for GUI matters, not for loading OpenGL images.
You'll have to ditch Qt image shortcuts and actually use OpenGL directly.
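A minimal sketch of the direct route, assuming an active OpenGL context (the names are illustrative):
float values[4] = {100.5f, 60.05f, 63.013f, 0.0f};
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// GL_R32F stores the floats unnormalized, so values like 100.5 survive:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 4, 1, 0, GL_RED, GL_FLOAT, values);
In the shader, texelFetch(float_texture, ivec2(i, 0), 0).r then returns the original float, not a value clamped to [0, 1].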

Sampling integers in OpenGL shader doesn't work. Result is always 0 [duplicate]

This question already has an answer here: OpenGL 2 Texture Internal Formats GL_RGB8I, GL_RGB32UI, etc
I have a heightmap of 16-bit signed integers. If I sample the texture as a normalised float, it samples fine. For normalised floats I upload the texture as:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16_SNORM, 512, 512, 0, GL_RED, GL_SHORT, data);
Then in the shader I do:
layout (binding = 0) uniform sampler2D heightmap;
float height = texture(heightmap, inTexCoords).r;
Then height is the [-32768, 32767] value normalised to [-1.0, 1.0]. This works fine. But I want to read the value as an integer, so I do:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16I, 512, 512, 0, GL_RED, GL_SHORT, data);
And in the shader:
layout (binding = 0) uniform isampler2D heightmap; // < -- Notice I put isampler here
float height = texture(heightmap, inTexCoords).r; // < -- This should return an ivec4, the r value should have my 16 bit signed value, right?
But no matter what I do it ALWAYS samples 0. I don't know what to do; I've tried disabling texture filtering and blending (as I've heard they might not work with integer textures) and still nothing.
Are the other arguments to glTexImage2D correct? GL_RED, and GL_SHORT?
Thank you.
Are the other arguments to glTexImage2D correct? GL_RED, and GL_SHORT?
No. For a GL_R16I internal format, you must use GL_RED_INTEGER as the client-side pixel data format.
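The corrected upload, as a sketch (everything else from the question stays the same):
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16I, 512, 512, 0,
             GL_RED_INTEGER, GL_SHORT, data);
// Integer textures must not use linear filtering:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
The isampler2D declaration from the question then returns the raw 16-bit signed values through texture(heightmap, inTexCoords).r.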

GLSL fragment shader: render object ID

How do I properly render an integer ID of an object to an integer texture buffer?
Say I have a texture2D with internal format GL_LUMINANCE16 and I attach it as a color attachment to my FBO.
When rendering an object, I pass an integer ID to the shader and would like to render this ID into my integer texture.
The fragment shader output is of type vec4, however.
How do I properly transform my ID into a four-component float and avoid conversion inaccuracies, such that in the end the integer value in my integer texture target corresponds to the integer ID I wanted to render?
I still don't think that there is a clear answer here. So here is how I made it work through a 2D texture:
// First, create a frame buffer:
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// Then generate your texture and define an unsigned int array:
glGenTextures(1, &textureid);
glBindTexture(GL_TEXTURE_2D, textureid);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, w, h, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
// Attach it to the frame buffer object:
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, textureid, 0);
// Before rendering
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
GLenum buffers[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 }; // assumes another texture is attached as the main render target; GL_COLOR_ATTACHMENT1 is reserved for the element IDs
glDrawBuffers(2, buffers);
// Clear and render here
glFlush(); // Flush after just in case
glBindFramebuffer(GL_FRAMEBUFFER, 0);
On the GLSL side the fragment shader should have (4.3 core profile code here):
layout(location = 0) out vec4 colorOut; // The first element in 'buffers' so location 0
layout(location = 1) out uvec4 elementID; // The second element in 'buffers' so location 1. unsigned int vector as color
// ...
void main()
{
//...
elementID = uvec4( elementid, 0, 0, 0 ); // Write the element id as integer to the red channel.
}
You can read the values on the host side:
unsigned int* ids = new unsigned int[ w*h ];
glBindTexture(GL_TEXTURE_2D, textureid);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, ids);
There are several problems with your question.
First, GL_LUMINANCE16 is not an "integer texture." It is a texture that contains normalized unsigned integer values. It uses integers to represent floats on the range [0, 1]. If you want to store actual integers, you must use an actual integer image format.
Second, you cannot render to a luminance texture; they are not color-renderable formats. If you actually want to render to a single-channel texture, you must create a single-channel image format. So instead of GL_LUMINANCE16, you use GL_R16UI, which is a 16-bit single-channel unsigned integral image format.
Now that you have this set up correctly, it's pretty trivial. Define a uint fragment shader output and have your fragment shader write your uint value to it. This uint could come from the vertex shader or from a uniform; however you want to do it.
Obviously you'll also need to attach your texture or renderbuffer to an FBO, but I'm fairly sure you know that.
One final thing: don't use the phrase "texture buffer" unless you mean an actual buffer texture. Otherwise, it gets confusing.
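A minimal GLSL sketch of what that looks like (u_objectID is a hypothetical uniform name carrying the ID):
#version 330
layout(location = 0) out uint objectID;  // matches an integer color attachment such as GL_R16UI
uniform uint u_objectID;  // hypothetical; the ID could also come from the vertex shader
void main()
{
    objectID = u_objectID;
}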
I suppose that your integer ID is an identifier for a set of primitives; indeed, it can be defined as a shader uniform:
uniform int vertedObjectId;
Once you render, you want to store the result of fragment processing into an integer texture. Note that integer textures must be sampled with integer samplers (isampler2D), which return integer vectors (e.g. ivec4).
This texture can be attached to a framebuffer object (be aware of framebuffer completeness). The framebuffer attachment can be bound to an integer output variable:
out int fragmentObjectId;
void main() {
fragmentObjectId = vertedObjectId;
}
You need a few extensions, or a recent enough OpenGL version (integer textures are core since OpenGL 3.0).
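For OpenGL 3.x without layout qualifiers, a hedged sketch of the host-side wiring (tex and program are assumed to exist already):
// Allocate a signed integer texture and attach it to the bound FBO:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32I, w, h, 0, GL_RED_INTEGER, GL_INT, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
// Bind the shader output to color attachment 0 (must be done before linking the program):
glBindFragDataLocation(program, 0, "fragmentObjectId");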