Pass texture with float values to shader - C++

I am developing an application in C++ with Qt QML 3D. I need to pass tens of float values (or vectors of float numbers) to a fragment shader. These values are positions and colors of lights, so I need values outside the 0.0 to 1.0 range. However, there is no support for passing an array of floats or integers to a shader in QML. My idea is to store the floats in a texture, pass the texture to the shader and read the values back from it.
I tried something like this:
float array[4] = {100.5, 60.05, 63.013, 0.0};
uchar data[sizeof array];              // sizeof array is already the size in bytes (4 * sizeof(float))
memcpy(data, array, sizeof array);
QImage img(data, 4, 1, QImage::Format_ARGB32);
and pass this QImage to the fragment shader as a sampler2D. But is there a method like memcpy to get the values back out of the texture in GLSL?
The texelFetch method only returns a vec4 of floats in the range 0.0 to 1.0. So how can I get the original values back from the texture in a shader?
I would prefer to use the sampler2D type; however, is there any other type in GLSL that supports direct memory access?

This will create a float texture from a float array.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0, GL_RED, GL_FLOAT, &array);
You can access float values from your fragment shader like this:
uniform sampler2D float_texture;
...
float float_texel = float( texture2D(float_texture, tex_coords.xy) );
"OpenGL float texture" is a well documented subject.
What exactly is a floating point texture?
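For reference, a minimal host-side sketch of such a float texture (tex, width, height and data are placeholder names, not from the answer above):
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// Use GL_NEAREST unless you know linear filtering of float textures is supported
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// One unnormalized 32-bit float per texel, stored in the red channel
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0, GL_RED, GL_FLOAT, data);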

The texture accessing functions do directly read from the texture. The problem is that Format_ARGB32 is not 32 bits per channel; it's 32 bits per pixel. Judging from the QImage documentation, QImage cannot create floating-point images; it only deals in normalized formats. QImage itself seems to be a class intended for dealing with GUI matters, not for loading OpenGL images.
You'll have to ditch Qt image shortcuts and actually use OpenGL directly.
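For the original use case (light positions and colors), one hedged option is to pack four floats per texel into a GL_RGBA32F texture and read it back with texelFetch, which returns the stored values unmodified. A rough sketch with placeholder names (lightData, tex, and the lights sampler):
std::vector<float> lightData = { 100.5f, 60.05f, 63.013f, 0.0f /* one RGBA texel per light */ };
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F,
             GLsizei(lightData.size() / 4), 1, 0,
             GL_RGBA, GL_FLOAT, lightData.data());
// In the shader: vec4 light0 = texelFetch(lights, ivec2(0, 0), 0);  // exact, unnormalized floats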

Related

Framebuffer objects with single channel texture attachment? (Color attachment)

I would like the fragment shader to output a single byte value, rendered to a one-channel texture attached to a framebuffer object. I have only ever used the fragment shader to output a vec4 for color. Is this possible if I initialize the texture bound to the FBO as a color attachment like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, NULL);
If so, would I change the fragment shader's out variable from:
out vec4 color
to:
out int color?
(I am trying to render a height map)
Well, your render target is not an integer texture; it's a normalized integer format, which counts as a float. So the corresponding output variable from your shader should be a floating-point type.
But you can use a vec4 if you like; any components that are not part of the render target will be discarded.
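For illustration, a hedged sketch of that setup (heightTex and fbo are placeholder names): an R8 color attachment plus a fragment shader writing a single float, which gets stored normalized:
GLuint heightTex, fbo;
glGenTextures(1, &heightTex);
glBindTexture(GL_TEXTURE_2D, heightTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, heightTex, 0);

// Fragment shader, shown here as a C++ raw string:
const char* fs = R"(
#version 130
out float height;   // written as a float in [0, 1]; stored normalized in the R8 target
void main() { height = 0.5; }
)";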

texture with internal format GL_RGBA8 turns up in fragment shader as float

I have created a texture with this call:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
I render to the texture with a dark grey color:
glColor3ub(42, 42, 42);
glBegin()...
Now I render this texture to the backbuffer using a custom shader. In the fragment shader I access the texture like this:
#version 130
#extension GL_EXT_gpu_shader4 : require
uniform usampler2D text_in;
and get the data using:
uvec4 f0 = texture(text_in, gl_TexCoord[0].st);
I would expect the value in f0[0] to be 42u, but it is 1042852009u, which happens to be
float x = 42/255.0f;
unsigned i = *reinterpret_cast<int*>(&x);
What am I doing wrong? I would like to work with integer textures, so that in the fragment shader I can compare a pixel value to an exact integer value. I know that the render-to-texture works, because if I render the texture to the backbuffer without the custom shader, I get 42-grey as expected.
An RGBA8 format is an unsigned, normalized format. When you read from/write to it, it is therefore treated as if it were a floating point format, whose values are restricted to the [0, 1] range. Normalized values are basically just compressed floats.
If you want an unsigned integer format, you're looking for GL_RGBA8UI. Note that you also must use GL_RGBA_INTEGER in the pixel transfer format field when using glTexImage2D.
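For illustration, a sketch of the integer-texture variant (w and h as in the question):
// Unsigned integer texture; note GL_RGBA_INTEGER as the pixel transfer format
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, w, h, 0,
             GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, 0);
// The matching sampler is usampler2D, and texture() then returns a uvec4
// holding the raw integers (e.g. 42u) rather than normalized floats.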

sampler2DArray - setup and use

I'm studying OpenGL and I need to use sampler2DArray. I have been struggling with it all day, to no avail. I have two questions:
How to create a list of textures?
How to use sampler2DArray in the shader?
Here is the result of my attempts to create a list of textures:
// textures - ids of the loaded textures
private int createTextureArray(GL2 gl, int[] textures, int width, int height) {
int layerCount = textures.length;
int mipLevelCount = 1;
IntBuffer texture = IntBuffer.allocate(1);
gl.glGenTextures(1, texture);
gl.glActiveTexture(GL.GL_TEXTURE0);
gl.glBindTexture(GL2.GL_TEXTURE_2D_ARRAY, texture.get(0));
gl.glTexStorage3D(GL2.GL_TEXTURE_2D_ARRAY, mipLevelCount, GL2.GL_RGBA8, width, height, layerCount);
for (int i = 0; i < textures.length; i++) {
    gl.glTexSubImage3D(GL2.GL_TEXTURE_2D_ARRAY, i, // error here
        0, 0, 0,
        width, height, layerCount,
        GL2.GL_RGBA, GL2.GL_UNSIGNED_BYTE,
        textures[i]);
}
// Always set reasonable texture parameters
gl.glTexParameteri(GL2.GL_TEXTURE_2D_ARRAY, GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_LINEAR);
gl.glTexParameteri(GL2.GL_TEXTURE_2D_ARRAY, GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_LINEAR);
gl.glTexParameteri(GL2.GL_TEXTURE_2D_ARRAY, GL2.GL_TEXTURE_WRAP_S, GL2.GL_CLAMP_TO_EDGE);
gl.glTexParameteri(GL2.GL_TEXTURE_2D_ARRAY, GL2.GL_TEXTURE_WRAP_T, GL2.GL_CLAMP_TO_EDGE);
return texture.get(0);
}
Shader example:
#version 130
uniform sampler2DArray textures;
varying vec2 UV;
...
void main() {
...
int layer = 0;
gl_FragColor = texture2DArray(textures, vec3(UV, layer));
}
I will be grateful for the help.
An array texture is not a "list of textures". An array texture is a single OpenGL texture, one which individually has a number of quasi-independent layers in it. While you may think of each layer of an array texture as a separate conceptual texture, in OpenGL (and GLSL) it is a single object.
Given this, the interface in your function is incorrect. It should return a single texture object, and it should take as a parameter, not an array of int (note: OpenGL objects are unsigned integers), but a single integer: the number of array layers to create in that texture.
How you use an array texture in GLSL is simple. Your uniform for the sampler uses an array-texture sampler type (for example sampler2DArray for 2D array textures). You bind the array texture to the same texture image unit that you specified as the binding for the sampler uniform (just as you would for a non-array 2D texture).
Your GLSL is missing one thing. There is no texture2DArray function. The correct function to use is just texture. Post-GL 3.0, the texture type is specified solely by the type of the sampler parameter, not by the name of the function.
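As a hedged sketch of that interface in C-style OpenGL (pixelDataForLayer stands in for wherever the per-layer pixel data comes from), creation and per-layer upload could look like this:
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, width, height, layerCount);
for (int layer = 0; layer < layerCount; ++layer) {
    // mip level 0, offset (0, 0, layer), depth 1: upload exactly one layer per call
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,
                    0, 0, layer,
                    width, height, 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixelDataForLayer[layer]);
}
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);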
In addition to what @NicolBolas already said: there are a bunch of problems with the shader code, mostly due to functionality that has been deprecated in GLSL 130:
There is no texture2DArray function in any standard GLSL version. There was one in the EXT_texture_array extension, but it was never integrated into core, because in GLSL 130 all texture lookup functions (texture2D, texture3D, ...) were replaced by a single overloaded texture function. If you are targeting GLSL 130 without extensions, you should use texture(textures, vec3(UV, layer)).
The varying keyword is deprecated in GLSL 130 and should be replaced by in/out.
gl_FragColor is deprecated and a user defined output (out) variable should be used.
You might want to have a look at Section 1.2.1 of the GLSL 130 spec, which describes the deprecations and how they should be handled. In general, I would encourage everyone not to use GLSL 130 at all today unless there is a specific reason for it; better to move to the OpenGL 3.3+ core profile and GLSL 330+.
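Putting those points together, a sketch of the shader rewritten against GLSL 330 (shown here as a C++ raw string; the output name fragColor is my choice):
const char* fragSrc = R"(
#version 330 core
uniform sampler2DArray textures;
in vec2 UV;                  // was: varying vec2 UV
out vec4 fragColor;          // user-defined output instead of gl_FragColor
void main() {
    int layer = 0;
    fragColor = texture(textures, vec3(UV, layer));   // overloaded texture()
}
)";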

Dynamic arrays as texture GLSL

I am working on a C++/OpenSceneGraph/GLSL integration and I need to handle a dynamic array in a shader.
My dynamic array of vec3 data is converted into a 1D texture and passed to the fragment shader as a uniform (I'm using GLSL 1.3), as follows:
osg::ref_ptr<osg::Image> image = new osg::Image;
image->setImage(myVec3Array->size(), 1, 1, GL_RGBA8, GL_RGBA, GL_FLOAT, (unsigned char*) &myVec3Array[0], osg::Image::NO_DELETE);
// Pass the texture to GLSL as uniform
osg::StateSet* ss = scene->getOrCreateStateSet();
ss->addUniform( new osg::Uniform("vertexMap", texture) );
Now I would like to retrieve my raw array of vec3 in the fragment shader. How can I do this? Does a texture2D function only return normalized values?
Does a texture2D function only return normalized values?
No. It returns values depending on the internal format of the texture.
image->setImage(myVec3Array->size(), 1, 1, GL_RGBA8, GL_RGBA, GL_FLOAT, (unsigned char*) &myVec3Array[0], osg::Image::NO_DELETE);
^^^^^^^^
GL_RGBA8 is an unsigned normalized integer format ("UNORM" for short). So the values in the texture are unsigned integers with 8 bit per channel, and [0,255] is mapped to [0,1] when sampling the texture.
If you want unnormalized floats, you must use some appropriate format, like GL_RGBA32F.
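A hedged sketch of the adjusted call (only the internal format changes relative to the original snippet):
image->setImage(myVec3Array->size(), 1, 1,
                GL_RGBA32F,      // float internal format: values are stored unnormalized
                GL_RGBA, GL_FLOAT,
                (unsigned char*) &myVec3Array[0],
                osg::Image::NO_DELETE);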

GLSL stops rendering

I want to compute a signed distance field. For that I am creating a voxel grid of 100x100x100, for example (the size will increase once it is working).
My plan is to load a point cloud into a 1D texture:
glEnable(GL_TEXTURE_1D);
glGenTextures(1, &_texture);
glBindTexture(GL_TEXTURE_1D, _texture);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA, pc->pc.size(), 0, GL_RGBA, GL_FLOAT, &pc->pc.front());
glBindTexture(GL_TEXTURE_1D, 0);
'pc' is just a class which holds a vector of a Point structure, which contains only the floats x, y, z, w.
Then I want to render the whole 100x100x100 grid, i.e. every voxel, and for each voxel iterate through all points in that texture, calculate the distance to the current voxel and store that distance in a new texture (1000x1000). For the moment the texture I am creating only holds color values which store the distance in the red and green components, with blue set to 1.0.
So I can see the result on screen.
My problem is that when I have about 500,000 points in my point cloud, it seems to stop rendering after a few voxels (fewer than 50,000). My guess is that if it takes too long, the driver stops and just throws out the buffer it has.
I don't know if that can be the case, but if it is, is there something I can do against it, or something I can do to make this procedure better/faster?
My second guess is that there is something I haven't considered with the 1D texture. Is there a better way to pass in a large amount of data? I will surely need a few hundred thousand points' worth of data.
I don't know if it helps to show the full fragment shader, so I will only show the parts which I think are important for this problem:
Distance calculation and iteration through all points:
for (int i = 0; i < points; ++i) {
    vec4 texInfo = texture(textureImg, getTextCoord(i));
    vec4 pos = position;
    pos.z /= rows*rows;
    vec4 distVector = texInfo - pos;
    float dist = sqrt(distVector.x*distVector.x + distVector.y*distVector.y + distVector.z*distVector.z);
    if (dist < minDist) {
        minDist = dist;
    }
}
Function getTextCoord:
float getTextCoord(float a)
{
    return (a * 2.0f + 1.0f) / (2.0f * points);
}
Edit:
vec4 newPos = vec4(makeCoord(position.x + Col()) - 1,
                   makeCoord(position.y + Row()) - 1,
                   0,
                   1.0);
float makeCoord(float a) {
    return (a / rows) * 2;
}
int Col() {
    float a = mod(position.z, rows);
    return int(a);
}
int Row() {
    float a = position.z / rows;
    return int(a);
}
You absolutely shouldn't be looping through all of your points in a fragment shader: the loop runs once per fragment, so with N fragments and M points you are doing O(N·M) work every frame.
All textures have limits on how much data they can hold per dimension. The two most important values here are GL_MAX_TEXTURE_SIZE and GL_MAX_3D_TEXTURE_SIZE. As stated in the official docs:
Texture sizes have a limit based on the GL implementation. For 1D and 2D textures (and any texture types that use similar dimensionality, like cubemaps) the max size of either dimension is GL_MAX_TEXTURE_SIZE. For array textures, the maximum array length is GL_MAX_ARRAY_TEXTURE_LAYERS. For 3D textures, no dimension can be greater than GL_MAX_3D_TEXTURE_SIZE in size.
Within these limits, the size of a texture can be any value. It is advised however, that you stick to powers-of-two for texture sizes, unless you have a significant need to use arbitrary sizes.
If you really have to use large amounts of data inside your fragment shader, consider a 2D or 3D texture with known power-of-two dimensions, GL_NEAREST filtering and GL_REPEAT wrapping. This lets you compute 2D texture coords just by multiplying the source offset by a precomputed 1/width value (Y coord; the remainder is by definition smaller than one texel and can be safely ignored with GL_NEAREST) and using the same value as-is for the X coord (GL_REPEAT guarantees that only the remainder gets used). Personally I implemented this approach when I needed to pass 128 MB of data to a GLSL 1.20 shader.
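A small sketch of that addressing scheme (the helper and its names are mine, not from the answer): mapping a linear texel index to normalized 2D coordinates for a width x height texture sampled with GL_NEAREST / GL_REPEAT:
struct TexCoord { float u, v; };

TexCoord indexToTexCoord(int index, int width, int height)
{
    float invWidth  = 1.0f / float(width);    // precompute once in practice
    float invHeight = 1.0f / float(height);
    // X: GL_REPEAT keeps only the fractional part of index/width,
    //    which is the column; +0.5 lands on the texel center.
    float u = (float(index) + 0.5f) * invWidth;
    // Y: the same product divided by height; the leftover fraction is
    //    smaller than one texel, so GL_NEAREST snaps to the intended row.
    float v = (float(index) + 0.5f) * invWidth * invHeight;
    return { u, v };
}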
If you are targeting a recent enough OpenGL (≥ 3.0), you can also use buffer textures.
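A rough buffer-texture sketch, assuming the point data lives in a std::vector<Point> called points (names are mine):
GLuint tbo, tex;
glGenBuffers(1, &tbo);
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
glBufferData(GL_TEXTURE_BUFFER, points.size() * sizeof(Point), points.data(), GL_STATIC_DRAW);

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_BUFFER, tex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tbo);   // one RGBA32F texel per Point (x, y, z, w)

// In GLSL: uniform samplerBuffer pointData;  vec4 p = texelFetch(pointData, i);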
And last but not least: you cannot pass integer-precision values greater than 2^24 through standard IEEE single-precision floats.
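A quick illustration of that limit:
float a = 16777216.0f;   // 2^24, the largest value up to which consecutive integers are exact
float b = a + 1.0f;      // 16777217 is not representable as a float; b rounds back to 16777216.0f
// (a == b) evaluates to true, so integer values above 2^24 silently lose precision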