Load float array as 3D texture? - opengl

I have a float array with intensity values. I need to load this array as a 3D texture in OpenGL and read it in the fragment shader as the red channel (float sample = texture(volumeText, cord).r).
The size of the array is 256*256*256, and the intensity values range from 0.0 to 256.0.
This is a sample of the intensity values:
75.839354473071637,
64.083049468866022,
65.253933716444365,
79.992431196592577,
84.411485976957096,
0.0000000000000000,
82.020319431382831,
76.808403454586994,
79.974774618246158,
0.0000000000000000,
91.127273013466336,
84.009956557448433,
90.221356094672814,
87.567422484025627,
71.940263118478072,
0.0000000000000000,
0.0000000000000000,
74.487058398181944,
..................,
..................

To load a texture like this you can use the input format GL_RED and the type GL_FLOAT. A properly sized internal format for it is GL_R16F. See glTexImage3D:
glTexImage3D(GL_TEXTURE_3D, 0, GL_R16F, 256, 256, 256, 0, GL_RED, GL_FLOAT, dataPtr);
The internal format GL_R16F is a floating-point format. This means that when you read the red color channel (.r) from the texture in the fragment shader, the values are still in the range [0.0, 256.0].
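For context, a minimal sketch of the full upload path might look like the following (the texture name and data pointer are placeholders; the min filter is set explicitly because the default filter expects mipmaps):

// Minimal sketch: `data` is assumed to point at 256*256*256 floats.
GLuint volumeTex;
glGenTextures(1, &volumeTex);
glBindTexture(GL_TEXTURE_3D, volumeTex);

// The default minification filter uses mipmaps; switch to linear filtering
// so the texture is complete without generating them.
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

glTexImage3D(GL_TEXTURE_3D, 0, GL_R16F, 256, 256, 256, 0,
             GL_RED, GL_FLOAT, data);

// Fragment shader side, unchanged from the question:
//   uniform sampler3D volumeText;
//   float sample = texture(volumeText, cord).r;   // still in [0.0, 256.0]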

Related

Framebuffer objects with single channel texture attachment? (Color attachment)

I would like the fragment shader to output a single byte value, rendered to a one-channel texture attached to a framebuffer object. I have only ever used the default fragment shader output of a vec4 for color. Is this possible if I initialize the texture bound to the FBO as the color attachment like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, NULL);
If so, would I change the fragment shader's out variable from:
out vec4 color
to:
out int color?
(I am trying to render a height map)
Well, your render target is not an integer texture; it's a normalized integer format, which counts as a float. So the corresponding output variable from your shader should be a floating-point type.
But you can use a vec4 if you like; any components that are not part of the render target will be discarded.
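For illustration, a sketch of the setup under those assumptions (the FBO, texture, and shader variable names here are hypothetical, not from the question):

// Sketch: one-channel normalized render target (hypothetical names).
GLuint heightTex, fbo;
glGenTextures(1, &heightTex);
glBindTexture(GL_TEXTURE_2D, heightTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0,
             GL_RED, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, heightTex, 0);

// Matching fragment shader output: a float written in [0.0, 1.0],
// which the GL converts to the normalized byte stored in the R8 attachment.
//   out float height;
//   void main() { height = computedHeight; }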

texture with internal format GL_RGBA8 turns up in fragment shader as float

I have created a texture with this call:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
I render to the texture with a dark grey color:
glColor3ub(42, 42, 42);
glBegin()...
Now I render this texture to the backbuffer using a custom shader. In the fragment I access the texture like this:
#version 130
#extension GL_EXT_gpu_shader4 : require
uniform usampler2D text_in;
and get the data using:
uvec4 f0 = texture(text_in, gl_TexCoord[0].st);
I would expect the value in f0[0] to be 42u, but it is 1042852009u, which happens to be
float x = 42/255.0f;
unsigned i = *reinterpret_cast<int*>(&x);
What am I doing wrong? I would like to work with integer textures so that, in the fragment shader, I can compare a pixel value to an exact integer value. I know that the render-to-texture works, because if I render the texture to the backbuffer without the custom shader, I get 42-grey as expected.
An RGBA8 format is an unsigned, normalized format. When you read from/write to it, it is therefore treated as if it were a floating point format, whose values are restricted to the [0, 1] range. Normalized values are basically just compressed floats.
If you want an unsigned integer format, you're looking for GL_RGBA8UI. Note that you also must use GL_RGBA_INTEGER in the pixel transfer format field when using glTexImage2D.
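A minimal sketch of the integer path, assuming an OpenGL 3.0+ context:

// Sketch: allocate an unsigned integer texture instead of a normalized one.
// Note GL_RGBA_INTEGER (not GL_RGBA) as the pixel transfer format.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, w, h, 0,
             GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, 0);

// Integer textures cannot be filtered, so nearest sampling is required.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// The usampler2D in the shader then yields exact integer values:
//   uvec4 f0 = texture(text_in, gl_TexCoord[0].st);   // f0.r == 42u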

Dynamic arrays as texture GLSL

I am working with a C++/OpenSceneGraph/GLSL integration and need to handle a dynamic array in a shader.
My dynamic data array of vec3 is converted into a 1D texture and passed as a uniform to the fragment shader (I'm using GLSL 1.3), as follows:
osg::ref_ptr<osg::Image> image = new osg::Image;
image->setImage(myVec3Array->size(), 1, 1, GL_RGBA8, GL_RGBA, GL_FLOAT, (unsigned char*) &myVec3Array[0], osg::Image::NO_DELETE);
// Pass the texture to GLSL as uniform
osg::StateSet* ss = scene->getOrCreateStateSet();
ss->addUniform( new osg::Uniform("vertexMap", texture) );
Now I would like to retrieve my raw array of vec3 in the fragment shader. How can I do this? Does a texture2D function only return normalized values?
Does a texture2D function only return normalized values?
No. It returns values depending on the internal format of the texture.
image->setImage(myVec3Array->size(), 1, 1, GL_RGBA8, GL_RGBA, GL_FLOAT, (unsigned char*) &myVec3Array[0], osg::Image::NO_DELETE);
^^^^^^^^
GL_RGBA8 is an unsigned normalized integer format ("UNORM" for short). So the values in the texture are unsigned integers with 8 bits per channel, and [0,255] is mapped to [0,1] when sampling the texture.
If you want unnormalized floats, you must use some appropriate format, like GL_RGBA32F.
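Applied to the code above, the setImage call could look like the following sketch (assuming float-texture support, i.e. GL 3.0 or ARB_texture_float). Since the source data are vec3s, a three-channel format such as GL_RGB32F seems more natural here than GL_RGBA32F:

// Sketch: unnormalized three-channel float data, one texel per vec3.
image->setImage(myVec3Array->size(), 1, 1,
                GL_RGB32F,          // sized floating-point internal format
                GL_RGB, GL_FLOAT,   // pixel format / type of the source data
                (unsigned char*) &(*myVec3Array)[0],
                osg::Image::NO_DELETE);

// texture2D(vertexMap, ...) then returns the raw, unnormalized values.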

Pass texture with float values to shader

I am developing an application in C++ with Qt QML 3D. I need to pass tens of float values (or vectors of float numbers) to a fragment shader. These values are positions and colors of lights, so I need values beyond the range 0.0 to 1.0. However, there is no support for passing an array of floats or integers to a shader in QML. My idea is to save the floats to a texture, pass the texture to a shader, and read the values from it.
I tried something like this:
float array[4] = {100.5, 60.05, 63.013, 0.0};
uchar data[sizeof array];          // sizeof array is already in bytes (16)
memcpy(data, array, sizeof array);
QImage img(data, 4, 1, QImage::Format_ARGB32);
and pass this QImage to a fragment shader as a sampler2D. But is there a method like memcpy to get the values back out of the texture in GLSL?
texelFetch only returns me a vec4 with float numbers in the range 0.0 to 1.0. So how can I get the original values from the texture in a shader?
I prefer the sampler2D type; however, is there any other type supporting direct memory access in GLSL?
This will create a float texture from a float array:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0, GL_RED, GL_FLOAT, array);
You can access float values from your fragment shader like this:
uniform sampler2D float_texture;
...
float float_texel = texture2D(float_texture, tex_coords.xy).r;
"OpenGL float texture" is a well documented subject.
What exactly is a floating point texture?
The texture accessing functions do directly read from the texture. The problem is that Format_ARGB32 is not 32 bits per channel; it's 32 bits per pixel. From the Qt documentation, it seems clear that QImage cannot create floating-point images; it only deals in normalized formats. QImage itself seems to be a class intended for GUI matters, not for loading OpenGL images.
You'll have to ditch Qt image shortcuts and actually use OpenGL directly.
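A sketch of that direct route (the texture name is hypothetical): upload the four floats as a 4x1 single-channel float texture, then fetch them unfiltered with texelFetch:

// Sketch: upload the float array directly, bypassing QImage.
float array[4] = {100.5f, 60.05f, 63.013f, 0.0f};
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 4, 1, 0, GL_RED, GL_FLOAT, array);

// GLSL side: texelFetch returns the unnormalized value unchanged, e.g.
//   uniform sampler2D float_texture;
//   float v = texelFetch(float_texture, ivec2(i, 0), 0).r;   // v == array[i]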

Passing grayscale OpenCV image to an OpenGL texture

I want to use a grayscale image generated in OpenCV in a GLSL shader.
Based on the question on OpenCV image loading for OpenGL Texture, I've managed to come up with the code that passes RGB image to the shader:
cv::Mat image;
// ...acquire and process image somehow...
//create and bind a GL texture
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D,     // Type of texture
             0,                 // Pyramid level (for mip-mapping) - 0 is the top level
             GL_RGB,            // Internal colour format to convert to
             image.cols,        // Texture width
             image.rows,        // Texture height
             0,                 // Border width in pixels (must be 0)
             GL_BGR,            // Input image format (i.e. GL_RGB, GL_RGBA, GL_BGR etc.)
             GL_UNSIGNED_BYTE,  // Image data type
             image.ptr());      // The actual image data itself
glGenerateMipmap(GL_TEXTURE_2D);
and then in the fragment shader I just use this texture:
#version 330
in vec2 tCoord;
uniform sampler2D texture;
out vec4 color;
void main() {
    color = texture2D(texture, tCoord);
}
and it all works great.
But now I want to do some grayscale processing on that image, starting with cv::cvtColor(image, image, CV_BGR2GRAY);, doing some more OpenCV stuff to it, and then passing the grayscale to the shaders.
I thought I should use GL_LUMINANCE as the colour format to convert to, and probably as the input image format as well - but all I'm getting is a black screen.
Can anyone please help me with it?
input format
I'd use GL_RED, since the GL_LUMINANCE format has been deprecated.
internalFormat
depends on what you want to do in your shader, although you should always specify a sized internal format, e.g. GL_RGBA8, which gives you 8 bits per channel. With GL_RGBA8, though, the green and blue channels will be zero and alpha will be one anyway, since your input data only has a single channel, so you should probably use the GL_R8 format instead. Also, you can use texture swizzling:
GLint swizzleMask[] = {GL_RED, GL_RED, GL_RED, GL_RED};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
which will cause all channels to 'mirror' the red channel when you access the texture in the shader.
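Putting that together, the grayscale upload might look like the following sketch. The GL_UNPACK_ALIGNMENT call is worth noting, because OpenCV does not guarantee that rows are padded to OpenGL's default 4-byte alignment:

// Sketch: pass a single-channel 8-bit cv::Mat to a GL_R8 texture.
cv::Mat gray;
cv::cvtColor(image, gray, CV_BGR2GRAY);

glBindTexture(GL_TEXTURE_2D, texture);
// OpenCV rows are not guaranteed to match the default 4-byte alignment.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, gray.cols, gray.rows, 0,
             GL_RED, GL_UNSIGNED_BYTE, gray.ptr());
glGenerateMipmap(GL_TEXTURE_2D);

// Optional: mirror the red channel into all four channels, as described
// above, so the existing fragment shader keeps working unmodified.
GLint swizzleMask[] = {GL_RED, GL_RED, GL_RED, GL_RED};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);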