OpenSceneGraph float Image - c++

Using C++ and OSG I'm trying to upload a float texture to my shader, but somehow it does not seem to work. I posted part of my code at the end. The main question is how to create an osg::Image object from a float array. In OpenGL the desired call would be
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, width, height, 0,
             GL_LUMINANCE, GL_FLOAT, data);
but in this case I have to use OSG.
The code runs fine when using
Image* image = osgDB::readImageFile("someImage.jpg");
instead of
image = new Image;
but I need to upload generated float data. It's also not possible to switch to unsigned char arrays as I need the GL_LUMINANCE32F_ARB data range in the shader code.
I hope someone can help me here, as Google couldn't (I searched for e.g. "osg float image"). So here's my code.
using namespace std;
using namespace osg;
//...
float* data = new float[width*height];
fill_n(data, width*height, 1.0f); // << I actually do this for testing purposes

Texture2D* texture = new Texture2D;
Image* image = new Image;
osg::State* state = new osg::State;
Uniform* uniform = new Uniform(Uniform::SAMPLER_2D, "texUniform");

texture->setInternalFormat(GL_LUMINANCE32F_ARB);
texture->setDataVariance(osg::Object::DYNAMIC);
texture->setFilter(osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR);
texture->setFilter(osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR);
texture->setWrap(osg::Texture2D::WRAP_T, osg::Texture2D::CLAMP_TO_EDGE);
texture->setWrap(osg::Texture2D::WRAP_S, osg::Texture2D::CLAMP_TO_EDGE);

if (data == NULL)
    cout << "texdata null" << endl; // << this is not printed

image->setImage(width, height, 1, GL_LUMINANCE32F_ARB,
                GL_LUMINANCE, GL_FLOAT,
                (unsigned char*)data, osg::Image::USE_NEW_DELETE);

if (image->getDataPointer() == NULL)
    cout << "datapointernull" << endl; // << this is printed
if (!image->valid())
    exit(1); // << here the code exits (hard exit just for testing purposes)

osgDB::writeImageFile(*image, "blah.png");

texture->setInternalFormat(GL_LUMINANCE32F_ARB);
texture->setImage(image);
camera->getOrCreateStateSet()->setTextureAttributeAndModes(4, texture);
state->setActiveTextureUnit(4);
texture->apply(*state);
uniform->set(4);
addProgrammUniform(uniform);
I found another way on the web: letting osg::Image allocate the data and filling it afterwards. But somehow this also does not work. I inserted this just after the new-object lines above.
image->setInternalTextureFormat(GL_LUMINANCE32F_ARB);
image->allocateImage(width, height, 1, GL_LUMINANCE, GL_FLOAT);
if (image->data() == NULL)
    cout << "null here?!" << endl; // << this is printed.

I use the following (simplified) code to create and set a floating-point texture:
// Create texture and image
osg::Texture* texture = new osg::Texture2D;
osg::Image* image = new osg::Image();
image->allocateImage(size, size, 1, GL_LUMINANCE, GL_FLOAT);
texture->setInternalFormat(GL_LUMINANCE32F_ARB);
texture->setFilter(osg::Texture::MIN_FILTER, osg::Texture::LINEAR);
texture->setFilter(osg::Texture::MAG_FILTER, osg::Texture::LINEAR);
texture->setWrap(osg::Texture::WRAP_S, osg::Texture::CLAMP_TO_EDGE);
texture->setWrap(osg::Texture::WRAP_T, osg::Texture::CLAMP_TO_EDGE);
texture->setImage(image);
// Set texture to node
osg::StateSet* stateSet = node->getOrCreateStateSet();
stateSet->setTextureAttributeAndModes(TEXTURE_UNIT_NUMBER, texture);
// Set data
float* data = reinterpret_cast<float*>(image->data());
/* ...data processing... */
image->dirty();
You may want to change some of the parameters, but this should give you a start. I believe that in your case TEXTURE_UNIT_NUMBER should be set to 4.
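For example, the elided data-processing step could fill the image with a horizontal gradient (a minimal sketch, reusing the size variable from the allocateImage call above):
float* data = reinterpret_cast<float*>(image->data());
for (int y = 0; y < size; ++y)
    for (int x = 0; x < size; ++x)
        data[y * size + x] = static_cast<float>(x) / (size - 1); // 0.0 .. 1.0 ramp
image->dirty(); // notify OSG that the texel data changed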

You wrote:
"but I need to upload generated float data. It's also not possible to switch to unsigned char arrays as I need the GL_LUMINANCE32F_ARB data range in the shader code."
and then call:
osgDB::writeImageFile(*image, "blah.png");
PNG files don't support 32 bits per channel, so you cannot write your texture to a file this way. See the libpng book:
PNG grayscale images support the widest range of pixel depths of any image type. Depths of 1, 2, 4, 8, and 16 bits are supported, covering everything from simple black-and-white scans to full-depth medical and raw astronomical images.[63]
[63] Calibrated astronomical image data is usually stored as 32-bit or 64-bit floating-point values, and some raw data is represented as 32-bit integers. Neither format is directly supported by PNG, although one could, in principle, design an ancillary chunk to hold the proper conversion information. Conversion of data with more than 16 bits of dynamic range would be a lossy transformation, however--at least, barring the abuse of PNG's alpha channel or RGB capabilities.
For 32 bit per channel, check out the OpenEXR format.
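If your OSG build includes the OpenEXR plugin, writing the float image could look like this (a sketch, assuming the plugin is present; the filename is arbitrary):
osgDB::writeImageFile(*image, "blah.exr"); // EXR keeps 32-bit float channels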
If, however, 16-bit floating point (i.e. half floats) suffices, then you can go about it like so:
osg::ref_ptr<osg::Image> heightImage = new osg::Image;
int pixelFormat = GL_LUMINANCE;
int type = GL_HALF_FLOAT;
heightImage->allocateImage(tex_width, tex_height, 1, pixelFormat, type);
Now, to actually write half floats, you can use the GLM library. The half-float type, glm::detail::hdata, becomes available by including <glm/detail/type_half.hpp>.
You now need to get the data pointer from your image and cast it to said format:
glm::detail::hdata *data = reinterpret_cast<glm::detail::hdata*>(heightImage->data());
This you can then access like you would a one dimensional array, so for example
data[currentRow * tex_width + currentColumn] = glm::detail::toFloat16(3.1415f);
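A minimal sketch of filling the whole image this way (assuming the tex_width/tex_height and data variables from above):
for (int row = 0; row < tex_height; ++row)
    for (int col = 0; col < tex_width; ++col)
        data[row * tex_width + col] = glm::detail::toFloat16(1.0f); // constant test value
heightImage->dirty(); // tell OSG the texel data changed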
Note that if you write this same data to a bmp or tif file (using the osg plugins), the result will be incorrect. In my case I just got the left half of the intended image stretched onto the full width, and not in grayscale but in some strange color encoding.

Related

SDL putting lots of pixel data onto the screen

I am creating a program that allows you to view fractals like the Mandelbrot or Julia set. I would like to render them as quickly as possible. I would love a way to put an array of uint8_t pixel values onto the screen. The array is formatted like this...
{r0,g0,b0,r1,g1,b1,...}
(A one-dimensional array of RGB color values)
I know I have the proper data because before I just set individual points and it worked...
for (int i = 0; i < height * width; ++i) {
    // setStroke and point are functions that I made that together just draw a colored point
    r.setStroke(data[i*3], data[i*3+1], data[i*3+2]);
    r.point(i % r.window.w, i / r.window.w);
}
This is a pretty slow operation, especially if the screen is big (which I would like it to be). Is there any faster way to just put all the data onto the screen?
I tried doing something like this
void* pixels;
int pitch;
SDL_Texture* img = SDL_CreateTexture(ren, SDL_GetWindowPixelFormat(win),
                                     SDL_TEXTUREACCESS_STREAMING, window.w, window.h);
SDL_LockTexture(img, NULL, &pixels, &pitch);
memcpy(pixels, data, window.w * 3 * window.h);
SDL_UnlockTexture(img);
SDL_RenderCopy(ren, img, NULL, NULL);
SDL_DestroyTexture(img);
I have no idea what I'm doing so please have mercy
Edit (thank you for comments :))
So here is what I do now
SDL_Texture* img = SDL_CreateTexture(ren, SDL_PIXELFORMAT_RGB888,
                                     SDL_TEXTUREACCESS_STREAMING, window.w, window.h);
SDL_UpdateTexture(img, NULL, &data[0], window.w * 3);
SDL_RenderCopy(ren, img, NULL, NULL);
SDL_DestroyTexture(img);
But I get this image (screenshot omitted), which is not what it should look like.
I am thinking that my data is just formatted wrong. Right now it is formatted as an array of uint8_t in RGB order. Is there another way I should be formatting it? (Note: I do not need an alpha channel.)
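One thing worth checking (an assumption, not a verified fix): SDL_PIXELFORMAT_RGB888 is a packed 32-bit format with 4 bytes per pixel, so a pitch of window.w * 3 doesn't match it. For a tightly packed {r,g,b,r,g,b,...} byte array, SDL_PIXELFORMAT_RGB24 is the matching format, so the update might look like:
SDL_Texture* img = SDL_CreateTexture(ren, SDL_PIXELFORMAT_RGB24,
                                     SDL_TEXTUREACCESS_STREAMING, window.w, window.h);
SDL_UpdateTexture(img, NULL, &data[0], window.w * 3); // pitch = 3 bytes per pixel per row
SDL_RenderCopy(ren, img, NULL, NULL);
SDL_DestroyTexture(img);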

Unable to create image from compressed texture data (S3TC)

I've been trying to load compressed images with S3TC (BC/DXT) compression in Vulkan, but so far I haven't had much luck.
Here is what the Vulkan specification says about compressed images:
https://www.khronos.org/registry/dataformat/specs/1.1/dataformat.1.1.html#S3TC:
Compressed texture images stored using the S3TC compressed image formats are represented as a collection of 4×4 texel blocks, where each block contains 64 or 128 bits of texel data. The image is encoded as a normal 2D raster image in which each 4×4 block is treated as a single pixel.
https://www.khronos.org/registry/vulkan/specs/1.0/xhtml/vkspec.html#resources-images:
For images created with linear tiling, rowPitch, arrayPitch and depthPitch describe the layout of the subresource in linear memory. For uncompressed formats, rowPitch is the number of bytes between texels with the same x coordinate in adjacent rows (y coordinates differ by one). arrayPitch is the number of bytes between texels with the same x and y coordinate in adjacent array layers of the image (array layer values differ by one). depthPitch is the number of bytes between texels with the same x and y coordinate in adjacent slices of a 3D image (z coordinates differ by one). Expressed as an addressing formula, the starting byte of a texel in the subresource has address:
// (x,y,z,layer) are in texel coordinates
address(x,y,z,layer) = layer*arrayPitch + z*depthPitch + y*rowPitch + x*texelSize + offset
For compressed formats, the rowPitch is the number of bytes between compressed blocks in adjacent rows. arrayPitch is the number of bytes between blocks in adjacent array layers. depthPitch is the number of bytes between blocks in adjacent slices of a 3D image.
// (x,y,z,layer) are in block coordinates
address(x,y,z,layer) = layer*arrayPitch + z*depthPitch + y*rowPitch + x*blockSize + offset;
arrayPitch is undefined for images that were not created as arrays. depthPitch is defined only for 3D images.
For color formats, the aspectMask member of VkImageSubresource must be VK_IMAGE_ASPECT_COLOR_BIT. For depth/stencil formats, aspect must be either VK_IMAGE_ASPECT_DEPTH_BIT or VK_IMAGE_ASPECT_STENCIL_BIT. On implementations that store depth and stencil aspects separately, querying each of these subresource layouts will return a different offset and size representing the region of memory used for that aspect. On implementations that store depth and stencil aspects interleaved, the same offset and size are returned and represent the interleaved memory allocation.
My image is a normal 2D image (0 layers, 1 mipmap), so there's no arrayPitch or depthPitch. Since S3TC compression is directly supported by the hardware, it should be possible to use the image data without decompressing it first. In OpenGL this can be done using glCompressedTexImage2D, and this has worked for me in the past.
In OpenGL I've used GL_COMPRESSED_RGBA_S3TC_DXT1_EXT as the image format; for Vulkan I'm using VK_FORMAT_BC1_RGBA_UNORM_BLOCK, which should be equivalent.
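For reference, the OpenGL upload I used looked roughly like this (a sketch from memory; w and h are the image dimensions and 8 is the BC1 block size in bytes):
glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT1_EXT,
                       w, h, 0, (w / 4) * (h / 4) * 8, srcData);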
Here's my code for mapping the image data:
auto dds = load_dds("img.dds");
auto *srcData = static_cast<uint8_t*>(dds.data());
auto *destData = static_cast<uint8_t*>(vkImageMapPtr); // Pointer to mapped memory of VkImage
destData += layout.offset(); // layout = VkSubresourceLayout of the image
assert((w % 4) == 0);
assert((h % 4) == 0);
assert(blockSize == 8); // S3TC BC1
auto wBlocks = w / 4;
auto hBlocks = h / 4;
for (auto y = decltype(hBlocks){0}; y < hBlocks; ++y)
{
    auto *rowDest = destData + y * layout.rowPitch(); // rowPitch is 0
    auto *rowSrc = srcData + y * (wBlocks * blockSize);
    for (auto x = decltype(wBlocks){0}; x < wBlocks; ++x)
    {
        auto *pxDest = rowDest + x * blockSize;
        auto *pxSrc = rowSrc + x * blockSize; // 4x4 image block
        memcpy(pxDest, pxSrc, blockSize); // 64 bits per block
    }
}
And here's the code for initializing the image:
vk::Device device = ...; // Initialization
vk::AllocationCallbacks allocatorCallbacks = ...; // Initialization
[...] // Load the dds data
uint32_t width = dds.width();
uint32_t height = dds.height();
auto format = dds.format(); // = vk::Format::eBc1RgbaUnormBlock;
vk::Extent3D extent(width, height, 1);
vk::ImageCreateInfo imageInfo(
    vk::ImageCreateFlagBits(0),
    vk::ImageType::e2D, format,
    extent, 1, 1,
    vk::SampleCountFlagBits::e1,
    vk::ImageTiling::eLinear,
    vk::ImageUsageFlagBits::eSampled | vk::ImageUsageFlagBits::eColorAttachment,
    vk::SharingMode::eExclusive,
    0, nullptr,
    vk::ImageLayout::eUndefined
);
vk::Image img = nullptr;
device.createImage(&imageInfo, &allocatorCallbacks, &img);

vk::MemoryRequirements memRequirements;
device.getImageMemoryRequirements(img, &memRequirements);
uint32_t typeIndex = 0;
get_memory_type(memRequirements.memoryTypeBits(), vk::MemoryPropertyFlagBits::eHostVisible, typeIndex); // -> typeIndex is set to 1
auto szMem = memRequirements.size();
vk::MemoryAllocateInfo memAlloc(szMem, typeIndex);
vk::DeviceMemory mem;
device.allocateMemory(&memAlloc, &allocatorCallbacks, &mem); // Note: Using the default allocator (nullptr) doesn't change anything
device.bindImageMemory(img, mem, 0);

uint32_t mipLevel = 0;
vk::ImageSubresource resource(
    vk::ImageAspectFlagBits::eColor,
    mipLevel,
    0
);
vk::SubresourceLayout layout;
device.getImageSubresourceLayout(img, &resource, &layout);
auto *srcData = device.mapMemory(mem, 0, szMem, vk::MemoryMapFlagBits(0));
[...] // Map the dds data (see the mapping code above)
device.unmapMemory(mem);
The code runs without issues, however the resulting image isn't correct. (Screenshots of the source image and the incorrect result are omitted here.)
I'm certain that the problem lies in the first code snippet I've posted; however, in case it doesn't, I've written a small adaptation of the triangle demo from the Vulkan SDK which produces the same result. It can be downloaded here. The source code is included; all I've changed from the triangle demo are the "demo_prepare_texture_image" function in tri.c (lines 803 to 903) and the "dds.cpp" and "dds.h" files. "dds.cpp" contains the code for loading the dds and mapping the image memory.
I'm using gli to load the dds data (which is supposed to "work perfectly with Vulkan"); it is also included in the download above. To build the project, the Vulkan SDK include directory has to be added to the "tri" project, and the path to the dds has to be changed (tri.c, line 809).
The source image ("x64/Debug/test.dds" in the project) uses DXT1 compression. I've tested it on different hardware as well, with the same result.
Any example code for initializing/mapping compressed images would also help a lot.
Your problem is actually quite simple: in the demo_prepare_textures function, on the first line, there is a variable tex_format, which is set to VK_FORMAT_B8G8R8A8_UNORM (which is what it is in the original sample). This eventually gets used to create the VkImageView. If you just change this to VK_FORMAT_BC1_RGBA_UNORM_BLOCK, it displays the texture correctly on the triangle.
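That is, the one-line change would look something like this (sketched against the SDK's tri.c; the exact declaration may differ between SDK versions):
const VkFormat tex_format = VK_FORMAT_BC1_RGBA_UNORM_BLOCK; // was VK_FORMAT_B8G8R8A8_UNORM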
As an aside, you can verify that your texture loaded correctly with RenderDoc, which comes with the Vulkan SDK installation. Doing a capture and looking at the Inputs in the Texture Viewer tab shows that your texture looks identical to the one on disk, even with the incorrect format.

How to use a .raw file in opengl

I'm trying to read a .raw image format and do some modifications on it in OpenGL. I can read the image like this:
int width, height;
BYTE * data;
FILE * file;

file = fopen(filename, "rb");
if (file == NULL) return 0;

width = 256;
height = 256;
data = malloc(width * height * 3);

fread(data, width * height * 3, 1, file);
fclose(file);
But I don't know how to use glDrawPixels to draw the picture.
My second problem is that I don't know how to access each pixel. I mean, in a .raw image format each pixel should have 3 integers storing the RGB values (am I right?). How can I access these RGB values directly?
There's no such thing as a .raw in the hard and fast sense. The name implies image data with no header but doesn't specify the format of the data. RGB is likely but so is RGBA and it's trivial to think of almost endless other possibilities.
Assuming RGB ordering, one byte per channel, then: each pixel is three bytes wide. So the nth pixel is:
r = data[n*3 + 0]
g = data[n*3 + 1]
b = data[n*3 + 2]
Assuming the data is set out so that the pixels are stored in left-to-right order, line by line, then on the first line the pixel at x=3 is at n=3, on the second it's at n=(width of first line)+3, on the third it's at n=(combined width of first two lines)+3, etc.
So:
r = data[(x + y*width)*3 + 0]
g = data[(x + y*width)*3 + 1]
b = data[(x + y*width)*3 + 2]
To use glDrawPixels just follow what the manual tells you to specify as the parameters. It says:
void glDrawPixels(GLsizei width,
                  GLsizei height,
                  GLenum format,
                  GLenum type,
                  const GLvoid * data);
You say that width and height are 256. You've said that the format is RGB. Scan down the documentation and you'll see that the corresponding GLenum is GL_RGB. You're saying each channel is a single byte in size. So that's GL_UNSIGNED_BYTE. You've loaded the data to data. So:
glDrawPixels(256, 256, GL_RGB, GL_UNSIGNED_BYTE, data);
Further comments: obviously get this working first so you've something to build on, but glDrawPixels is almost unused in practice. As a result it isn't even part of OpenGL ES or, correspondingly, WebGL. Look at the semantics of the thing: you supply your buffer every time you call it, so OpenGL can't know whether it has been modified since the last call, and every call transfers your data from CPU to GPU. Look into submitting your data once as a texture and drawing with geometry. That saves the per-call transfer cost and is therefore a lot more efficient.
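A minimal sketch of the texture route (assuming the same 256x256 RGB byte buffer; drawing the textured quad itself is omitted):
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows are tightly packed, 3 bytes per pixel
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0,
             GL_RGB, GL_UNSIGNED_BYTE, data); // upload once
// ...then draw a textured quad each frame instead of calling glDrawPixels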

32bit (int) Buffer to Greyscale/Colour-mapped Image in OpenGL, Single Channel 32 bit Texture or TBO?

I have an int buffer of intensity values, I want to display this as a greyscale/colour-mapped image in OpenGL.
What is the best way to achieve this?
Standard Texture?
Can I do it via a standard glTexture, so something like:
gl.TexImage2D(OpenGL.GL_TEXTURE_2D, 0, OpenGL.GL_R32f, width, height, 0, OpenGL.GL_RED_INTEGER, OpenGL.GL_UNSIGNED_INT, pixels);
In the shader I am under the impression I would use it the same as any other texture except I would use usampler2D instead of sampler2D, at which point I would get the true integer value (i.e. not 0-1 range).
TBO?
Or would it be better to achieve with a TBO and do something like:
gl.TexBuffer(OpenGL.GL_TEXTURE_BUFFER, OpenGL.GL_R32F, bufferID);
In terms of the shader I am actually quite confused. I have seen things like g = texelFetch(u_tbo_tex, offset + 1).r. So I am guessing I would have to translate the texture coordinates into an offset, something like:
int offset = tex_coord.s + (tex_coord.t * imageWidth);
but then texelFetch actually returns a vec4, so presumably I would use:
int intensity = texelFetch( buffer, offset).r
But then as tex_coord.s & t are in 0-1, that would imply the need to:
int offset = tex_coord.s*imageHeight + ((tex_coord.t * imageWidth) * imageWidth);
Other Buffer
I have very little experience with buffer objects. I feel like all I am really doing is using a buffer in GL... so I do feel like I am overcomplicating it and missing the "penny drop".
Important Notes
Why int? In some cases I do some manipulation on the data before turning it into a colour, and would prefer to do this at 32-bit precision to avoid potential precision errors. Arguably it might not make a difference, as it eventually becomes a screen colour...
Data update frequency: the intensity data is updated occasionally by user events, but certainly not multiple times per frame (so I am presuming STATIC is more appropriate than DYNAMIC in this case?)
Use: The data is mainly for GL, so _DRAW. There is the possibility that the application could make use of GL to compute some values for it, but I would probably create a separate READ buffer in this case.
The highest integer value I have seen so far is "90,000", so I know it exceeds the 16-bit integer range.
Note: I am doing this through SharpGL and I have been unable to test at the moment, as it has no definition for GL_R32f, so I shall have to find the gl.h on my Windows platform (always fun) and add the correct const number.
You can use a normal texture with integer/unsigned integer format:
gl.TexImage2D(OpenGL.GL_TEXTURE_2D, 0, OpenGL.GL_R32UI, width, height, 0, OpenGL.GL_RED_INTEGER, OpenGL.GL_UNSIGNED_INT, pixels);
In the shader you can use a usampler2D; since the texture function has an overload for this, you directly get the integer values:
uniform usampler2D myUTexture;
uint value = texture(myUTexture, texCoord).r;
Edit:
Just for completeness: texelFetch also has an overload for every type of 2D sampler. The difference between texture and texelFetch is the coordinate system used ([0,1] for texture, integer pixel coordinates for texelFetch) and that texelFetch does not take any interpolation/mipmapping into account.
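For example, converting a [0,1] texture coordinate into the pixel coordinates texelFetch expects could look like this (a GLSL sketch; the variable names are made up):
uniform usampler2D myUTexture;
in vec2 texCoord; // [0,1] range
// ...
ivec2 texSize = textureSize(myUTexture, 0);
uint value = texelFetch(myUTexture, ivec2(texCoord * vec2(texSize)), 0).r;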

sliding window std filter for a one channel image

I would like to apply a "std filter" with a fixed patch size to a single-channel image.
That is, I want out[i,j] to equal the std of the pixel values in a neighborhood around img[i,j].
For those of you who are familiar with Matlab, I'm looking for the equivalent of
>> out = nlfilter( img, [P P], @std );
Is there a way to do this using ippi functions?
I came across ippiMean_StdDev but it seems to work for a single window, and not a sliding window (returning a scalar value rather than an array).
I also saw ippiRectStdDev, but the manual states this function is for integral images, and I don't see how this applies in my case.
Does anyone have a working example or a more detailed manual for this?
Finally I figured it out.
- The input image must be in uint8 format.
- You need to allocate two intermediate buffers (32-bit float and 64-bit float in my case).
- Array sizes:
  - input: H x W
  - filter: P x P
  - result: (H-P+1) x (W-P+1)
  - intermediate buffers (32f and 64f): (H+1) x (W+1) (note the plus one for the integral image boundary!)
// first, compute the integral and squared-integral images
IppiSize sz;
sz.width  = W;
sz.height = H;
ippiSqrIntegral_8u32f64f_C1R(uint8ImgPtr, W * sizeof(unsigned char),
                             d32ImgPtr, (W + 1) * sizeof(float),
                             d64ImgPtr, (W + 1) * sizeof(double),
                             sz, 0, 0);

// using the integral images, compute the std filter result
IppiRect rect = { 0, 0, P, P };
IppiSize dsz;
dsz.width  = W - P + 1;
dsz.height = H - P + 1;
ippiRectStdDev_32f_C1R(d32ImgPtr, (W + 1) * sizeof(float),
                       d64ImgPtr, (W + 1) * sizeof(double),
                       dstPtr, (W - P + 1) * sizeof(float), dsz, rect);