The documentation of glTexImage2D states that internalformat may be 1, 2, 3, or 4 to specify the number of components. Does this apply only to color textures, or could I pass 1 instead of GL_DEPTH_COMPONENT and subsequently use the texture as a depth buffer target?
If you use the generic component counts (which you should never do; always use internal formats with explicit sizes), you get a color image format with at least as many channels as you asked for; it is never a depth format, so it cannot be used as a depth buffer target.
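For illustration, a minimal sketch (assuming a texture already bound to GL_TEXTURE_2D; width, height and pixels are placeholders) contrasting the legacy component count with explicit sized formats, including a sized depth format for a depth buffer target:

/* Legacy: "4" only requests a color format with at least 4 channels; the size is unspecified. */
glTexImage2D(GL_TEXTURE_2D, 0, 4, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* Preferred: an explicit sized color format. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* For a depth buffer target, use a sized depth internal format, not a component count. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);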
I have an image with the format R8G8B8A8 (Unorm).
I want to write uint data to it (to be able to use atomic functions).
So, when I want to write to it, I use this in the GLSL:
layout(set = 0, binding = 2, r32ui) restrict writeonly uniform uimage3D dst;
However, when I am performing something like
imageStore(dst, coords, uvec4(0xffffffff));
RenderDoc (and my app as well) shows that all my values are 0 (instead of 1.0, i.e. 255 in unorm).
If I replace r32ui with rgba8, everything works fine, but then I cannot use atomic operations. (If I use r32f instead of rgba8, it works fine as well.) So I wonder whether it is possible to do such a thing.
Do you have any solution?
The Vulkan specification guarantees that atomic operations on storage images (VK_FORMAT_FEATURE_STORAGE_IMAGE_ATOMIC_BIT) are supported only for the R32_UINT and R32_SINT formats. Implementations may add such support for other formats as well, but it is not obligatory. So it is not surprising that atomic operations don't work with the rgba8 format.
Next: you can create an image view with a format different from the format of the image. In that case, the image view's format must be compatible with the image's format. For the R8G8B8A8 format, both the R32 SINT and UINT formats are compatible (they have the same number of bits per texel). But to be able to create an image view with a different format, the image itself must be created with the VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT flag.
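A rough sketch of what that looks like (assuming a 3D storage image as in your case; device, width, height and depth are assumed handles/values, and memory allocation plus error handling are omitted):

/* Image created in its "native" format, but marked mutable so views of a
 * compatible format (same bits per texel) can be created later. */
VkImageCreateInfo imageInfo = {
    .sType         = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
    .flags         = VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT,
    .imageType     = VK_IMAGE_TYPE_3D,
    .format        = VK_FORMAT_R8G8B8A8_UNORM,
    .extent        = { width, height, depth },
    .mipLevels     = 1,
    .arrayLayers   = 1,
    .samples       = VK_SAMPLE_COUNT_1_BIT,
    .tiling        = VK_IMAGE_TILING_OPTIMAL,
    .usage         = VK_IMAGE_USAGE_STORAGE_BIT | VK_IMAGE_USAGE_SAMPLED_BIT,
    .sharingMode   = VK_SHARING_MODE_EXCLUSIVE,
    .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
};
VkImage image;
vkCreateImage(device, &imageInfo, NULL, &image);
/* ... allocate and bind device memory for the image here ... */

/* View with a different but compatible format, used for the r32ui atomics. */
VkImageViewCreateInfo viewInfo = {
    .sType            = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
    .image            = image,
    .viewType         = VK_IMAGE_VIEW_TYPE_3D,
    .format           = VK_FORMAT_R32_UINT,
    .subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 },
};
VkImageView view;
vkCreateImageView(device, &viewInfo, NULL, &view);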
One last thing - there is a note in the specification about format compatibility/mutable images:
Values intended to be used with one view format may not be exactly preserved when written or read through a different format. For example, an integer value that happens to have the bit pattern of a floating point denorm or NaN may be flushed or canonicalized when written or read through a view with a floating point format. Similarly, a value written through a signed normalized format that has a bit pattern exactly equal to -2^b may be changed to -2^b + 1 as described in Conversion from Normalized Fixed-Point to Floating-Point.
Maybe this is the problem? Though it seems there should be no conversion between rgba8 (unorm) and r32 (uint). Did the validation layers report any warnings or errors? What layout is your image in when you try to store data into it? Don't forget that:
Load and store operations on storage images can only be done on images in the VK_IMAGE_LAYOUT_SHARED_PRESENT_KHR or VK_IMAGE_LAYOUT_GENERAL layout.
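If the layout is the culprit, a minimal sketch (assuming a recording command buffer cmd and the image handle from above) of a barrier that puts the image into the GENERAL layout before the shader stores into it:

VkImageMemoryBarrier toGeneral = {
    .sType               = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
    .srcAccessMask       = 0,
    .dstAccessMask       = VK_ACCESS_SHADER_WRITE_BIT,
    .oldLayout           = VK_IMAGE_LAYOUT_UNDEFINED,
    .newLayout           = VK_IMAGE_LAYOUT_GENERAL,   /* required for image load/store */
    .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .image               = image,
    .subresourceRange    = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 },
};
vkCmdPipelineBarrier(cmd,
                     VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
                     VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT, /* use the stage that performs the imageStore */
                     0, 0, NULL, 0, NULL, 1, &toGeneral);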
I did search and read stuff about this but couldn't understand it.
What's the difference between a texture internal format and format in a call like
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 32, 32, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
?
Let's assume that data is an array of 32 x 32 pixel values where there are four bytes per each pixel (unsigned char data 0-255) for red, green, blue and alpha.
What's the difference between the first GL_RGBA and the second one? Why is GL_RGBA_INTEGER invalid in this context?
The format (7th argument), together with the type argument, describes the data you pass in as the last argument. So the format/type combination defines the memory layout of the data you pass in.
internalFormat (2nd argument) defines the format that OpenGL should use to store the data internally.
Often the two will be very similar, and it is in fact beneficial to make the two formats directly compatible; otherwise there will be a conversion while loading the data, which can hurt performance. Full OpenGL allows combinations that require conversions, while OpenGL ES limits the supported combinations so that conversions are not needed in most cases.
The reason GL_RGBA_INTEGER is not legal in this case is that there are rules about which conversions between format and internalFormat are supported. Here, GL_RGBA for internalFormat specifies a normalized format, while GL_RGBA_INTEGER for format specifies that the input consists of values that should be used as integers. There is no conversion defined between the two.
While GL_RGBA for internalFormat is still supported for backwards compatibility, sized types are generally used for internalFormat in modern versions of OpenGL. For example, if you want to store the data as an 8-bit per component RGBA image, the value for internalFormat is GL_RGBA8.
Frankly, I think there would be cleaner ways of defining these APIs. But this is just the way it works. Partly it evolved this way to maintain backwards compatibility to OpenGL versions where features were much more limited. Newer versions of OpenGL add the glTexStorage*() entry points, which make some of this nicer because it separates the internal data allocation and the specification of the data.
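A brief sketch of that newer path (assuming a texture bound to GL_TEXTURE_2D and the same 32 x 32 RGBA byte data as in the question):

/* Allocate immutable storage with a sized internal format... */
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 32, 32);
/* ...then format/type describe only the client data being uploaded. */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 32, 32,
                GL_RGBA, GL_UNSIGNED_BYTE, data);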
The internal format describes how the texture is to be stored on the GPU. The format (together with the type parameter) describes the layout of your pixel data in client memory.
Note that the internal format specifies both the number of channels (1 to 4) and the data type, while for the pixel data in client memory these are given by two separate parameters.
The GL will convert your pixel data to the internal format. If you want efficient texture uploads, you should use matching formats so that no conversion is needed. Be aware, however, that most GPUs store texture data in BGRA order; this is still represented by the internal format GL_RGBA, because the internal format only describes the number of channels and the data type, while the internal layout is entirely GPU-specific. That is why, for maximum upload performance, it is often recommended to use GL_BGRA as the format of your pixel data in client memory.
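A small sketch of that upload path (assuming bgra_pixels holds client data already laid out as B, G, R, A bytes):

/* The internal format is still GL_RGBA8; only the client-side layout changes. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, bgra_pixels);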
Let's assume that data is an array of 32 x 32 pixel values where there are four bytes per each pixel (unsigned char data 0-255) for red, green, blue and alpha.
What's the difference between the first GL_RGBA and the second one?
The first one, internalFormat, tells the GL that it should store the texture as 4-channel (RGBA) normalized integers at the implementation's preferred precision (typically 8 bits per channel). The second one, format, tells the GL that you are providing 4 channels per pixel in R, G, B, A order.
You could, for example, supply the data as 3-channel RGB and the GL would automatically extend it to RGBA (setting A to 1) if the internal format is left at RGBA. You could also supply only the red channel.
The other way around, if you use GL_RED as the internalFormat, the GL would ignore the G, B and A channels in your input data.
Also note that the data types will be converted. If you provide RGBA pixel data with a 32-bit float per channel, you would use GL_FLOAT as the type. However, if you still use the GL_RGBA internal format, the GL will convert the values to normalized integers with 8 bits per channel, so the extra precision is lost. If you want the GL to keep the floating-point precision, you also have to use a floating-point internal format like GL_RGBA32F.
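A minimal sketch of that case (float_pixels assumed to hold four floats per pixel):

/* Without GL_RGBA32F as the internal format, this data would be squashed to 8-bit unorm. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
             GL_RGBA, GL_FLOAT, float_pixels);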
Why is GL_RGBA_INTEGER invalid in this context?
The _INTEGER formats are for unnormalized integer textures. There is no automatic conversion for integer textures in the GL: you have to use an integer internal format AND specify your pixel data with one of the _INTEGER formats, otherwise you get an error.
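For example, a sketch of a matching pair for an unnormalized 8-bit integer texture (pixels assumed to hold four unsigned bytes per pixel; read in the shader via a usampler2D):

/* Integer internal format + _INTEGER client format: no normalization, no conversion. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, width, height, 0,
             GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, pixels);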
I'm trying to find ways to copy multidimensional arrays from host to device in OpenCL, and I thought an approach was to use an image, which can be a 1-, 2-, or 3-dimensional object. However, I'm confused because when reading a pixel from an image, vector datatypes are used. Normally I would think double pointer, but it doesn't sound like that is what is meant by vector datatypes. Anyway, here are my questions:
1) What is actually meant by a vector datatype, and why wouldn't we just specify 2 or 3 indices when denoting pixel coordinates? It looks like a single value such as float2 is being used to denote coordinates, but that makes no sense to me. I'm looking at the functions read_imageui and read_image.
2) Can the input image be just a subset of the entire image, and can the sampler be a subset of the input image? I don't understand how the coordinates are actually specified here either, since read_image() only seems to take a single value for the input and a single value for the sampler.
3) If doing linear algebra, should I just bite the bullet and translate the 1-D array data from the buffer into multi-dimensional arrays in OpenCL?
4) I'm still interested in images, so even if what I want to do is not best for images, could you still explain questions 1 and 2?
Thanks!
EDIT
I wanted to refine my question and ask: in the following Khronos documentation they define...
int4 read_imagei (
image2d_t image,
sampler_t sampler,
int2 coord)
But nowhere can I find what image2d_t's definition or structure is supposed to be. The same thing goes for sampler_t and int2 coord. They seem like structs to me, or pointers to structs, since OpenCL is supposed to be based on ANSI C, but what are the fields of these structs, and how do I write the coord with what looks like a scalar?! I've seen the notation (int2)(x,y), but that's not ANSI C; that looks like Scala, haha. Things seem conflicting to me. Thanks again!
In general you can read from images in three different ways:
direct pixel access, no sampling
sampling, normalized coordinates
sampling, integer coordinates
The first one is what you want, that is, you pass integer pixel coordinates like (10, 43) and it will return the contents of the image at that point, with no filtering whatsoever, as if it were a memory buffer. You can use the read_image*() family of functions which take no sampler_t param.
The second one is what most people want from images, you specify normalized image coords between 0 and 1, and the return value is the interpolated image color at the specified point (so if your coordinates specify a point in between pixels, the color is interpolated based on surrounding pixel colors). The interpolation, and the way out-of-bounds coordinates are handled, are defined by the configuration of the sampler_t parameter you pass to the function.
The third one is the same as the second one, except the texture coordinates are not normalized, and the sampler needs to be configured accordingly. In some sense the third way is closer to the first, and the only additional feature it provides is the ability to handle out-of-bounds pixel coordinates (for instance, by wrapping or clamping them) instead of you doing it manually.
Finally, the different versions of each function, e.g. read_imagef, read_imagei, read_imageui are to be used depending on the pixel format of your image. If it contains floats (in each channel), use read_imagef, if it contains signed integers (in each channel), use read_imagei, etc...
Writing to an image on the other hand is straightforward, there are write_image{f,i,ui}() functions that take an image object, integer pixel coordinates and a pixel color, all very easy.
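To make that concrete, a small sketch of an OpenCL C kernel (assuming both images were created with an unsigned integer channel type, so the *ui variants apply):

/* Sampler: integer (non-normalized) coordinates, no filtering, clamp at edges. */
__constant sampler_t samp = CLK_NORMALIZED_COORDS_FALSE |
                            CLK_ADDRESS_CLAMP_TO_EDGE |
                            CLK_FILTER_NEAREST;

__kernel void copy_pixels(read_only image2d_t src, write_only image2d_t dst)
{
    /* int2 is a built-in 2-component vector; (int2)(x, y) constructs one. */
    int2 coord = (int2)(get_global_id(0), get_global_id(1));
    uint4 pixel = read_imageui(src, samp, coord);  /* one RGBA pixel, four uints */
    write_imageui(dst, coord, pixel);
}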
Note that you cannot read from and write to the same image in the same kernel! (I don't know whether recent OpenCL versions have changed that.) In general I would recommend using a buffer if you are not going to be using images as actual images (i.e. input textures that you sample, or output textures that you write to only once at the end of your kernel).
About the image2d_t and sampler_t types: they are OpenCL "pseudo-objects" that you can pass into a kernel from C (they are reserved types). You send your image or your sampler from the C side via clSetKernelArg, and the kernel receives a sampler_t or an image2d_t in its parameter list (just like you pass in a buffer object and it gets a pointer). The objects themselves cannot be meaningfully manipulated inside the kernel; they are just handles that you pass to the read_image/write_image functions, along with a few others.
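On the host side, a sketch of how those handles are passed (assuming a kernel whose first two parameters are an image2d_t and a sampler_t, and that kernel, image and sampler were created beforehand with clCreateKernel, clCreateImage and clCreateSampler):

/* The kernel's image2d_t and sampler_t parameters receive these handles. */
clSetKernelArg(kernel, 0, sizeof(cl_mem), &image);
clSetKernelArg(kernel, 1, sizeof(cl_sampler), &sampler);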
As for the "actual" low-level difference between images and buffers, GPUs often have specially reserved texture memory that is highly optimized for "read often, write once" access patterns, with dedicated texture sampling hardware and texture caches to optimize scattered reads, mipmaps, etc.
On the CPU there is probably no underlying difference between an image and a buffer, and your runtime likely implements both as memory arrays while enforcing image semantics.
For reasons detailed here, I need to texture a quad using a bitmap (as in, 1 bit per pixel, not an 8-bit pixmap).
Right now I have the bitmap stored in an on-device buffer, and I am uploading it to a texture like so:
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, BFR.G[(T+1)%2]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, W, H, 0, GL_COLOR_INDEX, GL_BITMAP, 0);
The OpenGL spec has this to say about glTexImage2D:
"If type is GL_BITMAP, the data is considered as a string of unsigned bytes (and format must be GL_COLOR_INDEX). Each data byte is treated as eight 1-bit elements..."
Judging by the spec, each bit in my buffer should correspond to a single pixel. However, the following experiments show that, for whatever reason, it doesn't work as advertised:
1) When I build my texture, I write to the buffer in 32-bit chunks. From the wording of the spec, it is reasonable to assume that writing 0x00000001 for each value would result in a texture with 1-px-wide vertical bars with 31-wide spaces between them. However, it appears blank.
2) Next, I write with 0x000000FF. By my apparently flawed understanding of the bitmap mode, I would expect that this should produce 8-wide bars with 24-wide spaces between them. Instead, it produces a white 1-px-wide bar.
3) 0x55555555 is 01010101...01010101 in binary (alternating bits), so writing this value ought to create 1-wide vertical stripes with 1-pixel spacing. However, it creates a solid gray color.
4) Using my original 8-bit pixmap in GL_BITMAP mode produces the correct animation.
I have reached the conclusion that, even in GL_BITMAP mode, the texturer is still interpreting 8 bits as one element, despite what the spec seems to suggest. The fact that I can generate a gray color (when I was expecting two-tone output), as well as the fact that my original 8-bit pixmap generates the correct picture, supports this conclusion.
Questions:
1) Am I missing some kind of prerequisite call (perhaps for setting a stride length or pack alignment or something) that will signal to the texturer to treat each byte as 8-elements, as it suggests in the spec?
2) Or does it simply not work because modern hardware does not support it? (I have read that GL_BITMAP mode was deprecated in 3.3; I am, however, forcing a 3.0 context.)
3) Am I better off unpacking the bitmap into a pixmap using a shader? This is a far more roundabout solution than I was hoping for but I suppose there is no such thing as a free lunch.
Just getting started with openFrameworks, and I'm trying to do something that should be simple: test the colour of the pixel at a particular point on the screen.
I find there's no nice way to do this in openFrameworks, but I can drop down into OpenGL and use glReadPixels. However, I'm having a lot of trouble with it.
Based on http://www.opengl.org/sdk/docs/man/xhtml/glReadPixels.xml I started off trying to do this:
glReadPixels(x,y, 1,1, GL_RGB, GL_INT, &an_int);
I figured that since I was checking the value of a single pixel (width and height are 1) and giving it GL_INT as the type and GL_RGB as the format, a single pixel should take up a single int (4 bytes); hence I passed a pointer to an int as the data argument.
However, the first thing I noticed was that glReadPixels seemed to be clobbering some other local variables in my function, so I changed to making an array of 10 ints and now pass that. This has stopped the weird side effects, but I still have no idea how to interpret what it's returning.
So ... what's the right combination of format and type arguments that I should be passing to safely get something that can easily be unpacked into its RGB values? (Note that I'm doing this through openFrameworks, so I'm not explicitly setting up OpenGL myself; I guess I'm just getting the openFrameworks/OpenGL defaults. The only bit of configuration I know I'm doing is NOT setting up alpha blending, which I believe means that pixels are represented by 3 bytes (R, G, B but no alpha).) I assume that GL_RGB is the format that corresponds to this.
If you do it that way, you need three ints: one for R, one for G and one for B. I think you should use:
unsigned char rgb[3];
glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, rgb);
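After that call, a small usage sketch (assuming stdio.h is included and the read succeeded):

/* rgb[0], rgb[1], rgb[2] hold the red, green and blue values (0-255) of the
 * pixel at window coordinates (x, y); note that OpenGL's origin is the
 * lower-left corner of the window. */
printf("R=%u G=%u B=%u\n", rgb[0], rgb[1], rgb[2]);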