Using `GL_UNSIGNED_INT_24_8` with `glTexImage2D` - opengl

According to the wiki and this answer, it should be possible to use the enums GL_UNSIGNED_INT_24_8 and GL_FLOAT_32_UNSIGNED_INT_24_8_REV with glTexImage2D to upload image data for packed depth-stencil formats. But according to the reference pages, these types are not supported by that function (they are, however, listed in the OpenGL ES reference pages).
Is this a mistake in the reference pages, or is it actually not possible to use these types for pixel upload? If they can't be used, is there some other way to upload to this type of texture (other than rendering to it)?

The reference page is missing information (as it is for glTexSubImage2D). And that's not the only missing information. For example, GL_UNSIGNED_INT_5_9_9_9_REV isn't listed as a valid type, but it is listed in the errors section as if it were a valid type. For whatever reason, they've been doing a better job keeping the ES pages updated and accurate than the desktop GL pages.
It's best to look at the OpenGL specification for these kinds of details, especially if you see a contradiction like this.

Related

Render to swap chain using compute shader

I'm trying to use a compute shader to render directly to the swap chain.
For this, I need to create the swapchain with the usage VK_IMAGE_USAGE_STORAGE_BIT.
The problem is that the swapchain needs to be created with the format VK_FORMAT_B8G8R8A8_UNORM or VK_FORMAT_B8G8R8A8_SRGB and neither of the 2 allows the format feature VK_FORMAT_FEATURE_STORAGE_IMAGE_BIT with the physical device I use.
Did I get something wrong, or is it impossible to render to the swapchain with a compute shader using my configuration?
Vulkan imposes no requirement on the implementation that it permit direct usage of a swapchain image in a compute shader operation (FYI: "rendering" typically refers to a very specific operation; it's not something that happens in a compute shader). Therefore, it is entirely possible that the implementation will forbid you from using a swapchain image in a CS through various means.
If you cannot create the swapchain images in your preferred format, then your next best bet is to see whether there is a format compatible with the swapchain's format that you can use as a storage image, and create an image view in that format. This, however, requires that the implementation support the extension VK_KHR_swapchain_mutable_format, that the swapchain's creation flags include VK_SWAPCHAIN_CREATE_MUTABLE_FORMAT_BIT_KHR, and that you provide a VkImageFormatListCreateInfoKHR listing the formats you intend to create views for.
Also, given support, this would mean that your CS has to swap the component ordering of the data it writes. And don't forget that, when you create the swapchain, you have to request storage usage for its images (imageUsage); the implementation may directly forbid this.
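Given that support, the relevant part of the swapchain creation might look like the sketch below (a fragment, not a complete program; surface, extent, and image-count fields are omitted, and the view formats are illustrative):

```c
/* Sketch: a swapchain whose images may also be viewed in a second
 * format, assuming the device exposes VK_KHR_swapchain_mutable_format. */
VkFormat viewFormats[] = {
    VK_FORMAT_B8G8R8A8_UNORM,   /* presentation format */
    VK_FORMAT_R8G8B8A8_UNORM,   /* format the compute shader writes through */
};

VkImageFormatListCreateInfoKHR formatList = {
    .sType = VK_STRUCTURE_TYPE_IMAGE_FORMAT_LIST_CREATE_INFO_KHR,
    .viewFormatCount = 2,
    .pViewFormats = viewFormats,
};

VkSwapchainCreateInfoKHR swapchainInfo = {
    .sType = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR,
    .pNext = &formatList,
    .flags = VK_SWAPCHAIN_CREATE_MUTABLE_FORMAT_BIT_KHR,
    .imageFormat = VK_FORMAT_B8G8R8A8_UNORM,
    .imageColorSpace = VK_COLOR_SPACE_SRGB_NONLINEAR_KHR,
    /* Request storage usage up front; creation fails or the usage is
     * rejected if the implementation does not support it. */
    .imageUsage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT
                | VK_IMAGE_USAGE_STORAGE_BIT,
    /* surface, minImageCount, imageExtent, etc. omitted */
};
```

Whether this succeeds is still up to the implementation; query the surface's supported usage flags and the format features first.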

Should each texture have its own dedicated renderer in SDL?

I'm attempting to learn SDL2 and am having difficulties from a practical perspective. I feel like I have a good understanding of SDL windows, renderers, and textures from an abstract perspective. However, I feel like I need to know more about what's going on under the hood to use them appropriately.
For example, when creating a texture I am required to provide a reference to a renderer. I find this odd. A texture seems like it is a resource that is loaded into VRAM. Why should I need to give a resource a reference to a renderer? I understand why it would be necessary to give a renderer a reference to a texture, however, vice versa it doesn't make any sense.
So that leads to another question. Since each texture requires a renderer, should each texture have its own dedicated renderer, or should multiple textures share a renderer?
I feel like there are consequences going down one route versus the other.
Short Answers
I believe the reason an SDL_Texture requires a renderer is that some backend implementations (e.g. OpenGL) have contexts (which is essentially what an SDL_Renderer is), and the image data must be associated with that particular context. You cannot use a texture created in one context inside another.
As for your other question: no, you don't need or want a renderer for each texture. Creating one renderer per texture would probably only produce correct results with the software backend, for the same reason (contexts).
As @keltar correctly points out, none of the renderers will work with a texture that was created by a different renderer, due to a check in SDL_RenderCopy. However, this is strictly an API requirement to keep things consistent; my point above is that even if that check were absent, it would not work for backends such as OpenGL, though there is no technical reason it would not work for the software renderer.
Some Details about SDL_Renderer
Remember that SDL_Renderer is an abstract interface to multiple possible backends (OpenGL, OpenGLES, D3D, Metal, Software, more?). Each of these are going to possibly have restrictions on sharing data between contexts and therefore SDL has to limit itself in the same way to maintain sanity.
Example of OpenGL restrictions
Here is a good resource for general restrictions and platform dependent functionality on OpenGL contexts.
As you can see from that page, sharing between contexts has restrictions.
Sharing can only occur in the same OpenGL implementation
This means that you certainly can't share between an SDL_Renderer using OpenGL and a different SDL_Renderer using another backend.
You can share data between different OpenGL Contexts
...
This is done using OS Specific extensions
Since SDL is cross-platform, this means they would have to write special code for each platform to support it, and not all OpenGL implementations support it at all, so it's better for SDL simply not to support this either.
each extra render context has a major impact on the application's performance
While not a restriction, this is a reason why adding support for sharing textures is not worthwhile for SDL.
Final Note: the 'S' in SDL stands for "simple". If you need to share data between contexts SDL is simply the wrong tool for the job.
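To illustrate the intended pattern (one renderer per window, shared by all of that window's textures), a minimal sketch assuming SDL2; error handling is omitted for brevity:

```c
#include <SDL.h>

int main(void)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("demo", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 640, 480, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

    /* Both textures belong to the same renderer, i.e. the same
     * backend context -- this is the normal arrangement. */
    SDL_Texture *a = SDL_CreateTexture(ren, SDL_PIXELFORMAT_RGBA8888,
                                       SDL_TEXTUREACCESS_STATIC, 64, 64);
    SDL_Texture *b = SDL_CreateTexture(ren, SDL_PIXELFORMAT_RGBA8888,
                                       SDL_TEXTUREACCESS_TARGET, 64, 64);

    SDL_RenderClear(ren);
    SDL_RenderCopy(ren, a, NULL, NULL); /* OK: a was created by ren */
    SDL_RenderPresent(ren);

    SDL_DestroyTexture(a);
    SDL_DestroyTexture(b);
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```

Passing texture `a` to SDL_RenderCopy on a renderer other than `ren` is exactly what the check mentioned above rejects.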

Pango layout flow around container (image)

I'm using Pango for text layout without the cairo backend (currently testing with the win32 backend), and I'd like to know whether Pango is capable of flowing text around an image, or around any given container, or perhaps inside a custom container.
Something like this: Flow around image
I have checked many examples and the Pango API and didn't find such a feature. Maybe I'm missing something, or maybe Pango simply does not have it.
As I said in this answer, you can't. I went through the source code: Pango's graphics handling is primitive to the point of uselessness. Unless there's been some major reworking in the past year, which the release notes don't indicate, it's probably the same now.
The layout you provide as an example is currently only achievable in PDF, which requires every line, word, and glyph to be hard-positioned on the page. While it is theoretically possible to check the alpha channel of the image and wrap the text around the actual image rather than the block containing it, this has not (to the best of my knowledge) ever been implemented in a dynamic output system.
Pango, specifically, cannot even open "holes" in the text for graphics to be added later and, at the code level, doesn't even have the concept of a multi-line cell - hence a line being the size of its largest component.
Your best bet is to look at WebKit for more complex displays. I, for one, have pretty much given up on Pango and it seems to be getting less popular.

What is the purpose of 'framebuffer' when setting up a GL context?

This example code deals with framebuffers before setting up the context.
I've read the man pages of the functions, but I still don't understand exactly what's going on.
So my question is, what exactly is a framebuffer in GLX and how significant is configuring it?
A framebuffer is an area of memory that holds a displayable image; you need one when creating an OpenGL context so that OpenGL has somewhere to store the image it renders. In GLX, the framebuffer configuration (GLXFBConfig) you choose describes that storage: color depth, depth and stencil buffer sizes, double buffering, and so on. The context you create must match the configuration, so picking a suitable one matters.
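As a sketch, choosing a framebuffer configuration in GLX before creating the context might look like the fragment below (assumes an open X `display` and a `screen` number; the attribute values are illustrative):

```c
/* Ask GLX for configurations with 8-bit RGB channels, a 24-bit depth
 * buffer, an 8-bit stencil buffer, and double buffering. */
static const int fb_attribs[] = {
    GLX_RENDER_TYPE,   GLX_RGBA_BIT,
    GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
    GLX_RED_SIZE,      8,
    GLX_GREEN_SIZE,    8,
    GLX_BLUE_SIZE,     8,
    GLX_DEPTH_SIZE,    24,
    GLX_STENCIL_SIZE,  8,
    GLX_DOUBLEBUFFER,  True,
    None
};

int n = 0;
GLXFBConfig *configs = glXChooseFBConfig(display, screen, fb_attribs, &n);
/* Pick configs[0] (or inspect candidates with glXGetFBConfigAttrib),
 * then create the context against that configuration:
 *   GLXContext ctx = glXCreateNewContext(display, configs[0],
 *                                        GLX_RGBA_TYPE, NULL, True); */
```

The attribute list is the "configuring it" part of the question: it fixes what the framebuffer's pixels will store before any rendering happens.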

Difference between OpenGL and D3D pixel formats

I'm continuing to try and develop an OpenGL path for my software. I'm using abstract classes with concrete implementations for both, but obviously I need a common pixel format enumerator so that I can describe texture, backbuffer/frontbuffer and render target formats between the two. I provide a function in each concrete implementation that accepts my abstract identifier for, say, R8G8B8A8, and returns an enum suitable for either D3D or OpenGL.
I can easily enumerate all D3D pixel formats using CheckDeviceFormat. For OpenGL, I'm firstly iterating through Win32 available accelerated formats (using DescribePixelFormat) and then looking at the PIXELFORMATDESCRIPTOR to see how it's made up, so I can assign it one of my enums. This is where my problems start:
I want to be able to discover all accelerated formats that OpenGL supports on any given system, as comparable to a D3D format. But according to the format descriptors, there aren't any RGB formats (they're all BGR). Further, things like DXT1 - 5 formats, enumerable in D3D, aren't enumerable using the above method. For the latter, I suppose I can just assume if the extension is available, it's a hardware accelerated format.
For the former (how to interpret the format descriptor in terms of RGB/BGR, etc.), I'm not too sure how it works.
Anyone know about this stuff?
Responses appreciated.
Ok, I think I found what I was looking for:
OpenGL image formats
Some image formats are defined by the spec (for backbuffer/depth-stencil, textures, render-targets, etc.), so there is a guarantee, to an extent, that these will be available (they don't need enumerating). The pixel format descriptor can still be used to work out available front buffer formats for the given window.