In Vulkan, you can use vkCreateGraphicsPipelines or vkCreateComputePipelines to create pipeline derivatives, via the basePipelineHandle or basePipelineIndex members of VkGraphicsPipelineCreateInfo/VkComputePipelineCreateInfo. The documentation states that this feature is available for performance reasons:
The goal of derivative pipelines is that they be cheaper to create using the parent as a starting point, and that it be more efficient (on either host or device) to switch/bind between children of the same parent.
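For reference, here is a minimal sketch of how those fields are wired up when creating a derivative; most of the create-info setup (shader stages, fixed-function state, render pass) is omitted, and device and pipelineCache are assumed to exist already.

#include <vulkan/vulkan.h>

// Sketch only: the parent opts in with ALLOW_DERIVATIVES, the child sets DERIVATIVE_BIT
// and points at the parent via basePipelineHandle (or basePipelineIndex in a batched call).
VkPipeline makeDerivedPipeline(VkDevice device, VkPipelineCache pipelineCache)
{
    // Parent: must opt in to being derived from.
    VkGraphicsPipelineCreateInfo baseInfo = {};
    baseInfo.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO;
    baseInfo.flags = VK_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT;
    baseInfo.basePipelineHandle = VK_NULL_HANDLE;
    baseInfo.basePipelineIndex = -1;
    // ...fill in stages, vertex input, layout, render pass as usual...

    VkPipeline basePipeline = VK_NULL_HANDLE;
    vkCreateGraphicsPipelines(device, pipelineCache, 1, &baseInfo, nullptr, &basePipeline);

    // Child: start from the parent's description, change what differs, point at the parent.
    VkGraphicsPipelineCreateInfo derivedInfo = baseInfo;
    derivedInfo.flags = VK_PIPELINE_CREATE_DERIVATIVE_BIT;
    derivedInfo.basePipelineHandle = basePipeline;
    derivedInfo.basePipelineIndex = -1;
    // ...e.g. swap in a different fragment shader stage here...

    VkPipeline derivedPipeline = VK_NULL_HANDLE;
    vkCreateGraphicsPipelines(device, pipelineCache, 1, &derivedInfo, nullptr, &derivedPipeline);
    return derivedPipeline;
}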
This raises quite a few questions for me:
Is there a way to indicate which state is shared between parent and child pipelines, or does the implementation decide?
Is there any way to know whether the implementation is actually getting any benefit from using derived pipelines (other than profiling)?
The parent pipeline needs to be created with VK_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT. Is there a downside to always using this flag (e.g. in case you may create a derived pipeline from this one in the future)?
I came to this question investigating whether pipeline derivatives provide a benefit. Here are some resources I found from vendors:
Tips and Tricks: Vulkan Dos and Don’ts, Nvidia, June 6, 2019
Don’t expect speedup from Pipeline Derivatives.
Vulkan Usage Recommendations, Samsung
Pipeline derivatives let applications express "child" pipelines as incremental state changes from a similar "parent"; on some architectures, this can reduce the cost of switching between similar states. Many mobile GPUs gain performance primarily through pipeline caches, so pipeline derivatives often provide no benefit to portable mobile applications.
Recommendations
Create pipelines early in application execution. Avoid pipeline creation at draw time.
Use a single pipeline cache for all pipeline creation.
Write the pipeline cache to a file between application runs.
Avoid pipeline derivatives.
Vulkan Best Practice for Mobile Developers - Pipeline Management, Arm Software, Jul 11, 2019
Don't
Create pipelines at draw time without a pipeline cache (introduces performance stutters).
Use pipeline derivatives as they are not supported.
Vulkan Samples, LunarG, API-Samples/pipeline_derivative/pipeline_derivative.cpp
/*
VULKAN_SAMPLE_SHORT_DESCRIPTION
This sample creates pipeline derivative and draws with it.
Pipeline derivatives should allow for faster creation of pipelines.
In this sample, we'll create the default pipeline, but then modify
it slightly and create a derivative. The derivative will be used to
render a simple cube.
We may later find that the pipeline is too simple to show any speedup,
or that replacing the fragment shader is too expensive, so this sample
can be updated then.
*/
It doesn't look like any vendor is actually recommending the use of pipeline derivatives, except maybe to speed up pipeline creation.
To me, that seems like an idea which is good in theory, for some hypothetical implementation, but which doesn't amount to much in practice.
Also, if the driver is supposed to benefit from a common parent of multiple pipelines, it should be completely able to automate that ancestor detection. "Common ancestors" could be synthesized based on whichever specific common pipeline states provide the best speed-up. Why specify it explicitly through the API?
Is there a way to indicate which state is shared between parent and child pipelines
No; the pipeline creation API provides no way to tell it what state will change. The idea being that, since the implementation can see the parent's state, and it can see what you ask of the child's state, it can tell what's different.
Also, if there were such a way, it would only represent a way for you to accidentally misinform the implementation as to what changed. Better to just let the implementation figure out the changes.
Is there any way to know whether the implementation is actually getting any benefit from using derived pipelines (other than profiling)?
No.
The parent pipeline needs to be created with VK_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT. Is there a downside to always using this flag (e.g. in case you may create a derived pipeline from this one in the future)?
Probably. Due to #1, the implementation needs to store at least some form of the parent pipeline's state, so that it can compare it to the child pipeline's state. And it must store this state in an easily readable form, which will probably not be the same form as the GPU memory and tokens to be copied into the command stream. As such, there's a good chance that parent pipelines will allocate additional memory for such data. Though the likelihood of them being slower at binding/command execution time is low.
You can test this easily enough by passing an allocator to the pipeline creation functions. If it allocates the same amount of memory as without the flag, then it probably isn't storing anything.
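For instance, a rough counting allocator might look like the sketch below (my own illustration, not production code; it uses std::aligned_alloc, so MSVC users would substitute _aligned_malloc/_aligned_free). You would run pipeline creation once with and once without VK_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT and compare the totals.

#include <vulkan/vulkan.h>
#include <cstdlib>
#include <cstring>
#include <mutex>
#include <unordered_map>

struct AllocStats {
    std::mutex lock;
    std::unordered_map<void*, size_t> live;  // pointer -> requested size
    size_t totalBytes = 0;                   // cumulative bytes requested by the driver
    size_t totalCalls = 0;                   // number of allocation calls
};

static VKAPI_ATTR void* VKAPI_CALL trackedAlloc(void* userData, size_t size, size_t alignment,
                                                VkSystemAllocationScope) {
    auto* stats = static_cast<AllocStats*>(userData);
    size_t rounded = (size + alignment - 1) / alignment * alignment;  // aligned_alloc requirement
    void* ptr = std::aligned_alloc(alignment, rounded);
    if (ptr) {
        std::lock_guard<std::mutex> g(stats->lock);
        stats->live[ptr] = size;
        stats->totalBytes += size;
        ++stats->totalCalls;
    }
    return ptr;
}

static VKAPI_ATTR void VKAPI_CALL trackedFree(void* userData, void* memory) {
    if (!memory) return;
    auto* stats = static_cast<AllocStats*>(userData);
    {
        std::lock_guard<std::mutex> g(stats->lock);
        stats->live.erase(memory);
    }
    std::free(memory);
}

static VKAPI_ATTR void* VKAPI_CALL trackedRealloc(void* userData, void* original, size_t size,
                                                  size_t alignment, VkSystemAllocationScope scope) {
    if (size == 0) {                          // zero-size realloc behaves like a free
        trackedFree(userData, original);
        return nullptr;
    }
    auto* stats = static_cast<AllocStats*>(userData);
    void* fresh = trackedAlloc(userData, size, alignment, scope);
    if (fresh && original) {
        size_t oldSize;
        {
            std::lock_guard<std::mutex> g(stats->lock);
            oldSize = stats->live[original];
        }
        std::memcpy(fresh, original, oldSize < size ? oldSize : size);
        trackedFree(userData, original);
    }
    return fresh;  // if null, the original allocation is left untouched
}

// Usage: fill a VkAllocationCallbacks with these functions and an AllocStats as pUserData,
// pass it to vkCreateGraphicsPipelines, then inspect totalCalls/totalBytes afterwards.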
I'm no expert in computer graphics, but my understanding (which partly includes intuition) is the following:
Is there a way to indicate which state is shared between parent and child pipelines, or does the implementation decide?
There are certain aspects of the pipeline that are not specified at render time (and so are fixed), for example which shaders to use. My speculation is that the parent and the derived pipelines likely share this "read-only" information (or in C terms, they point to the same objects). That's why creation of derived pipelines is faster.
Switching between these pipelines would also be faster because fewer resources need to change when the pipeline changes; some of the resources are shared and stay the same.
The parent pipeline needs to be created with VK_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT. Is there a downside to always using this flag (e.g. in case you may create a derived pipeline from this one in the future)?
This is very likely implementation-dependent. My speculation is that, when you allow derivatives, you enable resource (e.g. shader) sharing, which means the implementation is likely going to do reference counting for these resources. That would be an unnecessary cost if the resources are not going to be shared. Also, when changing pipelines, the driver wouldn't need to check whether each resource is shared and can stay on the GPU, or is not and needs changing. If there is no sharing, all resources would be changed, and there is no overhead of checking. None of these are that much of an overhead, so either Vulkan is staying on the safe side, or there is another reason I don't know about.
Related
Recently I've been reading https://github.com/ARM-software/vulkan_best_practice_for_mobile_developers/blob/master/samples/vulkan_basics.md, and it says:
OpenGL ES uses a synchronous rendering model, which means that an API call must behave as if all earlier API calls have already been processed. In reality no modern GPU works this way, rendering workloads are processed asynchronously and the synchronous model is an elaborate illusion maintained by the device driver. To maintain this illusion the driver must track which resources are read or written by each rendering operation in the queue, ensure that workloads run in a legal order to avoid rendering corruption, and ensure that API calls which need a data resource block and wait until that resource is safely available.
Vulkan uses an asynchronous rendering model, reflecting how the modern GPUs work. Applications queue rendering commands into a queue, use explicit scheduling dependencies to control workload execution order, and use explicit synchronization primitives to align dependent CPU and GPU processing.
The impact of these changes is to significantly reduce the CPU overhead of the graphics drivers, at the expense of requiring the application to handle dependency management and synchronization.
Could someone help explain why an asynchronous rendering model could reduce CPU overhead? Since in Vulkan you still have to track state yourself.
Could someone help explain why an asynchronous rendering model could reduce CPU overhead?
First of all, let's get back to the original statement you are referring to, emphasis mine:
The impact of these changes is to significantly reduce the CPU overhead of the graphics drivers, [...]
So the claim here is that the driver itself will need to consume less CPU, and that is easy to see, as it can more directly forward your requests "as-is".
However, one overall goal of a low-level rendering API like Vulkan is also a potentially reduced CPU overhead in general, not only in the driver.
Consider the following example: You have a draw call which renders to a texture. And then you have another draw call which samples from this texture.
To get the implicit synchronization right, the driver has to track the usage of this texture, both as render target and as source for texture sampling operations.
It doesn't know in advance whether the next draw call will need any resources which are still to be written by previous draw calls. It always has to track every possible conflict of this kind, no matter whether it can actually occur in your application or not. It also must be extremely conservative in its decisions. You might have a texture bound to a framebuffer for a draw call, while you know that, with the actual uniform values you set for the shaders, the texture is never modified. But the GPU driver can't know that. If it can't rule out, with absolute certainty, that a resource is modified, it has to assume it is.
However, your application will more likely know such details. If you have several render passes, and the second pass depends on the texture rendered to in the first, you can (and must) add proper synchronization primitives. The GPU driver doesn't need to care why any synchronization is necessary, and it doesn't need to track any resource usage to find out; it can just do as it is told. And in many cases your application doesn't need to track its own resource usage either: the need for synchronization is inherent in the usage as you coded it. There might still be cases where you need to track your own resource usage to find out, especially if you write some intermediate layer such as a higher-level graphics library where you know less and less about the structure of the rendering. Then you get into a position similar to what a GL driver has to do (unless you want to push the whole burden of synchronization onto the users of your library, like Vulkan does).
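To make that concrete, here is roughly what such an explicit dependency looks like for the render-to-texture example; a minimal sketch recorded between the two passes, assuming cmd and colorImage come from the application.

#include <vulkan/vulkan.h>

// Make pass 2's sampling wait for pass 1's color writes, and transition the image layout.
void addRenderToSampleBarrier(VkCommandBuffer cmd, VkImage colorImage)
{
    VkImageMemoryBarrier barrier = {};
    barrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
    barrier.srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;   // pass 1 writes
    barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;              // pass 2 samples
    barrier.oldLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
    barrier.newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
    barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.image = colorImage;
    barrier.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };

    vkCmdPipelineBarrier(cmd,
                         VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,  // producer stage
                         VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,          // consumer stage
                         0, 0, nullptr, 0, nullptr, 1, &barrier);
}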
As far as I can tell it is possible for a queue family to support presenting to the screen but not support graphics. Say I have a queue family that supports both graphics and presenting, and another queue family that only supports presenting. Should I use the first queue family for both processes or should I delegate the first to graphics and the latter to presenting? Or would there be no noticeable difference between these two approaches?
No such HW exists, so the best approach is no approach. If you want to be really nice, you can handle the separate present queue family case while expending minimal brain-power on it. Though you have no way to test it on real HW that needs it. So I would say aborting with a nice error message would be just as adequate, until you can get your hands on actual HW that does this.
I think there is a bit of a design error here on Khronos's part. A separate present queue does look like the more explicit way. But then, the present op itself is not a queue operation, so the driver can use whatever it wants anyway. Also, a separate present queue requires an extra semaphore, and a Queue Family Ownership Transfer (or a VK_SHARING_MODE_CONCURRENT resource). History went the way that no driver is so extreme as to report a separate present-only queue. So I made KhronosGroup/Vulkan-Docs#1234.
For a rough notion of what happens at vkQueuePresentKHR, you can inspect the Mesa code: https://github.com/mesa3d/mesa/blob/bf3c9d27706dc2362b81aad12eec1f7e48e53ddd/src/vulkan/wsi/wsi_common.c#L1120-L1232. There's probably no monkey business there using the queue you provided, except waiting on your semaphore, or at most making a blit of the image. If you (voluntarily) want to use a separate present queue, you need to measure, and whitelist it only for the drivers (and probably other influences) where it actually helps (if any such exist, and if it is even worth your time).
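For context, the queue-family selection that essentially every application ends up with looks something like this sketch; physicalDevice and surface are assumed to exist, and the fallback return is the "separate present family" case you can special-case or abort on as described above.

#include <vulkan/vulkan.h>
#include <cstdint>
#include <vector>

// Returns a queue family that can do both graphics and present, which is what
// essentially all real hardware reports.
uint32_t pickGraphicsPresentFamily(VkPhysicalDevice physicalDevice, VkSurfaceKHR surface)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(physicalDevice, &count, nullptr);
    std::vector<VkQueueFamilyProperties> families(count);
    vkGetPhysicalDeviceQueueFamilyProperties(physicalDevice, &count, families.data());

    for (uint32_t i = 0; i < count; ++i) {
        VkBool32 presentSupported = VK_FALSE;
        vkGetPhysicalDeviceSurfaceSupportKHR(physicalDevice, i, surface, &presentSupported);
        if ((families[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) && presentSupported)
            return i;                    // the common case: one family does both
    }
    return UINT32_MAX;                   // graphics/present split: special-case or bail out
}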
First off, I assume you mean "beneficial" in terms of performance, and whenever it comes to questions like that you can never have a definite answer except by profiling the different strategies. If your application needs to run on a variety of hardware, you can have it profile the different strategies the first time it's run and save the results locally for repeated use, provide the user with a benchmarking utility they can run if they see poor performance, etc. etc. Trying to reason about it in the abstract can only get you so far.
That aside, I think the easiest way to think about questions like this is to remember that when it comes to graphics programming, you want to both maximize the amount of work that can be done in parallel and minimize the amount of work overall. If you want to present an image from a non-graphics queue and you need to perform graphics operations on it, you'll need to transfer ownership of it to the non-graphics queue when graphics operations on it have finished. Presumably, that will take a bit of time in the driver if nothing else, so it's only worth doing if it will save you time elsewhere somehow.
A common situation where this would probably save you time is if the device supports async compute and also lets you present from the compute queue. For example, a 3D game might use the compute queue for things like lighting, blur, UI, etc. that make the most sense to do after geometry processing is finished. In this case, the game engine would transfer ownership of the image to be presented to the compute queue first anyway, or even have the compute queue own the swapchain image from beginning to end, so presenting from the compute queue once its work for the frame is done would allow the graphics queue to stay busy with the next frame. AMD and NVIDIA recommend this sort of approach where it's possible.
If your application wouldn't otherwise use the compute queue, though, I'm not sure how much sense it makes to present on it when you have the option. The advantage of that approach is that once graphics operations for a given frame are over, you can have the graphics queue immediately release ownership of the image and acquire the next one without having to pause to present it, which would allow presentation to be done in parallel with rendering the next frame. On the other hand, you'd have to transfer ownership of the image to the compute queue first and set up presentation there, which would add some complexity and overhead. I'm not sure which approach would be faster, and I wouldn't be surprised if it varies with the application and environment. Of course, I'm not sure how many realtime Vulkan applications of any significant complexity fit this scenario today, and I'd guess it's not very many, as "per-pixel" things tend to be easier and faster to do with a compute shader.
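For what the ownership transfer itself involves, here is a rough sketch of the release/acquire pair for a VK_SHARING_MODE_EXCLUSIVE swapchain image handed from a graphics family to a compute family before present. graphicsFamily, computeFamily, swapchainImage and the two command buffers are hypothetical, and a semaphore must additionally order the two queue submissions.

#include <vulkan/vulkan.h>
#include <cstdint>

void recordQueueOwnershipTransfer(VkCommandBuffer graphicsCmd, VkCommandBuffer computeCmd,
                                  VkImage swapchainImage,
                                  uint32_t graphicsFamily, uint32_t computeFamily)
{
    // Release on the graphics queue: hand the image (and its layout transition) over.
    VkImageMemoryBarrier release = {};
    release.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
    release.srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
    release.dstAccessMask = 0;                                  // ignored on the releasing queue
    release.oldLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
    release.newLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR;
    release.srcQueueFamilyIndex = graphicsFamily;
    release.dstQueueFamilyIndex = computeFamily;
    release.image = swapchainImage;
    release.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };
    vkCmdPipelineBarrier(graphicsCmd,
                         VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
                         VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT,
                         0, 0, nullptr, 0, nullptr, 1, &release);

    // Acquire on the compute queue: a matching barrier (same families, same layouts) before present.
    VkImageMemoryBarrier acquire = release;
    acquire.srcAccessMask = 0;                                  // ignored on the acquiring queue
    acquire.dstAccessMask = 0;                                  // presentation access is handled by the WSI
    vkCmdPipelineBarrier(computeCmd,
                         VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
                         VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT,
                         0, 0, nullptr, 0, nullptr, 1, &acquire);
}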
I'm looking into whether it's better for me to stay with OpenGL or consider a Vulkan migration for intensive bottlenecked rendering.
However I don't want to make the jump without being informed about it. I was looking up what benefits Vulkan offers me, but with a lot of googling I wasn't able to come across exactly what gives performance boosts. People will throw around terms like "OpenGL is slow, Vulkan is way faster!" or "Low power consumption!" and say nothing more on the subject.
Because of this, it makes it difficult for me to evaluate whether or not the problems I face are something Vulkan can help me with, or if my problems are due to volume and computation (and Vulkan would in such a case not help me much).
I'm assuming Vulkan does not magically make things in the pipeline faster (as in shading in triangles is going to be approximately the same between OpenGL and Vulkan for the same buffers and uniforms and shader). I'm assuming all the things with OpenGL that cause grief (ex: framebuffer and shader program changes) are going to be equally as painful in either API.
There are a few things off the top of my head that I think Vulkan offers based on reading through countless things online (and I'm guessing this certainly is not all the advantages, or whether these are even true):
Texture rendering without [much? any?] binding (or rather a better version of "bindless textures"). I noticed a significant performance boost when I switched to bindless textures, but this might not even be worth mentioning if bindless textures effectively already do this, so I'm not sure whether Vulkan adds anything here.
Reduced CPU/GPU communication by composing some kind of command list that you can execute on the GPU without needing to send much data
Being able to interface in a multithreaded way that OpenGL can't somehow
However I don't know exactly what cases people run into in the real world that demand these, and how OpenGL limits these. All the examples so far online say "you can run faster!" but I haven't seen how people have been using it to run faster.
Where can I find information that answers this question? Or do you know some tangible examples that would answer this for me? Maybe a better question would be where are the typical pain points that people have with OpenGL (or D3D) that caused Vulkan to become a thing in the first place?
An example of answer that would not be satisfying would be a response like
You can multithread and submit things to Vulkan quicker.
but a response that would be more satisfying would be something like
In Vulkan you can multithread your submissions to the GPU. In OpenGL you can't do this because you rely on the implementation to do the appropriate locking and placing fences on your behalf which may end up creating a bottleneck. A quick example of this would be [short example here of a case where OpenGL doesn't cut it for situation X] and in Vulkan it is solved by [action Y].
The last paragraph above may not be accurate whatsoever, but I was trying to give an example of what I'd be looking for without trying to write something egregiously wrong.
Vulkan really has four main advantages in terms of run-time behavior:
Lower CPU load
Predictable CPU load
Better memory interfaces
Predictable memory load
Specifically lower GPU load isn't one of the advantages; the same content using the same GPU features will have very similar GPU performance with both of the APIs.
In my opinion it also has many advantages in terms of developer usability - the programmer's model is a lot cleaner than OpenGL, but there is a steeper learning curve to get to the "something working correctly" stage.
Let's look at each of the advantages in more detail:
Lower CPU load
The lower CPU load in Vulkan comes from multiple areas, but the main ones are:
The API encourages up-front construction of descriptors, so you're not rebuilding state on a draw-by-draw basis.
The API is asynchronous and can therefore move some responsibilities, such as tracking resource dependencies, to the application. A naive application implementation here will be just as slow as OpenGL, but the application has more scope to apply high level algorithmic optimizations because it can know how resources are used and how they relate to the scene structure.
The API moves error checking out to layer drivers, so the release drivers are as lean as possible.
The API encourages multithreading, which is always a great win (especially on mobile where e.g. four threads running slowly will consume a lot less energy than one thread running fast).
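As an illustration of the multithreading point, command buffer recording is the part that parallelizes naturally; a minimal sketch under the assumption of one VkCommandPool per thread (pools are externally synchronized), with the actual recording omitted:

#include <vulkan/vulkan.h>
#include <thread>
#include <vector>

// Each worker records its own command buffer (allocated from a per-thread VkCommandPool);
// a single thread then submits them all. Queue submission itself stays single-threaded here.
void recordFrameInParallel(VkQueue queue, std::vector<VkCommandBuffer>& cmdBufs)
{
    std::vector<std::thread> workers;
    for (size_t t = 0; t < cmdBufs.size(); ++t) {
        workers.emplace_back([&cmdBufs, t] {
            VkCommandBufferBeginInfo begin = { VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO };
            vkBeginCommandBuffer(cmdBufs[t], &begin);
            // ...record this thread's share of the draw calls here...
            vkEndCommandBuffer(cmdBufs[t]);
        });
    }
    for (auto& w : workers) w.join();

    VkSubmitInfo submit = { VK_STRUCTURE_TYPE_SUBMIT_INFO };
    submit.commandBufferCount = static_cast<uint32_t>(cmdBufs.size());
    submit.pCommandBuffers = cmdBufs.data();
    vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
}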
Predictable CPU load
OpenGL drivers do various kinds of "magic", either for performance (specializing shaders based on state only known late at draw time), or to maintain the synchronous rendering illusion (creating resource ghosts on the fly to avoid stalling the pipeline when the application modifies a resource which is still referenced by a pending command).
The Vulkan design philosophy is "no magic". You get what you ask for, when you ask for it. Hopefully this means no random slowdowns because the driver is doing something you didn't expect in the background. The downside is that the application takes on the responsibility for doing the right thing ;)
Better memory interfaces
Many parts of the OpenGL design are based on distinct CPU and GPU memory pools, which require a programming model that gives the driver enough information to keep them in sync. Most modern hardware can do better with hardware-backed coherency protocols, so Vulkan enables a model where you can just map a buffer once, then modify it ad hoc and be guaranteed that the "other process" will see the changes. No more "map" / "unmap" / "invalidate" overhead (provided the platform supports coherent buffers, of course; it's still not universal).
Secondly Vulkan separates the concept of the memory allocation and how that memory is used (the memory view). This allows the same memory to be recycled for different things in the frame pipeline, reducing the amount of intermediate storage you need allocated.
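A small sketch of the "map once, write ad hoc" model under those assumptions; device is assumed to exist, and memoryTypeIndexFor is a hypothetical helper that finds a HOST_VISIBLE | HOST_COHERENT memory type:

#include <vulkan/vulkan.h>
#include <cstdint>

uint32_t memoryTypeIndexFor(VkMemoryPropertyFlags wanted);  // hypothetical helper, not shown

void* createPersistentlyMappedAllocation(VkDevice device, VkDeviceMemory* outMemory)
{
    VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
    allocInfo.allocationSize = 64 * 1024;
    allocInfo.memoryTypeIndex = memoryTypeIndexFor(VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT |
                                                   VK_MEMORY_PROPERTY_HOST_COHERENT_BIT);
    vkAllocateMemory(device, &allocInfo, nullptr, outMemory);

    // Map once at startup and keep the pointer; with coherent memory there is no
    // map/unmap/flush per update, the GPU sees what you wrote when it executes.
    void* mapped = nullptr;
    vkMapMemory(device, *outMemory, 0, VK_WHOLE_SIZE, 0, &mapped);
    return mapped;  // write into this every frame, e.g. std::memcpy(mapped, data, size)
}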
Predictable memory load
Related to the "no magic" comment for CPU performance, Vulkan won't generate random resources (e.g. ghosted textures) on the fly to hide application problems. No more random fluctuations in resource memory footprint, but again the application has to take on the responsibility to do the right thing.
This is at risk of being opinion based. I suppose I will just reiterate the Vulkan advantages that are written on the box, and hopefully uncontested.
You can disable validation in Vulkan. It obviously uses less CPU (or battery/power/noise) that way. In some cases this can be significant.
OpenGL has poorly defined multi-threading. Vulkan has well-defined multi-threading in the specification. Meaning you do not immediately lose your mind trying to code with multiple threads, and you get better performance where a single thread would otherwise be the CPU bottleneck.
Vulkan is more explicit; it does not (or tries not to) expose big magic black boxes. That means e.g. you can do something about micro-stutter and hitching, and apply other micro-optimizations.
Vulkan has a cleaner interface to windowing systems. No more odd contexts and default framebuffers. Vulkan does not even require a window to draw (and it can achieve that without weird hacks).
Vulkan is a cleaner and more conventional API. For me that means it is easier to learn (despite the other things) and more satisfying to use.
Vulkan takes shaders as binary intermediate code (SPIR-V), while OpenGL historically did not. That should mean faster compilation of such code (see the sketch after this list).
Vulkan has mobile GPUs as first-class citizens. No more ES.
Vulkan has open source and conventional (GitHub) public trackers. Meaning you can improve the ecosystem without jumping through hoops. E.g. you can improve/implement a validation check for an error that often trips you up. Or you can improve the specification so it makes sense for people who are not insiders.
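To illustrate the binary-shader point from the list above, loading a precompiled SPIR-V blob is essentially a memory copy into a module; loadSpirvFile is a hypothetical helper and device is assumed to exist.

#include <vulkan/vulkan.h>
#include <cstdint>
#include <vector>

std::vector<uint32_t> loadSpirvFile(const char* path);  // hypothetical helper, not shown

VkShaderModule createShaderModule(VkDevice device, const char* path)
{
    // The driver receives SPIR-V words, not GLSL source text to parse and compile at run time.
    std::vector<uint32_t> code = loadSpirvFile(path);

    VkShaderModuleCreateInfo info = { VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO };
    info.codeSize = code.size() * sizeof(uint32_t);  // size in bytes
    info.pCode = code.data();

    VkShaderModule module = VK_NULL_HANDLE;
    vkCreateShaderModule(device, &info, nullptr, &module);
    return module;
}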
If we have several OpenGL contexts, each in its own process, the driver somehow virtualises the device, so that each program thinks it exclusively runs the GPU. That is, if one program calls glEnable, the other one will never notice that.
This could otherwise be done with a ton of glGet calls to save state, and their counterparts to restore it afterwards. Obviously, the driver does it more efficiently. However, in userspace we need to track which changes we made to the state and handle them selectively. Maybe it's just me missing something, but I thought it would be nice, for one, to adjust the viewport for a framebuffer, and then just undo those changes back to whatever state was there before.
Maybe there is a way of achieving the effect of a context switch yet within a single program?
Maybe there is a way of achieving the effect of a context switch yet within a single program?
You may create as many OpenGL contexts in a single process as you like and switch between them. Also, with modern GPUs, the state of the OpenGL context bears little resemblance to what's actually happening on the GPU.
For pre-Core OpenGL there's glPushAttrib()/glPopAttrib() that will let you store off some GL state.
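Applied to the viewport example from the question, that looks roughly like this; fbo, fboWidth and fboHeight are placeholders, and glBindFramebuffer is assumed to be available through your GL loader (e.g. GLEW or GLAD):

void renderToFboWithSavedViewport(GLuint fbo, GLsizei fboWidth, GLsizei fboHeight)
{
    glPushAttrib(GL_VIEWPORT_BIT);            // saves the current viewport and depth range
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);   // the FBO binding itself is NOT part of the attrib stack
    glViewport(0, 0, fboWidth, fboHeight);
    // ...render to the FBO...
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glPopAttrib();                            // viewport restored to whatever it was before
}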
You're probably better off writing your own client-side state shadowing though.
The state machine (and command queue, discussed below) are unique to each context. It is much, much higher-level than you are thinking; the state is generally wrapped up nicely in usermode.
As for context-switching in a single process, be aware that each render context in GL is unsynchronized. An implicit flush is generated during a context switch in order to help alleviate this problem. As long as a context is only used by a single thread, this is generally adequate but probably going to negatively impact performance.
Suppose I'm trying to make some kind of small OpenGL graphics engine in C++. I've read that accessing OpenGL state via glGet* functions can be quite expensive (while accessing OpenGL state seems to be a frequent operation), and it's strongly recommended to store a copy of the OpenGL state somewhere with fast read/write access.
I'm currently thinking of storing the OpenGL state as a global thread_local variable of some appropriate type. How bad is that design? Are there any pitfalls?
If you want to stick with OpenGL's design (where your context pointer could be considered "thread_local") I guess it's a valid option... Obviously, you will need to have full control over all OpenGL calls in order to keep your state copy in sync with the current context's state.
I personally prefer to wrap the OpenGL state of interest using an "OpenGLState" class with a bunch of settable/gettable properties each mapping to some part of the state. You can then also avoid setting the same state twice. You could make it thread_local, but I couldn't (Visual C++ only supports thread_local for POD types).
You will need to be very careful, as some OpenGL calls indirectly change seemingly unrelated parts of the context's state. For example, glDeleteTextures will reset any binding of the deleted texture(s) to 0. Also, some toolkits are very "helpful" in changing OpenGL state behind your back (for example, QtOpenGLContext on OSX changes your viewport for you when made current).
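A minimal sketch of that kind of wrapper, under the assumption that all GL calls go through it (and that you call invalidate() whenever a toolkit or an indirect state change may have touched GL behind your back):

#include <GL/gl.h>
#include <unordered_map>

class GLStateCache {
public:
    void setEnabled(GLenum cap, bool enabled) {
        auto it = enabled_.find(cap);
        if (it != enabled_.end() && it->second == enabled)
            return;                                   // redundant change, skip the GL call
        if (enabled) glEnable(cap); else glDisable(cap);
        enabled_[cap] = enabled;
    }

    void bindTexture2D(GLuint tex) {
        if (haveTexture_ && tex == boundTexture2D_) return;
        glBindTexture(GL_TEXTURE_2D, tex);
        boundTexture2D_ = tex;
        haveTexture_ = true;
    }

    // Forget the shadow copy when something outside the wrapper may have changed GL state
    // (a toolkit, or e.g. glDeleteTextures resetting bindings).
    void invalidate() { enabled_.clear(); haveTexture_ = false; }

private:
    std::unordered_map<GLenum, bool> enabled_;  // shadowed glEnable/glDisable state
    GLuint boundTexture2D_ = 0;
    bool haveTexture_ = false;                  // false = shadow copy unknown
};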
Since you can only (reasonably) use a GL context with one thread, why do you need thread local? Yes, you can make a context current in different threads at different times, but this is not a wise design.
You will usually have one context and one thread accessing it. In rare cases, you will have two contexts (often shared) with two threads. In that case, you can simply put any additional state you wish to save into your context class, of which each instance is owned by exactly one thread.
But most of the time, you need not explicitly "remember" states anyway. All states have well-documented initial states, and they only change when you change them (exception being changes made by a "super smart" toolkit, but storing a wrong state doesn't help in that case either).
You will usually try to batch states together and do many "similar" draw calls with one set of states, the reason being that state changes stall the pipeline and require expensive validation before the next draw calls.
So, start off with the defaults, and set everything that needs to be non-default before drawing a batch. Then change what needs to be different for the next batch.
If you can't be bothered to dig through the specs for default values and keep track, you can redundantly set everything all the time. Then run your application in GDebugger, which will tell you which state changes are redundant, so you can eliminate them.