Which shaders have to have an input layout? - c++

I'm creating a game based on DirectX 11.1. I'm now working on the shader part, and I have one question: which shader types need their own separate input layout? I mean every shader stage that exists in DirectX 11.1, including compute shaders, geometry shaders, and the rest.

Assuming you're talking about ID3D11InputLayout, the only shader stage that requires this is the vertex shader. The other stages have their inputs/outputs defined as the arguments and return types of their main function, respectively.
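A minimal sketch of the relationship, assuming a simple position+color vertex format (the struct and semantic names here are illustrative, not from the question): the semantic names in the vertex shader's input struct are what the C++-side D3D11_INPUT_ELEMENT_DESC entries must match when you call ID3D11Device::CreateInputLayout against this shader's bytecode.

```hlsl
// HLSL vertex shader input: the only stage that needs an ID3D11InputLayout.
struct VSInput
{
    float3 pos   : POSITION;   // matched by a D3D11_INPUT_ELEMENT_DESC with SemanticName "POSITION"
    float4 color : COLOR;      // matched by SemanticName "COLOR"
};

struct VSOutput
{
    float4 pos   : SV_Position;
    float4 color : COLOR;
};

VSOutput main(VSInput input)
{
    VSOutput o;
    o.pos   = float4(input.pos, 1.0);
    o.color = input.color;
    return o;
}
```

The later stages (geometry, pixel, compute, and so on) never see an input layout; they simply consume whatever struct the previous stage's signature declares.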

Related

Automatically compile OpenGL Shaders for Vulkan

Is there any way to automatically compile OpenGL shaders for Vulkan? The problem is with the uniforms.
'non-opaque uniforms outside a block' : not allowed when using GLSL for Vulkan
I have tried compiling for OpenGL, then decompiling with spirv-cross with --vulkan-semantics, but it still has non-opaque uniforms.
spirv-cross seems to only have tools for compiling Vulkan shaders for OpenGL.
[--glsl-emit-push-constant-as-ubo]
[--glsl-emit-ubo-as-plain-uniforms]
A shader meant for OpenGL consumption will not work on Vulkan. Even ignoring the difference in how they consider uniforms, they have very different resource models. Vulkan uses descriptor sets and binding points, with all resources using the same binding indices (set+binding). By contrast, OpenGL gives each kind of resource its own unique set of indices. So a GLSL shader meant for OpenGL consumption might assign a texture uniform and a uniform block to the same binding index. But you can't do that in a GLSL shader meant for Vulkan, not unless the two resources are in different descriptor sets.
If you want to share shaders, you're going to need to employ pre-processor trickery to make sure that the shader assigns resources (including how it apportions uniforms) for the specific target that the shader is being compiled for.
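A sketch of that preprocessor trickery, assuming your compiler defines a macro such as VULKAN when targeting Vulkan (glslangValidator does; with other toolchains you may need to pass your own -D define). The uniform names here are made up for illustration.

```glsl
#version 450

#ifdef VULKAN
// Vulkan: non-opaque uniforms must live inside a block, and all resource
// kinds share the one set+binding namespace.
layout(set = 0, binding = 0) uniform Params { mat4 mvp; };
layout(set = 0, binding = 1) uniform sampler2D tex;
#else
// OpenGL: plain (non-block) uniforms are allowed, and each resource kind
// has its own index space, so overlapping indices are fine.
uniform mat4 mvp;
layout(binding = 0) uniform sampler2D tex;
#endif
```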

How to remove unused resources from an OpenGL program

I am trying to create something like an effect system for OpenGL, and I want to be able to define a number of shaders in the same file. But I discovered the following problem. Say I have two shaders: A and B. Shader A uses texA and shader B uses texB. Then, despite the fact that shader A never uses texB and shader B never uses texA, both textures will be enumerated in both programs (I am using separate programs, so every shader corresponds to one program).

One consequence is that I cannot have many textures defined in one file, since the shader will fail to link (it compiles successfully, but the linker then complains that the number of texture samplers exceeds the HW limit). Another problem is that I am doing automatic resource binding, and my shaders end up with lots of false resource dependencies.
So is there a way to tell the shader compiler/linker to remove all unused resources from the separate program?
Sampler uniforms are not there to select textures, but to receive texture unit indices from the host program. The textures themselves are bound to the texture units, so the selection of which texture to use should not be done in the shader, but in the host program.
Or you could use bindless textures if your OpenGL implementation (=GPU driver) supports these.
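A sketch of the suggested pattern, assuming one sampler uniform and the host choosing the texture by rebinding the unit between draws (names are illustrative):

```glsl
#version 420
// The sampler holds a texture *unit* index (here fixed to 0), not a texture.
layout(binding = 0) uniform sampler2D tex;
in  vec2 uv;
out vec4 fragColor;

void main()
{
    // Whatever texture the host bound to unit 0 is sampled here, e.g.:
    //   glActiveTexture(GL_TEXTURE0);
    //   glBindTexture(GL_TEXTURE_2D, texA_or_texB);
    fragColor = texture(tex, uv);
}
```

With this approach a single program serves both textures, so neither program carries an unused sampler.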

D3D11 Writing to buffer in geometry shader

I have some working OpenGL code that I was asked to port to Direct3D 11.
In my code I am using Shader Storage Buffer Objects (SSBOs) to read and write data in a geometry shader.
I am pretty new to Direct3D programming. Thanks to Google, I've been able to identify the D3D equivalent of SSBOs: RWStructuredBuffer (I think).
The problem is that I am not at all sure I can use them in a geometry shader in D3D11, which, from what I understand, can generally only use up to 4 "stream out"s (are these some sort of transform feedback buffer?).
The question is: is there any way with D3D11/11.1 to do what I'm doing in OpenGL (that is writing to SSBOs from the geometry shader)?
UPDATE:
Just found this page: http://msdn.microsoft.com/en-us/library/windows/desktop/hh404562%28v=vs.85%29.aspx
If I understand the section "Use UAVs at every pipeline stage" correctly, it seems that accessing such buffers is allowed in all shader stages.
Then I discovered that D3D 11.1 is only available on Windows 8, although some of its features have also been ported to Windows 7.
Is this part of Direct3D included in those features available on Windows 7?
RWBuffers are not related to the geometry shader outputting geometry. They are found mostly in compute shaders, and to a lesser extent in pixel shaders; as you spotted, using them at the other stages needs D3D 11.1 and Windows 8.
What you are looking for is stream output. The API to bind buffers to the output of the geometry shader stage is ID3D11DeviceContext::SOSetTargets, and the buffers need to be created with the D3D11_BIND_STREAM_OUTPUT flag.
Also, outputting geometry with a geometry shader was an addition in D3D10; in D3D11 it is often possible to achieve something at least as efficient, and simpler, with compute shaders. That's not absolute advice, of course.
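A sketch of a geometry shader whose output can be captured this way (the struct is illustrative). On the C++ side you would create the shader with ID3D11Device::CreateGeometryShaderWithStreamOutput, supplying D3D11_SO_DECLARATION_ENTRY entries that describe the output layout, create the buffer with D3D11_BIND_STREAM_OUTPUT, and bind it via SOSetTargets before drawing.

```hlsl
struct GSVertex
{
    float4 pos : SV_Position;
};

[maxvertexcount(3)]
void main(triangle GSVertex input[3], inout TriangleStream<GSVertex> output)
{
    // Emit the primitive unchanged; with stream-output targets bound, these
    // vertices land in the bound buffer instead of (or as well as) going on
    // to the rasterizer.
    for (int i = 0; i < 3; ++i)
        output.Append(input[i]);
    output.RestartStrip();
}
```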
The geometry shader is processed once per assembled primitive and can generate one or more primitives as a result.
The output of the geometry shader can be redirected towards an output buffer instead of passed on further for rasterization.
See this overview diagram of the pipeline and this description of the pipeline stages.
A geometry shader has access to other resources, bound via the GSSetShaderResources method on the device context. However, these are generally resources that are "fixed" at shader execution time such as constants and textures. The data that varies for each execution of the geometry shader is the input primitive to the shader.
just been pointed to this page:
http://nvidia.custhelp.com/app/answers/detail/a_id/3196/~/fermi-and-kepler-directx-api-support .
In short, nvidia does not support the feature on cards earlier than Maxwell.
This pretty much answers my question. :/

Multiple instances of shaders

I recently read that you can
"have multiple instances of OpenGL shaders"
but no other details were given on this.
I'd like some clarification as to what exactly this means.
For one, I know you can have more than one glProgram running, and that you can switch between them. Is that all this is referring to? I assume that switching between several created shader programs per frame would essentially mean I am using several programs "simultaneously".
Or does it somehow refer to having multiple "instances" of the same shader program? That would make no sense to me.
Some basic clarification would be enjoyed here!
When you create a program object you're linking together several shaders, usually at least a vertex and a fragment shader. Now say you want to render a glow around some object. That glow would be created by a different fragment shader, but the vertex shader would be the same as for the regular appearance. To save resources, you can use the same vertex shader in multiple programs, each with a different fragment shader linked in. Of course you could also share the same fragment shader across different vertex shaders.
In short you can link a single shader into an arbitrary number of programs. As long as the linked shader stages are compatible with each other this helps with modularization.

Can you have multiple pixel (fragment) shaders in the same program?

I would like to have two pixel shaders; the first doing one thing, and then the next doing something else. Is this possible, or do I have to pack everything into the one shader?
You can do this, e.g. by doing function calls from the main entry point to functions that are implemented in the various shader objects.
void main() {
    callToShaderObject1();
    callToShaderObject2();
}
each of those callToShaderObject functions can live in different shader objects, but all objects have to be attached and linked in the same program before it can be used.
They can't run at the same time, but you are free to use different shaders for different geometry, or to render in multiple passes using different shaders.
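A sketch of how the split works on desktop GL (not ES), with made-up function names: the main shader object forward-declares the functions, each helper object defines one of them, and all three objects are attached to the same program before linking.

```glsl
// --- fragment shader object 1 (contains main) ---
#version 330
out vec4 fragColor;
vec4 baseColor();       // defined in object 2
vec4 applyFog(vec4 c);  // defined in object 3

void main()
{
    fragColor = applyFog(baseColor());
}

// --- fragment shader object 2, compiled separately ---
#version 330
vec4 baseColor() { return vec4(1.0, 0.0, 0.0, 1.0); }

// --- fragment shader object 3, compiled separately ---
#version 330
vec4 applyFog(vec4 c) { return mix(c, vec4(0.5), 0.25); }
```

Each "---" section is its own shader object (its own glShaderSource/glCompileShader), all attached with glAttachShader to one program; the linker resolves the calls across objects.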
The answer depends on the GL version, you need to check glAttachShader documentation for the version you are using. GLES versions (including webgl) do not allow attaching multiple fragment shaders to a single program and will raise GL_INVALID_OPERATION error when attempted.
glAttachShader - OpenGL 4:
It is permissible to attach multiple shader objects of the same type because each may contain a portion of the complete shader.
glAttachShader - OpenGL ES 3.2:
It is not permissible to attach multiple shader objects of the same type.