Automatically compile OpenGL Shaders for Vulkan

Is there any way to automatically compile OpenGL shaders for Vulkan? The problem is with the uniforms.
'non-opaque uniforms outside a block' : not allowed when using GLSL for Vulkan
I have tried compiling for OpenGL, then decompiling with spirv-cross using --vulkan-semantics, but the output still contains non-opaque uniforms.
spirv-cross seems to only have options for converting Vulkan shaders for OpenGL consumption:
[--glsl-emit-push-constant-as-ubo]
[--glsl-emit-ubo-as-plain-uniforms]

A shader meant for OpenGL consumption will not work on Vulkan. Even ignoring the difference in how they treat uniforms, they have very different resource models. Vulkan uses descriptor sets and binding points, with all resource types sharing the same (set, binding) index space. By contrast, OpenGL gives each kind of resource its own separate set of indices. So a GLSL shader meant for OpenGL consumption might assign a texture uniform and a uniform block to the same binding index. But you can't do that in a GLSL shader meant for Vulkan, not unless the two resources are in different descriptor sets.
If you want to share shaders, you're going to need to employ pre-processor trickery to make sure that the shader assigns resources (including how it apportions uniforms) for the specific target that the shader is being compiled for.
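For illustration, here is a minimal sketch of what that pre-processor trickery can look like. It relies on compilers that follow GL_KHR_vulkan_glsl (glslangValidator, for example) predefining the VULKAN macro when targeting Vulkan; the uniform names and binding numbers are made up for the example.

    // Shared GLSL fragment shader source, kept here as a C++ raw string.
    // When compiled for Vulkan (e.g. glslangValidator -V), VULKAN is predefined,
    // so the non-opaque uniforms move into a block with explicit set/binding.
    // When compiled for OpenGL, the plain-uniform path is used instead.
    const char* sharedFragmentSrc = R"(
        #version 450

        #ifdef VULKAN
            layout(set = 0, binding = 0) uniform PerDraw {
                vec4 tintColor;
            };
            layout(set = 0, binding = 1) uniform sampler2D diffuseTex;
        #else
            uniform vec4 tintColor;  // non-opaque uniform outside a block: fine in GL
            layout(binding = 0) uniform sampler2D diffuseTex;
        #endif

        layout(location = 0) in vec2 uv;
        layout(location = 0) out vec4 fragColor;

        void main() {
            fragColor = texture(diffuseTex, uv) * tintColor;
        }
    )";

The same idea extends to anything else whose declaration differs between the two targets, such as gl_VertexID/gl_InstanceID in OpenGL versus gl_VertexIndex/gl_InstanceIndex in Vulkan GLSL.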

Related

Which shaders have to have input layout?

I'm creating a game based on DirectX 11.1. Now I'm coding the shaders part and I have one question: how many shader types have to have their own separate input layout? I have every shader type in DirectX 11.1 in mind, including compute shaders, geometry shaders and others.
Assuming you're talking about ID3D11InputLayout, the only shader stage that requires this is the vertex shader. The other stages have their inputs/outputs defined as the arguments and return types of their main function, respectively.
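As a rough sketch of that in code (device, context, and the compiled vertex-shader blob vsBlob are assumed to exist already; the vertex format is just an example):

    #include <d3d11.h>

    // The input layout describes the per-vertex data fed to the vertex shader,
    // and it is validated against the *vertex shader's* input signature.
    D3D11_INPUT_ELEMENT_DESC elements[] = {
        { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0,
          D3D11_INPUT_PER_VERTEX_DATA, 0 },
        { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 12,
          D3D11_INPUT_PER_VERTEX_DATA, 0 },
    };

    ID3D11InputLayout* inputLayout = nullptr;
    device->CreateInputLayout(elements, sizeof(elements) / sizeof(elements[0]),
                              vsBlob->GetBufferPointer(), vsBlob->GetBufferSize(),
                              &inputLayout);

    // Bound on the input-assembler stage only; geometry, pixel, compute and the
    // other stages never get an input layout -- their inputs are declared in HLSL.
    context->IASetInputLayout(inputLayout);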

How to remove unused resources from an OpenGL program

I am trying to create something like an effect system for OpenGL, and I want to be able to define a number of shaders in the same file. But I discovered the following problem. Say I have two shaders: A and B. Shader A uses texA and shader B uses texB. Then despite the fact that neither shader A uses texB nor shader B uses texA, both textures will be enumerated in both programs (I am using separate programs, so every shader corresponds to one program). One consequence is that I cannot define many textures in one file, since the shader will fail to link (it compiles successfully, but the linker then complains that the number of texture samplers exceeds the HW limit). Another problem is that I am doing automatic resource binding and my shaders end up with lots of false resource dependencies.
So is there a way to tell the shader compiler/linker to remove all unused resources from the separate program?
Sampler uniforms are not there to select textures, but to pass texture units to the shader. The textures themselves are bound to those texture units. So the selection of which texture to use should not be done in the shader, but in the host program.
Or you could use bindless textures if your OpenGL implementation (=GPU driver) supports these.
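A short sketch of that split on the host side (assumes a GL loader header and a current context; prog, texA and texB are placeholders for a linked program and two texture objects):

    // The sampler uniform in the shader ("uniform sampler2D colorTex;") holds a
    // texture *unit* index, not a texture object.
    glUseProgram(prog);
    glUniform1i(glGetUniformLocation(prog, "colorTex"), 0);  // sampler -> unit 0

    // The host decides which texture object sits on that unit, and can swap it
    // between draw calls without touching the program at all.
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texA);
    // ... draw with texA ...
    glBindTexture(GL_TEXTURE_2D, texB);
    // ... draw again, same program, now sampling texB ...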

Is there any way to access mipmaps in a GLSL 1.2 fragment program?

I would like to access different levels of detail in my GLSL fragment program. I'm currently stuck with using legacy OpenGL, including GLSL 1.2. Unfortunately, I don't have control over that.
I see that the texture2DLod() method exists, but it appears it can only be used in a vertex program.
I have read this question, but they appear to be working with GLSL 1.4 or later. Unfortunately, I do not have that option.
Is there any way in a GLSL 1.2 fragment program to sample a specific mipmap level?
If there's no function for doing it directly, is it possible to send the mipmaps in as separate textures without doing 8 copies?
It is not possible for a fragment shader (in GLSL 1.20) to access a specific texture mipmap. However, you can always change the base/max mipmap levels of a texture before you use it. By setting them both to the same level, you force any texture accesses from that texture to use a specific mipmap level.
Now, you can't expose individual mipmap levels as separate textures unless you have access to ARB_texture_view (core only in OpenGL 4.3, so well beyond legacy hardware). So you'll have to live with changing the texture's base/max level every time you want to pick a new mipmap.
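A sketch of the base/max-level trick in host code (assumes a current GL context and a complete, mipmapped texture; the texture name tex is a placeholder):

    // Force all sampling from this texture to read exactly mipmap level `lod`.
    // Works on legacy GL; no GLSL 1.40+ features are needed in the shader.
    void lockTextureToLevel(GLuint tex, GLint lod)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, lod);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL,  lod);
    }

Resetting GL_TEXTURE_BASE_LEVEL to 0 and GL_TEXTURE_MAX_LEVEL to its default (1000) restores normal mipmapped sampling.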

Drawing geometry in opengl

Taking the standard OpenGL 4.0+ functions & specifications into consideration, I've seen that geometries and shapes can be created in one of two ways:
making use of VAOs & VBOs.
using shader programs.
Which one is the standard way of creating shapes? Are they consistent with each other, or are they two different ways of creating geometry and shapes?
Geometry is loaded into the GPU with VAOs & VBOs.
Geometry shaders produce new geometry based on the uploaded data. Use them for special effects like particles or shadows (shadow volumes) in a more efficient way.
Tessellation shaders serve to subdivide geometry for effects like displacement mapping.
I strongly (like, really strongly) recommend reading this: http://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-graphics-pipeline-2011-index/
VAOs and VBOs are about what geometry to draw (specifying per-vertex data). Shader programs are about how to draw it (which program gets applied to each provided vertex, each fragment and so on).
Let's lay out the full facts.
Shaders need input. Without input that changes, every shader invocation will produce exactly the same values. That's how shaders work. When you issue a draw call, a number of shader invocations are launched. The only variables that will change from invocation to invocation within this draw call are in variables. So unless you use some sort of input, every shader will produce the same outputs.
However, that doesn't mean you absolutely need a VAO that actually contains anything. It is perfectly legal (though there are some drivers that don't support it) to render with a VAO that doesn't have any attributes enabled (though you have to use array rendering, not indexed rendering). In that case, all user-defined inputs to the vertex shader (if any) will be filled in from context state, which will be constant.
The vertex shader does have some other, built-in per-vertex inputs generated by the system. Namely gl_VertexID. This is the index used by OpenGL to uniquely identify this particular vertex. It will be different for every vertex.
So you could, for example, fetch geometry data yourself based on this index through uniform buffers, buffer textures, or some other mechanism. Or you can procedurally generate vertex data based on the index. Or something else. You could pass that data along to tessellation shaders for them to tessellate the generated data. Or to geometry shaders to do whatever it is you want with those. However you want to turn that index into real data is up to you.
Here's an example from my tutorial series that generates vertex data from nothing more than an index.
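(The link to that tutorial example isn't reproduced here; the following is a minimal sketch in the same spirit, not the tutorial's code. The program object prog is assumed to be built from this vertex shader plus any fragment shader, with a GL loader and current context in place.)

    // Vertex shader with no `in` attributes at all: positions are looked up
    // purely from gl_VertexID.
    const char* vsSrc = R"(
        #version 330 core
        const vec4 positions[3] = vec4[3](
            vec4(-1.0, -1.0, 0.0, 1.0),
            vec4( 3.0, -1.0, 0.0, 1.0),
            vec4(-1.0,  3.0, 0.0, 1.0));
        void main() { gl_Position = positions[gl_VertexID]; }
    )";

    // Host side: a VAO with no attributes enabled, drawn with array rendering.
    GLuint emptyVao;
    glGenVertexArrays(1, &emptyVao);
    glBindVertexArray(emptyVao);          // no attribute pointers, nothing enabled
    glUseProgram(prog);
    glDrawArrays(GL_TRIANGLES, 0, 3);     // one triangle covering the whole screen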
I've seen that geometries and shapes can be created in one of two ways:
Not either: in modern OpenGL 4 you need both data and programs.
VBOs and VAOs do contain the raw geometry data. Shaders are the programs (usually executed on the GPU) that turn the raw data into pixels on the screen.
Vertex shaders can be used to displace vertices, or to generate them from a formula and the vertex index, which is available as a built-in input in later OpenGL versions.
The difference between vertex and geometry shaders is that a vertex shader is a 1:1 mapping, while a geometry shader can create additional vertices -- this can be used for automatic level-of-detail generation, e.g. for NURBS or Perlin-noise-based terrains.
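To make the data/program split concrete, a minimal sketch (prog is assumed to be an already-linked program whose vertex shader declares layout(location = 0) in vec3 position; a GL loader and current context are assumed):

    // Data side: raw vertex positions go into a VBO, and the VAO records how
    // attribute location 0 reads them.
    static const float triangle[] = {
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
         0.0f,  0.5f, 0.0f,
    };

    GLuint vao, vbo;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(triangle), triangle, GL_STATIC_DRAW);

    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(0);

    // Program side: the shaders decide how those vertices become pixels.
    glUseProgram(prog);
    glDrawArrays(GL_TRIANGLES, 0, 3);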

How is OpenGL fragment shading performed?

I have a 3D object I've created in Maya (and exported to OBJ and MTL files) and I've created a model viewer app in OGL to view it. If my assumptions are correct (and you know what they say about assumptions...) then because I haven't specified my own GLSL shader, OGL should be using the FFP to determine the fragment colour for each pixel? Is this correct?
In my understanding, the FFP must implement some sort of default shader, because it is able to display specular highlights, reflections, etc. Can someone give me some information on this and perhaps tell me how this shading is done?
I understand that the material definitions are used to set the properties of the objects' materials, but I'm unsure how the final effects of the lights interacting with the materials are displayed in the OGL window without manually specifying a shader (hence my belief that there is some default shader).
In the case of third-generation and later GPUs, all your assumptions are indeed correct. As long as no custom shader is specified, the driver provides the GPU with a default shader mimicking the FFP.
The default shader usually implements a Phong lighting model, with the exact details depending on the texture environment and other fixed-function state that has been set.
For older GPU generations the fixed-function pipeline is hardwired.
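For a feel of what such a default shader computes, here is a heavily simplified sketch of the ambient/diffuse/specular (Blinn-Phong) evaluation, written as a GLSL 1.20 fragment shader inside a C++ raw string. The real fixed-function path evaluates lighting per vertex and supports multiple lights, texture environments, fog and more; the uniform names here merely mirror glMaterial/glLight state and are not actual built-ins.

    const char* defaultLikeFs = R"(
        #version 120
        varying vec3 normal;     // interpolated surface normal (eye space)
        varying vec3 viewDir;    // direction from the surface point to the eye
        uniform vec3 lightDir;   // direction towards the light (eye space)
        uniform vec4 matAmbient, matDiffuse, matSpecular;
        uniform float shininess;

        void main() {
            vec3 n = normalize(normal);
            vec3 l = normalize(lightDir);
            vec3 h = normalize(l + normalize(viewDir));   // Blinn half-vector
            float diff = max(dot(n, l), 0.0);
            float spec = (diff > 0.0) ? pow(max(dot(n, h), 0.0), shininess) : 0.0;
            gl_FragColor = matAmbient + diff * matDiffuse + spec * matSpecular;
        }
    )";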