I wrote a GLSL -> Metal parser that works pretty well as long as the GLSL code is not using very old features.
The biggest issue I'm facing is when the GLSL code does not explicitly define a texture binding location, like:
uniform layout(binding = 0) sampler2D myTexture; // this is fine
uniform sampler2D myTexture2; // this is the problem
In Metal you have to define a texture index at compile time, while in GLSL you are allowed to specify (and change) it at run time.
I believe this is not possible, especially because I can't find it in the spec, but does anyone know if it's possible to specify (or change) a texture index of a Metal shader on the fly, on the CPU side?
texture2d<half> myTexture [[ texture(IDX) ]] // where IDX is not constexpr?
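To illustrate the run-time flexibility I mean on the GLSL side, here is a minimal OpenGL sketch; the program and texture handles are assumed to exist already, and GLEW is only used for function loading:

#include <GL/glew.h>

// Point the GLSL sampler "myTexture2" at a texture unit chosen at run time.
// 'program' and 'textureId' are assumed to be valid handles created elsewhere.
void bindMyTexture2(GLuint program, GLuint textureId, GLint unit)
{
    glActiveTexture(GL_TEXTURE0 + unit);     // any unit, decided at run time
    glBindTexture(GL_TEXTURE_2D, textureId);

    glUseProgram(program);
    GLint loc = glGetUniformLocation(program, "myTexture2");
    glUniform1i(loc, unit);                  // the sampler now reads from that unit
}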
I'm using a shader transpiler tool called 'glslcc', and it supports transpiling into GLSL. However, I think the GLSL output is Vulkan GLSL, since it contains things like the following, but I might be wrong.
layout(std140) uniform u_Test
{
    vec4 test;
} _34;
Will this shader work in OpenGL? If not, is there a way to convert from this format to something else so it can be loaded in OpenGL?
This is a Uniform Block, which can of course be used with OpenGL (see also Uniform Buffer Object). There is no Vulkan-exclusive declaration in this code. The Layout Qualifier std140 was introduced with OpenGL 3.1. See The OpenGL® Shading Language, Version 4.60.7.
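As a minimal sketch of how such a block is hooked up on the OpenGL side (assuming 'program' is your already linked shader program and GLEW is used for function loading):

#include <GL/glew.h>

// Bind the std140 block "u_Test" to binding point 0 and back it with a UBO.
GLuint setupUTest(GLuint program)
{
    GLuint blockIndex = glGetUniformBlockIndex(program, "u_Test");
    glUniformBlockBinding(program, blockIndex, 0);

    GLfloat test[4] = { 1.0f, 0.0f, 0.0f, 1.0f };  // the single vec4 member
    GLuint ubo;
    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, sizeof(test), test, GL_DYNAMIC_DRAW);
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);
    return ubo;
}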
I am trying to use mipmapping with Vulkan. I understand that I should use vkCmdBlitImage between each mip level for each image, but before doing that, I just wanted to know how to change the mip level in GLSL.
Here is what I did.
First, I load and draw a texture (using mip level 0), and there was no problem. The "rendered image" is the texture I load, so that is good.
Second, I use this shader (because I wanted to use the second mip level, number 1), but the "rendered image" does not change:
#version 450
layout(set = 0, binding = 0) uniform sampler2D tex;
in vec2 texCoords;
layout(location = 0) out vec4 outColor;
void main() {
    outColor = textureLod(tex, texCoords, 1);
}
As I understand it, the rendered image should change, but it does not at all; it is always the same image, even if I increase the "1" (the mip level number).
Third, instead of changing anything in the GLSL code, I changed the mip level in the VkImageSubresourceRange used to create the image view, and the "rendered image" changed. That seems normal to me, and once I use vkCmdBlitImage, I should see the original image at a lower resolution.
The real problem is that when I try to select a mip level in GLSL (through textureLod), it does not affect the rendered image at all, whereas in C++ it does (and that seems fair).
Here is (all) my source code:
https://github.com/qnope/Vulkan-Example/tree/master/Mipmap
Judging by your default sampler creation info (https://github.com/qnope/Vulkan-Example/blob/master/Mipmap/VkTools/System/sampler.cpp#L28), you always set the maxLod member of your samplers to zero, so your LOD is always clamped between 0.0 and 0.0 (minLod/maxLod). This would fit the behaviour you described.
So try setting the maxLod member of your sampler creation info to the actual number of mip levels in your texture, and changing the LOD level in the shader should work fine.
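For example, something roughly like this in the sampler creation (mipLevels being however many mips your texture actually has; the other members are just placeholders for your existing settings):

#include <vulkan/vulkan.h>

// Create a sampler whose maxLod covers the whole mip chain.
VkSampler createMipSampler(VkDevice device, uint32_t mipLevels)
{
    VkSamplerCreateInfo info = {};
    info.sType        = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO;
    info.magFilter    = VK_FILTER_LINEAR;
    info.minFilter    = VK_FILTER_LINEAR;
    info.mipmapMode   = VK_SAMPLER_MIPMAP_MODE_LINEAR;
    info.addressModeU = VK_SAMPLER_ADDRESS_MODE_REPEAT;
    info.addressModeV = VK_SAMPLER_ADDRESS_MODE_REPEAT;
    info.addressModeW = VK_SAMPLER_ADDRESS_MODE_REPEAT;
    info.minLod       = 0.0f;
    info.maxLod       = static_cast<float>(mipLevels);  // instead of 0.0f

    VkSampler sampler = VK_NULL_HANDLE;
    vkCreateSampler(device, &info, nullptr, &sampler);
    return sampler;
}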
Most GLSL shaders use an attribute for the color in the vertex shader, which is then forwarded as a varying to the fragment shader, like this:
attribute vec4 position;
attribute vec4 color;
uniform mat4 mvp;
varying vec4 destinationColor;

void main() {
    destinationColor = color;
    gl_Position = mvp * position;
}
Setting the color can be done with glVertexAttribPointer() to pass one color per vertex, or with glVertexAttrib4fv() to pass a global color for all vertices. I am trying to understand the difference from the predefined variable gl_Color in the vertex shader (if there is any difference at all), i.e.
attribute vec4 position;
uniform mat4 mvp;
varying vec4 destinationColor;

void main() {
    destinationColor = gl_Color;
    gl_Position = mvp * position;
}
and using glColorPointer() to pass one color per vertex or glColor4fv() to use a global color for all vertices. To me the second shader looks better (= more efficient?), because it uses fewer attributes. But all tutorials and online resources use the first approach, so I wonder if I missed anything or if there is no difference at all.
What is better practice when writing GLSL shaders?
To me the second shader looks better (= more efficient?), because it uses fewer attributes.
It does not use fewer attributes. It just uses fewer explicit attribute declarations. All of the work needed to get that color value to OpenGL is still there. It's still being done. The hardware is still fetching data from a buffer object or getting it from the glColor context value or whatever.
You just don't see it in your shader's text. But just because you don't see it doesn't mean that it happens for free.
User-defined attributes are preferred for the following reasons:
User-defined attributes make it clear how many resources your shaders are using. If you want to know how many attributes you need to provide to a shader, just look at the global declarations. But with predefined attributes, you can't do this; you have to scan through the entire vertex shader for any gl_* names that name a predefined attribute.
User-defined attributes can do more things. If you want to pass integer values as integers to the vertex shader, you must use a user-defined attribute. If you need to pass a double-precision float to the vertex shader, again, a predefined attribute cannot help you.
Predefined attributes were removed from core OpenGL contexts. OSX, for example, does not allow the compatibility profile. You can still use OpenGL 2.1, but if you want to use any OpenGL version 3.2 or greater on OSX, you cannot use removed functionality. And the built-in vertex attributes were removed in OpenGL 3.1.
Predefined attributes were never a part of OpenGL ES 2.0+. So if you want to write shaders that can work in OpenGL ES, you again cannot use them.
So basically, there's no reason to use them these days.
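For completeness, a rough sketch of the application side for the user-defined attribute version of the first shader; the program and color VBO handles are assumed to be created elsewhere, and GLEW is only used for function loading:

#include <GL/glew.h>

// Feed the user-defined "color" attribute, either per vertex or as one constant value.
void setColor(GLuint program, GLuint colorVbo, bool perVertex)
{
    GLint colorLoc = glGetAttribLocation(program, "color");

    if (perVertex) {
        // one vec4 color per vertex, pulled from a buffer
        glBindBuffer(GL_ARRAY_BUFFER, colorVbo);
        glEnableVertexAttribArray(colorLoc);
        glVertexAttribPointer(colorLoc, 4, GL_FLOAT, GL_FALSE, 0, (void*)0);
    } else {
        // one constant color for every vertex of the draw call
        glDisableVertexAttribArray(colorLoc);
        GLfloat red[4] = { 1.0f, 0.0f, 0.0f, 1.0f };
        glVertexAttrib4fv(colorLoc, red);
    }
}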
If I remember correctly, gl_Color is a deprecated remnant from the old-style API without VAO/VBO, using glBegin() ... glEnd(). If you go to the core profile, there is no gl_Color anymore, so I assume you are using an old OpenGL version or the compatibility profile.
If you try to use gl_Color in a core profile (for example 4.00), you get:
0(35) : error C7616: global variable gl_Color is removed after version 140
which means gl_Color was removed after GLSL version 1.40.
It is not entirely a matter of performance; it is also a change in the graphics rendering software architecture, or in the hierarchy of GL calls, if you will.
I'm trying to write a shader using Cg (for Ogre3D). I can't seem to make sense of a working shader that I'd like to use as a starting point for my own code.
Here's the declaration for the shader:
void main
(
    float2 iTexCoord0 : TEXCOORD0,
    out float4 oColor : COLOR,
    uniform sampler2D covMap1,
    uniform sampler2D covMap2,
    uniform sampler2D splat1,
    uniform sampler2D splat2,
    uniform sampler2D splat3,
    uniform sampler2D splat4,
    uniform sampler2D splat5,
    uniform sampler2D splat6,
    uniform float splatScaleX,
    uniform float splatScaleZ
)
{...}
My questions:
iTexCoord0 is obviously an input parameter. Why is it not declared uniform?
(oColor is obviously an output parameter. No question)
covMap1 - splat6 are textures. Are these parameters or something loaded into the graphics card memory (like globals)? The ogre definition for the shader program doesn't list them as parameters.
Are splatScaleX and splatScaleZ also parameters? The ogre definition for the shader program also doesn't list these as parameters.
Does the order of declaration mean anything when sending values from an external program?
I'd like to pass in an array of floats (the height map). I assume that would be
uniform float splatScaleZ,
uniform float heightmap[1024]
)
{...}
If I don't pass one of the parameters will the shader just not be executed (and my object will be invisible because it has no texture)?
Is there a better way to debug these than just hit/miss and guess?
iTexCoord0 is obviously an input parameter. Why is it not declared uniform?
Uniforms are not the same thing as input parameters. iTexCoord0 is a varying input, which is to say that for every vertex, it can have a unique value. It is set with commands like glVertexAttribPointer. Things like vertex coordinates, normals, texcoords, and vertex colors are examples of what you might use a varying input for.
Uniforms, on the other hand, are intended to be static for an entire draw call, or potentially for the entire frame or life of the program. They are set with glUniform* commands. A uniform might be something like the modelview matrix for an object, or the position of the sun for a lighting calculation. They don't change very often.
[edit] These specific commands I think actually work with GLSL, but the theory should be the same for Cg. Look up a Cg-specific tutorial to figure out the exact commands to set varyings and uniforms.
covMap1 - splat6 are textures. Are these parameters or something loaded into the graphics card memory (like globals)? The ogre definition for the shader program doesn't list them as parameters.
My Cg is a little rusty, but if it's the same as GLSL, then sampler2D is a uniform that takes an index representing which texture unit you want to sample from. When you do something like glActiveTexture(GL_TEXTURE3), glBindTexture(GL_TEXTURE_2D, n), and then set the sampler uniform to 3, you can sample from that texture.
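In plain OpenGL terms, that sequence looks roughly like this (Ogre has its own wrappers for this, and the handles here are just placeholders):

#include <GL/glew.h>

// Bind a texture to unit 3 and tell the "covMap1" sampler to read from that unit.
void bindCovMap1(GLuint program, GLuint covMap1Tex)
{
    glActiveTexture(GL_TEXTURE3);              // pick texture unit 3
    glBindTexture(GL_TEXTURE_2D, covMap1Tex);  // put the texture on that unit

    glUseProgram(program);
    GLint loc = glGetUniformLocation(program, "covMap1");
    glUniform1i(loc, 3);                       // the sampler now reads from unit 3
}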
Does the order of declaration mean anything when sending values from an external program?
No, the variables are referred to in the external program by their string variable names.
If I don't pass one of the parameters will the shader just not be executed (and my object will be invisible because it has no texture)?
Unknown. They will likely have "some" initial value, though whether that makes any sense and will generate anything visible is hard to guess.
Is there a better way to debug these than just hit/miss and guess?
http://http.developer.nvidia.com/CgTutorial/cg_tutorial_appendix_b.html
See section B.3.8
For clarity, I start with my question:
Is it possible to use (in the shader code) the custom attribute name which I set for the TEXCOORD usage in the (OpenGL) stream mapping in RenderMonkey 1.82, or do I have to use gl_MultiTexCoord0?
(The question might be valid for the NORMAL usage too, i.e. custom name or gl_Normal.)
Background:
Using RenderMonkey version 1.82.
I have successfully used the stream mapping to map the general vertex attribute "position" (and maybe "normal"), but the texture coordinates do not seem to be forwarded correctly.
For the shader code, I use #version 330 and the "in" qualifier in GLSL, which should be OK since RM does not compile the shaders itself (the OpenGL driver does).
I have tried both .obj and .3ds files (exported from Blender), and when checking the Wavefront .obj file, all the texture coordinate information is there, as well as the vertex positions and normals.
If it is not possible, the stream mapping is broken and there is no point in naming the variables in the stream mapping editor (except for the vertex position stream, which works), since one has to use the built-in variables anyway.
Update:
If using the deprecated built-in variables, one has to use compatibility mode in the shader, e.g.
#version 330 compatibility
out vec2 vTexCoord;
and, in the main function:
vTexCoord = vec2(gl_MultiTexCoord0);
(Now I'm not sure about the stream mapping of normals either. As soon as I got the texture coordinates working, I had problems with the normals and had to revert to gl_Normal.)
Here is a picture of a working solution, but with built-in variables (and yes, the commented texcoord variable in the picture does not have the same name as in the stream mapping dialog, but it had the same name when I tried to use it, so it's OK):
You could try to use the generic vertex attributes; see http://open.gl, it's a great tutorial ;)
(but I think it implies you'd have to rewrite the code to manually handle the transformations...)
#version 330
layout(location = 0) in vec3 bla_bla_bla_Vertex;
layout(location = 2) in vec3 bla_bla_bla_Normal;
layout(location = 8) in vec3 bla_bla_bla_TexCoord0;
This is a working solution for RM 1.82.
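Outside of RenderMonkey, which feeds the streams for you based on the mapping, the application would feed these fixed locations roughly like this (the VBO handles are placeholders; each buffer is assumed to hold tightly packed vec3 data):

#include <GL/glew.h>

// Feed the explicitly located generic attributes of the shader above.
void setupStreams(GLuint vertexVbo, GLuint normalVbo, GLuint texCoordVbo)
{
    glBindBuffer(GL_ARRAY_BUFFER, vertexVbo);
    glEnableVertexAttribArray(0);                                  // bla_bla_bla_Vertex
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

    glBindBuffer(GL_ARRAY_BUFFER, normalVbo);
    glEnableVertexAttribArray(2);                                  // bla_bla_bla_Normal
    glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

    glBindBuffer(GL_ARRAY_BUFFER, texCoordVbo);
    glEnableVertexAttribArray(8);                                  // bla_bla_bla_TexCoord0
    glVertexAttribPointer(8, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);  // declared as vec3 above
}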