SPIR-V allows for very verbose data formats.
GLSL has only basic data types (Chapter 4) that do not specify bit width.
As far as I am aware, the most convenient way to program shaders for Vulkan is to write them in GLSL, then use the compiler provided with the Vulkan SDK (glslc.exe) to convert the file into a SPIR-V binary.
My question is: how does one use these verbose data formats, such as VK_FORMAT_R4G4_UNORM_PACK8 (found in the SPIR-V link above), in GLSL while using glslc.exe to compile the shader code? Are there special data types that the compiler allows? If not, is there an alternative higher-level language one could use and then compile into the binary?
For example, if these were the attribute descriptions used in the graphics pipeline:
struct Attributes {
    vec2 pos;   // e.g. glm::vec2
    char flags;
};

static inline std::array<VkVertexInputAttributeDescription, 2> getAttributeDescriptions() {
    std::array<VkVertexInputAttributeDescription, 2> attributeDescriptions{};

    attributeDescriptions[0].binding = 0;
    attributeDescriptions[0].location = 0;
    attributeDescriptions[0].format = VK_FORMAT_R32G32_SFLOAT;
    attributeDescriptions[0].offset = offsetof(Attributes, pos);

    attributeDescriptions[1].binding = 0;
    attributeDescriptions[1].location = 1;
    attributeDescriptions[1].format = VK_FORMAT_R4G4_UNORM_PACK8;
    attributeDescriptions[1].offset = offsetof(Attributes, flags);

    return attributeDescriptions;
}
The corresponding GLSL shader code would then look something like this:
#version 450
#extension GL_ARB_separate_shader_objects : enable
//Instance Attributes
layout(location = 0) in vec2 pos;
layout(location = 1) in 4BitVec2DataType flags;
//4BitVec2DataType is a placeholder for whatever GLSL's equivalent of SPIR-V's VK_FORMAT_R4G4_UNORM_PACK8 would be
void main() {
...
}
The corresponding GLSL shader code would then look something like this:
No, it wouldn't. You would receive a vec2 in the shader, because that's how vertex attributes work. The vertex format is not meant to exactly match the data format; the data will be converted from that format to the bit depth the shader expects. Unsigned normalized values are floating-point data, so a 2-vector UNORM maps to a GLSL vec2, and the declaration would simply be layout(location = 1) in vec2 flags;.
And BTW, SPIR-V does not change this. The shader's input size need not exactly match the given data size; any conversion is just baked into the shader (this is also part of why the vertex format is part of the pipeline).
The GL_EXT_shader_16bit_storage extension offers more flexibility in GLSL for declaring unusually sized data types within buffer-backed interface blocks. But those types are specifically for data in UBOs/SSBOs, not vertex formats. Note that this extension requires the SPV_KHR_16bit_storage and SPV_KHR_8bit_storage SPIR-V extensions.
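Because buffer-side small types are an optional device feature, the host application should check for support before relying on them. Here is a minimal C++ sketch (assuming Vulkan 1.1+; the function name is illustrative, and only the 16-bit SSBO case is checked):

#include <vulkan/vulkan.h>

// Returns true if the device supports 16-bit types in SSBOs.
bool supports16BitStorage(VkPhysicalDevice physicalDevice) {
    VkPhysicalDevice16BitStorageFeatures storage16{};
    storage16.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_16BIT_STORAGE_FEATURES;

    VkPhysicalDeviceFeatures2 features2{};
    features2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2;
    features2.pNext = &storage16;

    vkGetPhysicalDeviceFeatures2(physicalDevice, &features2);
    return storage16.storageBuffer16BitAccess == VK_TRUE;
}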
Related
I want to use bindless textures, but SPIR-V does not support them. I found here: https://www.khronos.org/opengl/wiki/Bindless_Texture that a uint64_t can be converted to a sampler2D:
uint64_t values can be converted to any sampler or image type using constructors: sampler2DArray(some_uint64). They can also be converted back to 64-bit integers.
so I thought I could upload a bindless handle as a uint64_t and then convert it to a sampler2D.
The SPIR-V compiler gives me these errors:
error: 'sampler/image' : cannot construct this type
error: 'sampler2D' : sampler-constructor requires two arguments
error: 'constructor' : too many arguments
error: 'assign' : cannot convert from ' const float' to 'layout( location=0) out highp 4-component vector of float'
Shader code:
#version 460 core
#extension GL_EXT_shader_explicit_arithmetic_types_int64 : require
#extension GL_EXT_scalar_block_layout : require
layout(location = 0) out vec4 f_color;
layout(location = 0) in vec2 v_uv;
layout(std430, binding = 5) uniform Textures
{
uint64_t albedo;
};
void main()
{
f_color = texture(sampler2D(albedo), v_uv);
}
Is it possible to convert a uint64_t to a sampler2D? If so, how?
As with all GLSL extensions, you must explicitly enable them. However, you have a more interesting problem: bindless texturing cannot be used with SPIR-V. What you want to do is only possible if you feed the GLSL directly to OpenGL, without the SPIR-V intermediary.
Vulkan does not support bindless textures; you need to use arrays of samplers to get an equivalent effect. And OpenGL doesn't support GL_EXT_scalar_block_layout. So the code you're writing cannot be used by any graphics system.
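In Vulkan, the "array of samplers" equivalent is set up on the host side as one descriptor binding whose descriptorCount is greater than one; the shader then indexes into the array. A minimal C++ sketch (the binding number and array size are illustrative):

#include <vulkan/vulkan.h>

// `device` is assumed to be a valid VkDevice created elsewhere.
VkDescriptorSetLayout makeSamplerArrayLayout(VkDevice device) {
    VkDescriptorSetLayoutBinding samplerArray{};
    samplerArray.binding         = 0;
    samplerArray.descriptorType  = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
    samplerArray.descriptorCount = 256; // shader side: uniform sampler2D textures[256];
    samplerArray.stageFlags      = VK_SHADER_STAGE_FRAGMENT_BIT;

    VkDescriptorSetLayoutCreateInfo info{};
    info.sType        = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
    info.bindingCount = 1;
    info.pBindings    = &samplerArray;

    VkDescriptorSetLayout layout = VK_NULL_HANDLE;
    vkCreateDescriptorSetLayout(device, &info, nullptr, &layout);
    return layout;
}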
I'm writing a ray tracer using OpenGL compute shaders; to pass data to and from the shaders I use buffers.
When the size of the vec2 output buffer (equal to the number of rays multiplied by the number of faces) reaches ~30 MB, an attempt to map the buffer consistently returns a NULL pointer. Range mapping also fails.
I can't find any info about GL_SHADER_STORAGE_BUFFER limitations in the OpenGL documentation. Is ~30 MB a limit, or may this mapping failure happen for some other reason?
And is there any way to avoid this other than dispatching the shader multiple times?
Data declaration in shader:
#version 440
layout(std430, binding=0) buffer rays{
vec4 r[];
};
layout(std430, binding=1) buffer faces{
vec4 f[];
};
layout(std430, binding=2) buffer outputs{
vec2 o[];
};
uniform int face_count;
uniform vec4 origin;
Calling code (using some Qt5 wrappers):
QOpenGLBuffer ray_buffer;
QOpenGLBuffer face_buffer;
QOpenGLBuffer output_buffer;
QVector<QVector2D> output;
output.resize(rays.size()*faces.size());
if(!ray_buffer.create()) { /*...*/ }
if(!ray_buffer.bind()) { /*...*/ }
ray_buffer.allocate(rays.data(), rays.size()*sizeof(QVector4D));
if(!face_buffer.create()) { /*...*/ }
if(!face_buffer.bind()) { /*...*/ }
face_buffer.allocate(faces.data(), faces.size()*sizeof(QVector4D));
if(!output_buffer.create()) { /*...*/ }
if(!output_buffer.bind()) { /*...*/ }
output_buffer.allocate(output.size()*sizeof(QVector2D));
ogl->glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ray_buffer.bufferId());
ogl->glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, face_buffer.bufferId());
ogl->glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, output_buffer.bufferId());
int face_count = faces.size();
compute.setUniformValue("face_count", face_count);
compute.setUniformValue("origin", pos);
ogl->glDispatchCompute(rays.size()/256, faces.size(), 1);
ray_buffer.destroy();
face_buffer.destroy();
QVector2D* data = (QVector2D*)output_buffer.map(QOpenGLBuffer::ReadOnly);
First of all, you have to understand that the OpenGL specification defines minimum maxima for a variety of values (the ones with a MAX_* prefix). That means implementations are required to provide at least the specified amount as the maximum value, but are free to raise the limit as they see fit. This way, developers can rely on a guaranteed minimum while still making provisions for possibly larger values.
Section 23 - State Tables summarizes what has been previously specified in the corresponding sections. The information you were looking for is found in table 23.64 - Implementation Dependent Aggregate Shader Limits (cont.). If you want to know about which state belongs where (because there is per-object state, quasi-global state, program state and so on), you go to section 23.
The minimum maximum size of a shader storage buffer is represented by the symbolic constant MAX_SHADER_STORAGE_BLOCK_SIZE as per section 7.8 of the core OpenGL 4.5 specification.
Since their adoption into core, the required size (i.e. the minimum maximum) has been significantly increased. In core OpenGL 4.3 and 4.4, the minimum maximum was pow(2, 24) bytes (16 MB, with 1-byte basic machine units and 1 MB = 1024^2 bytes); in core OpenGL 4.5 this value is pow(2, 27) bytes (128 MB).
Summary: When in doubt about OpenGL state, refer to section 23 of the core specification.
From the OpenGL Wiki:
SSBOs can be much larger. The OpenGL spec guarantees that UBOs can be up to 16KB in size (implementations can allow them to be bigger). The spec guarantees that SSBOs can be up to 128MB. Most implementations will let you allocate a size up to the limit of GPU memory.
OpenGL < 4.5 guarantees only 16 MiB (OpenGL 4.5 increased the minimum to 128 MiB); you can use glGet() to query whether you can bind more:
GLint64 max;
glGetInteger64v(GL_MAX_SHADER_STORAGE_BLOCK_SIZE, &max);
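As a sketch of how you might act on that query (rayCount and faceCount are illustrative stand-ins for the sizes in the question):

// One vec2 (two 32-bit floats) per (ray, face) pair.
GLint64 needed = GLint64(rayCount) * GLint64(faceCount) * 2 * sizeof(float);
if (needed > max) {
    // The output block exceeds the implementation's SSBO limit:
    // split the work across several dispatches, each within the limit.
}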
In fact, the problem seems to be in the Qt wrappers. I didn't look in depth, but when I changed QOpenGLBuffer's create(), bind(), allocate(), and map() to glCreateBuffers(), glBindBuffer(), glNamedBufferData(), and glMapNamedBuffer(), all called through QOpenGLFunctions_4_5_Core, the memory problem was gone until I reached 2 GB (my GPU's physical memory limit).
The second mistake I made was not calling glMemoryBarrier(), though adding it didn't help while QOpenGLBuffer was still in use.
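For reference, a minimal sketch of that raw GL 4.5 path for the output buffer (sizes and dispatch counts are illustrative), including the barrier before mapping:

GLsizeiptr outputByteSize = GLsizeiptr(rayCount) * faceCount * 2 * sizeof(float);

GLuint outputBuf = 0;
glCreateBuffers(1, &outputBuf);                           // DSA creation
glNamedBufferData(outputBuf, outputByteSize, nullptr, GL_DYNAMIC_READ);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, outputBuf); // binding=2 in the shader

glDispatchCompute(rayCount / 256, faceCount, 1);
glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT);            // make shader writes visible to mapping

QVector2D* data = (QVector2D*)glMapNamedBuffer(outputBuf, GL_READ_ONLY);
// ... read results ...
glUnmapNamedBuffer(outputBuf);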
My vertex shader is:
uniform Block1 {
    vec4 offset_x1;
    vec4 offset_x2;
} block1;

out float value;
in vec4 position;

void main()
{
    value = block1.offset_x1.x + block1.offset_x2.x;
    gl_Position = position;
}
The code I am using to pass the values is:
GLfloat color_values[8]; // contains valid values
glGenBuffers(1, &buffer_object);
glBindBuffer(GL_UNIFORM_BUFFER, buffer_object);
glBufferData(GL_UNIFORM_BUFFER, sizeof(color_values), color_values, GL_STATIC_DRAW);
glUniformBlockBinding(psId, blockIndex, 0);
glBindBufferRange(GL_UNIFORM_BUFFER, 0, buffer_object, 0, 16);
glBindBufferRange(GL_UNIFORM_BUFFER, 0, buffer_object, 16, 16); // generates GL_INVALID_VALUE
What I am expecting here is to pass 16 bytes for each vec4 uniform, but I get a GL_INVALID_VALUE error for offset = 16, size = 16.
I am confused by the offset value; the spec says it is an offset into buffer_object.
There is an alignment restriction for UBOs when binding: any glBindBufferRange offset must be a multiple of GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT. This alignment could be anything, so you have to query it before building your array of uniform buffers. That means you can't do it with compile-time C++ logic; it has to be runtime logic.
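A minimal sketch of that runtime logic (variable names are illustrative): query the alignment once, then round each byte offset up to a multiple of it, and store the data at the rounded offset:

GLint uboAlignment = 0;
glGetIntegerv(GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT, &uboAlignment);

// Round the desired offset up to the next multiple of the alignment;
// the vec4 data must itself be written at alignedOffset within the buffer.
GLintptr desiredOffset = 16;
GLintptr alignedOffset = (desiredOffset + uboAlignment - 1) / uboAlignment * uboAlignment;

glBindBufferRange(GL_UNIFORM_BUFFER, 0, buffer_object, alignedOffset, 32);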
Speaking of querying things at runtime, your code is horribly broken in many other ways. You did not specify a layout qualifier for your uniform block, so the default is used: shared. And you cannot use the shared layout without querying the layout of each block's members from OpenGL. Ever.
If you had done a query, you would have quickly discovered that your uniform block is at least 32 bytes in size, not 16. And since you only provided 16 bytes in your range, undefined behavior (which includes the possibility of program termination) results.
If you want to be able to define C/C++ objects that map exactly to the uniform block definition, you need to use std140 layout and follow the rules of std140's layout in your C/C++ object.
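For this block, the std140 rules are simple: each vec4 member occupies 16 bytes with 16-byte alignment, so the block is exactly 32 bytes. A minimal sketch of a matching C++ definition (the struct name is illustrative; the shader block must be declared layout(std140)):

struct Block1Std140 {
    float offset_x1[4]; // bytes  0..15
    float offset_x2[4]; // bytes 16..31
};
static_assert(sizeof(Block1Std140) == 32, "must match the 32-byte std140 block");

With that, a single glBufferData of sizeof(Block1Std140) uploads the whole block, and one glBindBufferRange of 32 bytes at an aligned offset binds it.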
So we have in C:
auto if break int case long char register
continue return default short do sizeof
double static else struct entry switch extern
typedef float union for unsigned
goto while enum void const signed volatile
What new keywords does the OpenGL (ES) Shading Language provide?
I am new to GLSL and I want to create a syntax-highlighting utility for ease of use.
Do the math functions built into GLSL also count as keywords?
New ones (excluding the ones you listed above) according to the latest spec document:
attribute uniform varying
layout
centroid flat smooth noperspective
patch sample
subroutine
in out inout
invariant
discard
mat2 mat3 mat4 dmat2 dmat3 dmat4
mat2x2 mat2x3 mat2x4 dmat2x2 dmat2x3 dmat2x4
mat3x2 mat3x3 mat3x4 dmat3x2 dmat3x3 dmat3x4
mat4x2 mat4x3 mat4x4 dmat4x2 dmat4x3 dmat4x4
vec2 vec3 vec4 ivec2 ivec3 ivec4 bvec2 bvec3 bvec4 dvec2 dvec3 dvec4
uvec2 uvec3 uvec4
lowp mediump highp precision
sampler1D sampler2D sampler3D samplerCube
sampler1DShadow sampler2DShadow samplerCubeShadow
sampler1DArray sampler2DArray
sampler1DArrayShadow sampler2DArrayShadow
isampler1D isampler2D isampler3D isamplerCube
isampler1DArray isampler2DArray
usampler1D usampler2D usampler3D usamplerCube
usampler1DArray usampler2DArray
sampler2DRect sampler2DRectShadow isampler2DRect usampler2DRect
samplerBuffer isamplerBuffer usamplerBuffer
sampler2DMS isampler2DMS usampler2DMS
sampler2DMSArray isampler2DMSArray usampler2DMSArray
samplerCubeArray samplerCubeArrayShadow isamplerCubeArray usamplerCubeArray
Reserved for future use (will cause error at the moment):
common partition active
asm
class union enum typedef template this packed
goto
inline noinline volatile public static extern external interface
long short half fixed unsigned superp
input output
hvec2 hvec3 hvec4 fvec2 fvec3 fvec4
sampler3DRect
filter
image1D image2D image3D imageCube
iimage1D iimage2D iimage3D iimageCube
uimage1D uimage2D uimage3D uimageCube
image1DArray image2DArray
iimage1DArray iimage2DArray uimage1DArray uimage2DArray
image1DShadow image2DShadow
image1DArrayShadow image2DArrayShadow
imageBuffer iimageBuffer uimageBuffer
sizeof cast
namespace using
row_major
You probably want to get the OpenGL ES GLSL language specification. §3.6 lists the keywords (plus a number of reserved words that aren't keywords, but you're not supposed to use anyway, so they probably merit some sort of color coding as well).
Edit: Oops, I grabbed the wrong link there. My apologies. The current specs are:
OpenGL 4.1 GLSL
OpenGL ES 2.0 GLSL
Besides the above answers, there are also reserved identifiers, e.g. in the GLSL ES 3.0 spec:
Identifiers starting with gl_ are reserved for use by OpenGL ES, and may not be declared in a shader as either a variable or a function. It is an error to redeclare a variable, including those starting “gl_”.
And other things beyond these are reserved:
In addition, all identifiers containing two consecutive underscores (__) are reserved for use by underlying software layers. Defining such a name in a shader does not itself result in an error, but may result in unintended behaviors that stem from having multiple definitions of the same name.
This is relevant for syntax coloring and correctness checking. Similarly, macros starting with GL_ are reserved, as are a number of things containing __.
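As a starting point for the highlighting utility, here is a minimal C++ sketch (function names are illustrative; the keyword set would be filled in from the lists above):

#include <string>
#include <unordered_set>

// True if `tok` is a GLSL keyword (extend the set from the lists above).
bool isGlslKeyword(const std::string& tok) {
    static const std::unordered_set<std::string> keywords = {
        "uniform", "layout", "in", "out", "inout", "discard",
        "vec2", "vec3", "vec4", "sampler2D" /* ... */
    };
    return keywords.count(tok) != 0;
}

// True if `tok` is reserved: starts with "gl_" or contains "__".
bool isReservedIdentifier(const std::string& tok) {
    return tok.rfind("gl_", 0) == 0 || tok.find("__") != std::string::npos;
}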
I'm currently working on a project using C++ and DirectX9, and I'm looking into creating a light source whose colour varies as time goes on.
I know Windows has a timeGetTime() function, but I was wondering if anyone knows of a function in HLSL that will allow me to do this?
Use a shader constant in HLSL (see this introduction). Here is example HLSL code that uses timeInSeconds to modify the texture coordinate:
// HLSL
float4x4 view_proj_matrix;
float4x4 texture_matrix0;

// My time in seconds, passed in by the CPU program
float timeInSeconds;

struct VS_OUTPUT
{
    float4 Pos    : POSITION;
    float3 Pshade : TEXCOORD0;
};

VS_OUTPUT main (float4 vPosition : POSITION)
{
    VS_OUTPUT Out = (VS_OUTPUT) 0;

    // Transform position to clip space
    Out.Pos = mul (view_proj_matrix, vPosition);

    // Transform Pshade
    Out.Pshade = mul (texture_matrix0, vPosition);

    // Vary according to time
    Out.Pshade = MyFunctionOfTime( Out.Pshade, timeInSeconds );

    return Out;
}
And then in your rendering (CPU) code before you call Begin() on the effect you should call:
// C++
myLightSourceTime = GetTime(); // Or system equivalent here
m_pEffect->SetFloat("timeInSeconds", myLightSourceTime); // SetFloat takes the float by value, not a pointer
If you don't understand the concept of shader constants, have a quick read of the PDF. You can use any HLSL data type as a constant (e.g. bool, float, float4, float4x4, and friends).
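If GetTime() above is unclear: on Windows you can derive a seconds value from timeGetTime(), which returns milliseconds. A minimal sketch (the helper name is illustrative; link against winmm.lib):

// C++ (Win32)
#include <windows.h>
#include <mmsystem.h> // timeGetTime()

float GetTimeInSeconds()
{
    static const DWORD start = timeGetTime();  // milliseconds at first call
    return (timeGetTime() - start) / 1000.0f;  // seconds since first call
}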
I am not familiar with HLSL, but I am with GLSL.
Shaders have no concept of 'time' or 'frames'. A vertex shader 'understands' only the vertices it is given, and a pixel shader only the fragments it shades.
Your only option is to pass a variable into the shader program; in GLSL it is called a 'uniform', but I am not sure of the HLSL term.
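For the GLSL side, a minimal C++ sketch of uploading such a variable each frame (program and timeInSeconds are illustrative names):

GLint loc = glGetUniformLocation(program, "timeInSeconds");
glUseProgram(program);
glUniform1f(loc, timeInSeconds);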
I'm looking into creating a light source which varies in colour as time goes on.
There is no need to pass anything with that, though. You can directly set the light source's color (at least, you can in OpenGL). Simply change the light color on the rendering scene and the shader should pick it up from the built-in uniforms.
Nope. Shaders are essentially "one-way": the CPU can affect what's happening on the GPU (specify which shader program to run, upload textures, constants, and such), but the GPU cannot access anything on the CPU side of the fence. If the GPU (and your shader) needs a piece of data, it must be set by the CPU as a constant or written as a texture (or as part of the vertex data).
If you're using HLSL to write a shader for Unity, time is exposed through the built-in _Time variable (a float4; _Time.y is the time in seconds).