I've been tuning my game's renderer for my laptop, which has a Radeon HD 3850. This chip has a decent amount of processing power, but rather limited memory bandwidth, so I've been trying to move more shader work into fewer passes.
Previously, I was using a simple multipass model:
Bind and clear FP16 blend buffer (with depth buffer)
Depth-only pass
For each light, do an additive light pass
Bind backbuffer, use blend buffer as a texture
Tone mapping pass
In an attempt to improve the performance of this method, I wrote a new rendering path that counts the number and type of lights to dynamically build custom GLSL shaders. These shaders accept all light parameters as uniforms and do all lighting in a single pass. I was expecting to run into some kind of limit, so I tested it first with one light. Then three. Then twenty-one, with no errors or artifacts, and with great performance. This leads me to my actual questions:
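The shader-generation approach described above can be sketched as plain string building. This is a minimal, hypothetical reconstruction, not the asker's actual code; names like `u_lightPos` and the GLSL 1.20 target are illustrative assumptions.

```python
def build_fragment_shader(num_lights):
    """Generate GLSL fragment shader source that lights num_lights
    point lights in a single pass, with all parameters as uniforms."""
    lines = [
        "#version 120",
        "uniform vec3 u_lightPos[%d];" % num_lights,    # illustrative names
        "uniform vec3 u_lightColor[%d];" % num_lights,
        "varying vec3 v_normal;",
        "varying vec3 v_worldPos;",
        "void main() {",
        "    vec3 n = normalize(v_normal);",
        "    vec3 accum = vec3(0.0);",
        "    for (int i = 0; i < %d; ++i) {" % num_lights,
        "        vec3 l = normalize(u_lightPos[i] - v_worldPos);",
        "        accum += u_lightColor[i] * max(dot(n, l), 0.0);",
        "    }",
        "    gl_FragColor = vec4(accum, 1.0);",
        "}",
    ]
    return "\n".join(lines)

print(build_fragment_shader(21))
```

The point is that the light count is baked into the source at generation time, so the driver can unroll the loop and allocate exactly the uniform storage needed.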
Is the maximum number of uniforms retrievable?
Is this method viable on older hardware, or are uniforms much more limited?
If I push it too far, at what point will I get an error? Shader compilation? Program linking? Using the program?
Shader uniforms are typically implemented by the hardware as registers (or sometimes by patching the values into shader microcode directly, e.g. nVidia fragment shaders). The limit is therefore highly implementation dependent.
You can retrieve the maximums by querying GL_MAX_VERTEX_UNIFORM_COMPONENTS_ARB and GL_MAX_FRAGMENT_UNIFORM_COMPONENTS_ARB for vertex and fragment shaders respectively.
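Once you have queried that limit, you can budget against it ahead of time. In GLSL terms a float uniform costs one component and a vec4 costs four, though many implementations round smaller types up to full vec4 slots, so a conservative estimate pads everything to four. The helper below is a sketch under those assumptions; the per-light cost and the reserved component count are made-up example figures, and the queried limit is passed in as a plain number.

```python
def max_lights(max_fragment_uniform_components, other_components=16,
               components_per_light=12):
    """Conservatively estimate how many lights fit in the uniform budget.

    components_per_light=12 assumes three vec4-padded values per light
    (e.g. position, color, attenuation); other_components reserves room
    for matrices, material parameters, and so on. Both are assumptions.
    """
    budget = max_fragment_uniform_components - other_components
    return max(budget // components_per_light, 0)

# e.g. with a GL2-era limit of 512 fragment uniform components:
print(max_lights(512))  # 41 lights under the assumptions above
```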
See section 4.3.5 "Uniform" of The OpenGL® Shading Language specification:
There is an implementation dependent limit on the amount of storage for uniforms that can be used for
each type of shader and if this is exceeded it will cause a compile-time or link-time error. Uniform
variables that are declared but not used do not count against this limit.
It will fail at compile or link time, not when using the program.
For how to get the max number supported by your OpenGL implementation, see moonshadow's answer.
For an idea of where the limit actually is for arbitrary GPUs, I'd recommend looking at which DX version that GPU supports.
DX9 level hardware:
vs2_0 supports 256 vec4. ps2_0 supports 32 vec4.
vs3_0 is 256 vec4, ps3_0 is 224 vec4.
DX10 level hardware:
vs4_0/ps4_0 guarantees a minimum of 4096 constants per constant buffer, and you can have 16 of them.
In short, it's unlikely you'll run out on anything DX10-based.
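A quick back-of-envelope check against those register counts, again assuming an illustrative 3 vec4 constants per light and 8 vec4 reserved for matrices and material data (both figures are made up for the example):

```python
def lights_in_registers(vec4_registers, reserved=8, vec4_per_light=3):
    """Estimate how many lights fit in a vec4 constant register file."""
    return (vec4_registers - reserved) // vec4_per_light

print(lights_in_registers(32))   # ps2_0: 8 lights
print(lights_in_registers(224))  # ps3_0: 72 lights
```

So even DX9 pixel-shader 3.0 hardware comfortably fits the twenty-one lights the asker tested.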
I guess the maximum number of uniforms is determined by the amount of video memory,
as it's just a variable. Normal variables on the CPU are limited by your RAM too, right?
I've looked everywhere and couldn't find a definitive answer.
Vertex and fragment shaders do in fact have limits on the maximum size of the shader and the number of instructions, but I've never heard about those limitations applying to a compute shader.
Since I need to port an existing CPU path tracer with many different BRDFs, I need to know in advance whether this could be an issue and I should move to CUDA, or whether OpenGL's compute shaders could handle the work just fine.
There are always limits. But these limits are implementation-defined; they're not expressed in any a priori determinable way. So the only way you'll find out what they are is to cross them.
CUDA has limits too.
I have encountered a weird problem: I have a fragment shader containing several uniform variables (mat4, vec4), a single sampler2D, and a gigantic SSBO (1GB-2GB). No individual resource exceeds the hardware's size limits. Without the SSBO, the shader works fine. With the SSBO, if the resolution of the texture image is low (768x768x4 float), the shader also works fine. However, if the resolution reaches 1024x1024 or larger, the program instantly crashes inside the NVIDIA driver. I have tested it on a GTX 980 Ti and a Quadro P5000; the problem occurred on both cards.
I wonder if there are any limitations on the usage of shader resources.
According to this database of OpenGL implementations, there is no implementation that permits an SSBO to be more than 2GB in size. That is, no implementation has GL_MAX_SHADER_STORAGE_BLOCK_SIZE greater than 2GB.
Note that Vulkan implementations are not much different. AMD implementations offer 4GB SSBOs, but they still have limitations that are separate from the amount of storage they have.
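A pre-flight check against the queried block-size limit avoids crashing inside the driver. This is a sketch: in a real program the limit would come from `glGetInteger64v(GL_MAX_SHADER_STORAGE_BLOCK_SIZE, ...)`, but here it is passed in as a plain number so the example is self-contained.

```python
def ssbo_fits(requested_bytes, max_block_size):
    """Check a requested SSBO size against the implementation's limit
    before allocating, rather than discovering the limit via a crash."""
    return requested_bytes <= max_block_size

TWO_GB = 2 * 1024**3  # example limit; query the real value at runtime
print(ssbo_fits(1 * 1024**3, TWO_GB))  # True: a 1GB block fits
print(ssbo_fits(3 * 1024**3, TWO_GB))  # False: over the limit
```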
As I understand it, we store textures in GPU memory with the glTexImage2D function, and theoretically there is no limit on their number. However, when we try to use them, there is a limitation of 32 texture units, GL_TEXTURE0 to GL_TEXTURE31. Why does this limitation exist?
The limitation in question (which as mentioned in the comments, is the wrong limit) is on the number of textures that can be used with a single rendering command at one time. Each shader stage has its own limit on the number of textures it can access.
This limitation exists because hardware is not unlimited in its capabilities. Textures require resources, and some hardware has limits on how many of these resources can be used at once. Just like hardware has limits on the number of uniforms a shader stage can have and so forth.
Now, there is some hardware which is capable of accessing texture data in a more low-level way. This functionality is exposed via the ARB_bindless_texture extension. It's not a core feature because not all recent, GL 4.x-capable hardware can support it.
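The per-stage limit can be respected by mapping samplers onto texture units and failing early when the queried limit would be exceeded. A minimal sketch, with the limit (in practice queried via `GL_MAX_TEXTURE_IMAGE_UNITS` for the fragment stage) passed in as a plain number and the sampler names invented for the example:

```python
def assign_texture_units(sampler_names, max_units):
    """Map each named sampler uniform to a texture unit index,
    raising if the shader stage cannot access that many textures."""
    if len(sampler_names) > max_units:
        raise ValueError("too many textures for this shader stage: %d > %d"
                         % (len(sampler_names), max_units))
    return {name: unit for unit, name in enumerate(sampler_names)}

print(assign_texture_units(["u_albedo", "u_normal", "u_ao"], 16))
```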
How many instructions can a vertex and a fragment shader each have in WebGL in Chrome, without taking rendering time per frame into account?
from: https://groups.google.com/forum/#!topic/angleproject/5Z3EiyqfbQY
So the only way to know an instruction count limit has been exceeded
is to compile it?
Unfortunately yes. Note that you should probably try to compile and
link it to really be sure since some systems may not actually do much
at the compilation phase.
Someone must have some rough samples or a limiting factor?
There is no specific limit. It's up to the driver. A high end GPU will have a larger limit than a low end mobile GPU. There's also no way to know how many instructions a particular line of GLSL will translate into since that's also up to the particular GLSL compiler in that particular driver.
This is an aspect that is not mandated by the Khronos specification, and hence varies depending on the GPU, or the shader compiler if ANGLE is used.
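The "compile and link to find out" advice above amounts to a probe loop. Below is a sketch of binary-searching the largest shader that still compiles; `compile_ok` is a stand-in for creating a shader object, calling glCompileShader/glLinkProgram, and checking the status, stubbed here so the control flow runs without a GL context.

```python
def find_max_lights(compile_ok, lo=1, hi=256):
    """Binary-search the largest light count whose generated shader
    still compiles and links, using compile_ok as the probe."""
    best = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if compile_ok(mid):   # in real code: generate source, compile, link
            best = mid
            lo = mid + 1
        else:
            hi = mid - 1
    return best

# With a fake driver that rejects shaders above 40 lights:
print(find_max_lights(lambda n: n <= 40))  # 40
```

Remember the caveat from the quoted thread: probe with a full compile *and* link, since some drivers defer most work to the link stage.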
I plan on writing a program that will take some parameters as input and generate its own fragment shader string, which will then be compiled, linked, and used as a fragment shader (this will only be done once, at the start of the program).
I'm not an expert in computer graphics, so I don't know if this is standard practice, but I definitely think it has the potential for some interesting applications, not necessarily graphics applications but possibly computational ones.
My question is: what is the code size limit of a shader in OpenGL, i.e. how much memory can OpenGL reasonably allocate to a program on the graphics processor?
There is no code size limit. OK, there is, but:
OpenGL doesn't give you a way to query it because:
Such a number would be meaningless, since it does not translate to anything you can directly control in GLSL.
A long GLSL shader might compile while a short shader can't. Why? Because the compiler may have been able to optimize the long shader down to size, while the short shader expanded to lots of opcodes. In short, GLSL is too high-level to be able to effectively quantify such limitations.
In any case, given the limitations of GL 2.x-class hardware, you probably won't hit any length limitations unless you're trying to do so or are doing GPGPU work.