Which version of GLSL supports Indexing in Fragment Shader? - opengl

I have a fragment shader that iterates over some input data, and on old hardware I get:
error C6013: Only arrays of texcoords may be indexed in this profile, and only with a loop index variable
Googling around, I saw a lot of statements like "hardware prior to XX doesn't support indexing in the fragment shader".
I was wondering if this behavior is standardized across GLSL versions, something like "GLSL versions prior to XX don't support indexing in the fragment shader". And if so, which version starts supporting it?

What is your exact hardware?
Old ATI cards (below the X1600) and their drivers have such issues. Quite certainly, Intel cards other than the most recent ones also suffer from this.
"Do you have any suggestion on how to detect if my hardware is capable of indexing in the fragment shader?"
The only reliable, yet not-so-beautiful, way is to get the renderer information:
glGetString(GL_RENDERER)
and check whether this renderer occurs in the list of unsupported ones.
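A minimal sketch of that check, assuming you maintain your own blocklist of renderer substrings (the names below are only examples, not an authoritative list):

#include <string.h>
#include <GL/gl.h>

/* Hypothetical blocklist; fill it with renderer strings you have
   actually seen fail in the field. */
static const char *unsupported_renderers[] = {
    "GeForce 6", "GeForce 7", "RADEON X1",
};

static int renderer_is_unsupported(void)
{
    const char *renderer = (const char *)glGetString(GL_RENDERER);
    size_t n = sizeof(unsupported_renderers) / sizeof(unsupported_renderers[0]);
    for (size_t i = 0; i < n; ++i) {
        if (strstr(renderer, unsupported_renderers[i]) != NULL)
            return 1; /* known-bad renderer: avoid dynamic indexing */
    }
    return 0;
}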

That particular error comes from the Nvidia compiler for nv4x (GeForce 6/7 cards), and is a limitation of the hardware. Any workaround would require disabling the hardware completely and using pure software rendering.
All versions of GLSL support indexing in the language -- this error falls under the catch-all of exceeding the hardware resource limits.
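For illustration, a shader along these lines (a hypothetical sketch, not code from the question) runs into that limit on nv4x: a loop over a constant range gets unrolled into constant indices and compiles, while an index coming from a uniform does not.

/* Hypothetical GLSL 1.10 fragment shader, embedded as a C string. */
const char *frag_src =
    "#version 110\n"
    "uniform vec4 colors[4];\n"
    "uniform int which;\n"
    "void main() {\n"
    "    vec4 sum = vec4(0.0);\n"
    "    for (int i = 0; i < 4; ++i)\n"
    "        sum += colors[i];   // unrolled to constant indices: OK\n"
    "    gl_FragColor = sum + colors[which]; // dynamic index: C6013 on nv4x\n"
    "}\n";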

Related

Vulkan: Geometry Shader Validation incorrect?

I am currently using an NVIDIA GeForce GTX 780 (from Gigabyte, if that matters; I don't know how much this could be affected by the onboard BIOS. I also have two of them installed, but since Vulkan cannot use SLI, I only use one device at a time in my code. However, in the NVIDIA control center SLI is activated. I use the official driver version 375.63). That GPU is fully capable of geometry shaders, of course.
I am using a geometry shader with the Vulkan API, and it works all right and does everything I expect it to do. However, I get the following validation layer report: #[SC]: Shader requires VkPhysicalDeviceFeatures::geometryShader but is not enabled on the device.
Is this a bug? Does someone have similar issues?
PS: http://vulkan.gpuinfo.org/displayreport.php?id=777#features says the support for "Geometry Shader" is "true", as expected. I am using the Vulkan 1.0.30.0 SDK.
Vulkan features work differently from OpenGL extensions. In OpenGL, if an extension is supported, then it's always active. In Vulkan, the fact that a feature is available is not sufficient. When you create a VkDevice, you must explicitly ask for all features you intend to use.
If you didn't ask for the Geometry Shader feature, then you can't use GS's, even if the VkPhysicalDevice advertises support for it.
So the sequence of steps should be to check to see if a VkPhysicalDevice supports the features you want to use, then supply those features in VkDeviceCreateInfo::pEnabledFeatures when you call vkCreateDevice.
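A minimal sketch of those steps, assuming a physicalDevice handle is already in hand and omitting the queue setup:

/* Query what the physical device supports, then explicitly enable
   only the features we intend to use. */
VkPhysicalDeviceFeatures supported;
vkGetPhysicalDeviceFeatures(physicalDevice, &supported);

VkPhysicalDeviceFeatures enabled = {0};
if (supported.geometryShader)
    enabled.geometryShader = VK_TRUE; /* the step the question is missing */

VkDeviceCreateInfo createInfo = {0};
createInfo.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
/* ... pQueueCreateInfos, queueCreateInfoCount, etc. ... */
createInfo.pEnabledFeatures = &enabled;

VkDevice device;
vkCreateDevice(physicalDevice, &createInfo, NULL, &device);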
Since Vulkan doesn't do validation checking on most of its inputs, the actual driver will likely assume you enabled the feature and just do what it normally would. But it is not required to do so; using a feature which has not been enabled is undefined behavior. So the validation layer is right to stop you.

ARB_draw_buffers available but not supported by shader engine

I'm trying to compile a fragment shader using:
#extension ARB_draw_buffers : require
but compilation fails with the following error:
extension 'ARB_draw_buffers' is not supported
However, when I check for the availability of this particular extension, either by calling glGetString(GL_EXTENSIONS) or by using OpenGL Extension Viewer, I get positive results.
The OpenGL version is 3.1.
The graphics card is an Intel HD Graphics 3000.
What might be the cause of that?
Your driver in this scenario supports OpenGL 3.1; it is not clear which OpenGL version you are targeting.
If you can establish OpenGL 3.0 as the minimum required version, you can write your shader using #version 130 and avoid the extension directive altogether.
The ARB extension mentioned in the question is only there for drivers that cannot implement all of the features required by OpenGL 3.0, but have the necessary hardware support for this one feature.
That was its intended purpose, but there do not appear to be many driver / hardware combinations in the wild that actually have this problem. You probably do not want the headache of writing code that supports them anyway ;)
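A minimal sketch of the #version 130 route, with illustrative output names: declare your own out variables and bind them to color attachments from the application side, with no extension directive needed.

/* Fragment shader targeting GLSL 1.30, where multiple color outputs
   are core functionality. */
const char *frag_src =
    "#version 130\n"
    "out vec4 color0;\n"
    "out vec4 color1;\n"
    "void main() {\n"
    "    color0 = vec4(1.0, 0.0, 0.0, 1.0);\n"
    "    color1 = vec4(0.0, 1.0, 0.0, 1.0);\n"
    "}\n";

/* Map the outputs to color attachments before linking the program. */
glBindFragDataLocation(program, 0, "color0");
glBindFragDataLocation(program, 1, "color1");
glLinkProgram(program);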

Intel OpenGL Driver bug?

Whenever I try to render my terrain with point lights, it only works on my Nvidia GPU and driver, not on the Intel integrated GPU and driver. I believe the problem is in my code and that the Nvidia driver is letting it slide, since I have heard Nvidia's OpenGL implementations are lenient and will let you get away with things you're not supposed to do. And since I get no errors, I need help debugging my shaders.
Link:
http://pastebin.com/sgwBatnw
Note:
I use OpenGL 2 and GLSL Version 120
Edit:
I was able to fix the problem on my own. To anyone with similar problems: it was not because I used the regular transformation matrix, because when I did that I set the normal's w value to 0.0. The problem was that on the Intel integrated graphics there is apparently a maximum number of array elements in a uniform (or a maximum uniform size in general), and I was going over that limit, but the driver was deciding not to report it. Another thing wrong with this code was that I was doing implicit type conversions (dividing vec3s by floats), so I corrected those things and it started to work. Here's my updated code.
Link: http://pastebin.com/zydK7YMh
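One way to catch this class of silent failure up front is to query the driver's uniform limits instead of assuming them; a small sketch (GL 2.0+):

#include <stdio.h>

GLint max_vs_uniforms = 0, max_fs_uniforms = 0;
glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS, &max_vs_uniforms);
glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_COMPONENTS, &max_fs_uniforms);
/* Compare these against what your shaders actually declare. */
printf("uniform components: VS=%d FS=%d\n", max_vs_uniforms, max_fs_uniforms);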

Image load store equivalent in OpenGL 3

My project would greatly benefit from arbitrary/atomic read and write operations on a texture from GLSL shaders. The image load store extension is what I need. The only problem: my target platform does not support OpenGL 4.
Is there an extension for OGL 3 that achieves similar results? I mean, atomic read/write operations in a texture or shared buffer of some sort from fragment shaders.
Image Load Store, and especially atomic operations, are features that must be backed by specific hardware capabilities, very similar to the ones used by compute shaders. Only some GL3 hardware can handle them, and only in a limited way.
Image Load Store has been in the core profile since 4.2, so if your hardware (and driver) is capable of OpenGL 4.2, you don't need any extensions at all.
If your hardware (and driver) capabilities are lower than GL 4.2 but higher than GL 3.0, you can probably use the ARB_shader_image_load_store extension.
Quoting the extension spec: "OpenGL 3.0 and GLSL 1.30 are required."
Obviously, not all 3.0 hardware (and drivers) will support this extension, so you must check for its support before using it.
I believe most NVIDIA GL 3.3 hardware supports it, but not AMD or Intel (that's my subjective observation ;)).
If your hardware is below GL 4.2 and not capable of this extension, there is not much you can do. Either have an alternative code path with texture sampling and render-to-texture and no atomics (as I understand it, this is possible, but without the "great benefit" of atomics), or simply report an error to those users who have not yet upgraded their rigs.
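A sketch of such a check in a GL 3.x context, using the core extension enumeration:

#include <string.h>

/* Returns 1 if GL_ARB_shader_image_load_store is advertised. */
static int has_image_load_store(void)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, i);
        if (strcmp(ext, "GL_ARB_shader_image_load_store") == 0)
            return 1;
    }
    return 0;
}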
Hope it helps.

Differences between GLSL and GLSL ES 2

Two questions really...
Is GLSL ES 2 a totally separate language, or a special version of GLSL?
What are the differences between them, in terms of "standard library" functions, syntax and capabilities?
I am writing shaders for an application targeted at Windows, Mac and iPad, and I would prefer not to have to maintain more versions of each shader - well, of the simpler shaders anyway.
Is GLSL ES 2 a totally separate language, or a special version of GLSL?
Every version of GLSL is ultimately a "totally separate language"; some aren't even backwards compatible with previous versions. However, this is even more true of the ES variation of GLSL.
Think of it as a fork of desktop GLSL. A fork of GLSL from six versions ago, mind you.
What are the differences between them, in terms of "standard library" functions, syntax and capabilities?
Between which versions?
The only possible reason you could have to try to share shaders between these platforms is if you're trying to share OpenGL code between them. So you'd have to be limiting yourself to desktop OpenGL 2.1 functionality.
That means you'd be using desktop GLSL version 1.20. This is similar to GLSL ES version 1.00 (the version of GLSL is not given the same number as the matching GL version. Well, until desktop GL 3.3/4.0 and GLSL 3.30/4.00), but still different. In ES, you have a lot of precision qualifiers that desktop GLSL 1.20 doesn't handle.
The best way to handle that is to use a "preamble" shader string. See, glShaderSource takes multiple shader strings, which it slaps together and compiles as one. You can take your main shader string (main and its various code, attribute definitions, uniforms, etc) and stick a "preamble" shader string in front of that. The preamble would be different depending on what you're compiling for.
Your ES GLSL 1.00 preamble would look like this:
#version 100 //Must start with a version specifier.
Your desktop GLSL 1.20 preamble would look like this:
#version 120
#define lowp
#define mediump
#define highp
And maybe a few other judicious #defines. This way, your shader can have those precision qualifiers set, and they'll just be #defined away when using desktop GLSL.
This is a lot like platform-neutral coding in C/C++, where you often have some platform-specific header that changes the definition and/or has #defines that you can condition code based on.
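A small sketch of the technique, assuming an is_gles flag your application already knows:

/* Prepend a platform-specific preamble; glShaderSource concatenates
   all the strings it receives and compiles them as one source. */
const char *preamble = is_gles
    ? "#version 100\n"
    : "#version 120\n"
      "#define lowp\n"
      "#define mediump\n"
      "#define highp\n";
const char *sources[2] = { preamble, main_shader_source };
glShaderSource(shader, 2, sources, NULL);
glCompileShader(shader);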
In simpler terms: OpenGL ES, i.e. OpenGL for Embedded Systems, was made to run on embedded devices such as mobile phones. These systems face the same problems as 90s PCs, namely limited computational power and differences in GPU architecture:
ImgTech PowerVR SGX - TBDR (tile-based deferred)
ImgTech PowerVR MBX - TBDR (fixed function)
ARM Mali - tiled (small tiles)
Qualcomm Adreno - tiled (large tiles); Adreno 3xx can switch to traditional via glHint(GL_BINNING_CONTROL_HINT_QCOM, GL_RENDER_DIRECT_TO_FRAMEBUFFER_QCOM)
NVIDIA Tegra - traditional