Bundle Shader source with application? - glsl

I'm just wondering how to bundle my GLSL shader source files (for OpenGL ES (iOS) / OpenGL with GLUT (Mac/Windows)) with my application. As pure text files, they would be easily changeable by every user of my software, and I'm afraid of undefined behavior...
On iOS I simply use Xcode's "Copy Bundle Resources" phase for my shaders (and then retrieve them from the application bundle) - is there a similar possibility with Visual Studio?
Or is there even a better cross-platform way to do so?

GLSL shaders are pure text files (or text snippets, whichever way you want to look at it). There is no way (apart from digitally signing your shaders and refusing to run if the signature does not match) to prevent a user from trivially modifying them in a text editor. Of course you could make them somewhat unreadable by rot13-encoding them, or by putting them all into a .zip file and renaming it to something else; this will not stop someone determined to find your shaders, but it will probably deter 90% of average users.
But then again, if people do edit your shaders and that results in undefined behaviour... bad luck for them. You know, there is a certain faction of people who feel urged to edit everything that is human-readable and editable. Fine; it's their problem if they break their install. You can't prevent people from being stupid.
There is the shader binary extension on recent versions of OpenGL, but it is not intended to be used in a way that would solve your problem. It is merely intended as a caching mechanism to speed up compile/link times on subsequent runs. It is not suited to distribute "shader binaries".
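If you just want the source out of casual reach, one common approach is to embed it in the executable as a string constant (hand-written, or generated at build time by a tool such as xxd -i). A minimal sketch; the GLSL below and the variable name are purely illustrative:

// Minimal sketch: the shader text lives inside the binary instead of
// a loose text file, so casual users never see a .glsl on disk.
static const char* kVertexShaderSrc = R"GLSL(
#version 330 core
layout(location = 0) in vec3 position;
void main() {
    gl_Position = vec4(position, 1.0);
}
)GLSL";

// Handed to OpenGL exactly like source loaded from a file:
// glShaderSource(shader, 1, &kVertexShaderSrc, nullptr);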

Just so you know, even on OS X the shaders are in plain text; the application bundle is a normal directory that includes a Resources/ folder where your shaders are placed.


How do I tell QtCreator to accept the `texture()` function like it did for the `texture*D()` functions?

I am using QtCreator 4.12 as a generic C++ IDE, installed from my distribution's package manager, so this is a generic question about QtCreator usage, not related to Qt in particular, nor to building QtCreator from source.
Like any IDE, QtCreator highlights potential errors while writing code.
In a .cpp file, if I write `int x = 0` and press Enter, the `0` will be underlined in red, and there will be a tooltip telling me that I forgot the `;` at the end of the line.
This is described in the QtCreator documentation, but I couldn't find anything in that documentation about GLSL.
My actual project is a C++ OpenGL game, and I'm editing my GLSL shaders within QtCreator.
Reading the answer to this question, I've learned that all the `texture*D()` functions have been deprecated since OpenGL 3.3 and have to be replaced with `texture()`, which infers the texture dimension from the sampler type, so I decided to update my shaders.
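For illustration, the update is just the sampling call; a minimal made-up fragment shader (wrapped in a C++ raw string literal here only to keep the snippet self-contained):

// Illustrative only: the deprecated call vs. the modern one (GLSL 3.30+).
const char* fragSrc = R"GLSL(
#version 330 core
uniform sampler2D tex;
in vec2 uv;
out vec4 color;
void main() {
    // color = texture2D(tex, uv); // deprecated: dimension in the name
    color = texture(tex, uv);      // modern: dimension inferred from sampler
}
)GLSL";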
Within QtCreator, when I use the `texture()` function, the whole line gets underlined in red, with a tooltip saying "expression too complex", whereas when I use `texture2D()` (or `texture1D()`, etc.) the line isn't underlined, as shown in the following screenshots:
(screenshots: "deprecated GLSL" vs. "non-deprecated GLSL")
This doesn't prevent my shaders from working as designed, so there's no real problem here, but it's really disturbing.
I don't know anything about the syntax error checking mechanism beyond what is written in the linked documentation page, and I'm looking for a way to make it accept GLSL 3.3+. I would accept an answer telling me how to silence this specific false positive as a workaround, or how to deactivate syntax error checking for .glsl files altogether, but I would really prefer to understand how to tweak the error checking mechanism to accept modern GLSL as it does legacy GLSL.
In the end I wrote a bug report: QTCREATORBUG-24068.
There's a patch addressing the issue, which I was able to test. It will be merged into Qt Creator 4.14.

Is there a way to run the Mesa compiler to reduce the size of shader files?

The Mesa drivers, as part of their compilation process, reduce the size of GLSL shader files.
Some libraries, like this one, use this fact to build shader minification libraries. All the minification libraries I have found are abandonware, so unless Mesa has functionality to emit the intermediate GLSL directly, I may have to edit the actual code the way those libraries did.
I was wondering if there is an executable within the Mesa code base that can do the stripping without having to edit the code.
I tried reading the official Mesa documentation, but I didn't find anything that suggests either way:
https://www.mesa3d.org/opengles.html
"Minification" is something different to optimization. Typically the term is used to describe a process that takes a source file in text form and removes all superfluous whitespace and replaces all identifiers with shorter ones.
Strictly speaking the whole idea of minification is a folly, since it has zero impact on performance; neither lexing the code, nor the compilation result are affected by it. The whole minification doofus started in web development to reduce webpage resource size; totally worthless, because you'll get far better performance if you just compress the text with gzip or similar. Heck after zipping the original and the minified versions' sized will probably within a few bytes of each other.
If you're really concerned about the size of your shader as a resource, just compress it (but mind the overhead of the decompression code). EDIT: However, if your target is WebGL, then make use of HTTP transport gzip compression. All relevant browsers support it, and most HTTP servers can be configured to transparently deliver a supplementary .gz-suffixed file (or to do the compression on the fly and cache it).
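To give an idea of the load-time side, here is a minimal sketch assuming the shader was deflated offline with zlib; the function name is made up and error handling is omitted:

#include <zlib.h>
#include <cstddef>
#include <vector>

// Inflate a zlib-compressed shader back into text at load time.
// 'originalSize' must be stored alongside the compressed blob.
std::vector<char> inflateShader(const unsigned char* data, size_t size,
                                size_t originalSize)
{
    std::vector<char> out(originalSize);
    uLongf outLen = static_cast<uLongf>(out.size());
    uncompress(reinterpret_cast<Bytef*>(out.data()), &outLen,
               data, static_cast<uLong>(size));
    out.resize(outLen); // actual decompressed length
    return out;
}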
For optimization, you should look to the other offerings of Khronos: specifically the GLSL-to-SPIR-V compiler glslc, the SPIR-V optimizer spirv-opt, and the SPIR-V decompiler spirv-cross. You can chain these up to create optimized, "reduced" GLSL:
glslc --target-env=opengl -fshader-stage=... -o ${FILE}.spv ${FILE}
spirv-opt -Os -o ${FILE}.spv.opt ${FILE}.spv
spirv-cross --output ${FILE}.opt ${FILE}.spv.opt
Since the SPIR-V tools are part of the official Vulkan SDK, and SPIR-V is also a valid shader format that OpenGL 4.6 can load directly (instead of just GLSL source), you can rest assured that these tools are well maintained and will remain so.

Multi-Rendering API Engine shader file management

I am developing a 3D engine that is designed to support the implementation of any given graphics API. I would like your feedback on how I'm planning to manage the shader files:
I thought about creating a class that holds the directory and the file names (both vertex and fragment) as strings, something like this:
#include <string>

class ShaderFile : public SerializableAsset
{
public:
    std::string nameID;       // identifier
    std::string directory;
    std::string vertexName;
    std::string fragmentName;
};
The user would be able to set these variables in the editor. Then my engine would load the shader files like so:
void createShader(RenderAPI api)
{
    ShaderFile shaderFile; // retrieved from wherever it is stored
    std::string vertexPath   = shaderFile.directory + shaderFile.vertexName   + api.name + api.extension;
    std::string fragmentPath = shaderFile.directory + shaderFile.fragmentName + api.name + api.extension;
    // Create shader...
}
This would produce paths like Project/Assets/Shaders/standardVulkan.spv.
Am I thinking in the right direction, or is this a completely idiotic approach? Any feedback is appreciated.
It's an interesting idea, and we've actually done exactly this, but we discovered some things along the way that are not easy to deal with:
If you take a deeper look at shader APIs, although they are close to offering the same capabilities on paper, they often do not support features in the same way and have to be managed differently; by extension, so do the shaders. The driver implementation is key here, and it sometimes differs considerably when it comes to managing internal state (synchronization and buffer handling).
Flexibility
You'll find that OpenGL is flexible in the way it handles attributes and uniforms, whereas DirectX is more focused on minimizing uploads to the hardware by binding them in blocks according to your render-pass configuration, usually on a per-object/per-frame/per-pass basis, etc. While you could mimic that with tiny blocks, this obviously gives different performance.
Obviously, there are multiple ways to do binds, buffer objects, or shaders, even within a single API. Also, querying shader variable names and bind points is not that flexible in DirectX, and some of the parameters need to be set from code. In Vulkan, binding shader attributes and uniforms is even more generalized: you can configure the bind points completely as you wish.
Versioning
Another topic is everything that has to do with GLSL/HLSL shader versioning: you may need to write different shaders for hardware that only supports lower shader models. If you're going to write unique shaders and are not going for the uber-shader approach (and to a large extent even IF you use that approach), this can get complicated if it ties too tightly into your design, and given the number of permutations it might be unrealistic.
Extensions
OpenGL and Vulkan extensions can be queried from within the shader, while other APIs such as DirectX require setting this from the code side. Still, within the same compute capability you can have extensions that only work on NVidia, or that are ARB-approved but not core, etc. This is really quite messy and in most cases application specific.
Deprecation
Considerable parts of APIs get deprecated all the time. This can be problematic if your engine expects those features to remain in place, especially when you deal with multiple APIs that support that feature.
Compilation & Caching
Most APIs by now support some form of offline compilation whose result can be loaded later. Compilation takes a considerable amount of time, so caching it makes sense. Since the hardware shader code is compiled uniquely for the hardware you have, you need to do this exercise for each platform the code should run on, either the first time the app needs the shader or in some other clever way in your production pipeline. Your filename would in that case be replaced by a hash so that the shader can be retrieved from the cache. But this means the cache also needs a timestamp to detect new versions of the source shader, because if the shader source changes, the cache entry needs to be rebuilt, etc. :)
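A minimal sketch of such a cache lookup; the on-disk layout and names are illustrative, and hashing the source text itself doubles as the change check, so an edited shader simply misses its stale entry:

#include <filesystem>
#include <fstream>
#include <functional>
#include <iterator>
#include <optional>
#include <string>
#include <vector>

// Look up a compiled shader blob keyed on a hash of its source text.
std::optional<std::vector<char>> loadCachedBinary(
    const std::string& source, const std::filesystem::path& cacheDir)
{
    const size_t key = std::hash<std::string>{}(source);
    const auto entry = cacheDir / (std::to_string(key) + ".bin");
    if (!std::filesystem::exists(entry))
        return std::nullopt; // cache miss: compile and store the result
    std::ifstream in(entry, std::ios::binary);
    return std::vector<char>(std::istreambuf_iterator<char>(in), {});
}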
Long story short
If you aim for maximum flexibility with whatever API, you'll end up adding a useless layer to your engine that in the best case simply duplicates the underlying calls. If you aim for a generalized API, you'll quickly get trapped in the versioning story, which is totally unsynchronized between the different APIs in terms of extensions, deprecation, and driver implementation support.

The Best way of storing/retrieving config data in Modern Windows

I've not done much coding for Windows lately, and I find myself sitting at Visual Studio right now, making a small program for Windows 7 in C++. I need some configuration data to be read/written.
In the old days (being a Borland kind of guy) I'd just use a TIniFile and keep the .ini beside my exe. Obviously this is just not the done thing any more. The MS docs tell me that Get/WritePrivateProfileString are for compatibility only, and I doubt I'd get away with writing to Program Files these days. Gosh, I feel old.
I'd like the resulting file to be easily editable (open-in-Notepad sort of thing) and easily findable. This is a small app; I don't want to have to write a settings screen when I can just edit the config file.
So, what is the modern way of doing this?
Often people use XML files for storing preferences, but they are usually overkill (and not actually all that readable for humans).
If your needs would be easily satisfied with an INI file, you may want to use Boost.Program_options with its configuration-file parser backend, which reads INI-like files without going through the deprecated (and slow!) profile APIs, while exposing a nice C++ interface.
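A minimal sketch of the parsing side; the option names and the file name are of my own choosing:

#include <boost/program_options.hpp>
#include <fstream>
#include <iostream>
#include <string>

namespace po = boost::program_options;

int main()
{
    std::string username;
    int volume = 50; // default used when the option is absent

    po::options_description desc("Settings");
    desc.add_options()
        ("user.name",    po::value<std::string>(&username), "user name")
        ("audio.volume", po::value<int>(&volume),           "volume");

    // INI-style file: sections map to dotted option names,
    // e.g. [audio] volume=80 becomes audio.volume.
    std::ifstream file("myapp.cfg");
    po::variables_map vm;
    po::store(po::parse_config_file(file, desc), vm);
    po::notify(vm);

    std::cout << "user: " << username << ", volume: " << volume << "\n";
}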
The key thing to get right is where to write such a configuration file. The right place is usually a subdirectory (named, e.g., after your application) of the user's application data directory; please, please, please don't hardcode its path in your executable. I've seen enough broken apps that fail to understand that the user profile may not be in C:\Documents and Settings\Username.
Instead, you can retrieve the application data path using the SHGetFolderPath function with CSIDL_APPDATA (or SHGetKnownFolderPath with FOLDERID_RoamingAppData if you don't mind losing compatibility with pre-Vista Windows versions, or even by just expanding the %APPDATA% environment variable).
This way, each user will be able to store their preferences, and you won't get any security-related errors when writing them.
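A minimal sketch of the Vista+ variant; "MyApp" is a placeholder for your application's folder name:

#include <windows.h>
#include <shlobj.h>   // SHGetKnownFolderPath
#include <string>

// Resolve %APPDATA% (roaming) and append an app-specific subfolder.
std::wstring getConfigDir()
{
    std::wstring result;
    PWSTR path = nullptr;
    if (SUCCEEDED(SHGetKnownFolderPath(FOLDERID_RoamingAppData, 0,
                                       nullptr, &path)))
    {
        result = std::wstring(path) + L"\\MyApp";
        CoTaskMemFree(path); // the shell allocates; the caller frees
    }
    return result;
}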
This is my opinion (and I think most of the answers you get will be opinion), but it seems that the standard way of doing things these days is to store config files like these under C:\Users\<Username>. Moreover, it is generally good not to clutter that directory itself, but to use a subdirectory for your application's data, such as C:\Users\<Username>\AppData\Roaming\<YourApplicationName>. It might be overkill for a single config file, but it gives you a single place for all of your application data, should you add more.

Stringifying openGL enums

There have been a lot of times when I needed to know what the enums returned by certain OpenGL operations are, so I could print them on the terminal to see what's going on.
There doesn't seem to be any kind of function available for stringifying the enums at the moment, so I'm thinking of going straight to gl.h (actually I'm going to use libglew's header for now), grabbing the #defines, and creating a huge switch table for convenience.
Is there any better way, and how would you deal with having to port things to OpenGL ES?
gluErrorString is the function you're looking for in OpenGL (for error enums, at least), as the GLU library is normally available alongside GL.
I don't have experience in OpenGL ES, but Google turned up GLUes that may help you.
OpenGL has official .spec files that define the API; enum.spec lists all the enum names and their values. You just need to write a simple program to parse the file and produce a lookup mapping.
The file can be found at http://www.opengl.org/registry/ (specifically http://www.opengl.org/registry/api/enum.spec)
Manually processing gl.h would work, but the spec files are updated for new versions, and if a program generates the code for you, you don't have to do anything to pick up new enums. Also, the gl.h file is implementation specific, so it might differ between NVidia, ATI, and different platforms.
The OpenGL Registry contains comprehensive information on the OpenGL API. Notably the gl.xml file in the XML API registry lists all enums, including their names and values. Creating a lookup table from that data should be simple enough.
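Whether you generate it from enum.spec or gl.xml, the emitted lookup can be as simple as a switch; here is a hand-written fragment covering just the error enums to show the shape (the macro name is my own):

#include <GL/gl.h>   // on Windows, include <windows.h> before this

#define GL_CASE(name) case name: return #name;

// Map a GLenum to its symbolic name; a generator would emit every case.
const char* glEnumToString(GLenum e)
{
    switch (e) {
        GL_CASE(GL_NO_ERROR)
        GL_CASE(GL_INVALID_ENUM)
        GL_CASE(GL_INVALID_VALUE)
        GL_CASE(GL_INVALID_OPERATION)
        GL_CASE(GL_OUT_OF_MEMORY)
        default: return "unknown GLenum";
    }
}
#undef GL_CASE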