GLSL shader programming #version header error: unrecognized preprocessing directive

I just started GLSL shader programming, but I get
unrecognized preprocessing directive
whenever I put the #version directive at the top of my file with the other preprocessor directives, even though I included all OpenGL-related headers and files in my source file.
Shader:
#version 400
in vec3 Color;
out vec4 FragColor;
void main()
{
    FragColor = vec4(Color, 1.0);
}
How can I fix this issue?

The #version directive must occur in a shader before anything else, except for comments and white space.
Even other preprocessor directives before it are illegal (NVIDIA accepts them, but AMD does not!).
If this doesn't help, give us some more information, e.g. the output of glGetString(GL_VERSION) and glGetString(GL_VENDOR).
Referring to your comments, you misunderstand how a shader is compiled. A shader cannot be compiled by a C++ compiler. Put your shader into a text file, load it at runtime, and then call OpenGL's compilation functions.
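A minimal sketch of that workflow (assuming a current OpenGL context and that the GL function pointers are available, e.g. via a loader like GLEW; the file name frag.glsl is just a placeholder):

#include <GL/gl.h>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

// Read the shader text from a file and compile it at runtime.
GLuint compileFragmentShader()
{
    std::ifstream file("frag.glsl");
    std::stringstream buffer;
    buffer << file.rdbuf();              // slurp the whole file
    std::string source = buffer.str();
    const char* src = source.c_str();

    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &src, nullptr);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (ok != GL_TRUE) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
        std::cerr << "shader compile error:\n" << log << '\n';
    }
    return shader;
}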

The #version directive is a GLSL preprocessor command, not a C++ one; GLSL is plain text and should never be fed to the C++ compiler. If you #include the GLSL file in a header or .cpp file, the C++ preprocessor will run over it during compilation and report the error. Therefore, don't #include GLSL files into your application.
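If you do want the GLSL source to live inside your C++ file, don't #include it; embed it as an ordinary string and pass it to glShaderSource. A sketch using a C++11 raw string literal:

// The shader exists in the program only as text; the C++
// preprocessor never sees the #version directive.
const char* fragSource = R"glsl(
#version 400
in vec3 Color;
out vec4 FragColor;
void main()
{
    FragColor = vec4(Color, 1.0);
}
)glsl";

The newline right after the opening quote is harmless, since only comments and white space may precede #version, and a newline counts as white space.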

Related

How to interpret GLSL shader compilation errors

My OpenGL application encounters errors when it compiles my fragment shader program, with output that includes
0:4(13): error: syntax error, unexpected BASIC_TYPE_TOK, expecting LOWP or MEDIUMP or HIGHP
Specifically, how do I interpret the "0:4(13)" prefix of the error message? Does anyone know where this is documented? I could not find a description of the error message format on the Khronos Shader Compilation wiki.

HLSL splitting shader into multiple files

It might be a stupid question, but I have looked for an answer for a long time and can't find one for my DirectX 11 game engine. For example, I have two pixel shaders, A and B, and I don't want to repeat my code for gamma correction. My idea was to move that code to a separate HLSL file and include it, but I don't know how to do that. When I googled it, I only found information about the .fx workflow, which I'm not using. When I try to make a new shader, I always get an error that my shaders need to have a main function. How can I do this?
EDIT:
As VTT suggested, I will provide an example. Let's say I have uber_pixel_shader.hlsl like this:
#include "gamma_utils_shader.hlsl"
...
float4 main(PS_INPUT input) : SV_TARGET
{
    ...
    finalColor = gammaCorrect(finalColor);
    return float4(finalColor, Alpha);
}
There is no built-in gammaCorrect function in HLSL, so I want to include it from another file named gamma_utils_shader.hlsl. That file looks like this:
float3 gammaCorrect(float3 inputColor)
{
    return pow(inputColor, 1.0f / 2.2f);
}
When I try to compile this, the compiler throws the error "Error 3501 'main': entrypoint not found". And it's true, because I don't have a main function in this file, but I don't need one here. How can I solve this in Visual Studio 2017?
Your project settings by default specify that an HLSL file should be compiled with the HLSL compiler. This means that during the build, VS queues all your HLSL files, including your include file, for compilation with the default entry point of main. Obviously this is not desired: an include file can't be compiled on its own.
To solve the issue, right click on your HLSL include file in the 'Solution Explorer', click 'Properties', and change the 'Item Type' field from "HLSL Compiler" to "Does not participate in build". This will prevent Visual Studio from compiling your include file.
In the future, give your HLSL include files the '.hlsli' extension. Visual Studio will still open those files with the HLSL editor, but will automatically identify them as not participating in the build.

LWJGL GLSL shader not compiling

I am using the latest recommended version of LWJGL 3, and while trying to compile shaders I get errors.
Shader:
#version 330
in vec2 position;
void main() {
    gl_Position = vec4(position, 0.0, 1.0);
}
Error:
Vertex shader failed to compile with the following errors:
ERROR: 0:1: error(#307) Invalid profile "in"
ERROR: 0:1: error(#76) Syntax error: unexpected tokens following #version
ERROR: 0:1: error(#364) Invalid: unexpected token in symbol.
ERROR: error(#273) 3 compilation errors. No code generated
I wasn't able to find anything related to this error online. Does anyone here know what's wrong?
It looks as if the end-of-line characters (\n) are missing from the source string, which means the compiler treats the in keyword as a profile qualifier for the #version directive.
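The bug is independent of LWJGL: it shows up in any language whenever shader lines are concatenated without newlines. An illustration in C++ (variable names are mine) of the broken versus fixed source string:

// Broken: the lines run together, so the compiler sees
// "#version 330in vec2 position; ..." as a single line.
const char* broken =
    "#version 330"
    "in vec2 position;"
    "void main() { gl_Position = vec4(position, 0.0, 1.0); }";

// Fixed: each line ends with '\n', so #version ends where it should.
const char* fixedSrc =
    "#version 330\n"
    "in vec2 position;\n"
    "void main() { gl_Position = vec4(position, 0.0, 1.0); }\n";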

How do I use gl.h in a C++ program?

I am trying to understand why this program will compile in C but not C++ and why extern "C" { } doesn't seem to help.
This short program doesn't actually do anything, but shows by example that there is a difference in the compilation when it is C vs C++.
#include <GL/gl.h>
int main() {
    glBegin(GL_POLYGON);
    glUseProgram(0);
    glEnd();
    return 0;
}
When you save that as ex.c and compile it with gcc ex.c -lGL -o ex it compiles and links as expected.
When you save it as ex.cpp and compile it with gcc ex.cpp -lGL -o ex you get a compiler error:
error: ‘glUseProgram’ was not declared in this scope
Note that it does not complain about glBegin, glEnd, or GL_POLYGON. In fact, you can comment out the glUseProgram line and it compiles just fine as a cpp program.
Now, why can't I wrap the program in extern "C" like so:
extern "C" {
#include <GL/gl.h>
int main() {
    glBegin(GL_POLYGON);
    glUseProgram(0);
    glEnd();
    return 0;
}
}
Doing so still leads to the same compiler error. My understanding of extern "C" is incomplete.
Ultimately I need to understand what is wrong because I am trying to write a C++ program that uses many of the GL functions that apparently won't compile in C++.
How do I use gl.h in a C++ program?
To address a few of the comments: I am using Mesa on X11. The glUseProgram declaration is in glext.h, which is included via gl.h. I have already written a C++ program using OpenGL (actually GLES) on a Raspberry Pi; converting it to X11 is proving to be a non-trivial matter.
As I've commented, placing #define GL_GLEXT_PROTOTYPES before the include directive solves your problem.
See the OpenGL ABI for Linux:
Normally, prototypes are present in the header files, but are not visible due to conditional compilation. To define prototypes as well as typedefs, the application must #define GL_GLEXT_PROTOTYPES prior to including gl.h or glx.h. (Note: consistency suggests using GLX_GLXEXT_PROTOTYPES for glxext.h - TBD).
This is usually done by something like GLUT or GLEW in their header files, so if you're working with them it's typically not necessary to define the macro yourself; however, be sure to include their headers before GL/gl.h.
EDIT: The reason glBegin works fine while there's a problem with glUseProgram is that glBegin comes from the very first version of OpenGL, while glUseProgram started as an extension and wasn't introduced into core until OpenGL 2.0.
EDIT:
Let's be more specific: why does it work in C but not in C++? First of all, GL_GLEXT_PROTOTYPES is not defined in either the C or the C++ version. You can test that with some code like this:
#ifdef GL_GLEXT_PROTOTYPES
puts("macro detected");
#endif
The reason, however, is that C allows calling a function that has no declaration at all (it is then implicitly assumed to return int), while C++ does not allow this.
The quickest way to get it working is to define GL_GLEXT_PROTOTYPES before the gl.h include:
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
in order to expose the function prototypes.
The reason it still works in C without the prototypes (function declarations) is that C is not as strict as C++ about requiring a declaration for a called function (it will implicitly assume an int fun() declaration).
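Putting it together, a sketch of the corrected program from the question, which then compiles as C++ (e.g. with g++ ex.cpp -lGL -o ex):

// Must come before the include so that glext.h exposes the
// prototypes, including the one for glUseProgram.
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>

int main() {
    glBegin(GL_POLYGON);
    glUseProgram(0);   // now declared, so C++ accepts the call
    glEnd();
    return 0;
}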
The first thing you need to do is check whether you have the various GL header files. If you have them already, check whether you have the needed native libraries (.dll files) and be sure to load them into your program/project.
Get GLEW (OpenGL Extension Wrangler Library)
For your own sake, get something like GLEW. Why? Because on Windows the GL functions beyond version 1.1 aren't exposed directly under names like glUseProgram; you only get function pointer types such as PFNGLUSEPROGRAMPROC and have to load the functions yourself.
To get those "ugly" things replaced by the more readable names, simply get GLEW.
You could also load all those function pointers by hand, though that will be A LOT OF WORK. So again, it is easier to get something like GLEW.
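A minimal sketch of the GLEW route (initGL is just an illustrative helper; it assumes a window and GL context have already been created, e.g. with GLUT or SDL):

#include <GL/glew.h>   // must be included before GL/gl.h
#include <cstdio>

bool initGL()
{
    // glewInit must be called after a GL context is current.
    GLenum err = glewInit();
    if (err != GLEW_OK) {
        std::fprintf(stderr, "GLEW error: %s\n",
                     (const char*) glewGetErrorString(err));
        return false;
    }
    // From here on, glUseProgram, glCreateShader, etc. can be
    // called directly by name.
    return true;
}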

OpenGL v2.0 Shaders with Dev-C++ and SDL?

I was about to rebuild my library in Dev-C++ under Windows; however, the shader functionality I've added in the meantime is not supported: the compiler could not find the related functions (::glCreateShader(), ::glCreateProgram(), etc.).
Digging around the internet and the Dev-C++ folder, I found that the bundled OpenGL implementation (gl.h) is only v1.1. I found recommendations to download the latest headers from SGI, and I found gl3.h; however, after closer scrutiny I realized that gl.h is not included in my project anyway, and that I should be looking at SDL/SDL_opengl.h.
EDIT: SDL_opengl.h does include gl.h and declares prototypes for the functions in question. So the question is: why am I getting compile-time errors rather than linker errors?
(My library only links against mingw32, libOpenGL32, libSDL, libSDL_Image and libSDL_Mixer, much like under OSX (except for mingw32, of course) where I didn't have any problem.)
How can I use OpenGL v2.0 shaders with Dev-C++ and SDL?
gl.h is only for OpenGL 1.1 (and in some cases up to 1.3 depending on which version of the file you are using and which operating system). For everything else you additionally need glext.h and probably glxext.h (Linux/Unix) or wglext.h (Windows).
All functions from newer versions of OpenGL must be loaded at runtime. So in order to use them you must get the right function address and assign it to a function pointer. The easiest way to do this is by using something like GLEW.
The manual way would be something like this:
PFNGLCREATESHADERPROC glCreateShader = NULL;
glCreateShader = (PFNGLCREATESHADERPROC) wglGetProcAddress("glCreateShader");
or for Linux:
PFNGLCREATESHADERPROC glCreateShader = NULL;
glCreateShader = (PFNGLCREATESHADERPROC) glXGetProcAddress((GLubyte*) "glCreateShader");
If you define GL_GLEXT_PROTOTYPES before including glext.h you can omit the first line.
EDIT: SDL_opengl.h looks like it contains a copy of glext.h (not an up-to-date one, though), so if you use that, the above should still be valid. If you want to use a separate glext.h, you must define NO_SDL_GLEXT before including SDL_opengl.h. Also, the function prototypes aren't available unless GL_GLEXT_PROTOTYPES is defined or you write them yourself.
EDIT2: Apparently SDL has its own GetProcAddress function called SDL_GL_GetProcAddress.
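A sketch using SDL's loader instead of the platform-specific wglGetProcAddress / glXGetProcAddress (loadGLFunctions is an illustrative helper; it assumes the GL context has already been created, e.g. via SDL_SetVideoMode with the SDL_OPENGL flag):

#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>   // provides PFNGLCREATESHADERPROC

PFNGLCREATESHADERPROC pglCreateShader = NULL;

void loadGLFunctions()
{
    // Works unchanged on both Windows and Linux once a context exists.
    pglCreateShader =
        (PFNGLCREATESHADERPROC) SDL_GL_GetProcAddress("glCreateShader");
}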
Can you load the addresses of the functions at runtime?
typedef GLuint (APIENTRYP PFNGLCREATESHADERPROC) (GLenum shaderType);
PFNGLCREATESHADERPROC glCreateShader = NULL;
glCreateShader = (PFNGLCREATESHADERPROC) wglGetProcAddress("glCreateShader");

typedef GLuint (APIENTRYP PFNGLCREATEPROGRAMPROC) (void);
PFNGLCREATEPROGRAMPROC glCreateProgram = NULL;
glCreateProgram = (PFNGLCREATEPROGRAMPROC) wglGetProcAddress("glCreateProgram");