My partner and I are working on an OpenGL project that includes a skybox. The skybox works fine on his computer (which has GLSL version 4.5) and everything BUT the skybox works on mine (GLSL 4.0). The compiler complains about a syntax error in this line:
layout(binding=0) uniform samplerCube currTexture;
and the impression I've gotten is that this syntax is not supported by versions of GLSL earlier than 4.2; is this correct? If so, how do I rewrite this line to be compatible with GLSL 4.0? I keep finding either answers that assume the newest version, or much longer pieces of code that I am not sure I fully understand and don't know whether they even do the same thing.
I've gotten is that this syntax is not supported by versions of GLSL earlier than 4.2, is this correct?
Yes. layout(binding=...) was introduced by the GL_ARB_shading_language_420pack extension and has been core since OpenGL 4.2.
If so, how do I rewrite this line to be compatible with GLSL 4.0?
You simply omit the layout(binding=...) qualifier. It is only a shortcut that saves you from querying the uniform location and setting the value via glUniform1i() on the client side. However, uniforms are initialized to 0 anyway, so in your case this will just work as before.
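Concretely, a sketch of the explicit client-side equivalent; `program` and `skyboxTexture` are assumed handles from your own setup, and the snippet needs a live GL context:

```c
/* GLSL 4.0 shader line, with the binding qualifier removed:
     uniform samplerCube currTexture;
   Client side, after linking: tell the sampler which texture unit to use. */
GLint loc = glGetUniformLocation(program, "currTexture");
glUseProgram(program);
glUniform1i(loc, 0);              /* sample from texture unit 0 */

glActiveTexture(GL_TEXTURE0);     /* then bind the cubemap to unit 0 */
glBindTexture(GL_TEXTURE_CUBE_MAP, skyboxTexture);
```

Since uniforms default to 0, the glUniform1i() call is technically redundant when you use unit 0, but being explicit keeps the code working if you ever move the cubemap to another unit.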
Related
I've been upgrading my project in sections. I'm currently still using gl_ModelViewProjectionMatrix in the meantime.
I'm using OpenGL 3.1 Compatibility with GLSL 1.4. This works fine on my computer. When I try to move it to another computer it gives me the following error:
C7533: global variable gl_ModelViewProjectionMatrix is deprecated after version 120
Why would one computer allow deprecated functionality and another not? Is there something I need to move to this other computer? This sounds like a warning, but the objects were either not drawn or not translated.
GLSL shader version and context version are two separate things, by the way. It is true that gl_ModelViewProjectionMatrix is deprecated after GLSL 1.20 (introduced in GL 2.1) because GL 3.0 deprecated (and GL 3.1 without GL_ARB_compatibility removed) the entire fixed-function matrix stack. GLSL version 1.50 introduces profiles to GLSL, which are still independent from the context version, but work the same way -- deprecated things generally become removed in a later core GLSL version.
With all that said, I really do not know how this works fine. If you really and truly have a GL 3.1 core context, there is no command that can set the matrix associated with gl_ModelViewProjectionMatrix (whether your GLSL compiler accepts it or not). glLoadMatrix (...), etc. were all removed in GL 3.1.
However, I suspect you do not have what you would typically consider a "core" context. GL 3.1 is an ugly thing, it pre-dates the introduction of profiles to OpenGL. Although it technically removes almost everything that was deprecated in GL 3.0, if the extension GL_ARB_compatibility is present you effectively have what we now call a "compatibility profile".
Alright, it appears the source of my problem was that this computer had an Nvidia graphics card. To get around the error (which I think should have been a simple warning) I changed my GLSL version directive to "#version 150 compatibility" and my OpenGL context to 3.2.
This convinced the Nvidia GLSL compiler to stop whining and do its job. I will upgrade away from the fixed-function matrix stack when I am ready.
Say I want to test shader code of an older version, which is GLSL 1.2.
The GPU on the machine actually can support GLSL 4.0 (from the hardware specification).
Yes, you should be able to run shaders for a lower version.
Just make sure to declare the GLSL version the code is written against in the very first line of every shader source, e.g. #version 120
The OpenGL context should also use the compatibility profile; the core profile does not contain deprecated functionality.
You need to create an OpenGL context in compatibility mode, which probably is the default.
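As a sketch of the context-creation side, using GLFW (an assumption; any windowing library exposes the same choice):

```c
/* Ask for a 3.2 compatibility-profile context so that GLSL 1.20 built-ins
   and other deprecated functionality remain available. Requesting no
   profile at all usually yields a legacy/compatibility context by default. */
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
GLFWwindow *window = glfwCreateWindow(800, 600, "GLSL 1.20 test", NULL, NULL);
```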
So, I googled a lot of opengl 3.+ tutorials, all incorporating shaders (GLSL 330 core). I however do not have a graphics card supporting these newer GLSL implementations, either I have to update my driver but still I'm not sure if my card is intrinsically able to support it.
Currently my OpenGL version is 3.1, and on Windows with C++ I created a modern context with backwards compatibility. My GLSL version is 1.30 (via the NVIDIA Cg compiler), i.e. #version 130.
The problem is: version 130 is still tied to the legacy OpenGL pipeline, because it contains built-ins like the modelview and projection matrices. How am I supposed to use those when my client app uses core functions (OpenGL 3+)?
This is really confusing, give me concrete examples.
Furthermore, I want my app to be able to run on most OpenGL implementations, so could you tell me where the border is between legacy GLSL and modern GLSL? Is GLSL 330 the modern GLSL, and is OpenGL 3.+ compatible with older GLSL versions?
I would say OpenGL 3.1 is modern OpenGL.
Any hardware that supports OpenGL 3.1 is capable of supporting OpenGL 3.3. Whether the driver actually supports it is another matter. Updating your graphics driver will probably bump you up to OpenGL 3.3.
Just to clear this up OpenGL 3.1 is not legacy OpenGL.
legacy OpenGL would be:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(90.0, 0.0, 1.0, 0.0);
glTranslatef(0.0, 0.0, -5.0);
Which OpenGL 3.1 with a compatibility context supports, but that doesn't mean it should be used. If you are developing for OpenGL 3 capable hardware you should most definitely not be using it. You can disable the legacy functionality by requesting a core context.
If you are using shaders then you have already moved away from the legacy fixed-function pipeline. So GLSL 130 is not legacy :P.
Working on my Linux Laptop with my Intel CPU where the latest stable drivers are only at OpenGL 3.1 (Yes OpenGL 3.3 commits are in place, but I'm waiting for MESA 10 ;) ) I have without much effort been able to get the OpenGL 3.3 Tutorials to run on my machine without touching legacy OpenGL.
One of the wonderful things about OpenGL is that you can extend the functionality with OpenGL extensions. Even if your HW isn't capable of handling OpenGL 4.4 you can still use the extensions that don't require OpenGL 4 HW, with updated drivers!
See https://developer.nvidia.com/opengl-driver and http://developer.amd.com/resources/documentation-articles/opengl-zone/ for info on what features are added to older HW, but if you are uncertain all you have to do is test it on your HW.
And I'll finish off by saying legacy OpenGL also has its place.
In my opinion legacy OpenGL might be easier to learn than modern OpenGL, since you don't need knowledge of shaders and OpenGL buffers to draw your first triangle, but I don't think you should be using it in a modern production application.
If you need to support old hardware you might need to use an older OpenGL version. Even the integrated graphics on modern CPUs supports OpenGL 3, so I would not worry about this too much.
Converting from OpenGL 3.3 to OpenGL 3.0
I tested it on the tutorials from http://www.opengl-tutorial.org/. I cannot put the code up I converted as most of it is as is from the tutorials and I don't have permission to put the code here.
The author talked about OpenGL 3.1, but since he is capped at GLSL 1.30 (OpenGL 3.0) I am converting to 3.0.
First of all, change the context version to OpenGL 3.0 (just change
the minor version to 0 if you're working from the tutorials). Also don't set it to use a core context if you're using OpenGL 3.0, since as far as I know ARB_compatibility is only available from OpenGL 3.1.
Change the shader version to
#version 130
Remove all layout location qualifiers in shaders
layout(location = #) in vec2 #myVarName;
to
in vec2 #myVarName;
Use glBindAttribLocation to bind the in layouts as they were specified (see 3)
e.g
glBindAttribLocation(#myProgramName, #, "#myVarName");
Use glBindFragDataLocation to bind the out layout as they were specified (see 3)
e.g
glBindFragDataLocation(#myProgramName, #, "#myVarName");
glFramebufferTexture doesn't work in OpenGL 3.0. (Used for shadowmapping and deferred rendering etc.) Instead you need to use glFramebufferTexture2D. (It has an extra parameter, but the documentation is sufficient)
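A sketch of that conversion, assuming `depthTexture` is a plain `GL_TEXTURE_2D` depth texture attached to the currently bound FBO:

```c
/* OpenGL 3.2+ version used by the tutorials: */
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthTexture, 0);

/* OpenGL 3.0 replacement -- the extra parameter names the texture target: */
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTexture, 0);
```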
Here is a screenshot of tutorial16 (I thought this one covered the most areas and used it as a test to see if that's all that's needed)
There is a mistake in the source of tutorial16 (at the time of writing). The FBO is set to have no color output, but the fragment shader still outputs a color value, causing a segfault (trying to write to nothing usually does that). Simply changing the depth fragment shader to output nothing fixes it. (It doesn't produce a segfault on more tolerant drivers, but that's not something you should bargain on)
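The fix amounts to a depth-only fragment shader with no outputs, for example:

```glsl
#version 130

// Depth-only pass for an FBO with no color attachment.
// Depth is written implicitly from gl_FragCoord.z,
// so an empty body is all that is needed.
void main() {
}
```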
I'm going to start implementing OpenGL 3 into my application. I currently am using OpenGL 1.1 but I wanted to keep some of it due to problems if I attempt to change the code but I wanted to change some of my drawing code to a faster version of OpenGL. If I do things like bind textures in OpenGL 1.1 can I draw the texture in OpenGL 3?
Mixing OpenGL versions isn't as easy as it used to be. In OpenGL 3.0, a lot of the old features were marked as "deprecated" and were removed in 3.1. However, since OpenGL 3.2, there are two profiles defined: Core and Compatibility. The OpenGL context is created with respect to such a profile. In the compatibility profile,
all the deprecated (and, in core profiles, removed) stuff is still available, and it can be mixed freely. You can even mix a custom vertex shader with fixed-function fragment processing or vice versa.
The problem here is that implementors are not actually required to provide support for the compatibility profile. On Mac OS X, OpenGL 3.x and 4.x are supported in the core profile only.
In your specific example, binding textures will work in all cases, since that functionality exists unmodified in all versions from 1.1 to 4.3 (and is likely to stay that way for the near future). However, most of your drawing calls are likely not available in the newer core profiles.
Omg.. OpenGL 1.1 is from 1997! Do yourself a favor and get rid of the fixed-function pipeline stuff and adapt to OpenGL 4.x. However, you can try
#version 420 core
in your shader.
I was having extreme trouble getting a vertex shader of mine to run under OpenGL 3.3 core on an ATI driver:
#version 150
uniform mat4 graph_matrix, view_matrix, proj_matrix;
uniform bool align_origin;
attribute vec2 graph_position;
attribute vec2 screen_position;
attribute vec2 texcoord0;
attribute vec4 color;
varying vec2 texcoord0_px;
varying vec4 color_px;
void main() {
// Pick the position or the annotation position
vec2 pos = graph_position;
// Transform the coordinates
pos = vec2(graph_matrix * vec4(pos, 0.0, 1.0));
if( align_origin )
pos = floor(pos + vec2(0.5, 0.5)) + vec2(0.5, 0.5);
gl_Position = proj_matrix * view_matrix * vec4(pos + screen_position, 0.0, 1.0);
texcoord0_px = texcoord0;
color_px = color;
}
I used glVertexAttrib4f to specify the color attribute, and turned the attribute array off. According to page 33 of the 3.3 core spec, that should work:
If an array corresponding to a generic attribute required by a vertex shader is not enabled, then the corresponding element is taken from the current generic attribute state (see section 2.7).
But (most of the time, depending on the profile and driver) the shader either didn't run at all or used black if I accessed the disabled color attribute. Replacing it with a constant got it to run.
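For reference, the disabled-array setup described above looks roughly like this (a sketch; `colorLoc` is assumed to come from glGetAttribLocation on the linked program):

```c
/* Per-draw constant color via the current generic attribute state.
   Per page 33 of the 3.3 core spec this should be legal -- but it
   misbehaves on some drivers when colorLoc happens to be 0. */
glDisableVertexAttribArray(colorLoc);
glVertexAttrib4f(colorLoc, 1.0f, 1.0f, 1.0f, 1.0f); /* constant white */
```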
Much searching yielded this page of tips regarding WebGL, which had the following to say:
Always have vertex attrib 0 array enabled. If you draw with vertex attrib 0 array disabled, you will force the browser to do complicated emulation when running on desktop OpenGL (e.g. on Mac OSX). This is because in desktop OpenGL, nothing gets drawn if vertex attrib 0 is not array-enabled. You can use bindAttribLocation() to force a vertex attribute to use location 0, and use enableVertexAttribArray() to make it array-enabled.
Sure enough, not only was the color attribute assigned to index zero, but if I force-bound a different, array-enabled attribute to zero, the code ran and produced the right color.
I can't find any other mention of this rule anywhere, and certainly not on ATI hardware. Does anyone know where this rule comes from? Or is this a bug in the implementation that the Mozilla folks noticed and warned about?
tl;dr: this is a driver bug. Core OpenGL 3.3 should allow you to not use attribute 0, but the compatibility profile does not, and some drivers don't implement that switch correctly. Just make sure to use attribute 0 to avoid any problems.
Actual Content:
Let's have a little history lesson in how the OpenGL specification came to be.
In the most ancient days of OpenGL, there was exactly one way to render: immediate mode (ie: glBegin/glVertex/glColor/glEtc/glEnd). Display lists existed, but they were always defined as simply sending the captured commands again. So while implementations didn't actually make all of those function calls, implementations would still behave as if they did.
In OpenGL 1.1, client-side vertex arrays were added to the specification. Now remember: the specification is a document that specifies behavior, not implementation. Therefore, the ARB simply defined that client-side arrays worked exactly like making immediate mode calls, using the appropriate accesses to the current array pointers. Obviously implementations wouldn't actually do that, but they behaved as if they did.
Buffer-object-based vertex arrays were defined in the same way, though with language slightly complicated by pulling from server storage instead of client storage.
Then something happened: ARB_vertex_program (not ARB_vertex_shader. I'm talking about assembly programs).
See, once you have shaders, you want to start being able to define your own attributes instead of using the built-in ones. And that all made sense. However, there was one problem.
Immediate mode works like this:
glBegin(...);
glTexCoord(...);
glColor(...);
glVertex(...);
glTexCoord(...);
glColor(...);
glVertex(...);
glTexCoord(...);
glColor(...);
glVertex(...);
glEnd();
Every time you call glVertex, this causes all of the current attribute state to be used for a single vertex. All of the other immediate mode functions simply set values into the context; this function actually sends the vertex to OpenGL to be processed. That's very important in immediate mode. And since every vertex must have a position in fixed-function land, it made sense to use this function to decide when a vertex should be processed.
Once you're no longer using OpenGL's fixed-function vertex semantics, you have a problem in immediate mode. Namely, how do you decide when to actually send the vertex?
By convention, they stuck this onto attribute 0. Therefore, all immediate mode rendering must use either attribute 0 or glVertex to send a vertex.
However, because all other rendering is based on the language of immediate mode rendering, all other rendering has the same limitations of immediate mode rendering. Immediate mode requires attribute 0 or glVertex, and therefore so too do client-side arrays and so forth. Even though it doesn't make sense for them to, they need it because of how the specification defines their behavior.
Then OpenGL 3.0 came around. They deprecated immediate mode. Deprecated does not mean removed; the specification still had those functions in it, and all vertex array rendering was still defined in terms of them.
OpenGL 3.1 actually ripped out the old stuff. And that posed a bit of a language problem. After all, every array drawing command was always defined in terms of immediate mode. But once immediate mode no longer exists... how do you define it?
So they had to come up with new language for core OpenGL 3.1+. While doing so, they removed the pointless restriction on needing to use attribute 0.
But the compatibility profile did not.
Therefore, the rule for OpenGL 3.2+ is this: if you have a core OpenGL profile, then you do not have to use attribute 0. If you have a compatibility OpenGL profile, you must use attribute 0 (or glVertex). That's what the specification says.
But that's not what implementations implement.
In general, NVIDIA never cared much for the "must use attribute 0" rule and just does it how you would expect, even in compatibility profiles. Thus violating the letter of the specification. AMD is generally more likely to stick to the specification. However, they forgot to implement the core behavior correctly. So NVIDIA's too permissive on compatibility, and AMD is too restrictive on core.
To work around these driver bugs, simply always use attribute 0.
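The workaround can be as small as one call before linking; the attribute name here is taken from the shader in the question:

```c
/* Pin an attribute whose array is always enabled to location 0.
   Must happen before glLinkProgram -- bindings take effect at link time. */
glBindAttribLocation(program, 0, "graph_position");
glLinkProgram(program);
```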
BTW, if you're wondering, NVIDIA won. In OpenGL 4.3, the compatibility profile uses the same wording for its array rendering commands as core. Thus, you're allowed to not use attribute 0 on both core and compatibility.