Minimum number of texture units required by OpenGL

Quick question: what is the minimum number of textures that can be bound for the fragment shader that an OpenGL implementation is required to support?
Note:
I would like to know this for OpenGL 1.5, for OpenGL 2.0, and OpenGL 2.1

OpenGL 1.x and 2.x require at least 2 texture units; OpenGL 3.x and 4.x require at least 16. Most current GPUs expose 32.
You can find those values fairly easily in the OpenGL specification itself, in the "Implementation Dependent Values" table. The fixed-function limit is called MAX_TEXTURE_UNITS (OpenGL 1.3 through 2.1); the limit that applies to fragment shaders is MAX_TEXTURE_IMAGE_UNITS, introduced with the programmable pipeline in 2.0 and raised to 16 in 3.x and 4.x.
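If you'd rather not dig through the spec tables, you can also query both limits at runtime with glGetIntegerv. A minimal sketch, assuming a current context and a GL loader (e.g. GLEW or glad) already set up; the function name is just an example:
#include <cstdio>

void printTextureLimits()
{
    // Texture image units available to the fragment shader (OpenGL 2.0+).
    GLint fragmentUnits = 0;
    glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &fragmentUnits);
    std::printf("fragment texture image units: %d\n", fragmentUnits);

    // Classic fixed-function multitexture limit (OpenGL 1.3-2.1;
    // querying this enum in a core profile generates an error).
    GLint fixedFunctionUnits = 0;
    glGetIntegerv(GL_MAX_TEXTURE_UNITS, &fixedFunctionUnits);
    std::printf("fixed-function texture units: %d\n", fixedFunctionUnits);
}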

Is GLSL 1.30 always supported in OpenGL 3.x / 4.x

Edit 2: It's been pointed out that the question I'm really asking is: "Is the upward compatibility (regarding core profiles) stated in the spec transitive or not?" I would indeed be happy to get an answer to this question. And if it is indeed transitive, how does one explain the apparent contradiction?
Edit: This is not a duplicate of "Which GLSL versions to use for maximum compatibility".
It's not a duplicate because that question was about which versions of GLSL to use in general, while this question is about the very specific case of contradictions in the spec about when GLSL 1.30 is supported. I've changed the title as well to clarify.
It seems like all versions of OpenGL from 3.1 onwards are backwards compatible. From the specs:
The OpenGL 4.2 compatibility and core profiles are upward compatible with the OpenGL 4.1 compatibility and core profiles, respectively.
The OpenGL 4.1 compatibility and core profiles are upward compatible with the OpenGL 4.0 compatibility and core profiles, respectively.
The OpenGL 4.0 compatibility and core profiles are upward compatible with the OpenGL 3.3 compatibility and core profiles, respectively.
The OpenGL 3.3 compatibility and core profiles are upward compatible with the OpenGL 3.2 compatibility and core profiles, respectively.
The OpenGL 3.2 core profile is upward compatible with OpenGL 3.1, but not with earlier versions.
Great, so I can write code against OpenGL 3.1 and it will work on an OpenGL 4.2 implementation.
But the 4.2 core spec also says:
The core profile of OpenGL 4.2 is also guaranteed to support all previous versions of the OpenGL Shading Language back to version 1.40.
And the 3.1 spec says:
OpenGL 3.1 implementations are guaranteed to support at least version 1.30 of the shading language.
So how can OpenGL 4.2 claim to be upward compatible with OpenGL 3.1 when it might not support GLSL 1.30, which was supported by OpenGL 3.1?
I've been puzzled by the inconsistency I found in the specifications and tried to do some research. My personal conclusion is that the OpenGL specifications do indeed use ambiguous/bad wording here, introduced with OpenGL 3.2 (the fragment quoted from the OpenGL 4.2 core spec first appeared in OpenGL 3.2 core and has remained unchanged since).
OpenGL 3.2 requires implementations to support only GLSL 1.40 (OpenGL 3.1) and GLSL 1.50 (OpenGL 3.2), while at the same time stating that it is compatible with OpenGL 3.1 apart from the removed deprecated functionality:
OpenGL 3.2 implementations are guaranteed to support versions 1.40 and 1.50 of the OpenGL Shading Language. All references to sections of that specification refer to version 1.50.
...
The core profile of OpenGL 4.6 is also guaranteed to support all previous versions of the OpenGL Shading Language back to version 1.40
...
The OpenGL 3.2 core profile is upward compatible with OpenGL 3.1, but not with earlier versions.
At the same time, OpenGL 3.0 and 3.1 explicitly marked GLSL 1.10 (OpenGL 2.0) and GLSL 1.20 (OpenGL 2.1) as deprecated (and removed in OpenGL 3.1):
OpenGL 3.1 implementations are guaranteed to support at least version 1.30 of the shading language.
...
The features deprecated in OpenGL 3.0: ... OpenGL Shading Language versions 1.10 and 1.20. These versions of the shading language depend on many API features that have also been deprecated.
But the specs don't mention GLSL 1.30 (OpenGL 3.0) in the deprecation context, so there is no obvious reason to consider GLSL 1.30 deprecated (the dedicated GLSL 1.30 specification itself spells out its own deprecated functionality).
One may also note that OpenGL 3.1 doesn't require GLSL 1.40 to be supported (although that version was introduced alongside this very spec)! So declaring OpenGL 3.2 "upward compatible" with OpenGL 3.1 while dropping GLSL 1.30 support (the only version the OpenGL 3.1 spec makes mandatory) looks confusing, to put it mildly.
My guess is that in the "upward compatible" section the spec authors implicitly referred to non-GLSL functionality, and intended the dedicated "implementations are guaranteed ... shading language" sentence to state the GLSL requirements, so that the latter takes precedence over the former.
I've checked the OpenGL implementations at hand to see how they react to different GLSL versions (110, 120, 130, 140, 150) within a core profile, with GLSL compilation warnings logged and a debug context enabled:
NVIDIA (456.71), Intel and Mesa (20.2.6)
GLSL 1.10+ - not a word of complaint for any known GLSL version;
AMD
GLSL 1.10/GLSL 1.20 - shader compiler generates a warning
WARNING: warning(#271) Explicit version number 120 not supported by GL3 forward compatible context;
GLSL 1.30+ - no warnings;
Apple (Metal 71.0.7)
GLSL 1.10/1.20/1.30 - generates a shader compilation error
ERROR: 0:1: '' : version '130' is not supported
GLSL 1.40+ - no issues.
So it seems that OpenGL vendors read this portion of the specification inconsistently.
NVIDIA just doesn't care (their GLSL-to-Cg translator is known to skip most GLSL validation), and neither do Intel (whose OpenGL driver was never particularly good) nor Mesa. But this is NOT a violation of the OpenGL specs, which say that implementations may support other GLSL versions in addition to the mandatory ones.
AMD is known to have a good GLSL validator, and apparently their engineers read the portion of the spec stating that the OpenGL 3.2 core profile should support the same as OpenGL 3.1 minus the deprecated functionality. But they enforce it softly, without compilation errors: only the shader compilation log suggests that GLSL 1.20 should not be used (while GLSL 1.30 is fine!).
Apple is known to be paranoid about following the OpenGL specs and usually generates errors on every deviation from them. And here we see that Apple engineers read the other portion of the OpenGL 3.2 specification, the one listing the supported GLSL versions as 1.40 and 1.50, so their implementation does not accept GLSL 1.30.
It should be noted that most vendors implemented OpenGL 3.2 after implementing OpenGL 3.0 and 3.1, so it is really not a big deal for them to keep supporting all GLSL versions from 1.10 onward.
In contrast, Apple never had OpenGL 3.0/3.1 and implemented the OpenGL 3.2 core profile straight away (without any interest in implementing compatibility profiles). That's why, I guess, they preferred the reading of the spec under which GLSL 1.30 (OpenGL 3.0) is not supported.
The general wording in the OpenGL specifications states that the range of supported GLSL versions may be wider and suggests querying GL_SHADING_LANGUAGE_VERSION, but this query cannot return the minimal supported GLSL version or a complete list of supported versions, and no other API is provided for that purpose. So one can only "probe" the shader compiler to see whether a particular version outside the mandatory list is supported.
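For what it's worth, such probing can be as simple as compiling an empty shader with an explicit #version and checking the compile status. A rough sketch, assuming a current context and a GL loader header already included; the helper name is made up, and as the results above show, drivers differ on whether an unsupported version produces a warning or a hard error:
#include <cstdio>
#include <string>

bool isGlslVersionAccepted(int version)          // e.g. 130
{
    std::string src = "#version " + std::to_string(version) + "\nvoid main() { }\n";
    const char* text = src.c_str();

    GLuint shader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(shader, 1, &text, nullptr);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);

    char log[1024] = { 0 };
    glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
    if (log[0] != '\0')
        std::printf("GLSL %d: %s\n", version, log); // warnings may appear even on success

    glDeleteShader(shader);
    return ok == GL_TRUE;
}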
On the other hand, GLSL versioning independent of OpenGL looks redundant, since newer GLSL revisions are also compatible with earlier ones (save deprecated functionality), so there is not much reason not to request GLSL 4.20 when using OpenGL 4.2, unless you are specifically trying to verify GLSL compatibility with older OpenGL versions.
Practically speaking, OpenGL 3.2 and its GLSL 1.50 look like a more reasonable baseline for development, as I don't know of any up-to-date implementation that supports OpenGL 3.1 but not the OpenGL 3.2 core profile.

OpenGL 3.+ GLSL compatibility mess?

So, I've googled a lot of OpenGL 3.+ tutorials, all incorporating shaders (GLSL 330 core). However, I do not have a graphics card supporting these newer GLSL versions; maybe I just have to update my driver, but I'm still not sure whether my card is intrinsically able to support them.
Currently my OpenGL version is 3.1, and on Windows, in C++, I created a modern context with backwards compatibility. My GLSL version is 1.30 via the NVIDIA Cg compiler (full definition), i.e. GLSL 1.30 -> #version 130.
The problem is: version 130 is still tied to the legacy OpenGL pipeline, because it contains things like the view matrix, the model matrix, etc. So how am I supposed to use those when I am using core functions in my client app (OpenGL 3+)?
This is really confusing, give me concrete examples.
Furthermore, I want my app to be able to run on most OpenGL implementations, so could you tell me where the border lies between legacy GLSL and modern GLSL? Is GLSL 330 the modern GLSL, and is OpenGL 3.+ compatible with older GLSL versions?
I would say OpenGL 3.1 is modern OpenGL.
Any hardware that supports OpenGL 3.1 is capable of supporting OpenGL 3.3; whether the driver actually exposes it is another matter. Updating your graphics driver will probably bump you up to OpenGL 3.3.
Just to clear this up: OpenGL 3.1 is not legacy OpenGL.
Legacy OpenGL would be:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(90.0, 0.0, 1.0, 0.0);
glTranslatef(0.0, 0.0, -5.0);
OpenGL 3.1 with a compatibility context still supports all of this, but that doesn't mean it should be used. If you are developing for OpenGL 3 capable hardware you should most definitely not be using it. You can disable the legacy functionality by requesting a core context.
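For example, with GLFW (any windowing library with equivalent hints works the same way) requesting a core context is a matter of a few hints; a minimal sketch:
#include <GLFW/glfw3.h>

GLFWwindow* createCoreProfileWindow()
{
    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    // Core profile: the legacy fixed-function entry points are simply not there.
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    return glfwCreateWindow(800, 600, "core profile", nullptr, nullptr);
}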
If you are using shaders then you have already moved away from the legacy fixed-function pipeline. So GLSL 130 is not legacy :P
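To make that concrete for the question above: in #version 130 you simply stop using the legacy built-in matrices and declare your own uniforms, which your application uploads itself. A small sketch with made-up names, assuming a current context and a GL loader:
// Vertex shader source: a user-declared uniform replaces gl_ModelViewProjectionMatrix.
const char* vsSource130 =
    "#version 130\n"
    "uniform mat4 modelViewProjection;\n"
    "in vec3 position;\n"
    "void main() { gl_Position = modelViewProjection * vec4(position, 1.0); }\n";

void uploadMatrix(GLuint program, const float* mvp)   // 16 floats, column-major
{
    // The program must be the one currently in use (glUseProgram).
    GLint location = glGetUniformLocation(program, "modelViewProjection");
    glUniformMatrix4fv(location, 1, GL_FALSE, mvp);
}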
Working on my Linux laptop with my Intel CPU, where the latest stable drivers are only at OpenGL 3.1 (yes, the OpenGL 3.3 commits are in place, but I'm waiting for Mesa 10 ;) ), I have been able, without much effort, to get the OpenGL 3.3 tutorials to run on my machine without touching legacy OpenGL.
One of the wonderful things about OpenGL is that you can extend its functionality with OpenGL extensions. Even if your HW isn't capable of handling OpenGL 4.4, updated drivers can still give you the extensions that don't require OpenGL 4 hardware!
See https://developer.nvidia.com/opengl-driver and http://developer.amd.com/resources/documentation-articles/opengl-zone/ for info on what features are added to older HW, but if you are uncertain all you have to do is test it on your HW.
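Testing for a particular extension at runtime is straightforward on a 3.0+ context. A minimal sketch, assuming a current context and a GL loader; the extension name in the comment is only an example:
#include <cstring>

bool hasExtension(const char* name)   // e.g. "GL_ARB_texture_storage"
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i)
    {
        const char* ext = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
        if (ext && std::strcmp(ext, name) == 0)
            return true;
    }
    return false;
}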
And I'll finish off by saying that legacy OpenGL also has its place.
In my opinion legacy OpenGL might be easier to learn than modern OpenGL, since you don't need knowledge of shaders and OpenGL buffers to draw your first triangle, but I don't think you should be using it in a modern production application.
If you need support for old hardware you might need to use an older OpenGL version. But even the integrated graphics in modern CPUs supports OpenGL 3, so I would not worry about this too much.
Converting from OpenGL 3.3 to OpenGL 3.0
I tested this on the tutorials from http://www.opengl-tutorial.org/. I cannot post the converted code, as most of it is taken as-is from the tutorials and I don't have permission to reproduce it here.
The question's author talked about OpenGL 3.1, but since he is capped at GLSL 130 (OpenGL 3.0), I am converting to 3.0.
1. First of all, change the context version to OpenGL 3.0 (just change the minor version to 0 if you're working from the tutorials). Also, don't request a core context if you're using OpenGL 3.0, since as far as I know ARB_compatibility is only available from OpenGL 3.1.
2. Change the shader version to
#version 130
3. Remove all layout location qualifiers in the shaders, i.e. change
layout(location = #) in vec2 #myVarName;
to
in vec2 #myVarName;
4. Use glBindAttribLocation to bind the in variables to the locations they had (see 3), e.g.
glBindAttribLocation(#myProgramName, #, "#myVarName");
5. Use glBindFragDataLocation to bind the out variables to the locations they had (see 3), e.g.
glBindFragDataLocation(#myProgramName, #, "#myVarName");
6. glFramebufferTexture doesn't exist in OpenGL 3.0 (it is used for shadow mapping, deferred rendering, etc.). Instead you need to use glFramebufferTexture2D; it has an extra parameter, but the documentation is sufficient.
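Putting steps 3 to 6 together, here is a condensed sketch of what the conversion looks like on the application side; the shader handles, attribute names, and the depth texture are hypothetical placeholders, and a current context plus GL loader is assumed:
GLuint linkConvertedProgram(GLuint vertexShader, GLuint fragmentShader)
{
    GLuint program = glCreateProgram();
    glAttachShader(program, vertexShader);
    glAttachShader(program, fragmentShader);

    // Replaces: layout(location = 0) in vec3 vertexPosition;
    glBindAttribLocation(program, 0, "vertexPosition");
    // Replaces: layout(location = 0) out vec3 color;
    glBindFragDataLocation(program, 0, "color");

    glLinkProgram(program);
    return program;
}

void attachDepthTexture(GLuint depthTexture)
{
    // glFramebufferTexture is unavailable in GL 3.0; the 2D variant takes an
    // explicit texture target as its extra parameter.
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, depthTexture, 0);
}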
Here is a screenshot of tutorial 16 (I thought this one covered the most areas and used it as a test to see if that's all that's needed).
There is a mistake in the source of tutorial 16 (at the time of writing). The FBO is set up with no color output, but the fragment shader still outputs a color value, causing a segfault (trying to write to nothing usually does that). Simply changing the depth fragment shader to output nothing fixes it. (It doesn't produce a segfault on more tolerant drivers, but that's not something you should count on.)

Can I mix OpenGL versions?

I'm going to start implementing OpenGL 3 in my application. I'm currently using OpenGL 1.1 and I want to keep some of that code, because changing it would cause problems, but I'd like to move some of my drawing code to a faster version of OpenGL. If I do things like bind textures with OpenGL 1.1 calls, can I then draw those textures with OpenGL 3?
Mixing OpenGL versions isn't as easy as it used to be. In OpenGL 3.0, a lot of the old features were marked as "deprecated" and were removed in 3.1. However, since OpenGL 3.2, there are two profiles defined: Core and Compatibility. The OpenGL context is created with respect to such a profile. In the compatibility profile, all the deprecated (and, in core profiles, removed) stuff is still available, and it can be mixed freely. You can even mix a custom vertex shader with the fixed-function fragment processing, or vice versa.
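As an illustration of that last point, here is a rough sketch, assuming a compatibility-profile context and a GL loader: a program with only a vertex shader attached, so the fragment stage stays fixed-function. The shader text and function name are illustrative only:
GLuint createVertexOnlyProgram()
{
    const char* vsSource =
        "#version 120\n"
        "void main() {\n"
        "    gl_FrontColor = gl_Color;     // feed the fixed-function color path\n"
        "    gl_Position  = ftransform();  // use the legacy matrix stack\n"
        "}\n";

    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vsSource, nullptr);
    glCompileShader(vs);

    GLuint program = glCreateProgram();
    glAttachShader(program, vs);   // deliberately no fragment shader attached
    glLinkProgram(program);
    return program;                // fragment processing remains fixed-function
}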
The problem here is that implementors are not required to actually provide support for the compatibility profile. On Mac OS X, OpenGL 3.x and 4.x are supported in the core profile only.
In your specific example, binding textures will work in all cases, since that functionality exists unmodified in all versions from 1.1 to 4.3 (and is likely to stay that way for the foreseeable future). However, most of your drawing calls are likely not to be available in the newer core profiles.
Omg... OpenGL 1.1 is from 1997! Do yourself a favor and get rid of the fixed-function pipeline stuff and move to OpenGL 4.x. For a start, you can try
#version 420 core
in your shader.

What version of OpenGL is closest to OpenGL ES2?

I've been trying to work with OpenGL ES 2 on Android for some time now, but I'm finding the lack of experience with OpenGL itself to be an issue, since I barely understand what all the GLES20 methods actually do. I've decided to try to learn actual OpenGL, but a little bit of reading has informed me that each version of OpenGL is drastically different from its predecessor. Wikipedia isn't very clear about which version OpenGL ES 2 most closely resembles.
So, my question is, which version of OpenGL should I learn for the purpose of better understanding OpenGL ES2?
According to the book OpenGL ES 2.0 Programming Guide:
The OpenGL ES 1.0 and 1.1 specifications implement a fixed function pipeline and are derived from the OpenGL 1.3 and 1.5 specifications, respectively. The OpenGL ES 2.0 specification implements a programmable graphics pipeline and is derived from the OpenGL 2.0 specification.
OpenGL ES 2.0's closest relative is OpenGL 2.0. Khronos provides a difference specification, which enumerates what desktop OpenGL functionality was removed to create OpenGL ES 2.0. The shading language for OpenGL ES 2.0 (GLSL ES 1.0) is derived from GLSL 1.20.
OpenGL ES 2.0 is almost a one-to-one copy of WebGL.
The differences are practically only in the setup of the environment, which on Android happens with EGL and in WebGL with calls to DOM methods (setting up a canvas).
A comparison to classic "OpenGL" is next to impossible, as that meant an almost fixed and hidden rendering pipeline controlled by a stack of matrices and attributes. That model is obsolete in ES; instead one has the "opportunity" to control almost every aspect of the rendering pipeline.

Which version of OpenGL supports rectangular textures (w/o extensions)?

Rectangular textures used to be supported through extensions and, as of some version of OpenGL, are now directly supported, i.e. I can create textures with the same basic OpenGL calls, just supplying non-power-of-two sizes.
I've googled and can't seem to find a definitive changelog for the OpenGL spec. I need this information to dynamically detect support in the application and to inform users.
Simply replying with a number like 1.5 or 3.0 isn't enough. I need a reference.
According to the ARB_texture_non_power_of_two documentation, this extension was added as part of OpenGL 1.4.
However, it was not promoted into the core of OpenGL until OpenGL 2.0, so any vendor implementing OpenGL 2.0 should support it fully.
According to the spec (page 341), NPOT textures were promoted to core in OpenGL 2.0.
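For the dynamic detection asked about above, a rough sketch along these lines may help; it assumes a current context, treats NPOT support as core on GL 2.0 and newer, and falls back to the extension string on older versions (note that glGetString(GL_EXTENSIONS) is only valid on pre-3.x/compatibility contexts):
#include <cstdio>
#include <cstring>

bool supportsNpotTextures()
{
    // GL_VERSION starts with "<major>.<minor>", e.g. "2.1.2 NVIDIA ...".
    const char* version = reinterpret_cast<const char*>(glGetString(GL_VERSION));
    int major = 0, minor = 0;
    if (version && std::sscanf(version, "%d.%d", &major, &minor) == 2 && major >= 2)
        return true;   // promoted to core in OpenGL 2.0

    // Pre-2.0: fall back to the extension string.
    const char* extensions = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return extensions && std::strstr(extensions, "GL_ARB_texture_non_power_of_two") != nullptr;
}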