Differences between GLSL and GLSL ES 2

Two questions really...
Is GLSL ES 2 a totally separate language, or a special version of GLSL?
What are the differences between them, in terms of "standard library" functions, syntax and capabilities?
I am writing shaders for an application targeted at Windows, Mac and iPad, and I would prefer not to have to add more versions of each shader - well, of the simpler shaders anyway.

Is GLSL ES 2 a totally separate language, or a special version of GLSL?
Every version of GLSL is ultimately a "totally separate language"; some aren't even backwards compatible with previous versions. However, the ES variation of GLSL is even more so.
Think of it as a fork of desktop GLSL. A fork of GLSL from six versions ago, mind you.
What are the differences between them, in terms of "standard library" functions, syntax and capabilities?
Between which versions?
The only possible reason you could have to try to share shaders between these platforms is if you're trying to share OpenGL code between them. So you'd have to be limiting yourself to desktop OpenGL 2.1 functionality.
That means you'd be using desktop GLSL version 1.20. This is similar to GLSL ES version 1.00 (GLSL versions were not given the same number as the matching GL version until desktop GL 3.3/4.0 and GLSL 3.30/4.00), but still different. In ES, you have a lot of precision qualifiers that desktop GLSL 1.20 doesn't handle.
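For illustration, here are two lines typical of an ES 1.00 fragment shader that a stock desktop GLSL 1.20 compiler would reject (v_color is an illustrative name):
precision mediump float; // ES requires a default float precision in fragment shaders
varying lowp vec4 v_color; // 'lowp' is an ES precision qualifier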
The best way to handle that is to use a "preamble" shader string. See, glShaderSource takes multiple shader strings, which it slaps together and compiles as one. You can take your main shader string (main and its various code, attribute definitions, uniforms, etc) and stick a "preamble" shader string in front of that. The preamble would be different depending on what you're compiling for.
Your ES GLSL 1.00 preamble would look like this:
#version 100 //Must start with a version specifier.
Your desktop GLSL 1.20 preamble would look like this:
#version 120
#define lowp
#define mediump
#define highp
And maybe a few other judicious #defines. This way, your shader can have those precision qualifiers set, and they'll just be #defined away when using desktop GLSL.
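As a sketch of the compile step (assuming a hypothetical isES flag, a mainSource string loaded elsewhere, and an already-created shader object):
const GLchar *preamble = isES
    ? "#version 100\n"
    : "#version 120\n#define lowp\n#define mediump\n#define highp\n";
const GLchar *sources[2] = { preamble, mainSource };
glShaderSource(shader, 2, sources, NULL); // the two strings are concatenated and compiled as one
glCompileShader(shader);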
This is a lot like platform-neutral coding in C/C++, where you often have some platform-specific header that changes the definition and/or has #defines that you can condition code based on.

In simpler terms: OpenGL ES, i.e. OpenGL for Embedded Systems, was made to run on embedded devices like mobile phones. Mobile phones face the same problems as 90s PCs, namely limited computation power and differences in GPU architecture:
ImgTech PowerVR SGX - TBDR (tile-based deferred rendering)
ImgTech PowerVR MBX - TBDR (fixed function)
ARM Mali - tiled (small tiles)
Qualcomm Adreno - tiled (large tiles)
Adreno 3xx - can switch to traditional rendering via:
glHint(GL_BINNING_CONTROL_HINT_QCOM, GL_RENDER_DIRECT_TO_FRAMEBUFFER_QCOM)
NVIDIA Tegra - traditional

Related

ARB_draw_buffers available but not supported by shader engine

I'm trying to compile a fragment shader using:
#extension GL_ARB_draw_buffers : require
but compilation fails with the following error:
extension 'GL_ARB_draw_buffers' is not supported
However, when I check for the availability of this particular extension, either by calling glGetString(GL_EXTENSIONS) or by using OpenGL Extension Viewer, I get positive results.
The OpenGL version is 3.1,
and the graphics card is an Intel HD Graphics 3000.
What might be the cause of that?
Your driver in this scenario is 3.1; it is not clear what your targeted OpenGL version is.
If you can establish OpenGL 3.0 as the minimum required version, you can write your shader using #version 130 and avoid the extension directive altogether.
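For illustration only, a minimal #version 130 fragment shader with two outputs might look like this (the output names are made up, and would be mapped to draw buffers with glBindFragDataLocation before linking):
#version 130
out vec4 color0; // illustrative; bind to draw buffer 0 with glBindFragDataLocation(prog, 0, "color0")
out vec4 color1; // illustrative; bind to draw buffer 1
void main() {
    color0 = vec4(1.0, 0.0, 0.0, 1.0);
    color1 = vec4(0.0, 1.0, 0.0, 1.0);
}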
The ARB extension mentioned in the question is only there for drivers that cannot implement all of the features required by OpenGL 3.0, but have the necessary hardware support for this one feature.
That was its intended purpose, but there do not appear to be many driver / hardware combinations in the wild that actually have this problem. You probably do not want the headache of writing code that supports them anyway ;)

Is it a big deal switching from OpenGL 3.0 to OpenGL ES 2.0?

If I am currently developing a game for windows using SDL and GLEW (for OpenGL 3.0+) and I later want to port my game to Android, will I have to rewrite the majority of my code to convert from OpenGL 3.0 to OpenGL ES 2.0? Are there any programs that do this for me? Is it a big deal switching from OpenGL to OpenGL ES?
Not at all, it is very easy to convert.
The only differences are in shader variables and constants, and in suffixes such as GL_RGBA8 becoming GL_RGBA8_OES. However, there are limits in OpenGL ES. For instance, you can use only GL_UNSIGNED_BYTE or GL_UNSIGNED_SHORT as the index data type, not GL_UNSIGNED_INT. That means you cannot index more than 65,536 vertices in one draw call. It is not a big deal, although you should refer to the official OpenGL ES manual, https://www.khronos.org/opengles/sdk/docs/man/
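For example (indexCount is a placeholder), only the first of these calls is valid in core ES 2.0; the second needs the GL_OES_element_index_uint extension:
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0); // fine in ES 2.0
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);   // desktop GL, or ES with the extension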
Refer to the link OpenGL ES 2.0 vs OpenGL 3 - Similarities and Differences by coffeeandcode
It really depends on your code
OpenGL ES 2.0 (and 3.0) is mostly a subset of Desktop OpenGL.
The biggest difference is that there is no legacy fixed-function pipeline in ES. What's the fixed-function pipeline? Anything having to do with glVertex, glColor, glNormal, glLight, glPushMatrix, glPopMatrix, glMatrixMode, etc., and, in GLSL, any of the variables that access the fixed-function data, like gl_Vertex, gl_Normal, gl_Color, gl_MultiTexCoord, gl_FogCoord, etc.
If you use any of those features you'll have some work cut out for you. OpenGL ES 2.0 and 3.0 are just plain shaders. No "3d" is provided for you. You're required to write all projection, lighting, texture references, etc. yourself.
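As a rough sketch of what that means, here is about the smallest ES 2.0 shader pair reimplementing the old transform-and-color behavior (all names are illustrative; the application must supply the matrix and attributes itself):
// Vertex shader
attribute vec4 a_position;
attribute vec4 a_color;
uniform mat4 u_mvp; // the matrix the fixed-function glMatrixMode stack used to build
varying vec4 v_color;
void main() {
    gl_Position = u_mvp * a_position;
    v_color = a_color;
}
// Fragment shader
precision mediump float;
varying vec4 v_color;
void main() {
    gl_FragColor = v_color;
}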
If you're already doing that (which most modern games probably do) you might not have too much work. If, on the other hand, you've been using those old deprecated OpenGL features, which in my experience is still very, very common (most tutorials still use that stuff), then you've got a bit of work cut out for you as you try to reproduce those features on your own.
There is an open source library, regal, which I think was started by NVidia. It's supposed to reproduce that stuff. Be aware that the whole fixed-function system was fairly inefficient, which is one of the reasons it was deprecated, but it might be a way to get things working quickly.

Is it viable to replace GLSL with CG?

http://http.developer.nvidia.com/Cg/TessellationControlShader.html
I have some questions regarding CG.
What OpenGL version does CG support? On their site they state
Opengl Functionality Requirements
OpenGL 1.0
Which seems a little bit odd to me. To me this means that I need at least OpenGL 1.0 to use all OpenGL features in CG. So literally all new OpenGL features are missing?
Also, the compute shader seems to be missing from the supported profiles:
GeometryShader, PixelShader, TessellationEvaluationShader,
VertexShader, FragmentProgram, GeometryProgram,
TessellationControlProgram, TessellationEvaluationProgram,
VertexProgram
Is CG now a viable alternative to replace GLSL 4.x? Can I write all shaders in CG that I could write in GLSL 4.3?
Is CG now a viable alternative to replace GLSL 4.x? Can I write all shaders in CG that I could write in GLSL 4.3?
No. While some OpenGL 4.x features, such as tessellation, are exposed as of Cg 3.1, others are not.
Notable missing features in Cg 3.1 (and their OpenGL names) include:
compute shaders
atomic counters
shader-writeable storage blocks (shader storage blocks)
shader-writable textures (image load / store)
runtime shader function selection (shader subroutines)
In general, Cg tends to lag two or three years behind the latest OpenGL release.
Cg has been end-of-lifed by NVidia so it will not be developed going forward:
The Cg Toolkit is a legacy NVIDIA toolkit no longer under active development or support. Cg 3.1 is our last release and while we continue to make it available to developers, we do not recommend using it in new development projects because future hardware features may not be supported.
So I think the best answer would be No.
I'm pretty sure that OpenGL-1.0 is a typo. DirectX-11 is about the function level you get with OpenGL-4.0. Now look what key is right below the 4 on the numpad.
In fact, no single NVidia GPU ever supported only an OpenGL profile as low as OpenGL-1.0. OpenGL-1.0 dates back 20 years.
Is CG now a viable alternative to replace GLSL 4.x?
Well, I personally don't see a reason to use Cg, except if you want to support both OpenGL and DirectX with a common set of shaders. But why would you want cross-API compatibility? If you aim for portability, then OpenGL clearly wins over DirectX.
IMHO the main reason to keep using Cg is, if you have to maintain a legacy product that uses Cg already. Remember that Cg was introduced long before OpenGL had a high level shading language.
Can I write all shaders in CG that I could write in GLSL 4.3?
Yes.

How to test shaders against different shader model versions

I have many OpenGL shaders. We try to use as many different pieces of hardware as possible to evaluate the portability of our product. One of our customers recently ran into some rendering issues: it seems that the target machine only provides shader model version 2.0, while all our development/build/test machines (even the oldest ones) run version 4.0; everything else (OpenGL version, GLSL version, ...) seems identical.
I didn't find a way to downgrade the shader model version, since it's automatically provided by the graphics card driver.
Is there a way to manually install or select the OpenGL/GLSL/shader model version in use, for development/test purposes?
NOTE: the main targets are Windows XP SP2 and Windows 7 (32 & 64 bit), for both ATI and NVIDIA cards
OpenGL does not have the concept of "shader models"; that's a Direct3D thing. It only has versions of GLSL: 1.10, 1.20, etc.
Every OpenGL version matches a specific GLSL version. GL 2.1 supports GLSL 1.20. GL 3.0 supports GLSL 1.30. For GL 3.3 and above, they stopped fooling around and just used the same version number, so GL 3.3 supports GLSL 3.30. So there's an odd version number gap between GLSL 1.50 (maps to GL 3.2) and GLSL 3.30.
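If you want to see which versions a given machine actually reports, both strings can be queried at runtime (a minimal sketch; error handling omitted):
const GLubyte *glVer   = glGetString(GL_VERSION);                  // e.g. "4.0.xxxx"
const GLubyte *glslVer = glGetString(GL_SHADING_LANGUAGE_VERSION); // e.g. "4.00"
printf("GL %s / GLSL %s\n", glVer, glslVer);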
Technically, OpenGL implementations are allowed to refuse to compile shader versions older than the one matching the current GL version. As a practical matter, however, you can pretty much shove any GLSL shader into any OpenGL implementation, as long as the shader's version is less than or equal to the version the OpenGL implementation supports. (This hasn't been tested on MacOSX Lion's implementation of GL 3.2 core.)
There is one exception: core contexts. If you feed a shader that uses functionality removed from core through a core OpenGL context, it will complain.
There is no way to force OpenGL to provide you with a particular OpenGL version. You can ask for one, with wgl/glXCreateContextAttribsARB. But that is allowed to give you any version higher than the one you ask for, so long as that version is backwards compatible with what you asked for.
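A sketch of such a request on GLX (WGL is analogous; display and fbconfig are assumed to exist, and glXCreateContextAttribsARB must be fetched as an extension function pointer at runtime):
static const int attribs[] = {
    GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
    GLX_CONTEXT_MINOR_VERSION_ARB, 2,
    None
};
GLXContext ctx = glXCreateContextAttribsARB(display, fbconfig, NULL, True, attribs);
// The driver may legally return a 3.3 or 4.x context here, as long as it is
// backwards compatible with the 3.2 you asked for.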

OpenGL vs OpenGL ES 2.0 - Can an OpenGL Application Be Easily Ported?

I am working on a gaming framework of sorts, and am a newcomer to OpenGL. Most books seem to not give a terribly clear answer to this question, and I want to develop on my desktop using OpenGL, but execute the code in an OpenGL ES 2.0 environment. My question is twofold then:
If I target my framework for OpenGL on the desktop, will it just run without modification in an OpenGL ES 2.0 environment?
If not, then is there a good emulator out there, PC or Mac; is there a script that I can run that will convert my OpenGL code into OpenGL ES code, or flag things that won't work?
It's been about three years since I was last doing any ES work, so I may be out of date or simply remembering some stuff incorrectly.
No, targeting OpenGL for desktop does not equal targeting OpenGL ES, because ES is a subset. ES does not implement the immediate-mode functions (glBegin()/glEnd(), glVertex*(), ...). Vertex arrays are the main way of sending stuff into the pipeline.
Additionally, it depends on what profile you are targeting: at least in the Lite profile, ES does not need to implement floating-point functions. Instead you get fixed-point functions; think 32-bit integers where the first 16 bits represent the digits before the decimal point and the following 16 bits the digits after it.
In other words, even simple code might be unportable if it uses floats (you'd have to replace calls to gl*f() functions with calls to gl*x() functions).
See how you might solve this problem in Trolltech's example (specifically the qtwidget.cpp file; it's a Qt example, but still...). You'll see they make this call:
q_glClearColor(f2vt(0.1f), f2vt(0.1f), f2vt(0.2f), f2vt(1.0f));
This is meant to replace a call to glClearColor(). Additionally, they use the macro f2vt() - meaning "float to vertex type" - which automagically converts the argument from float to the correct data type.
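One plausible definition of such a macro (a guess at the idea, not the actual SDK source) is a 16.16 fixed-point conversion, i.e. the float scaled by 2^16:
#define f2vt(f) ((GLfixed)((f) * 65536.0f)) // float -> 16.16 fixed point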
While I was developing some small demos three years ago for a company, I had success working with PowerVR's SDK. It's for Visual C++ under Windows; I haven't tried it under Linux (no need, since I was working on a company PC).
A small update to reflect my recent experiences with ES. (June 7th 2011)
Today's platforms probably don't use the Lite profile, so you probably don't have to worry about fixed-point decimals.
When porting your desktop code for mobile (e.g. iOS), quite probably you'll have to do primarily these, and not much else:
replace glBegin()/glEnd() with vertex arrays (see the sketch after this list)
replace some calls to functions such as glClearDepth() with calls such as glClearDepthf()
rewrite your windowing and input system
if targeting OpenGL ES 2.0 to get shader functionality, you'll now have to completely replace the fixed-function pipeline's built-in behavior with shaders - at least the basic ones that reimplement the fixed-function pipeline
Really important: since mobile systems are memory-constrained, you really want to look into using texture compression for your graphics chip; for example, on iOS devices, you'll be uploading PVRTC-compressed data to the chip
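To make the first item in the list concrete, here is a minimal sketch of the conversion (a hard-coded triangle, using ES 1.x-style client-side arrays):
// Immediate mode - not available in ES:
// glBegin(GL_TRIANGLES);
//   glVertex3f( 0.0f,  1.0f, 0.0f);
//   glVertex3f(-1.0f, -1.0f, 0.0f);
//   glVertex3f( 1.0f, -1.0f, 0.0f);
// glEnd();
// Vertex-array equivalent, valid in both desktop GL and OpenGL ES 1.x:
static const GLfloat verts[] = {
     0.0f,  1.0f, 0.0f,
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
};
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, verts);
glDrawArrays(GL_TRIANGLES, 0, 3);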
In OpenGL ES 2.0, which is what new gadgets use, you also have to provide your own vertex and fragment shaders because the old fixed-function pipeline is gone. This means having to do any shading calculations etc. yourself, which can be quite complex, but you can find existing implementations in GLSL tutorials.
Still, as GLES is a subset of desktop OpenGL, it is possible to run the same program on both platforms.
I know of two projects to provide GL translation between desktop and ES:
glshim: substantial fixed-pipeline GL 1.x support, basic ES 2.x support.
Regal: Anything to ES 2.x.
From my understanding, OpenGL ES is a subset of OpenGL. I think if you refrain from using immediate-mode stuff, like glBegin() and glEnd(), you should be all right. I haven't done much with OpenGL in the past couple of months, but when I was working with ES 1.0, as long as I didn't use glBegin/glEnd, all the code I had learned from standard OpenGL worked.
I know the iPhone simulator runs OpenGL ES code. I'm not sure about the Android one.
Here is a Windows emulator.
Option 3) You could use a library like Qt to handle your OpenGL code using their built in wrapper functions. This gives you the option of using one code base (or minimally different code bases) for OpenGL and building for most any platform you want. You wouldn't need to port it for each different platform you wanted to support. Qt can even choose the OpenGL context based on the functions that you use.