I'd like to run some tests to make sure some scenes I'm making will render on devices that only support WebGL 1.0. Is there a way to run a browser such that the browser only uses WebGL version 1.0 when it runs a WebGL scene?
I have already added #version 100 to the top of my shaders. Is that all that's required, or is more needed to ensure one is testing a scene using WebGL 1.0? Any pointers others can offer would be greatly appreciated!
Adding #version 100 at the top of your shaders does nothing by itself; GLSL ES 1.00 is the only shader version WebGL 1 accepts anyway, with or without the directive.
If you create the context with canvas.getContext('webgl'), you get the baseline WebGL 1 feature set.
Under WebGL 1, shaders that declare #version 300 es will fail to compile, and many features are locked behind extensions.
You should look up which extensions are supported by your target browser and only enable those extensions during your tests.
But I would still strongly recommend testing on your target browsers, because the WebGL spec is fairly relaxed about the minimum feature set an implementation must support.
I often see that some combinations of parameters work on some browsers and not on others, even if those browsers support WebGL 2.
Are you using some kind of library? If you're doing raw WebGL, then asking for "webgl" always gives you WebGL 1. Asking for "webgl2" gives you WebGL 2 if the device supports it, but it does not automatically fall back to WebGL 1, since the two APIs are similar but not compatible.
As for shaders, WebGL 1 only supports GLSL ES 1.00, while WebGL 2 supports both GLSL ES 1.00 and GLSL ES 3.00.
That's it.
If you want an idea of what features are supported across devices, see webglstats.com. The minimum features are defined in the spec, but very few devices support only that minimum, and any device that does is likely to have serious performance issues anyway.
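For raw WebGL in JavaScript, the getContext call above is the entire test setup. If your scene code happens to be C or C++ compiled with Emscripten (purely an assumption about your toolchain), the same WebGL 1-only constraint can be expressed through Emscripten's HTML5 API. A minimal sketch:

    /* Minimal sketch: force a WebGL 1.0 context from C compiled with
     * Emscripten. "#canvas" is the conventional default canvas selector
     * and an assumption about the page. */
    #include <emscripten/html5.h>
    #include <stdio.h>

    int main(void) {
        EmscriptenWebGLContextAttributes attrs;
        emscripten_webgl_init_context_attributes(&attrs);
        attrs.majorVersion = 1; /* 1 requests 'webgl'; 2 would request 'webgl2' */
        attrs.minorVersion = 0; /* there is no automatic fallback between the two */

        EMSCRIPTEN_WEBGL_CONTEXT_HANDLE ctx =
            emscripten_webgl_create_context("#canvas", &attrs);
        if (ctx <= 0) {
            fprintf(stderr, "could not create a WebGL 1.0 context\n");
            return 1;
        }
        emscripten_webgl_make_context_current(ctx);
        return 0;
    }

Either way, requesting version 1 explicitly and treating a failed creation as an error is exactly the test you want: it guarantees your scene never silently runs against WebGL 2 features.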
I am currently using an NVIDIA GeForce GTX 780 (from Gigabyte, if that matters; I don't know how much this could be affected by the onboard BIOS). I actually have two of them installed, but due to Vulkan's lack of SLI support I only use one device at a time in my code. In the NVIDIA control center, however, SLI is activated. I use the official driver version 375.63. That GPU is fully capable of geometry shaders, of course.
I am using a geometry shader with the Vulkan API, and it works fine and does everything I expect it to do. However, I get the following validation layer report: #[SC]: Shader requires VkPhysicalDeviceFeatures::geometryShader but is not enabled on the device.
Is this a bug? Does anyone have similar issues?
PS: http://vulkan.gpuinfo.org/displayreport.php?id=777#features says that support for "Geometry Shader" is "true", as expected. I am using the Vulkan 1.0.30.0 SDK.
Vulkan features work differently from OpenGL extensions. In OpenGL, if an extension is supported, then it's always active. In Vulkan, the fact that a feature is available is not sufficient. When you create a VkDevice, you must explicitly ask for all features you intend to use.
If you didn't ask for the Geometry Shader feature, then you can't use GS's, even if the VkPhysicalDevice advertises support for it.
So the sequence of steps should be to check whether the VkPhysicalDevice supports the features you want to use, then supply those features in VkDeviceCreateInfo::pEnabledFeatures when you call vkCreateDevice.
Since Vulkan doesn't do validation checking on most of its inputs, the actual driver will likely assume you enabled the feature and just do what it normally would. But it is not required to do so; using a feature which has not been enabled is undefined behavior. So the validation layer is right to stop you.
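To make those steps concrete, here is a minimal sketch of enabling the geometry shader feature at device creation. Error handling is omitted and queue family 0 is assumed to be the one you want; both are simplifications, not a complete implementation.

    #include <vulkan/vulkan.h>

    /* Sketch: enable only the features you verified, then point
     * pEnabledFeatures at them when creating the device. */
    VkDevice create_device_with_gs(VkPhysicalDevice gpu)
    {
        VkPhysicalDeviceFeatures supported;
        vkGetPhysicalDeviceFeatures(gpu, &supported);

        VkPhysicalDeviceFeatures enabled = {0}; /* everything off by default */
        if (supported.geometryShader)
            enabled.geometryShader = VK_TRUE;   /* the feature from the warning */

        float priority = 1.0f;
        VkDeviceQueueCreateInfo queueInfo = {0};
        queueInfo.sType            = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
        queueInfo.queueFamilyIndex = 0;         /* assumed graphics-capable family */
        queueInfo.queueCount       = 1;
        queueInfo.pQueuePriorities = &priority;

        VkDeviceCreateInfo info = {0};
        info.sType                = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
        info.queueCreateInfoCount = 1;
        info.pQueueCreateInfos    = &queueInfo;
        info.pEnabledFeatures     = &enabled;   /* the step the warning is about */

        VkDevice device = VK_NULL_HANDLE;
        vkCreateDevice(gpu, &info, NULL, &device);
        return device;
    }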
We're working with OpenGL 4.3. However, we're afraid that we're using features that work on our graphics card but are not in the "minimal" required spec for OpenGL 4.3.
Is there any possibility to emulate the minimum behaviour? For example, to make the graphics card reject any non-standard texture formats etc.? (Could also be in software, speed doesn't matter for testing compatibility...)
Update
In the best case, an implementation that is minimal in every respect would be perfect, so it is guaranteed the application works on all graphics cards supporting OpenGL 4.3. This emulation mode should:
Reject all features/extensions deprecated in 4.3
Reject all features/extensions newer than 4.3
Only support required formats, no optional formats (for example for textures and renderbuffers)
Only support the minimum required precision for calculations
Have the minimum values for the limits that can be queried via glGetInteger* (for example a MAX_TEXTURE_IMAGE_UNITS of 16)
There is a reference GLSL compiler (Khronos's glslang) that will solve half of this problem. But as for the rest... AMD, NV, and Intel all have their own compliance issues and policies regarding how loosely they believe in following the specification.
I have seen each of these vendors implicitly enable extensions from versions of OpenGL they should not have (without so much as a warning in the compiler log), and that is just the GLSL side of things. Mesa could likely serve as the greatest common factor for feature testing, but only for OpenGL versions much older than 4.3: Mesa is effectively a minimalist implementation, and usually a few years behind the big hardware vendors.
Ideally, GL's debug output extension, which is conveniently a core feature in GL 4.3, would issue API warnings if you use a feature your requested context version does not support. However, each vendor supports it to a different degree; AMD is generally the best, and NVIDIA may even require you to enable "OpenGL Expert" mode before it spits out any genuinely useful information.
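Hooking that debug output up takes only a few calls once you have a 4.3 context. A minimal sketch, assuming the declarations come from a loader such as GLEW (and ideally a debug context was requested at context creation):

    #include <stdio.h>
    #include <GL/glew.h> /* any loader exposing GL 4.3 declarations works */

    /* Forward every driver message to stderr; filtering by source or
     * severity is left out of this sketch. */
    static void GLAPIENTRY debug_cb(GLenum source, GLenum type, GLuint id,
                                    GLenum severity, GLsizei length,
                                    const GLchar *message, const void *user)
    {
        (void)source; (void)type; (void)id;
        (void)severity; (void)length; (void)user;
        fprintf(stderr, "GL debug: %s\n", message);
    }

    void enable_debug_output(void)
    {
        glEnable(GL_DEBUG_OUTPUT);
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); /* report at the offending call */
        glDebugMessageCallback(debug_cb, NULL);
    }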
If all else fails, there is an XML file published by Khronos that you can parse to figure out which version and/or extension ANY OpenGL constant, function or enumerant is provided by. I wrote a simple project to do this with half a day's effort: https://github.com/Andon13/glvs. You could write some sort of validator yourself based on that principle.
There are a number of OpenGL Loading Libraries that will do what you need to some degree. GLEW just gives you everything and lets you pick and choose what you want. But there are others which generate more specific loaders.
GL3w for example generates only the core OpenGL functions, ignoring extensions entirely.
For a more comprehensive solution, there are glLoadGen and GLad. Both of these are generators for the headers and loading code, and both allow you to specify exactly which version of OpenGL you want and exactly which extensions you want. GLad even has a web application that can generate headers and download them to your computer.
In the interests of full disclosure, I wrote glLoadGen.
I'm trying to compile a fragment shader using:
#extension ARB_draw_buffers : require
but compilation fails with the following error:
extension 'ARB_draw_buffers' is not supported
However, when I check for the availability of this particular extension, either by calling glGetString(GL_EXTENSIONS) or using OpenGL Extension Viewer, I get positive results.
The OpenGL version is 3.1, and the graphics card is an Intel HD Graphics 3000.
What might be the cause of that?
Your driver in this scenario is 3.1; it is not clear what your targeted OpenGL version is.
If you can establish OpenGL 3.0 as the minimum required version, you can write your shader using #version 130 and avoid the extension directive altogether.
The ARB extension mentioned in the question is only there for drivers that cannot implement all of the features required by OpenGL 3.0, but have the necessary hardware support for this one feature.
That was its intended purpose, but there do not appear to be many driver / hardware combinations in the wild that actually have this problem. You probably do not want the headache of writing code that supports them anyway ;)
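For illustration, here is roughly what the extension-free route looks like: a GLSL 1.30 fragment shader writing to two draw buffers through user-declared outputs, compiled with an explicit status check. The output names color0 and color1 are made up for this sketch.

    #include <stdio.h>
    #include <GL/glew.h>

    /* GLSL 1.30 fragment shader with two user-declared outputs; no
     * #extension directive required. */
    static const char *fs_src =
        "#version 130\n"
        "out vec4 color0;\n"
        "out vec4 color1;\n"
        "void main() {\n"
        "    color0 = vec4(1.0, 0.0, 0.0, 1.0);\n"
        "    color1 = vec4(0.0, 1.0, 0.0, 1.0);\n"
        "}\n";

    GLuint compile_mrt_fragment_shader(void)
    {
        GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fs, 1, &fs_src, NULL);
        glCompileShader(fs);

        GLint ok = GL_FALSE;
        glGetShaderiv(fs, GL_COMPILE_STATUS, &ok);
        if (!ok) {
            char log[1024];
            glGetShaderInfoLog(fs, sizeof log, NULL, log);
            fprintf(stderr, "compile failed: %s\n", log);
        }
        return fs;
    }

Before linking, glBindFragDataLocation(program, 0, "color0") and glBindFragDataLocation(program, 1, "color1") map the outputs to draw buffers 0 and 1.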
I am learning OpenGL with the aim of building an OpenGL ES application for Android / iPhone.
Since I am learning it from the beginning, I would prefer to learn the new specification, without touching the old stuff (glBegin etc.). Unfortunately, when I work through a tutorial and implement things, it turns out that the examples are incompatible with ES 2.0. For example, after those excellent tutorials I know how to implement lights in a way that works on my PC but would not work on a mobile device (gl_LightSource is not supported in the latter).
What I would like to do is develop the code on my PC and restrict the API to the commands supported under OpenGL ES (for example, throw an error on glLight). Is that possible?
Assuming you are using Windows for development, you can restrict the API to just OpenGL ES 2.0 by using Google ANGLE. ANGLE basically wraps DirectX, but you use it through a fully standards-compliant OpenGL ES 2.0 interface.
If you have an AMD Radeon GPU, you have another option: the AMD OpenGL ES SDK also provides a fully compliant 2.0 interface.
In both cases, if you accidentally use non-OpenGL ES 2.0 features, the code will simply not compile, or it will fail at runtime for unsupported combinations of parameters. The same goes for shaders: the glCompileShader call will fail.
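Both ANGLE and the AMD SDK expose the standard EGL interface, so the context setup is the same either way. A minimal sketch of explicitly requesting an ES 2.0 context; the native window handle and all error handling are assumed or omitted:

    #include <EGL/egl.h>

    /* Sketch: request a strictly ES 2.0 context through EGL, as exposed
     * by ANGLE or the AMD OpenGL ES SDK. */
    EGLContext create_es2_context(EGLNativeWindowType window,
                                  EGLDisplay *out_display,
                                  EGLSurface *out_surface)
    {
        EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        eglInitialize(display, NULL, NULL);

        const EGLint config_attribs[] = {
            EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, /* ES 2.0 configs only */
            EGL_SURFACE_TYPE,    EGL_WINDOW_BIT,
            EGL_NONE
        };
        EGLConfig config;
        EGLint num_configs = 0;
        eglChooseConfig(display, config_attribs, &config, 1, &num_configs);

        const EGLint context_attribs[] = {
            EGL_CONTEXT_CLIENT_VERSION, 2, /* ask for ES 2.x, nothing newer */
            EGL_NONE
        };
        EGLContext context =
            eglCreateContext(display, config, EGL_NO_CONTEXT, context_attribs);

        EGLSurface surface =
            eglCreateWindowSurface(display, config, window, NULL);
        eglMakeCurrent(display, surface, surface, context);

        *out_display = display;
        *out_surface = surface;
        return context;
    }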
If you want to learn OpenGL ES 2 and make sure you are only using calls and techniques compatible with OpenGL ES 2, consider learning WebGL.
WebGL is almost identical to OpenGL ES 2. You get the advantage of a JavaScript console (with Firebug or Chrome's built-in developer tools), and in some environments (Chrome on Windows, I think?) you get VERY helpful error messages whenever you do something wrong. Add to that you automatically have access to up to four implementations of WebGL to test with (Firefox, Chrome, Safari, Opera), and you have a pretty good set of tools for testing your OpenGL.
This is essentially how I have been able to learn OpenGL ES 2.
As stated in the Mali GPU OpenGL ES Application Development Guide:
OpenGL ES 1.1 and OpenGL ES 2.0 are subsets of the full OpenGL standard. When using the OpenGL ES API, there are limitations that you must be aware of when developing your applications.
For example, the following OpenGL functionality is not present in either OpenGL ES 1.1 or OpenGL ES 2.0:
There is no support for glBegin or glEnd. Use vertex arrays and vertex buffer objects instead.
The only supported rasterization primitives are points, lines and triangles. Quads are not supported.
There is no polynomial function evaluation stage.
You cannot send blocks of fragments directly to individual fragment operations.
There is no support for display lists.
In addition, the following OpenGL functionality is not present in OpenGL ES 2.0:
There is no support for the fixed-function graphics pipeline. You must use your own vertex and fragment shader programs.
There is no support for viewing transforms such as glFrustumf. You must compute your own transformation matrix, pass it to the vertex shader as a uniform variable, and perform the matrix multiplication in the shader.
There is no support for specialized functions such as glVertexPointer and glNormalPointer. Use glVertexAttribPointer instead (see the sketch after this list).
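As referenced in the last item above, here is a minimal sketch of the ES 2.0 replacement for a fixed-function vertex pointer. The attribute name a_position and the already-created program and buffer objects are assumptions of the example.

    #include <GLES2/gl2.h>

    /* ES 2.0 replacement for glVertexPointer: feed positions through a
     * generic vertex attribute instead of the fixed-function array. */
    void bind_positions(GLuint program, GLuint vbo)
    {
        GLint loc = glGetAttribLocation(program, "a_position"); /* assumed name */
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableVertexAttribArray(loc);
        glVertexAttribPointer(loc, 3, GL_FLOAT, GL_FALSE,
                              3 * sizeof(GLfloat), (const void *)0);
    }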
You can always refer to the OpenGL ES specification and see if a function / feature is supported.
There is a good set of lessons available for OpenGL ES 2.0 over at http://www.learnopengles.com/. For developing on PC you can try using an emulator; many different GPU vendors provide their own emulators that translate the calls to desktop GL. However, the best way to be sure that your code works as expected is to run it on the actual device.
EDIT: A new emulator for Android has support for OpenGL ES 2.0: http://android-developers.blogspot.ca/2012/04/faster-emulator-with-better-hardware.html
Since I'm not familiar with iPhone development, I'd like to know whether it is possible to use OpenGL ES 1.0 on the iPhone 3GS rather than 2.0.
I'd like to share a code base across different mobile platforms, and not having to deal with the programmable pipeline from OpenGL ES 2.0 could speed up an initial build.
Update: I'm not used to working with OpenGL ES, but is there always complete backward compatibility, or do phones sometimes support only the latest version, e.g. 2.0?
Thanks
Yes, you can. Simply call the OpenGL ES 1.0 APIs. The hardware is a full 2.0 device, but the software/driver can implement an OpenGL ES 1.x pipeline for you. On iOS you choose which version you get when you create the rendering context, by requesting an OpenGL ES 1.x EAGLContext (kEAGLRenderingAPIOpenGLES1) rather than a 2.0 one.