Image load store equivalent in OpenGL 3

My project would greatly benefit from arbitrary/atomic read and write operations on a texture from GLSL shaders. The image load store extension is what I need. The only problem is that my target platform does not support OpenGL 4.
Is there an extension for OpenGL 3 that achieves similar results? I mean atomic read/write operations on a texture, or on some sort of shared buffer, from fragment shaders.

Image load store and, especially, atomic operations are features that must be backed by specific hardware capabilities, very similar to those used by compute shaders. Only some GL3-class hardware can handle them, and only in a limited way.
Image load store has been in the core profile since 4.2, so if your hardware (and driver) is capable of OpenGL 4.2, you don't need any extensions at all.
If your hardware (and driver) is below GL 4.2 but at least GL 3.0, you can probably use the ARB_shader_image_load_store extension.
Quoting the extension spec: "OpenGL 3.0 and GLSL 1.30 are required."
Obviously, not all 3.0 hardware (and drivers) will support this extension, so you must check for it before using it (see the sketch below).
I believe most NVIDIA GL 3.3 hardware supports it, but not AMD or Intel (that's my subjective observation ;)).
If your hardware is below GL 4.2 and not capable of this extension, there is nothing you can really do. Either have an alternative code path with texture sampling and render-to-texture but no atomics (as I understand it, this is possible, just without the "great benefit" of atomics), or simply report an error to those users who haven't yet upgraded their rigs.
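Something along these lines would do for the check (a minimal sketch, assuming a GL 3.x context where GL_NUM_EXTENSIONS/glGetStringi are available and a loader header such as GLEW or glad is already included):

```cpp
// Sketch: enumerate the extension list the GL 3.x way and look for
// ARB_shader_image_load_store before taking the image load/store code path.
#include <cstring>

bool HasImageLoadStore()
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i)
    {
        const char* ext = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
        if (ext && std::strcmp(ext, "GL_ARB_shader_image_load_store") == 0)
            return true;
    }
    return false;
}
```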
Hope it helps.

Related

"Emulate" minimum OpenGL specs?

We're working with OpenGL 4.3. However, we're afraid we're using features that happen to work on our graphics card but are not part of the minimum required specs for OpenGL 4.3.
Is there any possibility to emulate the minimum behaviour? For example, to make the graphics card reject any non-standard texture formats etc.? (Could also be in software, speed doesn't matter for testing compatibility...)
Update
In the best case, a minimum set in all aspects would be perfect, so the application is guaranteed to work on every graphics card supporting OpenGL 4.3. So this emulation mode should:
Reject all features/extensions deprecated in 4.3
Reject all features/extensions newer than 4.3
Only support required formats, no optional formats (for example for textures and renderbuffers)
Only support the minimum required precision for calculations
Report the minimum values for the limits that can be queried via GetInteger (for example, a MAX_TEXTURE_IMAGE_UNITS of 16); see the sketch below.
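For illustration, the kind of check we would like to have automated, sketched by hand (the minimum values below are my reading of the 4.3 spec tables, with MAX_TEXTURE_IMAGE_UNITS = 16 taken from the example above; they would need to be verified against the spec):

```cpp
// Sketch: log the driver's actual limits next to the GL 4.3 spec minimums,
// so it is at least visible where we might be relying on extra headroom.
// The minimums are assumptions to be checked against the 4.3 state tables.
#include <cstdio>

struct MinLimit { GLenum pname; GLint minimum; const char* name; };

void CheckMinimumLimits()
{
    const MinLimit limits[] = {
        { GL_MAX_TEXTURE_IMAGE_UNITS, 16,    "GL_MAX_TEXTURE_IMAGE_UNITS" },
        { GL_MAX_VERTEX_ATTRIBS,      16,    "GL_MAX_VERTEX_ATTRIBS" },
        { GL_MAX_TEXTURE_SIZE,        16384, "GL_MAX_TEXTURE_SIZE" },
    };
    for (const MinLimit& l : limits)
    {
        GLint value = 0;
        glGetIntegerv(l.pname, &value);
        std::printf("%-28s driver: %d, spec minimum: %d%s\n",
                    l.name, value, l.minimum,
                    value > l.minimum ? "  <-- possible headroom reliance" : "");
    }
}
```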
There is a reference GLSL compiler that will solve half of this problem. But, as for the rest ... AMD, NV and Intel all have their own compliance issues and policies regarding how loosely they believe in following the specification.
I have seen each one of these vendors implicitly enable extensions from versions of OpenGL they should not have (without so much as a warning in the compiler log), and that is just the GLSL side of things. It is likely that Mesa can serve the role of greatest common factor for feature testing, but for OpenGL versions much older than 4.3. Mesa is effectively a minimalist implementation, and usually a few years behind the big hardware vendors.
Ideally GL's debug output extension, which is conveniently a core feature in GL 4.3, would issue API warnings if you use a feature your requested context version does not support. However, each vendor has different levels of support for this; AMD is generally the best. NVIDIA may even require you to enable "OpenGL Expert" mode before it spits out any genuinely useful information.
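For reference, hooking up the 4.3 core debug output is only a few calls; whether anything useful arrives through it is, as noted, driver-dependent. A sketch (assuming a debug context and a loader header that defines APIENTRY):

```cpp
// Sketch: enable GL 4.3 core debug output and print every message the driver
// chooses to report. The quality of those reports varies by vendor.
#include <cstdio>

void APIENTRY OnDebugMessage(GLenum source, GLenum type, GLuint id,
                             GLenum severity, GLsizei length,
                             const GLchar* message, const void* userParam)
{
    (void)source; (void)id; (void)length; (void)userParam;
    std::fprintf(stderr, "GL debug [type 0x%x, severity 0x%x]: %s\n",
                 type, severity, message);
}

void EnableDebugOutput()
{
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); // report on the offending call itself
    glDebugMessageCallback(OnDebugMessage, nullptr);
}
```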
If all else fails, there is an XML file published by Khronos that you can parse to figure out which version and/or extension ANY OpenGL constant, function or enumerant is provided by. I wrote a simple project to do this with half a day's effort: https://github.com/Andon13/glvs. You could write some sort of validator yourself based on that principle.
There are a number of OpenGL Loading Libraries that will do what you need to some degree. GLEW just gives you everything and lets you pick and choose what you want. But there are others which generate more specific loaders.
GL3w for example generates only the core OpenGL functions, ignoring extensions entirely.
For a more comprehensive solution, there are glLoadGen and GLad. Both are generators for the headers and loading code, and both allow you to specify exactly which version of OpenGL you want and exactly which extensions you want. GLad even has a web application that can generate the headers and download them to your computer.
In the interests of full disclosure, I wrote glLoadGen.
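To illustrate the idea: with a loader generated for exactly core 4.3 and a hand-picked extension list, anything outside that profile is simply absent from the header (so accidental use becomes a compile error), and the requested extensions show up as queryable flags. A sketch using glad's naming conventions (glLoadGen's generated names differ):

```cpp
// Sketch: initializing a glad loader generated for core GL 4.3 only.
// GL_EXT_texture_filter_anisotropic is just an example of a requested extension.
#include <glad/glad.h>
#include <GLFW/glfw3.h>   // used here only to supply a proc-address loader

bool InitGL()
{
    if (!gladLoadGLLoader(reinterpret_cast<GLADloadproc>(glfwGetProcAddress)))
        return false;      // the requested profile could not be loaded

    if (!GLAD_GL_EXT_texture_filter_anisotropic)
    {
        // extension missing: fall back or disable the feature that needs it
    }
    return true;
}
```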

Direct3D 11.1's target-independent rasterization (TIR) equivalent in OpenGL (including extensions)

Target-independent rasterization (TIR) is a new hardware feature in DirectX 11.1, which Microsoft used to improve Direct2D in Windows 8. AMD claimed that TIR improved performance in 2D vector graphics by some 500%. And there was some "war of words" with Nvidia, because Kepler GPUs apparently don't support TIR (among other DirectX 11.1 features). The idea of TIR appears to have originated at Microsoft, because they have a patent application for it.
Now Direct2D is fine if your OS is Windows, but is there some OpenGL (possibly vendor/AMD) extension that provides access to the same hardware/driver TIR feature? I think AMD is in a bit of a weird spot because there is no vendor-independent 2D vector graphics extension for OpenGL; only Nvidia is promoting NV_path_rendering for now, and its architecture is rather different from Direct2D. So it's unclear where anything made by AMD to accelerate 2D vector graphics could plug in (or show up) in OpenGL, unlike in the Direct2D+Direct3D world. I hope my pessimism is going to be dispelled by a simple answer below.
I'm actually posting an update of sorts here because there's not enough room in comment-style posts for this. There seems to be a little confusion as to what TIR does, which is not simply "a framebuffer with no storage attached". This might be because I've only linked above to the mostly awful patentese (which is however the most detailed document I could find on TIR). The best high-level overview of TIR I found is the following snippet from Sinofsky's blog post:
to improve performance when rendering irregular geometry (e.g. geographical borders on a map), we use a new graphics hardware feature called Target Independent Rasterization, or TIR.
TIR enables Direct2D to spend fewer CPU cycles on tessellation, so it can give drawing instructions to the GPU more quickly and efficiently, without sacrificing visual quality. TIR is available in new GPU hardware designed for Windows 8 that supports DirectX 11.1.
Below is a chart showing the performance improvement for rendering anti-aliased geometry from a variety of SVG files on a DirectX 11.1 GPU supporting TIR: [chart snipped]
We worked closely with our graphics hardware partners [read AMD] to design TIR. Dramatic improvements were made possible because of that partnership. DirectX 11.1 hardware is already on the market today and we’re working with our partners to make sure more TIR-capable products will be broadly available.
It's this bit of hardware I'm asking to use from OpenGL. (Heck, I would settle even for invoking it from Mantle, because that also will be usable outside of Windows.)
The OpenGL equivalent of TIR is EXT_raster_multisample.
It's mentioned in the new features page for Nvidia's Maxwell architecture: https://developer.nvidia.com/content/maxwell-gm204-opengl-extensions.
I believe TIR is just a repurposing of a feature Nvidia and AMD use for antialiasing.
Nvidia calls it coverage sample antialiasing, and their GL extension is GL_NV_framebuffer_multisample_coverage.
AMD calls it EQAA, but they don't seem to have a GL extension for it.
Just to expand a bit on Nikita's answer, there's a more detailed Nvidia (2017) extension page that says:
(6) How do EXT_raster_multisample and NV_framebuffer_mixed_samples
interact? Why are there two extensions?
RESOLVED: The functionality in EXT_raster_multisample is equivalent to
"Target-Independent Rasterization" in Direct3D 11.1, and is expected to be
supportable today by other hardware vendors. It allows using multiple
raster samples with a single color sample, as long as depth and stencil
tests are disabled, with the number of raster samples controlled by a
piece of state.
NV_framebuffer_mixed_samples is an extension/enhancement of this feature
with a few key improvements:
- Multiple color samples are allowed, with the requirement that the number
of raster samples must be a multiple of the number of color samples.
- Depth and stencil buffers and tests are supported, with the requirement
that the number of raster/depth/stencil samples must all be equal for
any of the three that are in use.
- The addition of the coverage modulation feature, which allows the
multisample coverage information to accomplish blended antialiasing.
Using mixed samples does not require enabling RASTER_MULTISAMPLE_EXT; the
number of raster samples can be inferred from the depth/stencil
attachments. But if it is enabled, RASTER_SAMPLES_EXT must equal the
number of depth/stencil samples.
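In terms of actual calls, the EXT path boils down to something like the following sketch (based on my reading of the extension: depth and stencil tests must be disabled, and the raster sample count is a piece of state):

```cpp
// Sketch of the EXT_raster_multisample ("TIR-like") setup: rasterize at N
// samples while the color attachment has a single sample. Assumes the
// extension is advertised and its entry points have been loaded.
void EnableTargetIndependentRasterization(GLuint rasterSamples)
{
    glDisable(GL_DEPTH_TEST);                   // required by the extension
    glDisable(GL_STENCIL_TEST);
    glEnable(GL_RASTER_MULTISAMPLE_EXT);
    glRasterSamplesEXT(rasterSamples, GL_TRUE); // GL_TRUE = fixed sample locations
}
```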

Frame buffer object support

Do most OpenGL 2.0 and 2.1 graphics cards that are still in use support frame buffer objects (through the GL_ARB_framebuffer_object or GL_EXT_framebuffer_object extensions)?
In my experience, they do.
Among nVidia GPUs, cards at least as far back as the GeForce FX 5xxx series (which supports OpenGL 2.0) have FBO support, and I suspect even older cards do.
Among ATI GPUs old enough to only support OpenGL 2.0, I have seen such GPUs as the HD 2400 and the X1300, and they all support FBOs.
Among Intel GPUs, I think that it is mainly the HD Graphics families that have OpenGL 2.0 support at all, and all the HD Graphics GPUs I've seen have FBO support. I have also seen some other GPUs with 2.0 and FBO support, including some versions of the 965, and something called the "Eaglelake". I'm not sure why only some 965s support OpenGL 2.0, though. It could be a driver issue.
I have, on the other hand, not yet found any 2.0-compatible GPUs that do not support FBOs.
I hope this purely empirical answer helps somewhat.
I'd say yes. My Intel GMA 950's Windows 7 driver (at least) unofficially exposes OpenGL 2.0 features and frame buffer objects are supported through the EXT_framebuffer_object extension.
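If you want to verify it at runtime anyway, on a 2.x context the classic extension-string test is all you need. A sketch (note the whole-token comparison, to avoid substring false positives):

```cpp
// Sketch: check for FBO support on an OpenGL 2.x context, where the extension
// list is one space-separated string returned by glGetString(GL_EXTENSIONS).
#include <cstring>

static bool HasExtension(const char* name)
{
    const char* all = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    if (!all) return false;
    const size_t len = std::strlen(name);
    for (const char* p = std::strstr(all, name); p; p = std::strstr(p + len, name))
    {
        // accept whole tokens only, so "GL_EXT_framebuffer_object" does not
        // match inside e.g. "GL_EXT_framebuffer_object_something"
        if ((p == all || p[-1] == ' ') && (p[len] == ' ' || p[len] == '\0'))
            return true;
    }
    return false;
}

bool HasFramebufferObjects()
{
    return HasExtension("GL_ARB_framebuffer_object") ||
           HasExtension("GL_EXT_framebuffer_object");
}
```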

Is there a trick to use an OpenGL 3.x program on a graphics card which supports OpenGL 2.x?

I have an onboard graphics card which supports OpenGL 2.2. Can I run an OpenGL (let's say version 3.3) application on it by using some software, etc.?
OpenGL major versions somewhat refer to available hardware capabilities:
OpenGL-1: fixed-function pipeline (DirectX 7 class HW)
OpenGL-2: programmable vertex and fragment shader support (DirectX 9 class HW)
OpenGL-3: programmable geometry shader support (DirectX 10 class HW)
OpenGL-4: programmable tessellation shader support and a few other nice things (DirectX 11 class HW)
If your GPU supports OpenGL-2 only, then there is no way you could run an OpenGL-3 program that makes use of all its bells and whistles. Your best bet is a software rasterizing implementation.
A few years ago, when shaders were something new, NVidia shipped their developer drivers with a software rasterizer that emulated the higher functionality, to kickstart shader development, so that there were actual applications to run on those new programmable GPUs.
Sure you can, you just have to disable those features. Whether this will work well depends greatly on the app.
The simplest method is to intercept all OpenGL calls, using some manner of DLL hooking, and filter them as necessary. When OGL3 features are used, return a "correct" answer (but don't do anything) or provide null for calls that aren't required.
If done properly, and the app isn't relying on the OGL3 features, this will let it run without them on your hardware.
If the app does require OGL3 stuff, results will be unreliable at best, and it may be unusable. It really depends on what exactly the app does and what it needs. Providing a null implementation of OGL3 will allow you to run it, but results are up in the air.
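To make the "null implementation" idea concrete, a hooked entry point might look like the rough sketch below. glBindVertexArray is just a hypothetical example of a GL 3.x call; the actual interception mechanism (DLL hooking, a shim library, etc.) is the hard part and is not shown:

```cpp
// Hypothetical sketch of the "null implementation" approach: forward the call
// to the real driver function if it exists, otherwise silently do nothing and
// hope the app does not truly depend on it.
typedef unsigned int GLuint;                          // normally from <GL/gl.h>
typedef void (*PFN_glBindVertexArray)(GLuint array);

static PFN_glBindVertexArray real_glBindVertexArray = nullptr; // set by the hook

extern "C" void glBindVertexArray(GLuint array)
{
    if (real_glBindVertexArray)
        real_glBindVertexArray(array);  // the driver actually provides it
    // else: pretend the call succeeded; VAO state is simply ignored
}
```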
No. Well, not really. NVIDIA has some software emulation that might work, but other than that, no.
Your hardware simply can't do what GL 3.0+ asks of it.
also:
I have an onboard graphics card which supports OpenGL 2.2
There is no OpenGL 2.2. Perhaps you meant 2.1.

What OpenGL version to choose for cross-platform desktop application

I'm working on a cross-platform desktop application with heavy 2-D graphics. I use the OpenGL 2.0 specification because I need vertex shaders. I like the 3.2+ core API because of its simplicity and power. I think that 3.2+ core could be the choice for the future. But I'm afraid that nowadays this functionality may not be available on some platforms (I mean old graphics cards and the lack (?) of modern Linux drivers). Maybe I should use an OpenGL ES 2.0-like API for easy future porting.
What's the state of affairs with 3.2+ core, cards and Linux drivers?
Older Intel chips only support OpenGL 1.5. The later chips (since about two years ago) have 2.1, but that performs worse than 1.5. Sandy Bridge claims to support "OpenGL 3" without specifying whether it is capable of doing 3.3 (as Damon suggests), but Linux drivers only do 2.1 for now. All remotely recent Radeons and Nvidia hardware with closed-source drivers support 3.3 (geometry shaders), and the 400-500 series support 4.1 (tessellation shaders).
Therefore, the versions you want to aim for are 1.5 (if you care about pre-Sandy-Bridge Intel crap), 2.1 (for pretty much all hardware), 3.3 (for decent hardware & closed-source drivers) or 4.1 (bleeding edge).
I have vertex and fragment shaders written with #version 120 and geometry shaders written in #version 330, to make fallback on old hardware easier.
You can stay on OpenGL ES 2.0. Even though ES stands for Embedded, it's a good approach because it removes all the fixed-function calls (glBegin, etc.): you are effectively using a subset of OpenGL 2.x. So if you write your software with only OpenGL ES 2.0 in mind, it will be fast and will work on the majority of hardware.
In reality, OpenGL ES 2.0 and desktop GL have some differences, but I don't think they concern anything you will use. If the GL_ARB_ES2_compatibility extension is supported, you have a "desktop" card that supports the complete embedded subset (four functions and a few constants).
Now, the real question is how many years of hardware you want to support. There is still a lot of very old hardware out there with very poor GL support. The best approach would be to support the less old hardware (OpenGL 2.0 is already old) :)
I would personally go for OpenGL 3.3, optionally with a fallback for 3.2 plus extensions (which is basically the same). It is the most convenient way of using OpenGL 3.x, and widely supported.
Targeting 3.1 or 3.0 is not really worth it any more, except if you really want to run on Sandy Bridge (which, for some obscure reason, only supports 3.0 although the hardware is very well capable of doing 3.3). Also, 3.1 and 3.0 have very considerable changes in shader code, which in my opinion are a maintenance nightmare if you want to support many versions (no such problem with 3.2 and 3.3).
All hardware that supports 3.2 can also support 3.3; the only hindrance may be that IHVs don't provide a recent driver or that a user is too lazy to update. Therefore you cannot assume that "3.3 works everywhere". The older drivers will usually expose the same functionality via ARB extensions anyway, though.
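In practice that fallback is just a loop over context versions. With GLFW as an example context-creation library (any other windowing API exposes the same version/profile knobs), a sketch could look like this, assuming glfwInit() has already been called:

```cpp
// Sketch: try 3.3 core first, then 3.2 core, then a plain 2.1 context.
#include <GLFW/glfw3.h>

GLFWwindow* CreateBestWindow()
{
    const int versions[][2] = { {3, 3}, {3, 2}, {2, 1} };
    for (const auto& v : versions)
    {
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, v[0]);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, v[1]);
        glfwWindowHint(GLFW_OPENGL_PROFILE,
                       v[0] >= 3 ? GLFW_OPENGL_CORE_PROFILE
                                 : GLFW_OPENGL_ANY_PROFILE);
        if (GLFWwindow* window = glfwCreateWindow(1280, 720, "app", nullptr, nullptr))
            return window;              // got a context of this version
    }
    return nullptr;                     // nothing worked, report an error
}
```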
Mac OS X doesn't support GL-3 contexts at the moment. This summer may change the situation, but I would recommend sticking with GL-2 plus extensions nevertheless.
Depends on your target market's average machine. Although to be honest, OpenGL 3.2+ is pretty ubiquitous these days.