Modern OpenGL for 2d graphics

I want to use OpenGL for 2D graphics because of its hardware acceleration. Would you recommend modern OpenGL or the fixed-function pipeline for this? All this shader writing seems like too much overhead to me; I just want to draw some 2D primitives.

Even for trivial 2D graphics, the programmable pipeline is what you want, as opposed to the fixed-function pipeline. In the end, the programmable pipeline gives you more freedom in expressing your graphics. How you program the pipeline is up to you and is driven by your graphical needs. It could be that you only need a single shader; there is no written rule that you need hundreds of shaders for it to be 'modern OpenGL'.
In that respect it's debatable whether modern OpenGL really is that much effort at all. It's a shader, a vertex/index buffer and a few textures. Compared to the fixed-function pipeline, has it really changed so much that you even have to consider sticking with the fixed-function pipeline?
A more compelling reason to prefer the programmable pipeline is that the fixed-function pipeline is deprecated, in other words, pending removal. In principle an IHV could decide to drop support for it at any moment.
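To give a feel for how little is involved, here is a minimal vertex/fragment shader pair for flat-colored 2D drawing, written as C string literals (GLSL 3.30); the attribute and uniform names are only illustrative placeholders, not something the answer prescribes:
// Minimal shaders for flat-colored 2D geometry (GLSL 3.30)
static const char *vs_src =
    "#version 330 core\n"
    "layout(location = 0) in vec2 position;\n"
    "uniform mat4 projection;   // e.g. an orthographic matrix\n"
    "void main() { gl_Position = projection * vec4(position, 0.0, 1.0); }\n";

static const char *fs_src =
    "#version 330 core\n"
    "uniform vec4 color;\n"
    "out vec4 fragColor;\n"
    "void main() { fragColor = color; }\n";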

Modern OpenGL is better. Don't be afraid of shaders. Without them you can't do much besides drawing images with some blending; you can't get very far.
With shaders, you can do pretty much everything (effects like those in Photoshop, for example).

Related

What's the simplest way to use a vertex array in OpenGL (for 2d graphics)? Is it even necessary?

I've been using OpenGL in a pretty basic way to draw textured quads for various 2D graphics projects. I've been using glBegin() and glEnd() to draw the two triangles that make up each textured quad, but I know that it's also possible to draw shapes with a vertex array.
However, the tutorials I've found seem geared towards 3D graphics and involve loading shaders and such. All I need to do (for now at least) is draw textured quads, so this seems like overkill.
First of all, how much advantage is there to using vertex arrays in 2D? If there is an advantage, what is the simplest way to use them?
how much advantage is there to using vertex arrays in 2D?
The advantage is huge in terms of performance. Vertex arrays greatly reduce the number of API calls, which increases your rendering performance.
A disadvantage is that the setup is more complicated overall.
If there is an advantage, what is the simplest way to use them?
Vertex arrays can be used for many different forms of rendering. For basic, straightforward rendering it usually follows this structure:
//initializing
glGenBuffers
glBindBuffer
glBufferData
//rendering
glBindBuffer
glEnableVertexAttribArray
glVertexAttribPointer
glDrawArrays
There are lots of tutorials online.
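As a rough sketch of that skeleton for a single 2D quad, assuming a current GL context, a function loader such as GLEW, a bound VAO when using the core profile, and a shader that reads attribute location 0 (the function names here are made up for illustration):
#include <GL/glew.h>

static GLuint vbo;

void init_quad(void)
{
    // x, y pairs for the two triangles that form a unit quad
    const GLfloat verts[] = {
        0.0f, 0.0f,   1.0f, 0.0f,   1.0f, 1.0f,
        0.0f, 0.0f,   1.0f, 1.0f,   0.0f, 1.0f
    };
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
}

void draw_quad(void)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (const void *)0);
    glDrawArrays(GL_TRIANGLES, 0, 6);   // 6 vertices = 2 triangles
    glDisableVertexAttribArray(0);
}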
The glBegin()/glEnd() approach is typically referred to as "immediate mode". Vertex arrays can be massively more efficient than immediate mode, particularly when they are stored in buffers (VBO = Vertex Buffer Object). This may not be a big deal as long as your drawing involves very few vertices, but becomes critical when you're dealing with geometry with a large number of vertices.
The main reasons why arrays are more efficient than immediate mode are:
They require a lot fewer OpenGL API calls, since you specify entire arrays with a single call, instead of making a call for each vertex. This adds up when you have millions of vertices.
The data can be specified once, and then reused in each frame as long as it does not change. In combination with VBOs, the vertex data can also be stored in memory that the GPU can access very efficiently.
There is another aspect. Immediate mode is not available anymore in most OpenGL versions:
It was never available in OpenGL ES, which is used on mobile devices.
It has been marked as deprecated in desktop OpenGL starting in version 3.0, which was released in 2008.
It is not available in the core profile, which was introduced with version 3.2.
The only way to still use immediate mode is with old OpenGL versions, or by using the compatibility profile that retains the deprecated features. The same is true for plain client-side vertex arrays; only arrays stored in VBOs are supported in these newer versions.
The fixed function pipeline is equally deprecated, so writing your own shaders is required in all these newer OpenGL versions.
Some of this certainly adds complexity when starting out. It does require more code to get simple examples up and running. But once you've crossed the initial hurdle, you gain a lot of flexibility, and much of the functionality actually becomes easier to use and understand.
An alternative is to use a higher level toolkit. There are plenty of options for graphics toolkits and game engines.

Why is the Graphics Pipeline so highly specialized? (OpenGL)

The OpenGL graphics pipeline changes every year, and the programmable stages keep growing. In the end, as OpenGL programmers we write many little programs (vertex, fragment, geometry, tessellation, ...).
Why is there such a high degree of specialization between the stages? Are they all running on different parts of the hardware? Why not just write one block of code that describes what should come out at the end, instead of juggling between the stages?
http://www.g-truc.net/doc/OpenGL%204.3%20Pipeline%20Map.pdf
In this pipeline PDF you can see the beast.
In the days of "Quake" (the game), developers had the freedom to do anything with their CPU rendering implementations; they were in control of everything in the "pipeline".
With the introduction of the fixed pipeline and GPUs, you get "better" performance but lose a lot of that freedom. Graphics developers are pushing to get that freedom back, hence the pipeline becomes more customizable every day. GPUs are even "fully" programmable now using tech such as CUDA/OpenCL, even if it's not strictly about graphics.
On the other hand, GPU vendors cannot replace the whole pipeline with a fully programmable one overnight. In my opinion, this boils down to several reasons:
GPU capabilities and cost: GPUs evolve with each iteration, and it makes no sense to throw away the whole architecture you have and replace it overnight. Instead, you add new features and enhancements with every iteration, especially when developers ask for them (example: the tessellation stage). Think of CPUs: Intel tried to replace the x86 architecture with Itanium, losing backward compatibility; having failed, they eventually copied what AMD did with the AMD64 architecture.
Vendors also can't fully replace the pipeline because of legacy application support, which is more widespread than one might expect.
Historically, there were actually different processing units for the different programmable parts - there were Vertex Shader processors and Fragment Shader processors, for example. Nowadays, GPUs employ a "unified shader architecture" where all types of shaders are executed on the same processing units. That's why non-graphic use of GPUs such as CUDA or OpenCL is possible (or at least easy).
Notice that the different shaders have different inputs/outputs - a vertex shader is executed for each vertex, a geometry shader for each primitive, a fragment shader for each fragment. I don't think this could be easily captured in one big block of code.
And last but definitely far from least, performance. There are still fixed-function stages between the programmable parts (such as rasterisation). And for some of these, it's simply impossible to make them programmable (or callable outside of their specific time in the pipeline) without reducing performance to a crawl.
Because each stage has a different purpose:
The vertex shader transforms points to where they should be on the screen.
The fragment shader runs for each fragment (read: pixel of the triangles) and applies lighting and color.
The geometry and tessellation shaders both do things the classic vertex and fragment shaders cannot (replacing the drawn primitives with other primitives), and both are optional.
If you look carefully at that PDF you'll see different inputs and outputs for each shader.
Separating each shader stage also allows you to mix and match shaders beginning with OpenGL 4.1. For example, you can use one vertex shader with multiple different fragment shaders, and swap out the fragment shaders as needed. Doing that when shaders are specified as a single code block would be tricky, if not impossible.
More info on the feature: http://www.opengl.org/wiki/GLSL_Object#Program_separation
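As a hedged sketch of that mix-and-match idea (ARB_separate_shader_objects, core in OpenGL 4.1), where vs_src, fs_src_a and fs_src_b stand in for hypothetical GLSL source strings:
// Build single-stage programs directly from source strings
GLuint vs  = glCreateShaderProgramv(GL_VERTEX_SHADER,   1, &vs_src);
GLuint fsA = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &fs_src_a);
GLuint fsB = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &fs_src_b);

// A program pipeline object holds one program per stage
GLuint pipeline;
glGenProgramPipelines(1, &pipeline);
glBindProgramPipeline(pipeline);

glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT,   vs);
glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fsA);
// ... draw ...
glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fsB);   // swap only the fragment stage
// ... draw ...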
Mostly because nobody wants to re-invent the wheel if they do not have to.
Many of the specialized things that are still fixed-function would simply make life more difficult for developers if they had to be programmed from scratch to draw a single triangle. Rasterization, for instance, would truly suck if you had to implement primitive coverage yourself or handle attribute interpolation. It might add some novel flexibility, but the vast majority of software does not require that flexibility and developers benefit tremendously from never thinking about this sort of stuff unless they have some specialized application in mind.
Truth be told, you can implement the entire graphics pipeline yourself using compute shaders if you are so inclined. Performance generally will not be competitive with pushing vertices through the traditional render pipeline and the amount of work necessary would be quite daunting, but it is doable on existing hardware. Realistically, this approach does not offer a lot of benefits for rasterized graphics, but implementing a ray-tracing based pipeline using compute shaders could be a worthwhile use of time.

When should I use GLSL?

I have used OpenGL for a semester, but only in the traditional way, i.e. glBegin...glEnd.
I heard someone say that GLSL is the future of OpenGL, and I was wondering whether I need to jump into GLSL instead of the traditional OpenGL?
Moreover, does GLSL only work well on a good GPU?
Short answer: Yes, you do need to update your OpenGL usage as you will generally get lousy performance from glBegin/glEnd and limit what you can do by constraining yourself to the old fixed pipe behavior.
Long answer:
You're mixing up two different problems. One of them is immediate mode (glBegin, glVertex, ..., glEnd, etc.) vs. batched mode (glVertexPointer, etc.). To get full performance out of modern GPUs you need to use batches. See this SO discussion: When are VBOs faster than "simple" OpenGL primitives (glBegin())?
The other one is fixed pipe vs. programmable shaders (glEnable states, etc.. vs. GLSL). This can be a performance issue in many cases, but more importantly it's a flexibility issue. With GLSL you have far more control over how things are rendered, so you can accomplish things that weren't really possible using the fixed pipe -- at least not at a usable frame rate. Programmable shaders are also a better reflection of how modern GPUs really work -- in fact if you use the fixed pipe it is probably just being emulated with a shader under the hood.
GLSL is not the future of OpenGL, it's the current way of programming it. As Aeluned states, glBegin and glEnd are deprecated (and not even supported in OpenGL ES).
And what do you mean by a good GPU? Even Intel integrated graphics chips support shaders; using GLSL is not slower just for being GLSL. You might get poor performance when doing heavy work, but if you implemented the same thing with the fixed pipeline I think you would get the same performance.
I'd say learning GLSL is the way to go.
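If it helps to see how little boilerplate GLSL actually needs, here is a minimal compile-and-link helper as a sketch; error checking via glGetShaderiv/glGetProgramiv is omitted, and the GLEW header is just one possible loader:
#include <GL/glew.h>

GLuint build_program(const char *vs_src, const char *fs_src)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vs_src, NULL);
    glCompileShader(vs);

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fs_src, NULL);
    glCompileShader(fs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);

    // The shader objects can be deleted once the program is linked
    glDeleteShader(vs);
    glDeleteShader(fs);
    return prog;
}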

Deprecated OpenGL features

I recently read this list and I noticed that almost everything I studied from the OpenGL Red Book is considered deprecated.
I'm talking about pixel transfer operations, pixel drawings, accumulation buffer, Begin/End functions (!?), automatic mipmap generation and current raster position.
Why did they flag these features as deprecated? Will it be okay to still use them? What are the workarounds?
In my opinion it's for the better. This so-called immediate mode is indeed deprecated in OpenGL 3.0, mainly because its performance is not optimal.
In immediate mode you use calls like glBegin and glEnd, so the rendering of primitives depends on the program's commands; OpenGL can't advance until it gets the appropriate command from the CPU. Instead, you can use buffer objects to store all your vertices and data, and then tell OpenGL to render its primitives from that buffer with commands like glDrawArrays or glDrawElements, or even more specialized commands like glDrawElementsInstanced. While the GPU is busy executing those commands and drawing the buffer to the target framebuffer (basically a render target), the program can go off and issue other commands. This way both the CPU and the GPU are busy at the same time, and no time is wasted.
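As a hedged sketch of that buffer-object path (it assumes a current context and a shader reading attribute location 0; the quad data is purely illustrative):
// Upload once at load time
GLuint vbo, ibo;
const GLfloat verts[]    = { -1.0f,-1.0f,  1.0f,-1.0f,  1.0f,1.0f,  -1.0f,1.0f };  // x, y per vertex
const GLushort indices[] = { 0, 1, 2,   0, 2, 3 };                                 // two triangles

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

// Each frame: a single call draws the whole mesh from GPU memory
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (const void *)0);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, (const void *)0);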
Not the best explanation ever, but my advice: try to learn this new rendering pipeline instead. It's superior to immediate mode by far. I recommend tutorials like:
http://www.arcsynthesis.org/gltut/index.html
http://www.opengl-tutorial.org/
http://ogldev.atspace.co.uk/
Literally try to forget what you know so far; immediate mode has long been deprecated and shouldn't be used anymore. Instead, focus on the new technology ;)
Edit: Excuse me if I wrote 'intermediate' instead of 'immediate' anywhere; it's actually called 'immediate', and I tend to mix them up.
Why did they flag these features as deprecated?
First, some terminology: they aren't just deprecated. In OpenGL 3.0 they are deprecated (meaning "may be removed in later versions"); in 3.1 and above, most of them are removed. The compatibility profile brings the removed features back. And while it is widely implemented on Windows and Linux, Apple's 3.2 implementation only offers the core profile.
As to the reasoning behind the removal, it depends on which feature you're talking about. We can really only speculate as to why the ARB removed any specific feature:
pixel transfer operations
Pixel transfer operations have not been removed. If you're talking about glDrawPixels, that is a pixel transfer operation, but it is only one pixel transfer operation, not all of them.
Speaking of which:
pixel drawings
Because it was a horrible idea to begin with. glDrawPixels is a performance trap; it sounds nice and neat, but it performs terribly and because it's simple, people will try to use it.
Having something that is easy to do but terrible in performance encourages people to write terrible OpenGL applications.
accumulation buffer
Shaders can do this just fine. Better in fact; they have a lot more options than accumulation buffers cover.
Begin/End functions (!?),
It's another performance trap. Immediate mode rendering is terribly slow.
automatic mipmap generation
Because it was a terrible idea to begin with. Having OpenGL decide when to do a heavyweight operation like generate mipmaps of a texture is not a good idea. The much better idea the ARB had was to just let you say, "OK, OpenGL, generate some mipmaps for this texture right now."
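That explicit replacement is glGenerateMipmap (core since OpenGL 3.0). A tiny sketch, where tex, w, h and pixels are assumed to already exist:
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);   // upload the base level
glGenerateMipmap(GL_TEXTURE_2D);                   // build the mipmap chain right now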
current raster position.
Another performance trap/bad idea.
Will it be okay to still use them?
That's up to you. NVIDIA has effectively pledged to support the compatibility profile in perpetuity. Which means that AMD and Intel probably will have to as well. So that covers Windows and Linux.
On Mac OS X, Apple controls the GL implementation more rigidly, and they seem committed to not supporting the compatibility profile. However, they also seem to have little interest in advancing OpenGL, since they stopped at 3.2; even Mountain Lion didn't update the OpenGL version.
What are the workarounds?
Stop using performance traps. Use buffer objects for your vertex data like everyone else. Use shaders. Use glGenerateMipmap.

Why were display lists deprecated in OpenGL 3.1?

I'm just learning about them, and find it discouraging that they have been deprecated. Should I keep investing into learning them? Would I learn something useful for the current model?
I think, though I may be wrong, that since most high-performance graphics apps (mostly games) pretty much only used vertex buffers and the like (in order to squeeze every drop of performance out of the card), there was pressure to stop worrying about "frivolous" items such as display lists (and even good old glVertex calls). IMHO, this creates a huge barrier for people learning to write OpenGL code, and (for my own purposes) is a big impediment to whipping up some quick, legible, and reasonably well performing code.
Note that these features were deprecated in 3.0 and actually removed in 3.1 (but still provided via the ARB compatibility extension). In OpenGL 3.2, they moved these features into a 'compatibility' profile that is optional for driver writers to implement.
So what does this mean? NVIDIA, at least, has vowed to continue support for the old-school compatibility mode for the foreseeable future; there is a large wealth of legacy code out there that they need to support. You can find a discussion of their support in a slideshow at:
http://www.slideshare.net/Mark_Kilgard/opengl-32-and-more
starting at about slide #32. I don't know ATI/AMD's stance on this, but I would assume that it would be similar.
So, while display lists are technically removed from the required portion of the OpenGL 3.2 standard, I think that you are safe using them for quite a while. Eventually, you may wish to learn the buffer/shader-centric interface to OpenGL, especially if your end-goal is envelope-pushing game writing, but it really is a lot less intuitive (no glRotate, even!), so I would recommend starting with good old OpenGL 2.x.
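For reference, a sketch of the display-list idiom being discussed (compatibility contexts only; the triangle is just an illustration):
GLuint list = glGenLists(1);
glNewList(list, GL_COMPILE);        // record the commands once...
glBegin(GL_TRIANGLES);
glVertex3f(-1.0f, -1.0f, 0.0f);
glVertex3f( 1.0f, -1.0f, 0.0f);
glVertex3f( 0.0f,  1.0f, 0.0f);
glEnd();
glEndList();

glCallList(list);                   // ...then replay them each frame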
-matt
Display lists were removed because with OpenGL 3+ all vertex, texture and lighting data are stored on the graphics card, in what is called retained-mode rendering (the data is retained, allowing you to send a single command to the card to draw a mesh, rather than sending vertex data to the card every frame). A major bottleneck in computer graphics is the data bandwidth between RAM and GPU RAM. By generating meshes once and retaining that data, we can transform them using homogeneous transform matrices and draw them easily. This effectively reduces the bottleneck, with the drawback of longer loading times.
Immediate mode, however (pre-3.0), uses massive amounts of graphics bandwidth to send vertex data every frame, pre-transformed, with recalculated normals, etc.
The problems with this approach are twofold:
1) excessive bandwidth use and too much GPU idle time;
2) excessive use of CPU time for calculations that could be done in parallel on 100+ cores on the GPU.
The simple solution to this is retained mode.
With retained mode, display lists are no longer necessary. Hence their removal from the core profile.
Immediate mode is still very good for learning the theory of computer graphics (and it's loads of fun, to boot); it just suffers in terms of maximum possible performance.
VBOs and VAOs may be less intuitive at first, but in terms of speed they are far superior.
There are several easy-to-understand OpenGL 3.0 tutorials on the internet. Once you have OpenGL 2.0 down, you should consider moving on to 3.0+, as it allows you to build very fast 3D graphics applications.
While Matthew Hall has a good answer and covers most things, there are a few things I'll add.
If you look at what's been deprecated, you'll see it's a lot of client side and fixed functionality. So it's obvious that they're trying to move people away from client side centered code and have people do everything possible server side on the GPU instead.
When it comes to which context to use, well, that's up to you. Though if performance is a major concern, then 3.x is probably the way to go. I personally definitely want to learn OpenGL 3.x, but I doubt I'll be giving up 1.x/2.x. It's just so much easier to put together a quick app with what's available in a 1.x or 2.x context.
If you want a list of what's been deprecated, download the 3.0 specification and look under "The Deprecation Model"
A note from the future: the latest DirectX, Metal, and Vulkan APIs have command buffers and command queues, which let you record commands on the CPU and then send them to the GPU to be executed there. So perhaps display lists were not such an old-fashioned idea after all. In fact, compiling a display list is orthogonal to the use of shaders and VBOs, and display lists can improve performance further... I wonder whether a Vulkan or Metal to OpenGL translator could use display lists for command buffers...
Because VBOs (vertex buffer objects) are much more efficient and can do everything display lists can do. They're not really any more complex, either, just a little different. Unless you're already more familiar with the old style glBegin/glEnd stuff, you're probably best off learning about buffers from the get go.