How come these "Bible" type books for programming which are supposed to be comprehensive don't mention anything about programming sound or graphics?
My copy of The C Programming Language, second edition, by Brian Kernighan and Dennis Ritchie came in the mail today. I thought the book was supposed to be comprehensive, but the first thing I noticed is that it is very thin, and it doesn't really seem to talk about much beyond the basic stuff we have already learned.
So then I thought I would look in my C++ programming book by Bjarne Stroustrup, which is a lot thicker, to see what IT says about graphics and sound, and at least going by the table of contents, in 1200-plus pages there doesn't seem to be anything on graphics or sound either.
Are graphics and sound some kind of extra subject matter that requires specialty books, or books on specific libraries, or something?
Because surely, there must be some foundational stuff on sound and graphics in the core language itself, isn't there?
If not, where does one go to start learning about programming graphics and sound?
Sound and graphics are not part of the C or C++ programming languages. The C and C++ standards define only core languages that must be extended to provide other services.
C and C++ are, by and large, abstract programming languages. They specify a few features for input and output, which are subject to interpretation and implementation choices, but they do not specify interactions with devices, including sound systems or graphics displays. They specify features for computing with data and minimal provisions for interactions and storage.
The C and C++ standards define core languages. These core languages are extended in various ways, including:
Providing external libraries that supply any kind of service, including sound and graphics features.
Using volatile objects to interact with machinery, including devices connected to a processor (see the sketch after this list).
Building more features into the language by supporting additional keywords or language constructs in a compiler.
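As an illustration of the second point, here is a minimal sketch of memory-mapped I/O through a volatile object. The register address and pin mask are hypothetical; real values come from a particular device's datasheet, and on a desktop machine an ordinary variable is substituted so the snippet actually runs:

#include <cstdint>
#include <iostream>

// Toggle an output pin by writing to a memory-mapped GPIO register.
// On real hardware the pointer would come from a (hypothetical) datasheet address:
//   auto* gpio_out = reinterpret_cast<volatile std::uint32_t*>(0x40020014);
void toggle_pin(volatile std::uint32_t* gpio_out, std::uint32_t mask) {
    *gpio_out ^= mask;   // volatile forces a real load and store on every call
}

int main() {
    std::uint32_t fake_register = 0;        // stand-in for the device register
    toggle_pin(&fake_register, 1u << 5);
    std::cout << std::hex << fake_register << '\n';   // prints 20
}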
C++ (and C) does not have graphics libraries as part of its Standard Library, much to the chagrin of many novice programmers.
The reasons why C++ presently lacks a Graphics Library are varied. There was a proposal for a 2D graphics library to be added to the C++ standard, but it failed multiple times to gain approval and is now more-or-less defunct.
There are some write-ups on Reddit that go into the details of what went wrong, which I'll link below, but I'll summarize the basic issues:
First, the proposal was for functionality that, intrinsically, not all Architectures + Operating Systems could support. Any viable Graphics API needs to have some basic components that can be backed by the Operating System: things like a Surface (something to draw on), a Display, and commands for drawing arbitrary images on that surface and presenting them to the display. Lots of Operating Systems have that: Windows, Linux, and macOS, for example. But many more don't, and trying to build an API that an Operating System could render entirely invalid by failing to provide the necessary functionality was troublesome. The philosophy of the Standard Library is that it provides functionality to all compilers that correctly implement it, and a feature that couldn't make that guarantee was inherently unsuitable.
The second problem is that there was virtually no agreement on how the library should be interfaced with. A basic 2D Graphics API like that provided by Java, Python, or (some variants of) BASIC could be implemented in a wide variety of ways, each with quite substantial upsides and downsides, and the authors of the proposal didn't seem to have a coherent vision of how it should be implemented.
In particular, modern graphics is largely a matter of heterogeneous computing, whether in the way that DirectX 11/OpenGL 4.x implement their APIs (more so in the former case than the latter...) or in the way that DirectX 12/Vulkan try to get "as close to the metal as possible", and the C++ Standard Library lacks a lot of the tools needed to handle that kind of functionality.
Tools like std::future might have been sufficient, but given my experience with graphics programming I'm skeptical they would have been enough, and even if they were, you then have the question of whether you want a Graphics library in your Standard Library that's implemented in such obtuse terms. That sort of problem has held back the Networking proposal for years, and even Networking has been waiting on other library features meant to support it, like the Executors proposal, which the Networking library is pretty much dependent on.
There are a number of other ways things went wrong, but I'll leave it at those two big ones, since not only do they explain why this specific proposal didn't go anywhere, they also explain why a lot of other ambitious proposals to do the same didn't go anywhere either, including many proposals to add Audio libraries to C++.
So what can you do instead?
For Graphics, you need two things (at minimum):
An API for getting Windows/Surfaces/etc. to display on
An API for generating the images that are displayed
The former can be handled by your Operating System's native windowing API, but you can also use something like Qt, GLFW, SDL, or any other API you prefer that's designed for cross-platform compatibility.
The latter can be handled by a good graphics API like OpenGL, or (if you're developing for a Windows environment) DirectX 11 or earlier. You could also use Vulkan or DirectX 12 if you want to get familiar with the cutting-edge technology, although I'll warn you now that both are far more complex than their predecessors because they abstract only the barest of basics, so be aware that the learning curve is much steeper for those.
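To make those two pieces concrete, here is a minimal sketch that assumes GLFW for the windowing half and legacy OpenGL for the drawing half (you would need GLFW installed and linked, together with your platform's OpenGL library); it only opens a window and clears it to a colour every frame:

#include <GLFW/glfw3.h>   // window/context creation; pulls in the OpenGL headers

int main() {
    if (!glfwInit())
        return 1;

    // The "surface/display" half: ask the OS (via GLFW) for a window with a GL context.
    GLFWwindow* window = glfwCreateWindow(800, 600, "Sketch", nullptr, nullptr);
    if (!window) {
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(window);

    while (!glfwWindowShouldClose(window)) {
        // The "generate the image" half: here just a solid clear colour.
        glClearColor(0.2f, 0.3f, 0.4f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);

        glfwSwapBuffers(window);   // present the finished image to the display
        glfwPollEvents();          // let the OS deliver input and close events
    }

    glfwTerminate();
    return 0;
}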
For Audio handling, I don't have any recommendations I can personally vouch for (my experience is more limited on that front) but there's quite a few APIs that are specifically designed for that, so just do a little research into what's available.
References:
https://www.reddit.com/r/cpp/comments/89q6wr/sg13_2d_graphics_why_it_failed/
https://www.reddit.com/r/cpp/comments/89we31/2d_graphics_a_more_modest_proposal/
Putting it simply (comment from @NathanOliver): C and C++ have no concept of sounds or graphics.
As you've guessed, graphics and sound are extra subject matter that require other types of books.
Most of these things are abstracted away from the hardware, and are usually OS-dependent.
Take, for example, /dev/dsp on Linux. It's part of OSS (the Open Sound System), an abstraction that allows you to play audio. You can interact with it in standard C or C++; it just won't work on all platforms.
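As a rough sketch of what that looks like (assuming an OSS-style /dev/dsp actually exists on the machine, which on modern Linux usually means an ALSA/OSS emulation layer), you open the device file and write raw PCM samples to it with ordinary POSIX calls:

#include <fcntl.h>           // open
#include <unistd.h>          // write, close
#include <sys/ioctl.h>
#include <sys/soundcard.h>   // OSS ioctls; only present where OSS is available
#include <cmath>
#include <cstdint>
#include <vector>

int main() {
    int fd = open("/dev/dsp", O_WRONLY);
    if (fd < 0) return 1;                        // no OSS device on this system

    int fmt = AFMT_S16_LE, channels = 1, rate = 8000;
    ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);          // 16-bit little-endian samples
    ioctl(fd, SNDCTL_DSP_CHANNELS, &channels);   // mono
    ioctl(fd, SNDCTL_DSP_SPEED, &rate);          // 8 kHz sample rate

    // One second of a 440 Hz sine wave as raw PCM.
    std::vector<std::int16_t> samples(rate);
    for (int i = 0; i < rate; ++i)
        samples[i] = static_cast<std::int16_t>(
            32767 * std::sin(2.0 * M_PI * 440.0 * i / rate));

    write(fd, samples.data(), samples.size() * sizeof(std::int16_t));
    close(fd);
}

The program itself is plain C++ and compiles anywhere; it's the device file and the ioctls that only mean something where OSS is present.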
For some historical perspective, at least on C:
Once upon a time, the core C language did not even cover I/O to files. The core C language covered the syntax of the language, and that was it. If you wanted to do I/O to files, well, you could include <stdio.h> and call those functions... but they were just external functions in a library, you could use them or not, it wasn't like they were part of the language or anything. (You will probably find language in that copy of K&R you just got saying more or less what I've just said here.)
Now, when the first ANSI C Standard came out in 1989 or whenever it was, it did cover several of the then-standard libraries, so the functions in <stdio.h> (and the ones in <string.h>, and <math.h>, and several others) became a formal part of the language. But that was a pretty significant change.
But there had never been a <stdgraphics.h>, so there wasn't one to standardize. (And of course there still isn't.) And pretty much nobody was doing computer audio in the 1970's, so that had even less of a chance.
(Unix in those early days did have a nice, simple 2D graphics library, <plot.h>, and there might even be a few dinosaurs besides me still using it, but I don't think anyone ever considered trying to push it as a broader standard. Today's GNU libplot is a descendant of it.)
Basically, C never aspired to be a "platform" language like, say, Python. And it's now so well entrenched as a low-level, platform-independent, "systems" language that I'd say there's very little chance that any of these "higher level" functionalities will ever be added to it.
ISO C++ does have a sound and graphics (and input) Study Group:
SG13, HMI & I/O (Human/Machine Interface): Selected low-level output (e.g., graphics, audio) and input (e.g., keyboard, pointing) I/O primitives.
which is currently active again after a period of inactivity.
Audio is probably even more of a mine-field for standardisation than graphics (where, I note, nobody yet has mentioned motion video - see Codecs below). There are at least these levels of abstraction it could operate at (listed from low to high), depending on the application in question:
Raw PCM samples.
Datastreams suitable for audio Codecs (e.g. MPEG layer III, AAC).
MIDI - or other ways of instructing a sequencer on a note-by-note basis
Programmatic audio, e.g. SuperCollider
PCM Audio
Taking the first, this is possibly the most generic and portable. At the very minimum, it requires audio hardware (or, more commonly, a software abstraction) providing a double buffer or circular buffer into which output samples are written in real time to be output somewhere. There are lots of parameters here, such as sample rate, channel count, sample bit depth, latency, endianness, signedness, and whether a push or pull (event-driven) model is used to render buffers of data.
Low-latency audio for professional applications requires real-time threads (and thus, an operating system that provides them), and careful management of system resources.
Successful APIs include CoreAudio (macOS and iOS only), ASIO, DirectX and a whole bunch of other Windows APIs (professional software invariably uses ASIO), JACK, and ALSA.
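As an API-agnostic sketch of the pull model mentioned above (the callback shape below is made up; each of the real APIs listed here defines its own), the application's whole job is to fill each buffer of samples the audio system asks for, quickly enough to meet the latency deadline:

#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical pull-model render callback: a real audio API would call this
// from a (possibly real-time) thread whenever it needs the next buffer.
struct SineVoice {
    double phase = 0.0;
    double freq  = 440.0;    // Hz
    double rate  = 48000.0;  // sample rate negotiated with the device

    void render(float* out, std::size_t frames) {
        for (std::size_t i = 0; i < frames; ++i) {
            out[i] = static_cast<float>(std::sin(phase));
            phase += 2.0 * M_PI * freq / rate;  // no locks, no allocation, no I/O here
        }
    }
};

int main() {
    SineVoice voice;
    std::vector<float> buffer(512);   // stand-in for a buffer the API would own
    voice.render(buffer.data(), buffer.size());
}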
Codecs
Lots of them are proprietary and patent-encumbered. Various web standards bodies have had significant difficulties specifying them, and their rules are far less restrictive than ISO's. Not all implementations implement all of them.
MIDI
This is at least fairly standard (although the industry spent 25+ years getting around to replacing it). Twenty years ago you'd have been driving specialist synthesis hardware with this (pretty much every pre-smartphone-era phone and games console had one, mostly made by Yamaha), but these days the sequencer generally drives software synthesisers, and any decent one is proprietary, commercial software. No two implementations have ever sounded the same either, which makes them largely useless for portability.
Programmatic Audio
At this point, you'd be defining an entirely separate programming language.
Conclusions
Good luck trying to standardise any of these - the music software industry has failed repeatedly for decades in its attempts, with much laxer standards bodies.
There's a certain irony that almost all serious audio software is implemented in C++ - often because it lacks any kind of abstractions for audio of its own.
Some background on the variations of graphics support.
Historically, the only graphics computers supported were ASCII characters (search the internet for "ASCII art").
Graphics support was developed, but it came in two major flavors: bitmapped and vector. Some systems had a list of vectors (the mathematical kind) and drew those. Other graphics devices used pixels to display images. Even today, some graphics controllers allow you to define your own "bitmap" in reserved cells (but they don't support line drawing).
The graphics started out as monochrome: one foreground color and one background color, no shades in between. This was to reduce complexity and cost. Other features soon came into being: shades of monochrome and a brightness attribute. The graphics controllers were originally one bit per pixel (either on or off, the "off" being the background color). Graphics controllers then expanded to allow more bits per pixel, monochrome still being the most popular. You could have shades of gray and change the intensity. Some controllers also had bits for "blinking" and other attributes.
With the cost of H/W becoming less and less, graphics controllers started taking on more advanced features: color and bit-blit. Now you could have 4 bits of Red, 4 bits of Green, and 4 bits of Blue. This allowed for multiple colors and expanded shading when combined with the intensity bit. Graphics controllers started having their own memory and the capability of transferring bitmap data from the CPU's memory to the graphics memory area, often called bit-blit. The controllers advanced to allowing Boolean operations with the blitting (AND, OR, XOR, etc.).
Modern advanced graphics controllers are effectively separate computers from the CPU. Not only do they have their own memory, but they have cores that the CPU can use to perform processing in parallel. Many of these controllers have common algorithms implemented in hardware (such as screen rotation and collision detection). They have multiple buffers so that the CPU can draw into one buffer while the GPU displays another (which helps graphics speed and supports animation).
C and C++ are standards. That is, you should be able to compile a standard C language program for any platform that supports the standard. The issue with graphics is that there is no standard. Some graphics controllers only support text and bitmaps, not line drawing. Desktop PCs have varying degrees of graphics capability depending on the graphics board plugged into the system. So there is not much that can be standardized. Also, graphics technology is constantly changing (improving) at a faster rate than language standards are developed.
Lastly, let's talk about low-level programming. To get the most performance from graphics, the code needs to access the hardware directly, sometimes also exploiting features of the processor. Any graphics API placed into the language would have to be abstract to support graphics concepts, and probably not efficient because of that abstraction. The low-level programming of the graphics hardware would still exist for performance. Compiler writers are not graphics experts and would either use libraries or compile for the general case. There are so many combinations to support (as illustrated in the history section above).
Remember that C and C++ follow the principle of "you only pay for what you use". If I don't use any graphics on my embedded system, I should be able to have the compiler generate code without graphics support. These languages have a wider audience than other languages that ship with graphics support, like Java.
I have been wanting to make a game in OpenGL and C++ for a while now, and I would love some explanation of how exactly OpenGL works and what it is.
Can computer graphics be made without OpenGL? Most of the tutorials I have seen online show how to use OpenGL for even the most basic graphics drawing. Is it possible to interface directly with your GPU?
How does OpenGL work on different CPUs and operating systems? As far as I know, languages like C++ must be recompiled to run on, say, an ARM processor. Is this not the case for GPUs in general?
If you can indeed make graphics without OpenGL, does anybody still do this? How much work and effort does OpenGL save in general, and how complex are the systems that OpenGL handles for us?
Are there other libraries like OpenGL that are commonly used? If not, will new libraries eventually come and take its place, or is it perfect for the job and not going anywhere?
How exactly does it work and what is it?
OpenGL defines an interface that you as a programmer can use to develop graphics programs (an API). The interface is provided to you in the form of header files that you include in your project. It is meant to be cross-platform, so that you can compile code that uses OpenGL on different operating systems. The people who manage the OpenGL specification do not provide the implementation of the specified functionality; that is done by the OS and hardware vendors.
Can computer graphics be made without OpenGL?
Yeah, sure. You can e.g. calculate the whole image manually in your program and then call some OS-specific function to put that image on the screen (like BitBlt in Windows).
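For example, here is a hedged sketch of that idea: the image lives in an ordinary buffer you compute yourself, and the final OS-specific blit (BitBlt/StretchDIBits on Windows, an X11 image on Linux, and so on) is left out because it is not portable C++:

#include <cstdint>
#include <vector>

int main() {
    const int width = 320, height = 200;

    // A plain block of memory interpreted as 0x00RRGGBB pixels.
    std::vector<std::uint32_t> framebuffer(width * height);

    // "Calculate the whole image manually": a simple left-to-right red gradient.
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            std::uint32_t r = 255u * x / (width - 1);
            framebuffer[y * width + x] = r << 16;
        }

    // At this point you would hand framebuffer.data() to an OS-specific call
    // to actually display it; that step is what standard C++ cannot express.
    return 0;
}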
How does OpenGL work on different CPUs and operating systems?
Each OS will have its own implementation of the OpenGL specification that will usually call the hardware drivers. So let's say you have a machine running Windows with an Nvidia graphics card. If you run some program that calls glDrawElements, it will look like this:
your_program calls glDrawElements
which calls glDrawElements implementation written by people from Microsoft
which calls Nvidia drivers written by people from Nvidia
which operates the HW
If you can indeed make graphics without OpenGL, does anybody still do this?
Yeah, sure. Some people might want to implement their own rendering engine from the ground up (although that is a really hardcore thing to do).
Are there other libraries like OpenGL that are commonly used? If not, will new libraries eventually come and take its place, or is it perfect for the job and not going anywhere?
Sure. There is DirectX, which is maintained by Microsoft and targets only Windows platforms, and Vulkan, which can be seen as a successor to OpenGL.
As I understand it, GPU vendors defined a standard interface to be used by OS developers to communicate with their specific drivers. So DirectX and OpenGL are just wrappers for that interface. When OS developers decide to create a new version of a graphics API, GPU vendors expand their interface (new routines are faster and older ones are left for compatibility) and OS developers use this new part of the interface.
So, when it is said that GPU vendors' support for DirectX is better than for OpenGL, does it simply mean that GPU vendors primarily take into account Microsoft's future plans for the DirectX API and adjust the future development of this interface to those needs? Or are there some technical reasons behind this?
As I understand it, GPU vendors defined a standard interface to be used by OS developers to communicate with their specific drivers. So DirectX and OpenGL are just wrappers for that interface.
No, not really. DirectX and OpenGL are just specifications that define APIs. But a specification is nothing more than a document, not software. The OpenGL API specification is controlled by Khronos, the DirectX API specification is controlled by Microsoft. Each OS then defines a so called ABI (Application Binary Interface) that specifies which system level APIs are supported by the OS (OpenGL and DirectX are system level APIs) and what rules an actual implementation must adhere to, when being run on the OS in question.
The actual OpenGL or Direct3D implementation happens in the hardware's drivers (and in fact the hardware itself is part of the implementation as well).
When OS developers decide to create a new version of a graphics API, GPU vendors expand their interface
In fact it's the other way round: most graphics API specifications are laid out by the graphics hardware vendors. After all, they are close to where the rubber hits the road. In the case of Khronos, the GPU makers are part of its controlling group. In the case of DirectX, the hardware makers submit drafts to Microsoft and review the changes and suggestions Microsoft makes. But in the end each new API release reflects the common denominator of the capabilities of the next hardware generation in development.
So, when it is said that GPU vendors' support for DirectX is better than for OpenGL, does it simply mean that GPU vendors primarily take into account Microsoft's future plans for the DirectX API and adjust the future development of this interface to those needs?
No, it means that each GPU vendor implements its own version of OpenGL and its own Direct3D backend, which is where all the magic happens. However, OpenGL puts a lot of emphasis on backward compatibility and ease of transition to newer functionality, whereas Direct3D development is quick to cut ties with earlier versions. This also means that full-blown compatibility-profile OpenGL implementations are quite complex beasts. That's also the reason why recent versions of the OpenGL core profile did (overdue) work in cutting down support for legacy features; this reduction of API complexity is quite a liberating thing for developers. If you develop purely for a core profile it simplifies a lot of things; for example, you no longer have to worry about a plethora of internal state when writing a plugin.
Another factor is that for Direct3D there's exactly one shader compiler, which is not part of the driver infrastructure/implementation itself but gets run at program build time. OpenGL implementations, however, must each implement their own GLSL shader compiler, which complicates things. IMHO the lack of a unified AST or intermediate shader code is one of the major shortcomings of OpenGL.
There is not a 1:1 correspondence between the graphics hardware abstraction and graphics APIs like OpenGL and Direct3D. WDDM, which is Windows Vista's driver model, defines things like common scheduling, memory management, etc. so that DirectX and OpenGL applications work interoperably, but very little of the design of DirectX, OpenGL or GPUs in general has to do with this. Think of it like the kernel: nobody creates a CPU specifically to run it, and you do not have to re-compile the kernel every time a new iteration of a processor architecture comes out that adds a new subset of instructions.
Application developers and IHVs (GPU vendors, as you call them) are the ones who primarily deal with changes to GPU architecture. It may appear that the operating system has more to do with the equation than it actually does because Microsoft (more so) and Apple--who both maintain their own proprietary operating systems--are influential in the design of DirectX and OpenGL. These days OpenGL closely follows the development of commodity desktop GPU hardware, but this was not always the case - it contains baggage from the days of custom SGI workstations and lots of things in compatibility profiles have not been hardware native on desktop GPUs in decades. DirectX, on the other hand, has always followed desktop hardware. It used to be if you wanted an indication of where desktop GPUs were headed, D3D was a good marker.
OpenGL is arguably more complicated than DirectX because until recently it never let go of anything, whereas DirectX radically redefined the API and stripped legacy support with every iteration. Both APIs have settled down in recent years, but D3D still maintains a bit of an edge considering it only has to be implemented on a single platform and Microsoft writes the one and only shader compiler. If anything, the shader compiler and minimal feature set (void of legacy baggage) in D3D is probably why you get the impression that vendors support it better.
With the emergence of AMD Mantle, the desktop picture might change again (think back to the days of 3Dfx and Glide)... it certainly goes to show that OS developers have very little to do with graphics API design. NV and AMD both have proprietary APIs on the PS3, GameCube/Wii/WiiU, and PS4 that they have to implement in addition to D3D and OpenGL on the desktop, so the overall picture is much broader than you think.
As you probably know, C++ has no standard graphics library. Most games use DirectX or OpenGL.
But how do those libraries actually work? In other words, how can the third-party libraries draw graphics if there's no mechanism for it in C++? Are they just written in another language?
Specifically DirectX and OpenGL work by calling the operating system and/or the video hardware driver. The driver, in turn, does the actual drawing by interacting with the graphical device. The exact details of interaction vary from one video card to another.
Back in the DOS days, C++ programmers could work with the hardware directly. This was accomplished in two ways. First, by writing to/reading from a special memory block ("framebuffer") where the pixels or text were stored. It was a span of memory at a known address, so to work with it you had to cast an integer constant to a pointer and work with that pointer. A purely C++ mechanism. The other way of interaction was reading from/writing to I/O ports. Now, this is a mechanism that is not directly supported by C, unless you count inline assembly or compiler intrinsics. There were two library functions that would wrap these two operations (literally, wrappers around the two CPU instructions IN and OUT), but most programmers would just use a one-line inline assembly snippet instead.
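For illustration only, here is what the first mechanism looked like in VGA mode 13h (320x200, one byte per pixel at segment A000). It is a sketch for an old 16-bit real-mode DOS compiler such as Turbo C++ (far pointers are a compiler extension, and entering mode 13h via BIOS interrupt 0x10 is omitted); as the next paragraphs explain, it will not work on a modern protected-mode OS:

// DOS real mode, VGA mode 13h: the framebuffer is simply memory at 0xA0000.
unsigned char far* vga = (unsigned char far*)0xA0000000L;   // segment A000, offset 0

void put_pixel(int x, int y, unsigned char color) {
    vga[y * 320 + x] = color;   // write a palette index straight into video memory
}

// Port I/O went through the inp()/outp() library wrappers (or inline assembly),
// for example outp(0x3C8, 0); to start programming the VGA palette registers.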
Even now, most video hardware interaction boils down to these two pathways - framebuffer and I/O ports (DMA and interrupts are typically not used for graphics). But we application-level programmers don't get to see those. This is driver stuff.
One modern caveat has to do with protected mode; in protected mode, C pointers are not the same as the underlying physical addresses. Simply typecasting 0xA0000 to a pointer won't get you to a framebuffer, even in kernel mode. However, kernel-level code (i.e., a driver) can request that the memory manager give it a pointer that corresponds to a specific physical address.
They will transfer their calls to the driver, which will send them to the graphics card.
DirectX and OpenGL operate at a pretty low level. That is, you say "draw a triangle, now draw a square". Programmers normally wrap these calls in C++ classes that provide higher-level functionality, such as "draw a model, draw a tree". This is known as an engine.
I'd recommend taking a look at the NeHe tutorials for more info: http://nehe.gamedev.net/
So, for example, a call to draw a triangle in (old-style, immediate-mode) OpenGL looks like this:
glBegin(GL_TRIANGLES);            // legacy immediate-mode drawing
glVertex3f(  0.0f,  0.0f, 0.0f);  // the three vertices of the triangle
glVertex3f( 10.0f, 10.0f, 0.0f);
glVertex3f(-10.0f, 10.0f, 0.0f);
glEnd();
And like I said, see the above link to the NeHe tutorials; they are all in C/C++.
Do the really big games also use OpenGL? Or are there some proprietary technologies out there which can scare OpenGL's pants off?
OpenGL and Direct3D both allow comparable access to the GPU.
The "really really really really really really really big games" use one of these, in addition to other non-graphical libraries, plenty of skilled programmers, artists, musicians, game designers, level designers and other staff to create those games "that cost billions to develop".
Feature-wise you can accomplish the same output with either API (OpenGL or DirectX). Several game engines abstract the underlying API from the developer resulting in games which can use either API and are potentially cross platform.
Some examples of this are most id Software games (Doom, Quake, etc.) and any games which use their engines. World of Warcraft also supports either Direct3D or OpenGL, as do several Steam/Valve games which run on Windows, Mac, and (at the time, rumored) Linux.
OpenGL and Direct3D are the heavy-hitters in the gaming world. Neither scares the pants off the other.
Note, however, that big game houses will use commercial game engines that hide these APIs for the most part.
Most PC games (and Xbox 360 games) use Direct3D, but some do use OpenGL.
You can find out more about Direct3D and download it all from Microsoft here...
http://msdn.microsoft.com/en-us/aa937791.aspx
Really big games use a graphics abstraction layer (as mentioned by basszero) since they have to target different platforms that have different APIs:
Xbox 360 : D3D9+
PS3 : libgcm
Vista/Win7: D3D9, D3D10, D3D11
XP : D3D9
OSX : OpenGL
The simple answer is that no, there's no direct alternative to OpenGL that's obviously superior. Direct3D is pretty nearly the only competitor of any kind, and while it's certainly competitive, it doesn't enjoy any major advantage.
At times, Direct3D has had something of an advantage in speed -- it's controlled by Microsoft, who could quickly modify the specification to take advantage of the latest graphics cards updates. At that time, OpenGL was controlled by a multi-vendor Architecture Review Board (ARB). Decisions about new versions of OpenGL took considerable time, and a fair number of vendors seemed more concerned about backward compatibility than taking full advantage of every new trick as quickly as hardware vendors invented them (and nVidia and ATI are sufficiently competitive that they do invent them, and quickly at that).
Since then, control of OpenGL has been turned over to Khronos Group. There's been some controversy about parts of what they've done with the specification (particularly deprecating a lot of features that quite a few people still use) but one thing is open to little question: they're now cranking out new revisions to the specification relatively quickly, so it provides access to the features of even the newest hardware.
Direct3D if you work for Microsoft.