How does a language expand itself? - c++

I am learning C++ and I've just started learning about some of Qt's capabilities to code GUI programs. I asked myself the following question:
How does C++, which previously had no syntax capable of asking the OS for a window or a way to communicate through networks (with APIs which I don't completely understand either, I admit), suddenly get such capabilities through libraries themselves written in C++? It all seems terribly circular to me. What C++ instructions could you possibly come up with in those libraries?
I realize this question might seem trivial to an experienced software developer but I've been researching for hours without finding any direct response. It's gotten to the point where I can't follow the tutorial about Qt because the existence of libraries is incomprehensible to me.

A computer is like an onion: it has many, many layers, from the inner core of pure hardware to the outermost application layer. Each layer exposes parts of itself to the next outer layer, so that the outer layer may use some of the inner layer's functionality.
In the case of e.g. Windows, the operating system exposes the so-called Win32 API for applications running on Windows. The Qt library uses that API to provide its own API to applications that use Qt. You use Qt, Qt uses Win32, Win32 uses lower levels of the Windows operating system, and so on until it's electrical signals in the hardware.

You're right that in general, libraries cannot make anything possible that isn't already possible.
But the libraries don't have to be written in C++ in order to be usable by a C++ program. Even if they are written in C++, they may internally use other libraries not written in C++. So the fact that C++ didn't provide any way to do it doesn't prevent it from being added, so long as there is some way to do it outside of C++.
At a quite low level, some functions called by C++ (or by C) will be written in assembly, and the assembly contains the required instructions to do whatever isn't possible (or isn't easy) in C++, for example to call a system function. At that point, that system call can do anything your computer is capable of, simply because there's nothing stopping it.
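For example, here is a minimal sketch (assuming GCC or Clang on x86-64 Linux; the registers and the syscall number 1 are specific to that platform) of a C++ function whose body is exactly such a piece of assembly. It asks the kernel to write bytes to a file descriptor, with no library in between; real programs would normally use the libc wrapper instead.

#include <cstddef>

// Raw wrapper around the Linux x86-64 `write` system call (number 1).
// The kernel expects the syscall number in rax and the arguments in rdi, rsi, rdx.
long raw_write(int fd, const void* buf, std::size_t len) {
    long ret;
    asm volatile("syscall"
                 : "=a"(ret)                          // result comes back in rax
                 : "0"(1L), "D"(fd), "S"(buf), "d"(len)
                 : "rcx", "r11", "memory");           // the syscall instruction clobbers rcx/r11
    return ret;
}

int main() {
    const char msg[] = "hello straight from a system call\n";
    raw_write(1, msg, sizeof msg - 1);                // 1 = standard output
}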

C and C++ have two properties that allow the extensibility the OP is talking about:
C and C++ can access arbitrary memory addresses.
C and C++ can call assembly code for instructions that are not part of the C or C++ language.
In the kernel, or on a basic non-protected-mode platform, peripherals like the serial port or disk drive are mapped into the memory map in the same way RAM is. Memory is a series of switches, and flipping the switches of a peripheral (like a serial port or disk drive) gets that peripheral to do useful things.
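As a purely illustrative sketch (the register address below is made up), "flipping the switches" of such a memory-mapped peripheral from C or C++ is just a store through a volatile pointer at the address where the hardware register lives:

#include <cstdint>

// Hypothetical address of a UART (serial port) transmit register; on real
// hardware this would come from the chip's datasheet or the board's memory map.
constexpr std::uintptr_t UART_TX_REG = 0x10000000;

void uart_put_char(char c) {
    volatile std::uint8_t* reg = reinterpret_cast<volatile std::uint8_t*>(UART_TX_REG);
    *reg = static_cast<std::uint8_t>(c);   // the peripheral, not RAM, reacts to this store
}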
In a protected-mode operating system, when one wants to access the kernel from userspace (say, to write to the file system or to draw a pixel on the screen), one needs to make a system call. C does not have an instruction for making a system call, but C can call assembler code which triggers the correct system call. This is what allows one's C code to talk to the kernel.
In order to make programming a particular platform easier, system calls are wrapped in more complex functions which may perform some useful function within one's own program. One is free to call the system calls directly (using assembler) but it is probably easier to just make use of one of the wrapper functions that the platform supplies.
There is another level of API that is a lot more useful than a raw system call. Take malloc, for example. Not only will it call the system to obtain large blocks of memory, but it will also manage that memory, doing all the bookkeeping on what is taking place.
Win32 APIs wrap some graphic functionality with a common platform widget set. Qt takes this a bit further by wrapping the Win32 (or X Windows) API in a cross platform way.
Fundamentally, though, a C compiler turns C code into machine code, and since the computer is designed to run machine code, you should expect C to be able to accomplish the lion's share of what a computer can do. All the wrapper libraries do is the heavy lifting for you, so that you don't have to.

Languages (like C++11) are specifications, on paper, usually written in English. Look inside the latest C++11 draft (or buy the costly final spec from your ISO vendor).
You generally use a computer with some language implementation. (You could in principle run a C++ program without any computer, e.g. with a team of humans interpreting it by hand; that would be unethical and inefficient.)
Your C++ implementation generally works above some operating system and communicates with it (using some implementation-specific code, often in some system library). Generally that communication is done through system calls. Look for instance into syscalls(2) for a list of the system calls available on the Linux kernel.
From the application's point of view, a syscall is triggered by an elementary machine instruction, like SYSCALL on x86-64 (or SYSENTER on 32-bit x86), with some conventions (the ABI).
On my Linux desktop, the Qt libraries sit above the X11 client libraries, which communicate with the X11 server Xorg through the X Window System protocols.
On Linux, use ldd on your executable to see the (long) list of library dependencies. Use pmap on your running process to see which ones are "loaded" at runtime. BTW, on Linux your application is probably using only free software; you could study its source code (from Qt, to Xlib, libc, ... down to the kernel) to understand more of what is happening.

I think the concept you are missing is system calls. Each operating system provides an enormous amount of resources and functionality that you can tap into to do low-level operating system related things. Even when you call a regular library function, it is probably making a system call behind the scenes.
System calls are a low-level way of making use of the power of the operating system, but can be complex and cumbersome to use, so are often "wrapped" in APIs so that you don't have to deal with them directly. But underneath, just about anything you do that involves O/S related resources will use system calls, including printing, networking and sockets, etc.
In the case of Microsoft Windows, the GUI is actually written into the kernel, so there are system calls for making windows, painting graphics, etc. In other operating systems the GUI may not be a part of the kernel, in which case, as far as I know, there wouldn't be any system calls for GUI-related things, and you could only work at an even lower level with whatever low-level graphics and input-related calls are available.

Good question. Every new C or C++ developer has this in mind. I am assuming a standard x86 machine for the rest of this post. If you are using the Microsoft C++ compiler, open Notepad and type this (name the file Test.c):
int main(int argc, char **argv)
{
    return 0;
}
Now compile this file (using the developer command prompt): cl Test.c /FaTest.asm
Now open Test.asm in your notepad. What you see is the translated code: C/C++ is translated to assembler. Do you get the hint?
_main PROC
push ebp
mov ebp, esp
xor eax, eax
pop ebp
ret 0
_main ENDP
C/C++ programs are designed to run on the metal, which means they have access to the lower-level hardware and can exploit its capabilities more easily. Say I am going to write a C library function, getch(), for an x86 machine.
Depending on the assembler, I would type something like this:
_getch proc
    xor AH, AH
    int 16h
    ; AL contains the keycode (AX is already there - so just return)
    ret
_getch endp
I run it over with an assembler and generate a .OBJ - Name it getch.obj.
I then write a C program (I don't #include anything):
extern char getch();

int main(void)
{
    getch();
    return 0;
}
Now name this file - GetChTest.c. Compile this file by passing getch.obj along. (Or compile individually to .obj and LINK GetChTest.Obj and getch.Obj together to produce GetChTest.exe).
Run GetChTest.exe and you would find that it waits for the keyboard input.
C/C++ programming is not just about the language. To be a good C/C++ programmer you need to have a good understanding of the type of machine the code runs on. You will need to know how memory management is handled, how the registers are structured, etc. You may not need all this information for regular programming, but it helps immensely. Apart from the basic hardware knowledge, it certainly helps if you understand how the compiler works (i.e., how it translates), which can enable you to tweak your code as necessary. It is an interesting package!
Both languages support the __asm keyword, which means you can mix in assembly language code too. Learning C and C++ will make you a better-rounded programmer overall.
It is not necessary to always link with assembler; I mentioned it because I thought it would help you understand better. Mostly, such library calls make use of system calls / APIs provided by the operating system (the OS in turn does the hardware interaction).

How does C++ ... suddenly get such capabilities through libraries written in C++ themselves?
There's nothing magical about using other libraries. Libraries are simply big bags of functions that you can call.
Consider writing a function like this yourself:
#include <string>

void addExclamation(std::string &str)
{
    str.push_back('!');
}
Now if you include that file you can write addExclamation(myVeryOwnString);. Now you might ask, "how did C++ suddenly get the capability to add exclamation points to a string?" The answer is easy: you wrote a function to do that then you called it.
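To make the "bag of functions" idea concrete, here is a minimal sketch of the other side: a file that only sees a declaration of addExclamation and links against the compiled definition, exactly as your code only sees Qt's headers and links against the Qt libraries.

// main.cpp
#include <iostream>
#include <string>

void addExclamation(std::string &str);     // declaration only; the definition lives in
                                           // another translation unit or a compiled library

int main() {
    std::string myVeryOwnString = "Hello";
    addExclamation(myVeryOwnString);       // calls code somebody else compiled
    std::cout << myVeryOwnString << '\n';  // prints "Hello!"
}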
So to answer your question about how C++ can get capabilities to draw windows through libraries written in C++, the answer is the same. Someone else wrote function(s) to do that, and then compiled them and gave them to you in the form of a library.
The other questions answer how the window drawing actually works, but you sounded confused about how libraries work so I wanted to address the most fundamental part of your question.

The key is that the operating system exposes an API and a detailed description of how this API is to be used.
The operating system offers a set of APIs with calling conventions.
The calling convention defines how a parameter is passed to the API, how results are returned, and how the actual call is executed.
Operating systems and the compilers that create code for them play nicely together, so you usually do not have to think about it; you just use it.
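For instance, here is a minimal sketch (Windows is assumed, and error handling is omitted) of calling one such OS API directly from C++. The declaration that <windows.h> supplies carries the documented calling convention (WINAPI, i.e. __stdcall), and the linker resolves the call against the system's user32 library:

// Calling an OS-provided function is, from C++'s point of view, just a function call.
#include <windows.h>

int main() {
    MessageBoxA(nullptr, "Hello from the Win32 API", "Demo", MB_OK);
    return 0;   // link against user32 when building
}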

There is no need for a special syntax for creating windows. All that is required is that the OS provides an API to create windows. Such an API consists of simple function calls for which C++ does provide syntax.
Furthermore C and C++ are so called systems programming languages and are able to access arbitrary pointers (which might be mapped to some device by the hardware). Additionally, it is also fairly simple to call functions defined in assembly, which allows the full range of operations the processor provides. Therefore it is possible to write an OS itself using C or C++ and a small amount of assembly.
It should also be mentioned that Qt is a bad example, as it uses a so-called meta-object compiler to extend C++'s syntax. This is, however, not related to its ability to call into the APIs provided by the OS to actually draw or create windows.

First, there's a little misunderstanding, I think:
How does C++, which previously had no syntax capable of asking the OS for a window or a way to communicate through networks
There is no special syntax for doing OS operations; it's a question of semantics.
suddenly get such capabilities through libraries written in C++ themselves
Well, the operating system is written mostly in C. You can use shared libraries (.so, .dll) to call external code. Additionally, the operating system code can register system routines as syscalls or interrupt handlers, which you can invoke using assembly. Those shared libraries often just make the system calls for you, so you are spared from using inline assembly.
Here's the nice tutorial on that: http://www.win.tue.nl/~aeb/linux/lk/lk-4.html
It's for Linux, but the principles are the same.
How does the operating system perform operations on graphics cards, network cards, etc.? It's a very broad topic, but mostly you need to access interrupts, ports, or write some data to a special memory region. Since those operations are protected, you need to go through the operating system anyway.

In an attempt to provide a slightly different view to other answers, I shall answer like this.
(Disclaimer: I am simplifying things slightly, the situation I give is purely hypothetical and is written as a means of demonstrating concepts rather than being 100% true to life).
Think of things from the other perspective, imagine you've just written a simple operating system with basic threading, windowing and memory management capabilities. You want to implement a C++ library to let users program in C++ and do things like make windows, draw onto windows etc. The question is, how to do this.
Firstly, since C++ compiles to machine code, you need to define a way to use machine code to interface with C++. This is where functions come in: functions accept arguments and give return values, thus providing a standard way of transferring data between different sections of code. They do this by establishing something known as a calling convention.
A calling convention states where and how arguments should be placed in memory so that a function can find them when it gets executed. When a function gets called, the calling function places the arguments in memory and then asks the CPU to jump over to the other function, where it does what it does before jumping back to where it was called from. This means that the code being called can be absolutely anything and it will not change how the function is called. In this case however, the code behind the function would be relevant to the operating system and would operate on the operating system's internal state.
So, many months later, you've got all your OS functions sorted out. Your user can call functions to create windows and draw onto them; they can make threads and all sorts of wonderful things. Here's the problem though: your OS's functions are going to be different from Linux's functions or Windows' functions. So you decide you need to give the user a standard interface so they can write portable code. Here is where Qt comes in.
As you almost certainly know, Qt has loads of useful classes and functions for doing the sorts of things that operating systems do, but in a way that appears independent of the underlying operating system. The way this works is that Qt provides classes and functions that are uniform in the way they appear to the user, but the code behind the functions is different for each operating system. For example, Qt's QApplication::closeAllWindows() would actually be calling each operating system's specialised window-closing function depending on the version used. On Windows it would most likely call CloseWindow(hwnd), whereas on an OS using the X Window System it would potentially call XDestroyWindow(display, window).
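A purely hypothetical sketch of that layering (this is not Qt's real source, just an illustration of compile-time dispatch to whichever native API is present):

#if defined(_WIN32)
  #include <windows.h>
#else
  #include <X11/Xlib.h>
#endif

// One portable-looking operation, two platform-specific bodies.
struct NativeWindow {
#if defined(_WIN32)
    HWND handle;
#else
    Display* display;
    Window   window;
#endif
};

void destroyNativeWindow(const NativeWindow& w) {
#if defined(_WIN32)
    DestroyWindow(w.handle);                 // Win32 call
#else
    XDestroyWindow(w.display, w.window);     // Xlib call
#endif
}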
As is evident, an operating system has many layers, all of which have to interact through interfaces of many varieties. There are many aspects I haven't even touched on, but to explain them all would take a very long time. If you are further interested in the inner workings of operating systems, I recommend checking out the OS dev wiki.
Bear in mind though that the reason many operating systems choose to expose interfaces to C/C++ is that they compile to machine code, they allow assembly instructions to be mixed in with their own code and they provide a great degree of freedom to the programmer.
Again, there is a lot going on here. I would like to go on to explain how libraries like .so and .dll files do not have to be written in C/C++ and can be written in assembly or other languages, but I feel that if I add any more I might as well write an entire article, and as much as I'd love to do that I don't have a site to host it on.

When you try to draw something on the screen, your code calls some other piece of code which calls some other code (etc.) until finally there is a "system call", which is a special instruction that the CPU can execute. These instructions can be either written in assembly or can be written in C++ if the compiler supports their "intrinsics" (which are functions that the compiler handles "specially" by converting them into special code that the CPU can understand). Their job is to tell the operating system to do something.
When a system call happens, a function gets called that calls another function (etc.) until finally the display driver is told to draw something on the screen. At that point, the display driver looks at a particular region in physical memory which is actually not memory, but rather an address range that can be written to as if it were memory. Instead, however, writing to that address range causes the graphics hardware to intercept the memory write, and draw something on the screen.
Writing to this region of memory is something that could be coded in C++, since on the software side it's just a regular memory access. It's just that the hardware handles it differently.
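A purely illustrative sketch of that idea (the framebuffer address and layout would in reality be handed to you by the driver or OS, not hard-coded):

#include <cstddef>
#include <cstdint>

// Plot one pixel by writing to a linear framebuffer as if it were an ordinary array.
// The display hardware continuously scans this memory out to the screen.
void put_pixel(std::uint32_t* framebuffer, std::size_t pitch_in_pixels,
               std::size_t x, std::size_t y, std::uint32_t rgb) {
    framebuffer[y * pitch_in_pixels + x] = rgb;
}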
So that's a really basic explanation of how it can work.

Your C++ program is using the Qt library (also coded in C++). The Qt library will be using the Windows CreateWindowEx function (which was coded in C inside user32.dll). Or under Linux it may be using Xlib (also coded in C), but it could just as well be sending the raw bytes that, in the X protocol, mean "Please create a window for me".
Related to your catch-22 question is the historical note that “the first C++ compiler was written in C++”, although actually it was a C compiler with a few C++ notions, enough so it could compile the first version, which could then compile itself.
Similarly, the GCC compiler uses GCC extensions: it is first compiled with an existing compiler, and that build is then used to recompile itself. (GCC build instructions)

The way I see it, this is actually a compiler question.
Look at it this way: you write a piece of code in assembly (you could do it in any language) which translates your newly invented language (let's call it Z++) into assembly. For simplicity, call this program a compiler (it is a compiler).
Now you give this compiler some basic features, so that you can use ints, strings, arrays, etc. In fact, you give it enough abilities that you can write the compiler itself in Z++. Now you have a compiler for Z++ written in Z++; pretty neat, right?
What's even cooler is that now you can add abilities to that compiler using the abilities it already has, thus expanding the Z++ language with new features by building on the previous ones.
For example, once you have written enough code to draw a pixel in any color, you can build on it in Z++ to draw anything you want.

The hardware is what allows this to happen. You can think of the graphics memory as a large array (consisting of every pixel on the screen). To draw to the screen you can write to this memory using C++ or any language that allows direct access to that memory. That memory just happens to be accessible by or located on the graphics card.
On modern systems, accessing the graphics memory directly would require writing a driver because of various restrictions, so you use indirect means: libraries that create a window (really just an image like any other image) and then write that image to the graphics memory, which the GPU then displays on screen. Nothing has to be added to the language except the ability to write to specific memory locations, which is what pointers are for.

Related

How does the C++ standard library work behind the scenes?

This question has been bothering me so much for the past couple of days. I was wondering how the standard library works, in terms of functionality. I couldn't find an answer anywhere, even by checking the source code provided by the LLVM compiler which is, for a beginner like me, a really complicated piece of code.
What I'm basically trying to understand here is how the C++ standard library works. For example, let's take the fstream header, which consists of a bunch of functions that help to write to and read from files.
How does it work? Does it use the OS-specific API (since the library is cross-platform), or what? And, if the standard library can do it, shouldn't I be able to mess with files as well without using the standard fstream header (which, in my experience, I can't do)?
I apologize if my questions are unclear since I'm not a native English speaker: feel free to modify this text so as to make it clearer.
Does it use the OS specific API (since the library is cross platform), or what?
At some point, the OS specific API is used. The fstream implementation does not necessarily call an OS function directly. It might use other classes, which call functions inherited from C, etc., but eventually the call chain will lead to an OS call. (Yes, the details are often too complicated for an intermediate programmer to follow. So, as a self-described beginner, your findings are not surprising.)
The library is cross-platform in the sense that on your end (the C++ programmer), the interface is the same regardless of platform. It is not, however, the same library on every platform. Each platform has its own library, exposing the same interface on the C++ side, but making use of different OS calls. (In fact, the same platform might have multiple standard libraries, as the library implementation is provided by your toolchain, not by the standards committee.)
And, if the standard library can do it, aren't I supposed to be able to mess with some files as well without calling the standard fstream file (which to my experience I can't do)?
Yes, you are allowed to. Apparently, you have not been able to yet, but with some practice and guidance you should be able to. Everything in the standard library can be recreated in your own code. The point of the standard library (and most libraries, for that matter) is to save you time, not to enable something that was otherwise unavailable. For example, you don't have to implement a file stream for every program you write; it's in the standard library so you can focus on more interesting aspects of your project.
A compiler is just a program which creates an executable file or library. You can use the compiler's default libraries to save time, or write your own. The default libraries communicate with the OS for file operations or memory allocation, and they provide simple standard classes that allow the developer to write a single piece of code which works on all target platforms supported by the compiler and the libraries. If you want to write your own, you have to write each function for every target OS.
The standard library is cross-platform in a sense that its interface does not change between platforms but its implementation does - or in practical terms - if you only use C++ and its standard library, you can write your code the same way for Linux / Windows / MacOS / Android / Whatever and if you find a C++ compiler for one of those platforms that supports the language features you used, you will be able to compile your code for that platform without rewriting anything.
So while you can use std::vector or std::fstream or any other feature in the library independently of the platform you're writing for and expect the function definitions, type names, etc. to look the same, you cannot expect the executable which you compiled for PC with Windows 10 to run on a phone with Android. You cannot even expect the same executable to run on the same PC but with different system - that is what I mean by "the implementation is different"
There are two main reasons for this difference:
Processors with different architectures (x86-64 and ARM for example) use different instruction sets and as such the C++ source would need to be compiled to a completely different machine code to run properly
Computers with processors of the same architecture which have a different operating system have different ways of dynamically allocating memory, creating files, creating streams, writing to console, creating and scheduling threads etc. - which is part of the system functionality that you use via the standard library
If you really wanted to you could use HeapAlloc() instead of operator new() or CreateThread() instead of stdlib's std::thread but that would force you to both rewrite your program every time you wanted to compile it for something else than Windows and recompile it with the target platform's compiler (and by proxy learn its API). Standard library saves you from that trouble by abstracting away those system calls.
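A small sketch of that trade-off (the second variant assumes Windows; both snippets just start one thread and wait for it):

#include <thread>

void portable_way() {
    std::thread t([] { /* work */ });   // standard library: compiles on any conforming platform
    t.join();
}

#if defined(_WIN32)
#include <windows.h>

static DWORD WINAPI worker(LPVOID) { /* work */ return 0; }

void windows_only_way() {
    HANDLE h = CreateThread(nullptr, 0, worker, nullptr, 0, nullptr);  // ties the code to Win32
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
}
#endif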
As for the fstream in particular, here is what it uses internally on most PCs nowadays.
Basically, fstream, iostream and printf work on top of a kernel function, write(). When your code calls printf (we use printf as an example), it will eventually call write() to let the kernel handle the I/O. After that, write() returns, printf returns, and your code continues.
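For instance, on a POSIX system (Linux/Unix assumed) you can skip printf and iostream entirely and call that same primitive yourself:

#include <unistd.h>

int main() {
    const char msg[] = "sent straight through write()\n";
    ::write(STDOUT_FILENO, msg, sizeof msg - 1);   // the same call printf eventually reaches
    return 0;
}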
So if you really want to know how the printf works internally, you have to read the source code of the Kernel.
But you shouldn't do that for now.
For a beginner, do not try to go deeper before you have a basic picture of how a computer works. A computer is a project, just like a building, so the right way to learn it is level by level. First learn how to use brick and cement to put up the building; that is what you should do for now. What you shouldn't do is, the first time you ever pick up a brick, get curious about how bricks are produced and start focusing on brick-making instead; that is the wrong way to learn IT.
If you are learning C/C++, just learn it. Remember, learn it level by level. For now, knowing how to use printf is enough.

In C++, how does the system associate each of these objects with the window in which the program is executed?

Essentially, I am learning C++. Anyway, I have bought this book called C++ Primer 5th Edition, and in the first unit, when learning about istream and ostream, I came across this statement:
"Ordinarily, the system associates each of these objects with the window in which the program is executed."
Can someone please explain what this means, and how the system does this.
First, your book is wrong, or at least oversimplifying. For example, you could compile and run valid, standard C++ code on a Linux server (rented from some VPS provider) or on a supercomputer, and such a server has no screen and no windows. And the compiled C++ code that runs on servers in datacenters at Google or Facebook (probably behind the most common computations you use) has no "windows".
Or you could cross-compile some C++ code for a small microcontroller like an Arduino, which has no screen and no windows.
Then, even if you compile and run some C++ code on a laptop (running an ordinary operating system like Linux, Windows, macOS, Android...), there are many layers of software involved (amounting to many millions of lines of code).
If you want to understand more, read about display servers, terminal emulators, processes and about operating systems.
Your C++ implementation gives you (through its C++ standard library and many other layers of software) the abstraction of console streams. How these are really implemented is a complex topic. Windows are unknown to the C++14 standard (but consider using a GUI framework library like Qt if you need them).
If you have access to some Linux system, try to strace(1) a hello-world program in C++ and read the "tty demystified" page; you'll be surprised by the number of system calls involved (and your terminal emulator is much more complex than hello-world, and so is the display server).
PS. At Paris 6 university, a full-year course (of several hours per week) is needed to explain even the basics of the answer to your question.
Magic.
No, I'm serious. The C++ standard dictates that the compiler needs to provide a certain environment to the C++ program. This environment is rather abstract -- it is called the abstract machine.
Specific compilers decide to attach that abstract machine to particular behaviors on the actual machine they run on. How this happens is outside the C++ language.
In practice, what happens is that a given C++ executable is compiled to run in a given operating system (and usually on given kinds of compatible hardware).
The operating system knows how to load your executable into memory, and "wires up" the parts of the executable to its own services (or directly to hardware services) that provide features, like console output/input, memory allocation, threading, etc etc etc.
The compiler was written with the ability to write that executable in mind. The operating system was written with the ability to load that executable in mind.
In some cases, the "operating system" is firmware on a chip or motherboard, and "loading the executable" consists of initializing a ROM with the bits that come out of the compiler.
In more common cases, the "operating system" is a modern desktop operating system. It specifies an executable layout, and provides ways for such an executable to talk to the OS/kernel/screen/etc.
The compiler could have libraries (dynamic or not) that are written to help the program interact with the operating system of your computer, which it links to your program implicitly.
The operating system itself is going to be written, usually, in a mixture of machine code, assembly and higher level languages (like C, C++ or even Java). It will often be running in a "different mode" than client code is running in, and have access to a very different environment.
In short, Magic.
C++ does not specify how this happens, it just demands the compiler do it. These things are usually not fully implemented in C++.

Why does the chip control the choice of language?

I've asked the question before what language should I learn for embedded development. Most embedded engineers said c and c++ are a must, but also pointed out that it depends on the chip.
Can someone clarify? Is it a compiler issue or what? Do chips come with their own specific compilers (like a c compiler or c++ compiler) and that's why you have to use the language the compiler knows? Is it not possible to code and compile it elsewhere, then burn it to the chip directly in its compiled state? (I think I heard an acquaintance say something to this effect)
I'm not sure how this works, as clearly I don't know much about embedded systems or how they work. It's probably an easy answer for those of you who know.
Probably, they meant that some toolchains do not support C++. Yes, many chips and boards do come with their own toolchains. Different processors have different instruction sets, which means a different compiler (or, more specifically, a different backend). That doesn't mean you always have to relearn everything; many of these are based on GCC (often considered the most ported compiler). The final executable/image formats also vary, so you need a specific linker. Most likely, you will be (cross-)compiling the code for the chip on a "regular" computer, then burning it to the chip. However, that doesn't mean you can use a typical compiler and linker targeted towards a desktop operating system.
It "depends on the chip" in three possible ways:
Some very constrained architectures are not suited to C++, or at least C++ provides constructs not suited to such architectures so offers no benefit over C. Most 8 bit devices fall into this category, but by no means all; I have seen useful C++ code implemented on MegaAVR for example.
Some devices are not supported by a C++ compiler. For example Microchip's dsPIC/PIC24 compiler is C only (third-party tools may have C++ support).
The chip architecture is designed specifically for a particular language; for example INMOS Transputers invariably ran OCCAM.
As well as C and C++, other possibilities are assembler, Forth, Ada, Pascal and many others, but C is almost ubiquitous; few chip vendors will release a new architecture or device without a C compiler being available from day one. For other languages you will generally have to wait until a third party decides to develop one, and that wait may be forever for a niche architecture.
Is it not possible to code and compile it elsewhere, then burn it to the chip directly in its compiled state?
That is called cross-compilation or cross-development, and is the usual development method for embedded systems. Most embedded systems lack the OS, file, performance and memory resources to self-host a compiler, and most developers want the comfort of a sophisticated development environment with IDEs, debuggers etc. in a familiar user-oriented desktop OS.
I'm not sure how this works, as clearly I don't know much about embedded systems or how they work.
Get up-to-speed with some of these:
http://www.state-machine.com/arm/Building_bare-metal_ARM_with_GNU.pdf
http://www.eetimes.com/design/embedded
http://www.amazon.com/exec/obidos/ASIN/020179523X
http://www.amazon.com/Embedded-Systems-Firmware-Demystified-CD-ROM/dp/1578200997
Yes, there are many architectures for which a C compiler exists but a C++ compiler does not. The smaller and less fully-featured a processor you choose, the more likely this situation is to occur.
For embedded development, you almost always compile the code 'elsewhere', as you say, and then send it to the chip for execution/debugging. The process of compiling code for a different architecture than the compiler itself is built for is called 'cross-compiling'.
You are correct: chips have variations on compilers. Most/many modern chips have a gcc port; but not all.
The term 'embedded' is used to describe a vast range of hardware. Most embedded software engineering will consist of writing C/C++ code to produce a binary for a target microprocessor, but there are devices that you may work with that are not coded with compiled binary.
One example is a Programmable Logic Controller (PLC). These devices use a language called "Ladder Logic". It's a wonderful language. I have enjoyed working with it in the past.
Another thing you may encounter, as I have in the past, is devices that have interpreted BASIC emulators. Hopefully that is rare today.
C/C++ are a very good choice for firmware development, i.e. software that will run on an embedded CPU/microcontroller. In order to program the device properly, you will need to know both the language and the device architecture.
The same code probably will not work on different devices, so you have to learn the language and the device architecture.
Another option is FPGAs, which are not microcontrollers. FPGAs are devices with specialized cells capable of turning themselves into any type of synchronous circuit, including a microcontroller. FPGAs are programmed with hardware description languages, like Verilog and VHDL. The "compiled" (synthesized) version of the design is called gateware.
The HDLs are the same languages used for ASIC design as well. The path to properly learning those languages is long, so I recommend starting with C/C++ on a PIC from Microchip, which is a low-cost and widely accepted microcontroller.
If you intend to do FPGA development, the knowledge gained with C/C++/PIC will be helpful and important, because most FPGAs have an embedded CPU/microcontroller inside.
There is no direct scientific reason for it. In a lot of cases it has to do with the management and politics of the specific company.
Some companies are driven to create a turnkey system and force you to buy that system and pay for maintenance. It locks out individual developers, but there are many companies, and especially government agencies, that prefer this model because the support is often much better and you can often drive the direction of their products to suit your needs.
Other companies do not have the staff or the talent and outsource the solution and sometimes take whatever they can get. And you might end up with a one time developed tool that after the contractor leaves is never updated or fixed again, or if it is fixed it is a patch job by someone else. It takes money to make money, but if you run out of money before you can sell your product you still fail.
Sometimes you have companies that both have a staff that maintains their in-house must buy from them tool AND has individuals that also contribute to open tools like gcc.
Sometimes the politics or management in the company have individuals that have a strong opinion of how the world must be and only allow tools to be developed for a specific language. Or perhaps they are owned by or partner with or just like a company that has a specific language and this chip product came to be simply to support that language.
On top of all of this you have the very real technical problems of memory space, the quality and efficiency of the instruction set and how compiler friendly it is. Some architectures may be fine for assembler, but higher level compiled code chews up the limited memory resources too quickly.
GCC in particular has a lot of problems internally (not the people, but the software/source code itself). I challenge you to write a backend, even with the tutorials that are out there. A company requires specialised talent in order to create and then maintain a GCC backend year after year; otherwise you get dumped. If your chip architecture is not 32-bit or bigger, you are already fighting a losing battle with GCC: your chip architecture might be compiler-friendly, but just not friendly to the popular compilers' design.
In the near future, LLVM is going to shine as a cross compiler relative to GCC because it has not yet built up this internal bulk, and perhaps, because its internal guts are themselves a defined language/system, it may never suffer what has happened to GCC. As more folks get comfortable with LLVM we will see a number of architectures ported to it. The MSP430 backend was done specifically to demonstrate that you can add a target literally in an afternoon. Given a bit of motivation, an individual could have most of the targets we have ever heard of ported to LLVM. And you don't have to build a cross compiler; it is always a cross compiler. I only mention LLVM because the door is now open for targets that have suffered from bad tools to recover.
Some companies, microcontrollers in particular, can and will make the programming interface proprietary so that you must use their programming tool (and or hack it and take your chances with publishing those results and or a cat and mouse of them changing it to defeat you). And they may have only made tools for Windows leaving the linux and apple folks hanging in the wind. Or they make it so that the only binaries it will load are the ones generated by their tools, here again you may hack through the binary format allowing an alternate compiler, and they may or may not work to defeat you.
Despite the technical problems the biggest is the companies politics, management, marketing teams, and supply of or lack of talent in the engineering staff. The bottom line, follow the dollars not the technology or science to understand why this language is supported and not that, or the support for this language is good, bad, or marginal.
What language to learn as a result of all of this? Start with assembler on at least three different architectures. Then C, and then C++ if you feel you really need it. C and assembler are your primary languages for embedded (depending on your definition of embedded). No, we don't write everything in assembler; we write assembler mostly for initial boot code and to support C, for interrupt handling or special instructions that are needed but that the compiler cannot generate. There are places like microcontrollers where it may very well make sense to use assembler for various reasons, like tools, limited chip resources, etc. Even if you don't use assembler, knowing it makes you a much better high-level programmer.
You do need to decide what your definition of embedded is. Is it API and library calls for an application on a(n embedded) Linux system (indistinguishable from the same program/calls on a desktop system)? Or, at the other end of the spectrum, are you talking about a microcontroller with maybe 256 or 1024 bytes (not mega or giga, but bytes) of program space? Or something in the middle? The majority of the "embedded" folks out there are closer to API calls for applications on an operating system (an RTOS, Linux, WinCE, etc.) than to the deeply embedded end, so that means C, maybe C++ (always be able to fall back on C), while trying to avoid Python and other scripty languages that are resource hogs.
Some 8-bit parts cannot efficiently access data on a stack. Instead of using a stack to pass parameters, auto variables and parameters are statically allocated; typically, a linker allocates the automatic variables for main() at one end of memory, then allocates the variables for functions that are called only by main, then allocates the variables for functions that are called only by those functions, and so on. This will yield an optimal allocation fairly easily, subject to some caveats:
Recursion can only be supported by adding code to explicitly copy variables onto some sort of stack arrangement; in many compilers, it's simply not supported at all.
If a function looks like it "might" call another function, the linker will assume it can do so in all cases (e.g. it may be that when 'foo' calls 'bar', one of its parameters might always have a value such that 'bar' won't call 'boz', but the linker won't know that).
Any call to a function pointer with a certain signature will be regarded as a call to all functions with the same signature whose address is taken.
If the evaluation of more than one parameter to a function requires making additional function calls, additional temporary storage must generally be pessimistically allocated even if optimal placement of the parameter storage could have avoided that.
There are many types of C programs for which the above restrictions pose no problem at all, and many more for which they pose a nuisance but not a huge one (e.g. by adding dummy parameters or return values to ensure different classes of indirectly-called functions have different signatures). Unfortunately, the code generated by an C++ to C pre-compiler will almost always involve function pointers whose call graph cannot be reasonably divined, so using C++ on such a platform is apt to be difficult if not impossible.

Why are drivers and firmwares almost always written in C or ASM and not C++?

I am just curious why drivers and firmware are almost always written in C or assembly, and not C++.
I have heard that there is a technical reason for this.
Does anyone know this?
Lots of love,
Louise
Because, most of the time, the operating system (or a "run-time library") provides the stdlib functionality required by C++.
In C and ASM you can create bare executables, which contain no external dependencies.
However, since Windows does support the C++ stdlib, most Windows drivers are written in (a limited subset of) C++.
Also, when firmware is written in ASM, it is usually because either (A) the platform it is executing on does not have a C++ compiler or (B) there are extreme speed or size constraints.
Note that (B) hasn't generally been an issue since the early 2000's.
Code in the kernel runs in a very different environment than in user space. There is no process separation, so errors are a lot harder to recover from; exceptions are pretty much out of the question. There are different memory allocators, so it can be harder to get new and delete to work properly in a kernel context. There is less of the standard library available, making it a lot harder to use a language like C++ effectively.
Windows allows the use of a very limited subset of C++ in kernel drivers; essentially, those things which could be trivially translated to C, such as variable declarations in places besides the beginning of blocks. They recommend against use of new and delete, and do not have support for RTTI or most of the C++ standard library.
Mac OS X uses I/O Kit, which is a framework based on a limited subset of C++, though as far as I can tell more complete than that allowed on Windows. It is essentially C++ without exceptions and RTTI.
Most Unix-like operating systems (Linux, the BSDs) are written in C, and I think that no one has ever really seen the benefit of adding C++ support to the kernel, given that C++ in the kernel is generally so limited.
1) "Because it's always been that way" - this actually explains more than you think - given that the APIs on pretty much all current systems were originally written to a C or ASM based model, and given that a lot of prior code exists in C and ASM, it's often easier to 'go with the flow' than to figure out how to take advantage of C++.
2) Environment - To use all of C++'s features, you need quite a runtime environment, some of which is just a pain to provide to a driver. It's easier to do if you limit your feature set, but among other things, memory management can get very interesting in C++, if you don't have much of a heap. Exceptions are also very interesting to consider in this environment, as is RTTI.
3) "I can't see what it does". It is possible for any reasonably skilled programmer to look at a line of C and have a good idea of what happens at a machine code level to implement that line. Obviously optimization changes that somewhat, but for the most part, you can tell what's going on. In C++, given operator overloading, constructors, destructors, exception, etc, it gets really hard to have any idea of what's going to happen on a given line of code. When writing device drivers, this can be deadly, because you often MUST know whether you are going to interact with the memory manager, or if the line of code affects (or depends on) interrupt levels or masking.
It is entirely possible to write device drivers under Windows using C++ - I've done it myself. The caveat is that you have to be careful about which C++ features you use, and where you use them from.
Except for wider tool support and hardware portability, I don't think there's a compelling reason to limit yourself to C anymore. I often see complicated hand-coded stuff done in C that can be more naturally done in C++:
The grouping into "modules" of functions (non-general purpose) that work only on the same data structure (often called "object") -> Use C++ classes.
Use of a "handle" pointer so that module functions can work with "instances" of data structures -> Use C++ classes.
File scope static functions that are not part of a module's API -> C++ private member functions, anonymous namespaces, or "detail" namespaces.
Use of function-like macros -> C++ templates and inline/constexpr functions
Different runtime behavior depending on a type ID with either hand-made vtable ("descriptor") or dispatched with a switch statement -> C++ polymorphism
Error-prone pointer arithmetic for marshalling/demarshalling data from/to a communications port, or use of non-portable structures -> C++ stream concept (not necessarily std::iostream)
Prefixing the hell out of everything to avoid name clashes: C++ namespaces
Macros as compile-time constants -> C++11 constexpr constants
Forgetting to close resources before handles go out of scope -> C++ RAII
None of the C++ features described above cost more than the hand-written C implementations. I'm probably missing some more. I think the inertia of C in this area has more to do with C being mostly used.
Of course, you may not be able to use STL liberally (or at all) in a constrained environment, but that doesn't mean you can't use C++ as a "better C".
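As one concrete illustration of the RAII point above (a sketch, not production code): a file handle that closes itself on every exit path, replacing the C pattern of remembering fclose() before each return.

#include <cstdio>

class ScopedFile {
public:
    ScopedFile(const char* path, const char* mode) : f_(std::fopen(path, mode)) {}
    ~ScopedFile() { if (f_) std::fclose(f_); }        // runs on every return path
    ScopedFile(const ScopedFile&) = delete;           // no accidental double-close
    ScopedFile& operator=(const ScopedFile&) = delete;
    std::FILE* get() const { return f_; }
private:
    std::FILE* f_ = nullptr;
};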
The comments I run into as to why a shop is using C for an embedded system versus C++ are:
C++ produces code bloat.
C++ exceptions take up too much room.
C++ polymorphism and virtual tables use too much memory or execution time.
The people in the shop don't know the C++ language.
The only valid reason may be the last. I've seen C language programs that incorporate OOP, function objects and virtual functions. It gets very ugly very fast and bloats the code.
Exception handling in C, when implemented correctly, takes up a lot of room. I would say about the same as C++. The benefit to C++ exceptions: they are in the language and programmers don't have to redesign the wheel.
The reason I prefer C++ to C in embedded systems is that C++ is a stronger typed language. More issues can be found in compile time which reduces development time. Also, C++ is an easier language to implement Object Oriented concepts than C.
Most of the reasons against C++ are around design concepts rather than the actual language.
The biggest reason C is used instead of say extremely guarded Java is that it is very easy to keep sight of what memory is used for a given operation. C is very addressing oriented. Of key concern in writing kernel code is avoiding referencing memory that might cause a page fault at an inconvenient moment.
C++ can be used but only if the run-time is specially adapted to reference only internal tables in fixed memory (not pageable) when the run-time machinery is invoked implicitly eg using a vtable when calling virtual functions. This special adaptation does not come "out of the box" most of the time.
Integrating C with a platform is much easier, as it is easy to strip C of its standard library and keep control of memory accesses utterly explicit. Since it is also a well-known language, it is often the choice of kernel tool designers.
Edit: Removed reference to new and delete calls (this was wrong/misleading); replaced with more general "run-time machinery" phrase.
The reason that C, not C++, is used is NOT:
because C++ is slower, or
because the C runtime is already present.
It IS because C++ uses exceptions.
Most implementations of C++ language exceptions are unusable in driver code because drivers are invoked when the OS is responding to hardware interrupts. During a hardware interrupt, driver code is NOT allowed to use exceptions as that would/could cause recursive interrupts. Also, the stack space available to code while in the context of an interrupt is typically very small (and non growable as a consequence of the no exceptions rule).
You can of course use new(std::nothrow), but because exceptions in C++ are now ubiquitous, you cannot rely on any library code to use std::nothrow semantics.
It IS also because C++ gave up a few features of C :-
In drivers, code placement is important. Device drivers need to be able to respond to interrupts. Interrupt code MUST be placed in code segments that are "non paged", or permanently mapped into memory, as, if the code was in paged memory, it might be paged out when called upon, which will cause an exception, which is banned.
In C compilers that are used for driver development, there are #pragma directives that can control which type of memory functions end up in.
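For example, with the Microsoft toolchain used for Windows drivers, a sketch of such placement control looks roughly like this (the "PAGE" section name follows the usual WDK convention; treat the details as illustrative):

/* Put the following routine in the pageable "PAGE" code section; it must
   therefore never run at an interrupt level where page faults are forbidden. */
#pragma code_seg("PAGE")
void HandleSlowPathRequest(void)
{
    /* ... work that is allowed to be paged out ... */
}
#pragma code_seg()   /* revert to the default (non-paged) code section */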
As non paged pool is a very limited resource, you do NOT want to mark your entire driver as non paged: C++ however generates a lot of implicit code. Default constructors for example. There is no way to bracket C++ implicitly generated code to control its placement, and because conversion operators are automatically called there is no way for code audits to guarantee that there are no side effects calling out to paged code.
So, to summarise :- The reason C, not C++ is used for driver development, is because drivers written in C++ would either consume unreasonable amounts of non-paged memory, or crash the OS kernel.
C is very close to a machine independent assembly language. Most OS-type programming is down at the "bare metal" level. With C, the code you read is the actual code. C++ can hide things that C cannot.
This is just my opinion, but I've spent a lot of time in my life debugging device drivers and OS related things. Often by looking at assembly language. Keep it simple at the low level and let the application level get fancy.
Windows drivers are written in C++.
Linux drivers are written in C because the kernel is written in C.
Probably because C is still often faster and smaller when compiled, more consistent in compilation between different OS versions, and has fewer dependencies. Also, as C++ is really built on C, the question is: do you need what it provides?
There is probably something to the fact that people that write drivers and firmware are usually used to working at the OS level (or lower) which is in c, and therefore are used to using c for this type of problem.
The reason that drivers and firmware are mostly written in C or ASM is that there is no dependency on the actual runtime libraries. Imagine this hypothetical driver written in C:
#include <stdio.h>

#define OS_VER 5.10
#define DRIVER_VER "1.2.3"

void fooDispatch(driverstructinfo *dsi);   /* forward declaration */

int drivermain(driverstructinfo **dsi){
    if ((*dsi)->version > OS_VER){
        (*dsi)->InitDriver();
        printf("FooBar Driver Loaded\n");
        printf("Version: %s", DRIVER_VER);
        (*dsi)->Dispatch = fooDispatch;
    }else{
        (*dsi)->Exit(0);
    }
    return 0;
}

void fooDispatch(driverstructinfo *dsi){
    printf("Dispatched %d\n", dsi->GetDispatchId());
}
Notice that runtime library support would have to be pulled in and linked during compile/link. It would not work, because the runtime environment (that is, while the operating system is in its load/initialize phase) is not fully set up, so there would be no clue how to execute printf; it would probably sound the death knell of the operating system (a kernel panic for Linux, a blue screen for Windows), as there is no reference for how to execute the function.
Put another way, driver code has the privilege to execute along with the kernel code, sharing the same space. Ring 0 is the ultimate code execution privilege (all instructions allowed), while ring 3 is where the front end of the operating system runs (limited execution privilege). In other words, ring 3 code cannot execute an instruction that is reserved for ring 0; the kernel will kill the code by trapping it, as if to say 'Hey, you have no privilege to tread on ring 0's domain'.
The other reason such code may be written in assembler is mainly code size and raw native speed. This could be the case for, say, a serial port driver, where input/output timing, latency and buffering are critical to its function.
Most device drivers (in the case of Windows) are built with a special compiler toolchain (the WinDDK) which can use C code but has no linkage to the normal standard C runtime libraries.
There is one toolkit that can enable you to build a driver within Visual Studio, VisualDDK. By all means, building a driver is not for the faint of heart, you will get stress induced activity by staring at blue screens, kernel panics and wonder why, debugging drivers and so on.
The debugging side is harder: ring 0 code is not easily accessible from ring 3 code, as the doors to it are shut. Access goes through the kernel's trap door (for want of a better word): even when asked politely, the door stays shut while the kernel delegates the task to a handler residing in ring 0, executes it, and passes whatever results are returned back out to the ring 3 code, with the door still firmly shut. That is an analogy for how userland code can execute privileged code in ring 0.
Furthermore, this privileged code can easily trample over the kernel's memory space and corrupt something, hence the kernel panics/bluescreens...
Hope this helps.
Perhaps because a driver doesn't require object-oriented features, while the fact that C still has somewhat more mature compilers does make a difference.
There are many styles of programming, such as procedural, functional, object-oriented, etc. Object-oriented programming is better suited for modeling the real world.
I would use object orientation for device drivers if it suited the problem. But most of the time, when you are programming device drivers, you do not need the advantages provided by C++, such as abstraction, polymorphism, code reuse, etc.
Well, IOKit drivers for MacOSX are written in C++ subset (no exceptions, templates, multiple inheritance). And there is even a possibility to write linux kernel modules in haskell.)
Otherwise, C, being a portable assembly language, perfectly catches the von Neumann architecture and computation model, allowing for direct control over all it's peculiarities and drawbacks (such as the "von Neumann bottleneck"). C does exactly what it was designed for and catches it's target abstraction model completely and flawlessly (well except for implicit assumption in single control flow which could have been generalized to cover the reality of hardware threads) and this is why i think it is a beautiful language.) Restricting the expressive power of the language to such basics eliminates most of the unpredictable transformation details when different computational models are being applied to this de-facto standard. In other words, C makes you stick to basics and allows pretty much direct control over what you are doing, for example when modeling behavior commonality with virtual functions you control exactly how the function pointer tables get stored and used when comparing to C++'s implicit vtbl allocation and management. This is in fact helpful when considering caches.
Having said that, the object-based paradigm is very useful for representing physical objects and their dependencies. Add inheritance and you get the object-oriented paradigm, which in turn is very useful for representing the structure and behavior hierarchy of physical objects. Nothing stops anyone from expressing all of this in C, again with full control over exactly how your objects are created, stored, destroyed and copied. That is in fact the approach taken in the Linux device model: it has "objects" to represent devices, an object implementation hierarchy to model power-management dependencies, and hacked-up inheritance functionality to represent device families, all done in C.
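To make the hand-rolled-vtable point concrete, here is a minimal sketch in the C style described above (it also compiles as C++). All the names are hypothetical, but the shape, a struct of function pointers attached to an "object" struct, is essentially what the Linux device model does at a much larger scale.

    #include <stdio.h>

    struct device;                           // forward declaration

    struct device_ops {                      // the explicit "vtable"
        void (*start)(struct device *dev);
        void (*stop)(struct device *dev);
    };

    struct device {                          // the "object"
        const char              *name;
        const struct device_ops *ops;        // you decide where the table pointer lives
    };

    static void uart_start(struct device *dev) { printf("%s: started\n", dev->name); }
    static void uart_stop(struct device *dev)  { printf("%s: stopped\n", dev->name); }

    static const struct device_ops uart_ops = { uart_start, uart_stop };

    int main(void) {
        struct device uart = { "uart0", &uart_ops };
        uart.ops->start(&uart);              // the "virtual call": one explicit indirection
        uart.ops->stop(&uart);
        return 0;
    }

Because the table is an ordinary const struct, you also decide exactly where it is placed in memory, which matters when you are thinking about caches.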
Because, at the system level, drivers need to control every bit of every byte of memory. Higher-level languages cannot do that, or cannot do it natively; only C and assembly can.

How do you create a freestanding C++ program?

I'm just wondering: how do you create a freestanding program in C++?
Edit: By freestanding I mean a program that doesn't run in a hosted environment (e.g. an OS). I want my program to be the first program the computer loads, instead of the OS.
Have a look at this article:
http://www.codeproject.com/KB/tips/boot-loader.aspx
You would need a little assembly start-up code to get you as far as main(), but then you could write the rest in C++. You'd have to write your own heap manager (new/delete) if you wanted to create objects at runtime, and your own scheduler if you wanted more than one thread.
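As a rough sketch of what that involves, assuming a GCC/Clang bare-metal toolchain and a linker script that provides the conventional __init_array_start/__init_array_end symbols (everything here is illustrative, not a ready-made boot loader):

    // Freestanding start-up sketch: get from the entry point to main(), then
    // provide a trivial heap so 'new' works.
    #include <cstddef>

    int main();                              // in freestanding code, main is just a function

    extern void (*__init_array_start[])();   // global constructor table; these symbol
    extern void (*__init_array_end[])();     // names come from the usual linker script

    extern "C" void _start() {
        // A real loader would set up the stack and zero .bss first (often in asm).
        for (auto ctor = __init_array_start; ctor != __init_array_end; ++ctor)
            (*ctor)();                       // run C++ global constructors
        main();
        for (;;) {}                          // nowhere to return to without an OS
    }

    // A minimal "heap manager": a bump allocator that never frees and never
    // checks for exhaustion -- a sketch only.
    alignas(16) static unsigned char heap[64 * 1024];
    static std::size_t heap_top = 0;

    void* operator new(std::size_t n) {
        void* p = &heap[heap_top];
        heap_top += (n + 15) & ~std::size_t(15);   // keep 16-byte alignment
        return p;
    }
    void operator delete(void*) noexcept {}        // bump allocators don't free
    void operator delete(void*, std::size_t) noexcept {}

From main() onward you can use classes, templates and runtime object creation; anything that needs an OS (files, threads, exceptions) still isn't there until you build it.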
See this page: http://wiki.osdev.org/C++
It has everything necessary to start writing an OS with C++ as the core language, using the more popular toolchains.
In addition, this page should prove very helpful: http://wiki.osdev.org/C++_Bare_Bones. It pretty much walks you through getting to the C++ entry point of an OS.
Legacy Systems
Even with your clarification, the answer is that it depends -- the exact boot sequence depends on the hardware -- though there's quite a bit of commonality. The boot loader is typically loaded at an absolute address, and the file it's contained in is frequently read into memory exactly as-is. This means instead of a normal linker, you typically use a "linking locator". Where a typical linker produces an executable file ready for relocation and loading, a locator produces an executable that's already set up to run at one exact address, with all relocations already applied. For those old enough to remember them, it's typically pretty much like an MS-DOS .COM file.
Along with that, it has to (of course) statically link the whole run-time that the program depends upon -- it can't depend on something like a DLL or shared object library, because the code to load either of those hasn't itself been loaded yet.
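To give a feel for what "located at one exact address" means, here is an illustrative sketch assuming a GNU toolchain; the address and commands are examples only, not a recipe:

    // boot_stage2.cpp -- illustrative second-stage payload linked at a fixed address.
    // Hypothetical build steps, shown only to make the idea concrete:
    //   g++ -ffreestanding -fno-exceptions -fno-rtti -c boot_stage2.cpp
    //   ld  -Ttext=0x8000 --oformat=binary -o stage2.bin boot_stage2.o
    // The result is a raw image already "located" at 0x8000, with no relocation
    // information left in it -- much like an MS-DOS .COM file.

    extern "C" void stage2_entry() {
        // No OS yet: no stdout, no heap, no exceptions. Real code would set up
        // hardware and load the next stage; this sketch just parks the CPU.
        for (;;) {}
    }

The point is that the locate step happens at build time: by the time the boot firmware copies the image into memory, every address in it is already final.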
EFI/UEFI
Current PCs (and Macs) use EFI/UEFI. I'll refer to UEFI throughout the rest of this answer, but most of it applies about equally to EFI (UEFI is simply much more common).
These provide quite a bit more support for boot code, including drivers for most devices (and support for installing device drivers), so your boot code can use networking and the like, which is much more difficult to support in legacy mode.
Bootable code under UEFI uses the same PE format as Windows executables. Libraries are also available, so quite a bit of boot code can be written much like normal code that runs inside an OS; a minimal entry-point sketch follows the links below. I won't try to get into a lot of detail, but here are links to some information.
https://www.intel.com/content/www/us/en/developer/articles/tool/unified-extensible-firmware-interface.html
https://www.intel.com/content/dam/doc/guide/uefi-driver-network-boot-devices-guide.pdf
https://www.intel.com/content/dam/www/public/us/en/documents/guides/bldk-v2-uefi-standard-based-guide.pdf
And perhaps the most important one--the development kit:
https://github.com/tianocore/edk2
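To give a flavour of how "normal" UEFI boot code can look, here is a minimal sketch of a UEFI application entry point; the entry-point name and build glue differ between gnu-efi and EDK2, so treat the details as illustrative:

    // Minimal UEFI "application" sketch (gnu-efi flavour; EDK2 spells the entry
    // point and build files differently). It runs before any OS is loaded, yet
    // it already has console output handed to it by the firmware.
    #include <efi.h>

    extern "C" EFI_STATUS EFIAPI efi_main(EFI_HANDLE ImageHandle,
                                          EFI_SYSTEM_TABLE *SystemTable) {
        (void)ImageHandle;
        // ConOut is the firmware's text-output protocol -- no driver writing needed.
        SystemTable->ConOut->OutputString(SystemTable->ConOut,
                                          (CHAR16 *)L"Hello from UEFI boot code\r\n");
        return EFI_SUCCESS;
    }

Compare this with the legacy flat-binary example above: here the firmware loads a PE image, passes in a system table full of services, and your "boot loader" starts to look like an ordinary program.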
Google "embedded C++" for a start.
Another idea is to start with embedded-systems emulators. For example, the Atmel AVR site has a nice IDE that emulates AVR systems and lets you build raw code in C and load it into an emulated CPU; I believe they use GCC as the toolchain.
C++ is used in embedded systems programming, even to write OS kernels.
Usually you have at least a few assembler instructions early in the boot sequence. A few things are just easier to express that way, or there may be reference code from the CPU vendor that you need to use.
For the initial boot process, you won't be able to use the standard library: no exceptions, no RTTI, no new/delete. It's back to "C with classes". Most people just use C here.
Once you have enough supporting infrastructure loaded though, you can use whatever parts of the standard library you can port.
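In practice that "C with classes" stage mostly comes down to compiler switches that turn off the parts of C++ needing runtime support. A hedged sketch, using the common GCC/Clang flag names and an entirely made-up device address:

    // Early-boot C++ is usually compiled roughly like this (flag names are the
    // common GCC/Clang ones; adjust for your toolchain):
    //   g++ -ffreestanding -fno-exceptions -fno-rtti -fno-threadsafe-statics \
    //       -nostdlib -c early.cpp
    // With exceptions, RTTI and the hosted library off, what's left is classes,
    // templates and plain code -- enough for something like:

    struct Uart {                        // illustrative memory-mapped device
        volatile unsigned int *data_reg;
        void put(char c) { *data_reg = static_cast<unsigned int>(c); }
    };

    extern "C" void kernel_main() {      // called from the assembly start-up code
        Uart uart{reinterpret_cast<volatile unsigned int *>(0x10000000)}; // made-up address
        for (const char *p = "early C++\n"; *p; ++p)
            uart.put(*p);
        for (;;) {}                      // nothing to return to yet
    }

Once the memory manager and the rest of the infrastructure are up, the flags can be relaxed and ported library pieces added back in.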
You will need an environment that provides:
A working C library, or enough of it to do what you want
The parts of the C++ runtime that you intend to use. This is compiler-specific
Any other libraries you need, too. If your platform has no dynamic linker (and with no OS, you probably don't have one), you will have to statically link it all.
In practice this means linking some small C++ runtime, and a C library appropriate for your platform. Then you can simply write a standalone C++ program.
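What "some small C++ runtime" amounts to in practice is a handful of support symbols the compiler expects to find. A minimal sketch, assuming the Itanium C++ ABI names used by GCC and Clang (other toolchains differ):

    // Minimal freestanding C++ runtime stubs (Itanium ABI names as used by
    // GCC/Clang). Enough to link simple kernel-style C++ with no OS underneath.
    #include <cstddef>

    extern "C" {
        // Called if a pure virtual function is ever invoked through a broken vtable.
        void __cxa_pure_virtual() { for (;;) {} }

        // Registers destructors of static objects; with nowhere to "exit" to,
        // a freestanding program can simply ignore them.
        void *__dso_handle = nullptr;
        int __cxa_atexit(void (*)(void *), void *, void *) { return 0; }

        // The compiler may emit calls to these even with -nostdlib.
        void *memset(void *dst, int value, std::size_t n) {
            unsigned char *p = static_cast<unsigned char *>(dst);
            while (n--) *p++ = static_cast<unsigned char>(value);
            return dst;
        }
        void *memcpy(void *dst, const void *src, std::size_t n) {
            unsigned char *d = static_cast<unsigned char *>(dst);
            const unsigned char *s = static_cast<const unsigned char *>(src);
            while (n--) *d++ = *s++;
            return dst;
        }
    }

Real embedded C runtimes (newlib, picolibc and friends) provide optimized versions of the last two; the point here is just how little "runtime" the language itself demands.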
If you were using BSD Unix, you would link with the standalone library. That included a basic IO system for disk and tty. Your source code looked the same as if it were to be run under Unix, but the binary could be loaded into a naked machine.
Yes, of course. The ISO standards for C and C++ support both hosted and freestanding environments, and the macro __STDC_HOSTED__ is used to distinguish between the two. C++ has had this macro since 2011 (C since C99).
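The macro lets a single source file adapt to both kinds of environment; a small illustrative use (board_log is a hypothetical firmware hook, not a real API):

    // __STDC_HOSTED__ is 1 in a hosted implementation and 0 in a freestanding one.
    #if __STDC_HOSTED__
    #include <cstdio>
    static void report(const char *msg) { std::puts(msg); }   // OS and stdio available
    #else
    extern "C" void board_log(const char *msg);                // hypothetical firmware hook
    static void report(const char *msg) { board_log(msg); }    // no OS: use our own channel
    #endif

    int main() {            // a freestanding build may use a different entry point entirely
        report("running");
        return 0;
    }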
Since most of the replies here seem unaware that this is standard terminology (and is supposed to be common knowledge for C and C++ users), I'll recap the distinction between the two as laid out by the ISO, and refer you to the following link for more details:
https://en.cppreference.com/w/cpp/freestanding
Hosted:
"main" is defined, and is started up in the main thread. The static objects may be constructed and destructed at the stand and end of the thread. Normally that means there is a host operating system (OS) that provides all the infrastructure required for an environment to run the program in. The POSIX standard, in particular, spells out a set of conditions expected for utilities that are called on a host system from the command-line; and the ISO C/C++ standards dovetail into each other and into the POSIX standard. (POSIX's support for C is at C99 at minimum.)
Free-Standing:
"main" may, but need not, be defined. Whether/how constructors are applied at start-up to static objects, and destructors to static objects on termination, depends on the implementation.
It is any environment in which the code produced by the C or C++ implementation is not called by some OS but runs on its own. That is the typical case for embedded systems.
"Free-standing", of course, also includes OS's themselves, as a special case. An OS kernel is the epitome of a free-standing program. One of the respondents here actually went as far as to provide a link to a roll-your-own-OS kit for C++. In that vein, it would be an interesting exercise for you to test out the concept by rewriting the old 0.96 version of Linus' OS (which you can find in UNIX Archive) as a free-standing C++ program. His source code is actually filled with C++'isms (especially in the "fs" directory) that practically scream out "I'm a virtual function", "I'm a base class", "I'm a derived class" or "This is class inheritance"!
Libraries Required For Free-Standing Programs:
There are also minimum requirements on which standard library headers must be supported, and to what extent. Worthy of note: <new> and <exception> are mandatory; only partial support is required for <cstdlib>, covering start-up and termination; <atomic> (since 2011), <bit> and <coroutine> (both since 2020) are mandatory, as they had better be if you're doing anything with embedded systems; <thread> is not mandatory.
A roll-your-own-OS kit would then provide templates for the mandatory libraries and for the required parts of the partially mandatory ones. Any responsible embedded-systems developer will be laying out their own run-time system and environment, and compilers aimed at such users provide the under-the-hood transparency needed to allow this and to let user-defined run-time systems interface cleanly with the compiler.
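One reason <new> in particular is non-negotiable even without a heap: placement new lets you construct objects into storage you manage yourself. A small illustrative sketch:

    // Constructing an object in statically reserved storage -- a typical
    // freestanding pattern, which is why <new> must be available.
    #include <new>        // mandatory even in freestanding implementations
    #include <cstddef>

    struct Scheduler {            // illustrative type
        int ticks = 0;
        void on_tick() { ++ticks; }
    };

    alignas(Scheduler) static unsigned char scheduler_storage[sizeof(Scheduler)];
    static Scheduler *the_scheduler = nullptr;

    void init_runtime() {
        // Placement new: no heap involved, just construction into our own buffer.
        the_scheduler = ::new (static_cast<void *>(scheduler_storage)) Scheduler();
    }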