I need to develop an audio device driver for system audio capture (based on Soundflower). But a problem soon appeared: it seems the IOAudioFamily stack is being deprecated in OS X 10.10 and later. Looking through the IOAudioDevice and IOAudioEngine header files, it seems Apple now recommends using the <CoreAudio/AudioServerPlugIn.h> API, which runs in user space. But I can't find much information on this user-space device driver topic. It seems the only resource is the sample devices Apple provides at https://developer.apple.com/library/prerelease/content/samplecode/AudioDriverExamples/Introduction/Intro.html
Looking through the examples, I find it's a lot harder and more work to develop a user-space driver than an I/O Kit kernel-based one.
So the question arises: what should motivate someone to develop a device driver in user space instead of kernel space?
The "SimpleAudioDriver" example is somewhat misnamed. It demonstrates pretty much every feature of the API. This is handy as a reference if you actually need to use those features. It's also structured in a way that's maybe a little more complicated than necessary.
For a virtual device, the NullAudioDriver is probably a much better base, and much, much easier to understand (single source file, if I remember correctly). SimpleAudioDriver is more useful for dealing with issues such as hotplugging, multiple instances of identical devices, etc.
IOAudioEngine is deprecated as you say, and has been since OS X 10.10. Expect it to go away eventually, so if you build your driver with it, you'll probably need to rewrite it sooner than if you create a Core Audio Server Plugin based one.
Testing and debugging audio drivers is awkward either way (due to being so time sensitive), but I'd say userspace ones are slightly less frustrating to deal with. You'll still want to test on a different machine than your development Mac, because if coreaudiod crashes or hangs, apps usually start locking up too, so being able to just ssh in, delete your plugin and kill coreaudiod is handy. Certainly quicker turnaround than having to reboot.
(FWIW, I've shipped both kernel and userspace OS X audio drivers, and I spend a lot of time working on kexts.)
There is a great book on this subject, available free online here:
http://free-electrons.com/doc/books/ldd3.pdf
See page 37 for a summary of why you might want a user-space driver, copied here for convenience:
The advantages of user-space drivers are:
The full C library can be linked in. The driver can perform many exotic tasks without resorting to external programs (the utility programs implementing usage policies that are usually distributed along with the driver itself).
The programmer can run a conventional debugger on the driver code without having to go through contortions to debug a running kernel.
If a user-space driver hangs, you can simply kill it. Problems with the driver are unlikely to hang the entire system, unless the hardware being controlled is really misbehaving.
User memory is swappable, unlike kernel memory. An infrequently used device with a huge driver won't occupy RAM that other programs could be using, except when it is actually in use.
A well-designed driver program can still, like kernel-space drivers, allow concurrent access to a device.
If you must write a closed-source driver, the user-space option makes it easier for you to avoid ambiguous licensing situations and problems with changing kernel interfaces.
From what I understand (correct me if I am wrong), the OpenGL API converts the function calls the programmer writes in the source code into the specific GPU driver calls of our graphics card. The GPU driver is then able to actually send instructions and data to the graphics card through some hardware interface like PCIe, AGP or PCI.
My question is, does OpenGL know how to interact with different graphics cards because there are basically only 3 types of physical connections (PCIe, AGP and PCI)?
I think it is not that simple, because I always read that different graphics cards have different drivers, so a driver is not just a way to use the physical interface; it also serves the purpose of letting graphics cards perform different kinds of (vendor-specific) commands.
I just do not get the big picture.
This is a copy of my answer to the question "How does OpenGL work at the lowest level?" (the question has been marked for deletion, so I add some redundancy here).
This question is almost impossible to answer because OpenGL by itself is just a front-end API, and as long as an implementation adheres to the specification and the outcome conforms to it, the implementation can be done any way you like.
The question may have been: how does an OpenGL driver work at the lowest level? This is again impossible to answer in general, as a driver is closely tied to some piece of hardware, which may again do things however its developer designed it.
So the question should have been: "How does it look on average behind the scenes of OpenGL and the graphics system?". Let's look at this from the bottom up:
1. At the lowest level there's some graphics device. Nowadays these are GPUs, which provide a set of registers controlling their operation (exactly which registers is device dependent), have some program memory for shaders and bulk memory for input data (vertices, textures, etc.), and have an I/O channel to the rest of the system over which they receive/send data and command streams.
2. The graphics driver keeps track of the GPU's state and of all the resources of application programs that make use of the GPU. It is also responsible for conversion or any other processing of the data sent by applications (converting textures into the pixel format supported by the GPU, compiling shaders into the machine code of the GPU). Furthermore, it provides some abstract, driver-dependent interface to application programs.
3. Then there's the driver-dependent OpenGL client library/driver. On Windows this gets loaded by proxy through opengl32.dll; on Unix systems it resides in two places: the X11 GLX module and driver-dependent GLX driver, and /usr/lib/libGL.so, which may contain some driver-dependent stuff for direct rendering. On MacOS X this happens to be the "OpenGL Framework".
It is this part that translates the OpenGL calls as you make them into calls to the driver-specific functions in the part of the driver described in (2).
4. Finally there's the actual OpenGL API library, opengl32.dll on Windows and /usr/lib/libGL.so on Unix; this mostly just passes the commands down to the OpenGL implementation proper.
How the actual communication happens can not be generalized:
On Unix the 3<->4 connection may happen either over sockets (yes, it may, and does, go over the network if you want it to) or through shared memory. On Windows the interface library and the driver client are both loaded into the process address space, so it's not so much communication as simple function calls and variable/pointer passing. On MacOS X this is similar to Windows, except that there's no separation between the OpenGL interface and the driver client (which is why MacOS X is so slow to keep up with new OpenGL versions: it always requires a full operating system upgrade to deliver the new framework).
Communication between 3<->2 may go through ioctl, read/write, or through mapping some memory into the process address space and configuring the MMU to trigger some driver code whenever changes to that memory are made. This is quite similar on any operating system, since you always have to cross the kernel/userland boundary: ultimately you go through some syscall.
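As a concrete illustration of the Unix case, here is a minimal sketch of the userspace side of that 3<->2 channel: opening a device node and mapping driver memory into the process. The path /dev/dri/card0 and the mmap offset are only illustrative; real DRM drivers hand out mapping offsets through their own ioctls first, so a plain mmap at offset 0 may simply fail.

```c
/* Hedged sketch of userspace <-> kernel-driver communication on Linux.
 * The device path and the mmap offset are illustrative only; a real
 * driver publishes its own ioctl numbers and mapping offsets. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);   /* device node exposed by the driver */
    if (fd < 0) { perror("open"); return 1; }

    /* Map 4 KiB of driver-provided memory into our address space. Depending on
     * how the kernel side set the mapping up, stores to this region either trap
     * into driver code or go straight to memory the GPU reads. */
    void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* ... fill buf with commands or vertex data here ... */

    munmap(buf, 4096);
    close(fd);
    return 0;
}
```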
Communication between the system and the GPU happens through the peripheral bus and the access methods it defines, so PCI, AGP, PCI-E, etc., which work through port I/O, memory-mapped I/O, DMA, and IRQs.
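On that very last hop, the driver typically talks to the GPU by writing to memory-mapped registers exposed over the bus. Here is a hedged sketch of what such a register poke looks like; the base address and register offsets are invented for illustration (a real driver reads them out of the device's PCI BARs), and the code only makes sense inside a kernel driver that has that region mapped.

```c
/* Sketch of memory-mapped I/O to a hypothetical GPU. All addresses and
 * register offsets below are made up; a real driver discovers them via
 * the device's PCI configuration space / BARs. */
#include <stdint.h>

#define GPU_MMIO_BASE   0xF0000000u   /* hypothetical base of BAR0 */
#define REG_IRQ_ENABLE  0x0100u       /* hypothetical interrupt mask register */
#define REG_FIFO_PUSH   0x0040u       /* hypothetical command FIFO register */

static inline void gpu_write32(uint32_t offset, uint32_t value)
{
    /* volatile: the store must actually reach the device, in order */
    *(volatile uint32_t *)(uintptr_t)(GPU_MMIO_BASE + offset) = value;
}

void gpu_submit_command(uint32_t cmd)
{
    gpu_write32(REG_IRQ_ENABLE, 1);    /* unmask the "command done" interrupt */
    gpu_write32(REG_FIFO_PUSH, cmd);   /* push one command word into the FIFO */
}
```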
I was wondering how to go about adding a virtual GPU into Qemu?
I have been told it involves adding a new graphics output module that uses OpenGL?
You are probably referring to "Create virtual hardware, kernel, qemu for Android Emulator in order to produce OpenGL graphics".
The very first thing I suggest you do is read the source code to see how commands to the virtual graphics adaptors that are already implemented are turned into graphical output. Then you should rewrite this to use OpenGL commands instead. Once you've got that working, you must literally invent a new, virtual GPU to offer the guest system. I wouldn't even attempt to emulate a GeForce or Radeon; GeForces are not publicly documented anyway.
qemu doesn't provide a real API for implementing a GPU. Of course there's some internal API for it, used to implement the VESA and S3 emulation, but a new GPU will require you to redo a lot of that.
The virtual hardware should offer some I/O to pass drawing commands and data. In theory you could pass the full set of OpenGL commands there. However, OpenGL is hardware agnostic, whereas you are actually implementing "hardware", so you must find some balance there. Then in qemu you must implement that virtual hardware to execute the rendering commands appropriately.
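To make that balance concrete, here is a purely hypothetical sketch of the kind of guest-visible command interface such a virtual GPU might define: the guest driver fills a command ring in shared memory and the qemu device model pops entries and turns them into host-side OpenGL calls. None of these names are an existing qemu or Mesa API.

```c
/* Hypothetical guest <-> device command protocol for a virtual GPU.
 * Invented for illustration; not part of qemu. */
#include <stdint.h>

enum vgpu_opcode {
    VGPU_CLEAR     = 1,   /* clear the framebuffer to a colour */
    VGPU_DRAW_TRIS = 2,   /* draw triangles from a vertex buffer */
    VGPU_PRESENT   = 3,   /* flip the finished frame to the display */
};

struct vgpu_cmd {
    uint32_t opcode;      /* one of enum vgpu_opcode */
    uint32_t size;        /* payload size in bytes */
    uint64_t payload_gpa; /* guest-physical address of the payload data */
};

struct vgpu_ring {
    uint32_t head;        /* advanced by the device model (qemu side) */
    uint32_t tail;        /* advanced by the guest driver */
    struct vgpu_cmd cmds[256];
};
```

The qemu side would then walk the ring from head to tail and translate each opcode into host rendering work, e.g. glClear for VGPU_CLEAR and glDrawArrays for VGPU_DRAW_TRIS, followed by a buffer swap for VGPU_PRESENT.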
Last but not least you must implement drivers for that virtual hardware, which will involve adding a new driver to Mesa and creating a driver for Xorg.
Is there a way I could make a C or C++ program that would run without an operating system and that would draw something like a red pixel to the top left corner? I have always wondered how these types of applications are made. Since Windows is written in C I imagine there is a way to do this.
Thanks
If you're writing for a bare processor, with no library support at all, you'll have to get all the hardware manuals, figure out how to access your video memory, and perform whatever operations that hardware requires to get a pixel drawn onto the display (or a sound on the beeper, or a block of memory read from the disk, or whatever).
When you're using an operating system, you'll rely on device drivers to know all this for you. Programs are still written, every day, for platforms without operating systems, but rarely for a bare processor. Many small MPUs come with a support library, usually a set of routines that lets you manipulate whatever peripheral devices they support.
It can certainly be done. You typically write the code in C, and you pretty much have to do everything on your own, with no standard library. To set your pixel, you'd usually load a pointer to the physical address of the screen, and write the correct value to that pointer. Alternatively, on a PC you could consider using the VESA BIOS. In all honesty, it's fairly similar to the way most code for MS-DOS was written (most used MS-DOS to read and write data on disk, but little else).
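As a rough sketch of that last point, assuming your boot code has already used BIOS interrupt 0x10 to switch the VGA into mode 13h (320x200, 256 colours), the "red pixel in the top-left corner" really is just one store to video memory. The function name kernel_main is arbitrary; this has to be compiled freestanding (e.g. gcc -ffreestanding -m32) and jumped to from your own bootloader, so it cannot run as a normal hosted program.

```c
/* Freestanding sketch: paint the top-left pixel red, assuming VGA mode 13h
 * was already selected by the boot code via BIOS int 0x10. */
#include <stdint.h>

#define VGA_FRAMEBUFFER ((volatile uint8_t *)0xA0000)  /* mode 13h framebuffer */
#define VGA_RED 4   /* index of red in the default 256-colour palette */

void kernel_main(void)   /* hypothetical entry point called from the bootloader */
{
    VGA_FRAMEBUFFER[0] = VGA_RED;   /* pixel (0,0) is the first byte */

    for (;;) { /* spin forever: there is no OS to return to */ }
}
```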
The core bootloader and the part of the kernel that bootstraps the OS are written in assembly. See http://en.wikipedia.org/wiki/Booting for a brief writeup of how an operating system boots. There's no way I'm aware of to write a bootloader or kernel purely in a higher-level language such as C or C++ without using assembly.
You need to write a bootstrapper and loader combination followed by a payload, which involves setting the VGA mode manually via an interrupt, getting a pointer to the basic video buffer, and then writing a value to the 0th byte.
Start here: http://en.wikipedia.org/wiki/Bootstrapping_(computing)
Without an OS it's difficult to have a loader, which means no dynamic libc. You'd have to link statically, as well as have a decent amount of bootstrap code written in assembly (although it could be provided as object files which you could then link with). Also, since you'd be at the mercy of whatever the system has, you'd be stuck with the VESA video modes (unless you want to write your own graphics driver and subsystem, which you don't).
There is, but not generally from within the OS. Initially, such programs are an asm stub that's executed from the MBR on the drive (see MBR). For x86 processors this is generally 16-bit code, which jumps into the operating system code from there and switches to 32-bit/64-bit mode depending on the operating system and chipset.
What exactly does an operating system do? I know that operating systems can be programmed in, for example, C++, but I previously believed that C++ programs must be run under an operating system. Can somebody please explain and give links? Thanks in advance, ell
An operating system is a layer between your code (user code) and the hardware.
The OS is responsible for managing the physical components and giving you a simple (hopefully) API off of which to build. It handles which programs run, when, who goes first, how memory is handled, who gets memory, video drawing, and all that good stuff.
For example, when making a GUI, instead of you sending each bit to the monitor, you tell the OS (or window manager) to make a window. You then tell it to place a button in your window. The OS then handles drawing the window, moving the window, moving the button (but keeping it where it should be in the window).
Now, you can program an operating system in C++, but it's not easy. You have to develop your kernel and whatnot, find a way to interface with the hardware, then expose that interface to your users and their programs.
So, essentially, an OS handles software-to-hardware interfacing and manages your physical resources. C++ programs can be run in an OS or, with enough work, run by themselves or even be an OS.
Actually, the C++ standard itself has something to say on this issue. §1.4/7:
Two kinds of implementations are defined: hosted and freestanding. For a hosted implementation, this International Standard defines the set of available libraries. A freestanding implementation is one in which execution may take place without the benefit of an operating system, and has an implementation-defined set of libraries that includes certain language-support libraries (17.4.1.3).
And in 17.4.1.3,
A freestanding implementation has an implementation-defined set of headers. This set shall include at least the following headers, as shown in Table 13:
Table 13 — C++ Headers for Freestanding Implementations

Subclause                            Header(s)
18.1  Types                          <cstddef>
18.2  Implementation properties      <limits>
18.3  Start and termination          <cstdlib>
18.4  Dynamic memory management      <new>
18.5  Type identification            <typeinfo>
18.6  Exception handling             <exception>
18.7  Other runtime support          <cstdarg>
The supplied version of the header shall declare at least the functions abort(), atexit(), and exit() (18.3).
These headers either define constants or provide basic support to the compiler. In practice, some language features will be missing until the OS completes some initialization, for example new and catch.
An OS is really just a program that runs other programs and manages hardware resources for them.
If you are really serious about getting into the internals, I'd recommend reading the book Understanding the Linux Kernel.
sure, http://en.wikipedia.org/wiki/Operating_system
An operating system is the software on a computer that manages the way different programs use its hardware, and regulates the ways that a user controls the computer. Operating systems are found on almost any device that contains a computer with multiple programs—from cellular phones and video game consoles to supercomputers and web servers. Some popular modern operating systems for personal computers include Microsoft Windows, Mac OS X, and Linux (see also: list of operating systems, comparison of operating systems).
I mean, describing what an operating system does, when, and why goes far beyond an answer on this site, IMHO.
An operating system, more specifically its kernel, is developed in a language such as C, and it is compiled into machine code just like any other program. The major difference between a mainstream OS and some code that you write in C is that your C code will run in a time slice handed out by the OS's CPU scheduler. Also consider that the OS runs first and is able to set up an environment in which it completely controls and restricts anything it launches. Also keep in mind that system calls are how a process communicates back to the OS; everything else is just ordinary machine instructions that could run on any other processor of the same type.
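To make the system-call point concrete, here is a small Linux-only example that prints a line by invoking the write system call directly through syscall(2), bypassing the usual printf/puts wrappers; the controlled jump into the kernel is the only "special" thing happening, and everything around it is ordinary machine code.

```c
/* Invoke the Linux write system call directly, without the stdio wrappers. */
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    const char msg[] = "hello from a raw system call\n";

    /* SYS_write: file descriptor 1 (stdout), buffer, length */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}
```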
A few key features that any mainstream OS provides:
CPU scheduler - This will load a process and allow it to run for a very limited amount of time before kicking it back off, regaining control and allowing something else to run (whether that be a kernel task or another process; typically kernel tasks have priority).
Memory management - Any application you run does not see exact physical memory addresses, since these are prone to change. All processes run in virtual memory, and the OS translates virtual addresses (e.g. 0x41000+) into physical addresses. (Again, it's abstracting the hardware, as is mentioned often.)
File systems - various kinds.
Resources - any kind of device can be treated as a resource. A process may request access to a resource. (Oddly enough, in this day and age no mainstream OS has a mechanism for preventing deadlocks over resources.)
Security - This is done through roles. It is very important that every process runs within tight constraints. This is another abstraction that the OS provides.
An operating system is just software that serves as an interface between your hardware and your software. It abstracts the hardware to make it simpler to use. For instance, you don't have to poll the keyboard status in your programs to check whether the user hit a key. You might think of it as a lot of bricks put together and piled on top of each other, from a very precise view of the hardware to a very abstract view (from bits to windows or buttons, for instance).
You don't have to program an operating system in a specific language, but most of them are written in C for efficiency and convenience reasons. You can do programming (your own applications) in any language then, provided that you have the correct libraries installed on the operating system.
There is no "clear" definition of what the responsibilities of an OS are. They could include the following:
Memory management
Devices & drivers
File system(s)
Processes and threads
System calls
In a nutshell, an OS is a program that enables the user to control the computer's hardware in a relatively simple way.
From a programming perspective operating systems primarily provide abstraction. Abstraction from the details of the CPU and memory management, abstraction from dealing with hardware devices, abstraction from the details of network protocol stacks.
The operating system provides a higher-level programming interface, often standardized across several operating systems like POSIX does for all Unix flavors.
After reading the question, I see what you're trying to ask: whether C/C++ programs require an OS to run. The answer is no. A C/C++ compiler translates human-readable source code into machine language; it doesn't require a specific operating system. However, if you compile in, say, Visual Studio, the resulting executable machine code can't run on anything but Windows.
Specifically, C/C++ code is usually portable in the sense that if you have a compiler for an operating system, you can compile the code and it will run there. However, sometimes you have machine-specific (or OS-specific) code, such as a Windows application that uses Windows-only interfaces and cannot be ported to another operating system. Directory operations, for example, are usually not portable and depend on what OS you're on, whereas most file operations, like fopen, are portable.
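A small sketch of that split: standard C file I/O compiles unchanged on any hosted implementation, while listing a directory needs OS-specific code selected at compile time (the file name example.txt is just a placeholder).

```c
/* Portable file I/O vs. OS-specific directory listing. */
#include <stdio.h>

#ifdef _WIN32
#include <windows.h>
#else
#include <dirent.h>
#endif

int main(void)
{
    FILE *f = fopen("example.txt", "w");   /* portable: standard C everywhere */
    if (f) { fputs("hello\n", f); fclose(f); }

#ifdef _WIN32
    WIN32_FIND_DATAA fd;                    /* Windows-only directory walk */
    HANDLE h = FindFirstFileA("*", &fd);
    if (h != INVALID_HANDLE_VALUE) {
        do { puts(fd.cFileName); } while (FindNextFileA(h, &fd));
        FindClose(h);
    }
#else
    DIR *d = opendir(".");                  /* POSIX-only directory walk */
    if (d) {
        struct dirent *e;
        while ((e = readdir(d)) != NULL) puts(e->d_name);
        closedir(d);
    }
#endif
    return 0;
}
```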
An OS is a bit different. It requires a different compilation setup and a different way of being loaded. Most OSes are written in C/C++, compiled, and then distributed. For example, Microsoft wrote Windows 95 in C/C++, put it through a compiler, burnt the resulting executable code onto a CD-ROM, and sold it to you; you put the disk in, the executable code is copied onto your machine, and you use it.
They don't give you the source code for your computer to compile; usually they give you the resulting executable.
Basically, an OS is the program that all other programs run inside of. It's literally the first program that your computer starts running when it boots up. As such, it controls all the hardware and acts as the gatekeeper for other programs to access that hardware. It also controls (or should, at least) all the programs that are running under it: when they start, how they stop, and what resources they have access to. You might call it "The Master Control Program" :)
The term "operating system" when applied to a PC, normally refers to a modern "protected memory" operating system that provides not only a basic set of system services but also a complete user interface:
the combination of a kernel, device drivers, and system services that provide memory protection, tasks that cannot interfere with each other's memory, and threads (units of execution within a process), as well as ways for threads and tasks to talk to each other and to access shared resources like file systems (which contain files on storage devices like your PC's hard disk), is in fact the core of the operating system.
the "shell" on top of that operating system might be as simple as the "command.com" text command prompt on DOS (remember " C:> _ "?) or as complex as the Windows Shell, including its control panel, etc.
Sometimes, a "Linux distribution" contains far more than an operating system, but is informally referred to by a single name (such as Ubuntu), and so the line between what the operating system is (the Linux kernel and standard libraries, perhaps) and the applications that merely ship with that operating system (the GNOME and KDE environments on Linux) is pretty gray.
A great way to learn what an operating system really is, is to read one of Tanenbaum's books on operating systems. I believe he shows the implementation of his Minix kernel in detail. Another book is "Linux Kernel Internals". If you can handle the technical detail in this kind of book, then you can really understand what an operating system "kernel" is, and then begin to make sense of the layers around that kernel.
I am not aware of one commercial or open source operating system that is written primarily in C++. Such system-level programming is most commonly carried out in a mix of pure ANSI C, and Assembly/Machine language. The low level assembly bits often are involved in tasks like handling interrupts, initializing hardware and booting the system up. Before you have a heap, and a stack, and a working virtual memory system, you wouldn't want to be using C++ objects, or even certain C features like malloc. Your resources and your design must be constrained by performance criteria, and any kind of extra overhead, even a semantic overhead, is to be deplored.
Recently, Linus Torvalds famously insulted C++ and described on a mailing list why he would never use it for the Linux kernel. I believe, however, that C++ is making inroads into areas that have typically been havens of "pure C". The GNU GCC team, for example, is now willing to allow C++ into the GCC codebase at last.