How to design a C / C++ library to be usable in many client languages? [closed]

I'm planning to code a library that should be usable by a large number of people on a wide spectrum of platforms. What do I have to consider to design it right? To make this question more specific, there are four "subquestions" at the end.
Choice of language
Considering all the known requirements and details, I concluded that a library written in C or C++ was the way to go. I think the primary usage of my library will be in programs written in C, C++ and Java SE, but I can also think of reasons to use it from Java ME, PHP, .NET, Objective-C, Python, Ruby, Bash scripts, etc... Maybe I cannot target all of them, but if it's possible, I'll do it.
Requirements
It would be too much to describe the full purpose of my library here, but there are some aspects that might be important to this question:
The library itself will start out small, but definitely will grow to enormous complexity, so it is not an option to maintain several versions in parallel.
Most of the complexity will be hidden inside the library, though
The library will construct an object graph that is used heavily inside. Some clients of the library will only be interested in specific attributes of specific objects, while other clients must traverse the object graph in some way
Clients may change the objects, and the library must be notified thereof
The library may change the objects, and the client must be notified thereof, if it already has a handle to that object
The library must be multi-threaded, because it will maintain network connections to several other hosts
While some requests to the library may be handled synchronously, many of them will take too long and must be processed in the background, and notify the client on success (or failure)
Of course, answers are welcome no matter if they address my specific requirements, or if they answer the question in a general way that matters to a wider audience!
My assumptions, so far
So here are some of my assumptions and conclusions, which I gathered in the past months:
Internally I can use whatever I want, e.g. C++ with operator overloading, multiple inheritance, template metaprogramming... as long as there is a portable compiler which handles it (think of gcc / g++)
But my interface has to be a clean C interface that does not involve name mangling
Also, I think my interface should only consist of functions, with basic/primitive data types (and maybe pointers) passed as parameters and return values
If I use pointers, I think I should only use them as opaque handles to pass back to the library, not to operate directly on the referenced memory (a sketch of such an interface follows this list)
For usage in a C++ application, I might also offer an object-oriented interface (which is also prone to name mangling, so the app must either use the same compiler or include the library in source form)
Is this also true for usage in C#?
For usage in Java SE / Java EE, the Java native interface (JNI) applies. I have some basic knowledge about it, but I should definitely double check it.
Not all client languages handle multithreading well, so there should be a single thread talking to the client
For usage on Java ME, there is no such thing as JNI, but I might go with NestedVM
For usage in Bash scripts, there must be an executable with a command line interface
For the other client languages, I have no idea
For most client languages, it would be nice to have kind of an adapter interface written in that language. I think there are tools to automatically generate this for Java and some others
For object-oriented languages, it might be possible to create an object-oriented adapter which hides the fact that the interface to the library is function-based - but I don't know if it's worth the effort
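To illustrate the assumptions above, here is a minimal sketch of a C++ implementation hidden behind a flat, mangling-free C interface with an opaque handle. All names (mylib_graph, etc.) are invented for illustration:

// mylib.h -- the public C interface
#ifdef __cplusplus
extern "C" {
#endif

typedef struct mylib_graph mylib_graph;    /* opaque handle */

mylib_graph* mylib_graph_create(void);
int          mylib_graph_node_count(const mylib_graph* g);
void         mylib_graph_destroy(mylib_graph* g);

#ifdef __cplusplus
}
#endif

// mylib.cpp -- internally free to use any C++ the compiler supports
#include <vector>

struct mylib_graph {                       // the real C++ object
    std::vector<int> nodes;
};

extern "C" mylib_graph* mylib_graph_create(void) { return new mylib_graph; }
extern "C" int mylib_graph_node_count(const mylib_graph* g) {
    return static_cast<int>(g->nodes.size());
}
extern "C" void mylib_graph_destroy(mylib_graph* g) { delete g; }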
Possible subquestions
is this possible with manageable effort, or is it just too much portability?
are there any good books / websites about this kind of design criteria?
are any of my assumptions wrong?
which open source libraries are worth studying to learn from their design / interface / source?
meta: This question is rather long, do you see any way to split it into several smaller ones? (If you reply to this, do it as a comment, not as an answer)

Mostly correct. A straight procedural interface is best (which is not entirely the same as C, btw(**), but close enough).
I interface DLLs a lot(*), both open source and commercial, so here are some points that I remember from daily practice. Note that these are more recommended areas to research than cardinal truths:
Watch out for decoration and similar "minor" mangling schemes, especially if you use an MS compiler. Most notably, the stdcall convention sometimes leads to decoration generation for VB's sake (decoration is stuff like @6 appended to the function's symbol name)
Not all compilers can actually lay out all kinds of structures:
so avoid overusing unions.
avoid bitpacking
and preferably pack the records for 32-bit x86. While theoretically slower, at least all compilers can access packed records afaik, and the official alignment requirements have changed over time as the architecture evolved
On Windows use stdcall. This is the default for Windows DLLs. Avoid fastcall; it is not entirely standardized (especially how small records are passed)
Some tips to make automated header translation easier:
macros are hard to autoconvert because they are untyped. Avoid them; use functions
Define a separate type for each pointer type, and don't use composite types (xtype **) in function declarations
follow the "define before use" mantra as much as possible. This saves users who translate headers from having to rearrange them if their language requires definition before use, and it makes translation easier for one-pass parsers and for tools that need context information to auto-translate
Don't expose more than necessary. Leave handle types opaque if possible; exposing their innards will only cause versioning troubles later.
Do not return structured types like records/structs or arrays as the return type of functions.
always have a version check function (it makes distinguishing versions easier).
be careful with enums and booleans. Other languages might have slightly different assumptions. You can use them, but document well how they behave and how large they are. Also think ahead, and make sure that enums don't change size when you add a few values, breaking the interface. (E.g. on Delphi/Pascal, booleans are by default 0 or 1, and other values are undefined. There are special types for C-like booleans (byte, 16-bit or 32-bit word size), though they were originally introduced for COM, not C interfacing.)
I prefer string types that are a pointer to char plus a length as a separate field (COM also does this), preferably without relying on zero termination. This is not just for security (overflow) reasons, but also because it is easier/cheaper to interface them with Delphi native types that way.
Memory: always create the API in a way that encourages a total separation of memory management. IOW, don't assume anything about memory management. This means that all structures in your lib are allocated via your own memory manager, and if a function passes a struct to you, copy it instead of storing a pointer made with the client's memory management. Because you will sooner or later accidentally call free or realloc on it :-)
(Implementation language, not interface:) be reluctant to change the coprocessor exception mask. Some languages change this as part of conforming to their standard floating-point error (exception) handling.
Always pair a callback with a user-configurable context. This can be used by the user to give the callback state without defining global variables (e.g. an object instance); see the header sketch after this list.
be careful with the coprocessor status word. It might be changed by others and break your code, and if you change it, other code might stop working. The status word is generally not saved/restored as part of calling conventions. At least not in practice.
don't use C-style varargs parameters. Not all languages can pass a variable number of parameters in such an unsafe way
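Pulling several of these tips together, a small header sketch (all names hypothetical): an opaque handle, a version check, pointer-plus-length strings, and a callback paired with a context:

typedef struct foo_session foo_session;          /* opaque handle type   */
typedef foo_session*       foo_session_ref;      /* a named pointer type */

typedef void (*foo_event_cb)(void* user_context, int event_code);

int  foo_get_version(void);                      /* version check first  */
foo_session_ref foo_open(const char* host, int port);
void foo_set_callback(foo_session_ref s, foo_event_cb cb, void* user_context);
int  foo_send(foo_session_ref s, const char* data, unsigned length); /* ptr + length */
void foo_close(foo_session_ref s);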
(*) Delphi programmer by day, a job that involves interfacing a lot of hardware and thus translating vendor SDK headers. By night Free Pascal developer, in charge of, among others, the Windows headers.
(**)
This is because what "C" means at the binary level still depends on the C compiler used, especially if there is no real universal system ABI. Think of stuff like:
C adding an underscore prefix on some binary formats (a.out, COFF?)
sometimes different C compilers have different opinions on what to do with small structures passed by value. Officially they shouldn't support it at all afaik, but most do.
structure packing sometimes varies, as do details of calling conventions (like whether integer registers are skipped if a parameter fits in an FPU register)
===== automated header conversions ====
While I don't know SWIG that well, I know and use some Delphi-specific header tools (h2pas, Darth/headconv, etc.).
However, I never use them in fully automatic mode, since more often than not the output sucks. Comments change lines or are stripped, and formatting is not retained.
I usually make a small script (in Pascal, but you can use anything with decent string support) that splits a header up, and then try a tool on relatively homogeneous parts (e.g. only structures, or only defines etc).
Then I check if I like the automated conversion output, and either use it, or try to make a specific converter myself. Since it is for a subset (like only structures) it is often way easier than making a complete header converter. Of course it depends a bit what my target is. (nice, readable headers or quick and dirty). At each step I might do a few substitutions (with sed or an editor).
The most complicated scheme I did was for the WinAPI commctrl and ActiveX/comctl headers. There I combined IDL and the C header (IDL for the interfaces, which are a bunch of unparsable macros in C, the C header for the rest), and managed to get the macros typed for about 80% (by propagating the typecasts in sendmessage macros back to the macro declaration, with reasonable (wparam, lparam, lresult) defaults).
The semi automated way has the disadvantage that the order of declarations is different (e.g. first constants, then structures then function declarations), which sometimes makes maintenance a pain. I therefore always keep the original headers/sdk to compare with.
The Jedi WinAPI conversion project might have more info; they translated about half of the Windows headers to Delphi, and thus have enormous experience.

I don't know, but if it's for Windows you might try either a straight C-like API (similar to the WinAPI) or packaging your code as a COM component, because I'd guess that most programming languages already know how to invoke the Windows API and/or use COM objects.

Regarding automatic wrapper generation, consider using SWIG. For Java, it will do all the JNI work. Also, it is able to translate complex OO-C++-interfaces properly (provided you follow some basic guidelines, i.e. no nested classes, no over-use of templates, plus the ones mentioned by Marco van de Voort).

Think C, nothing else. C is one of the most popular programming languages. It is widely used on many different software platforms, and there are few computer architectures for which a C compiler does not exist. All popular high-level languages provide an interface to C. That makes your library accessible from almost all platforms in existence. Don't worry too much about providing an object-oriented interface. Once you have the library done in C, an OOP, functional, or any other style of interface can be created in the appropriate client languages. No other systems programming language will give you C's flexibility and portability.

NestedVM I think is going to be slower than pure Java because of the array bounds checking on the int[][] that represents the MIPS virtual machine memory. It is such a good concept but might not perform well enough right now (until phone manufacturers add NestedVM support (if they do!), most stuff is going to be SLOW for now, n'est-ce pas?). Whilst it may be able to unpack JPEGs without error, speed is of no small concern! :)
Nothing else in what you've written sticks out, which isn't to say that it's right or wrong! The principles sound (mainly just listening to choice of words and language to be honest) like roughly standard best practice but I haven't thought through the details of everything you've said. As you said yourself, this really ought to be several questions. But of course doing this kind of thing is not automatically easy just because you're fixed on perhaps a slightly different architecture to the last code base you've worked on...! ;)
My thoughts:
All your comments on C interface compatibility sound sensible to me, pretty much best practice, except you don't seem to properly address memory management policy: some sentences sound a bit ambiguous/vague/wrong. The design of the memory management will be to a large extent determined by the access patterns made in your application, rather than the functionality per se. I suggest you study others' attempts at making portable interfaces, like the standard ANSI C API, Unix API, Win32 API, Cocoa, J2SE, etc., carefully.
If it were me, I'd write the library in a carefully chosen subset of the common elements of regular Java and Dalvik virtual machine Java, and also write my own custom parser that translates the code to C for platforms that support C, which would of course be most of them. I would suggest that if you restrict yourself to data types of various-sized ints, bools, Strings, Dictionaries and Arrays, and make careful use of them, that will help with cross-platform issues without affecting performance much most of the time.

Your assumptions seem OK, but I see trouble ahead, much of which you have already spotted in your assumptions.
As you said, you can't really export C++ classes and methods; you will need to provide a function-based C interface. Whatever facade you build around it, it will remain a function-based interface at heart.
The basic problem I see with that is that people choose a specific language and its runtime because their way of thinking (functional or object-oriented) or the problem they address (web programming, databases, ...) corresponds to that language in some way or other.
A library implemented in C will probably never feel like the libraries they are used to, unless they program in C themselves.
Personally, I would always prefer a library that "feels like Python" when I use Python, and one that feels like Java when I do Java EE, even though I know C and C++.
So your effort might be of little actual use (other than your gain in experience), because people will probably want to stick with their mindset, and rather re-implement the functionality than use a library that does the job, but does not fit.
I also fear the desired portability will seriously hamper development. Just think of the infinite build settings needed, and tests for that. I have worked on a project that tried to maintain compatibility for 5 operating systems (all posix-like, but still) and about 10 compilers, the builds were a nightmare to test and maintain.

Give it an XML interface, whether passed as a parameter and return value or as files through a command-line invocation. This may not seem as direct as a normal function interface, but is the most practical way to access an executable from, e.g., Java.
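A minimal sketch of that idea (mylib_process is a hypothetical stand-in for the real library entry point): the executable takes a request as XML on the command line and writes the reply as XML to stdout, which the client parses:

#include <cstdio>

// stub: the real library would parse the request and build a real reply
static void mylib_process(const char* request_xml, char* reply, unsigned size) {
    std::snprintf(reply, size, "<reply echo='%s'/>", request_xml);
}

int main(int argc, char** argv) {
    if (argc < 2) { std::fprintf(stderr, "usage: mylib-cli <request-xml>\n"); return 1; }
    char reply[4096];
    mylib_process(argv[1], reply, sizeof reply);
    std::puts(reply);    // a Java/PHP/shell client reads and parses stdout
    return 0;
}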

Related

Is it common practice to abstract library dependencies from implementation?

My answer to this question would be "no." But my coworkers disagree.
We're rebuilding our product and have a lot of critical decisions to make in the near-term.
While doing some of my own work I noticed that we've got some in-house C++ classes to abstract some of the POSIX API (threads, mutexes, semaphores, and rw locks) and other utility classes. Note that these classes are basic, and haven't been ported from Linux (portability is a factor in the rebuild.) We are also using POCO C++ libraries.
I brought this to the attention of my coworkers and suggested that we ditch our in-house classes in favour of their POCO equivalents. I want to take full advantage of the library we're already using. They suggested that we should implement our in-house classes using POCO, and further abstract additional POCO classes as necessary, so as not to depend on any specific C++ library (citing future unknowns - what if we want to use a different lib/framework like QT or boost, what if the one we choose turns out to be no good or development becomes inactive, etc.)
They also don't want to refactor legacy code, and by abstracting parts of POCO with our own classes, we can implement additional functionality (classic OOP.) Both of these arguments I can appreciate. However, I argue that if we're doing a recode we should go big, or go home. Now would be the time to refactor and it really shouldn't be that bad especially given the similarity between our classes and those in POCO (threads, etc.) I don't know what to say regarding the second point - should we only use extended classes where the functionality is necessary?
My coworkers also don't want to litter the POCO namespace all over the place. I argue that we should pick a library/framework/toolkit, and stick with it. Take full advantage of its features. Is this not typical practice? The only project I've seen that abstracts an entire framework is Freeswitch (that provides its own interface to APR.)
One suggestion is that the API we expose to each other, and potential customers, should be free of POCO, but it would be present in the implementation (which makes sense.)
None of us really have experience in these kinds of design decisions, and it shows in the current product. Having been at this since I was young, I've got some intuition that has brought me here, but no practical experience either. I really want to avoid poor solutions to problems that are already solved.
I think my question boils down to this: When building a product, should we a) choose a dominant framework on which to base most of our code, and b) expect that framework to be tightly coupled with the product? Isn't that the point of a framework? (Is framework or library more appropriate for POCO?)
First, the API that you expose should definitely be free of POCO, boost, qt, or any other type that is not part of the standard C++ library. This is because the base libraries have their own release cycle, distinct from the release cycle of your library. If the users of your library also use boost, but a different, incompatible, version, they would need to spend time to resolve the incompatibility. The only exception to this rule is when you design a library to be released as part of a wider framework - say, an addition to the POCO toolkit. In this case the release of your library is tied to the release of the entire toolkit.
Internally, however, you should avoid using your own wrappers, unless the library that you are abstracting out is a true "commodity library"1. The reason for this is that when you hide an external library behind your classes, most of the time you mimic the level of abstraction of the library that you are hiding. The code that uses your wrapper will program to the level of abstraction dictated by the external library. When you swap the implementation behind your wrapper for a different framework, it is very likely that you would either (1) adapt the new framework to fit the level of abstraction of the old framework, or (2) will need to change the way in which you use your wrapper. Both cases are highly suspect: if you do (1), perhaps you shouldn't switch in the first place, and if you do (2), then your wrappers prove to be useless.
1 By "commodity library" I mean a library that provides a level of abstraction commonly found in other libraries that serve a similar purpose.
There are two situations where I think it's worth having your own wrappers:
1) You've looked at several different mutex implementations on different systems/libraries, you've established a common set of requirements that they can all satisfy and that are sufficient for your software. Then you define that abstraction and implement it one or more times, knowing that you've planned ahead for flexibility. The rest of your code is written to rely only on your abstraction, not on any incidental properties of the current implementation(s). I have done this in the past, although not in code I can show you.
A classic example of this "least common interface" would be to restrict rename in the filesystem abstraction, on the basis that Windows cannot implement an atomic rename over an existing file. Your code must not rely on atomic rename-replacement if you might in future swap out your current *nix implementation for one that can't do that. You have to restrict the interface from the start.
When done right, this kind of interface can considerably ease any kind of future porting, either to a new system or because you want to change your third-party library dependencies. However, an entire framework is probably too big to successfully do this with -- essentially you'd be inventing and writing your own framework, which is not a trivial task and conceivably is a larger task than writing your actual software.
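As a concrete illustration of point 1, here is a deliberately minimal mutex abstraction, restricted to the operations every target was verified to support (pthread implementation shown; a Win32 CRITICAL_SECTION version would implement the same class):

#include <pthread.h>

class Mutex {
public:
    Mutex()  { pthread_mutex_init(&m_, 0); }
    ~Mutex() { pthread_mutex_destroy(&m_); }
    void lock()   { pthread_mutex_lock(&m_); }
    void unlock() { pthread_mutex_unlock(&m_); }
    // deliberately no try-lock, timeouts, or recursion: only the least
    // common denominator established across all target platforms
private:
    pthread_mutex_t m_;
    Mutex(const Mutex&);             // non-copyable
    Mutex& operator=(const Mutex&);
};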
2) You want to be able to mock/stub/sham/spoof/plagiarize/whatever the next clever technique is, the mutex in tests, and decide that you will find this easier if you have your own wrapper thrown over it than if you're trying to mess with symbols from third-party libraries, or that are built-in.
Note that defining your own functions called wrap_pthread_mutex_init, wrap_pthread_mutex_lock etc, that precisely mimic pthread_* functions, and take exactly the same parameters, might satisfy (2) but doesn't satisfy (1). And anyway, doing (2) properly probably requires more than just wrappers, you usually also want to inject the dependencies into your code.
Doing extra work under the heading of flexibility, without actually providing for flexibility, is pretty much a waste of time. It can be very difficult or even provably impossible to implement one threading environment in terms of another one. If you decide in future to switch from pthreads to std::thread in C++, then having used an abstraction that looks exactly like the pthreads API under different names is (approximately) no help whatsoever.
For another possible change you might make, implementing the full pthreads API on Windows is sort of possible, but probably more difficult than only implementing what you actually need. So if you port to Windows, all your abstraction saves you is the time to search and replace all calls in the rest of your software. You're still going to have to (a) plug in a complete Posix implementation for Windows, or (b) do the work to figure out what you actually need, and only implement that. Your wrapper won't help with the real work.

Cross-compiler library communication

I need to develop a C++ front-end GUI using MSVC that needs to communicate with the back-end library that is compiled with C++ Builder.
How can we define our interfaces so that we don't run into CRT library problems?
For example, I believe we will be unable to safely pass STL containers back and forth. Is that true?
I know I can pass POD types safely, but I am hoping that I can use some more sophisticated data structures as well.
You might find this article interesting: Binary-compatible C++ Interfaces. The lesson in general is: never pass STL containers, boost types, or anything of the like. As the two other answers say, your best bet is to stick with PODs and functions with a specified calling convention.
Since implementations of the STL vary from compiler to compiler, it is not safe to pass STL classes. You can then either require the user to use a specific implementation of the STL (and probably a specific version as well), or simply not use the STL between libraries.
Furthermore, stick with calling conventions whose behaviour can be considered cross-compiler friendly. For instance, __cdecl and __stdcall are handled equally on most compilers, while the __fastcall calling convention will be a problem, especially if you wish to use the code in C++ Builder.
As the article "Binary-compatible C++ Interfaces" mentions, you can use interfaces as well, as long as you remember a few basic principles:
Always make interfaces pure virtual classes (that is, no implementations).
Make sure to use a proper calling convention for the member functions in the interface (the article mentions __stdcall for Windows).
Keep memory cleanup on the same side of the DLL boundary.
And quite a few other things, like don't use exceptions, don't overload functions in the interface (compilers treat this differently), etc. Find them all at the bottom of the article.
You might want to read more about the Component Object Model (COM) if you choose to go with the C++ interfaces, to get an idea about how and why this will be able to work across compilers.
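For illustration, a rough sketch of the pattern the article describes (Windows-flavored; the ILogger name and factory are invented, and a real header would switch between dllexport and dllimport):

// shared header: a pure virtual class, no data members, no inline code
struct ILogger {
    virtual void __stdcall Log(const char* message, unsigned length) = 0;
    virtual void __stdcall Release() = 0;  // cleanup stays on the DLL's side
};
extern "C" __declspec(dllexport) ILogger* __stdcall CreateLogger();

// DLL side only:
class FileLogger : public ILogger {
public:
    void __stdcall Log(const char* message, unsigned length) { /* ... */ }
    void __stdcall Release() { delete this; }  // freed by the CRT that allocated it
};
extern "C" __declspec(dllexport) ILogger* __stdcall CreateLogger() {
    return new FileLogger();
}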
You should be able to pass data that you can safely pass via a C interface, in other words, PODs. Everything over and above PODs that is being passed by regular C or C++ function calls will run into issues with differing object layouts and different implementations of the runtime libraries.
You probably will be able to pass structs of PODs if you are very careful about how you lay them out in memory and ensure that both compilers are using the same data packing etc. Over and above C structs, you pretty much have a creek/paddle problem.
For passing more sophisticated data types, I would look into object component technologies like COM, CORBA or other technologies that allow you to make remote or cross-process function calls. These would solve the problem of marshalling the data between compilers and processes and thus solve your 'pod-only' problem.
Or you could write the front end using C++ Builder and save yourself a lot of grief and headaches.
I ran into problems passing STL containers even when using the same STL implementation, but with different levels of debug information etc. Passing PODs will be OK; C++ containers will almost certainly result in problems.

Why are drivers and firmwares almost always written in C or ASM and not C++?

I am just curious why drivers and firmware are almost always written in C or assembly, and not C++?
I have heard that there is a technical reason for this.
Does anyone know this?
Lots of love,
Louise
Because, most of the time, the operating system (or a "run-time library") provides the stdlib functionality required by C++.
In C and ASM you can create bare executables, which contain no external dependencies.
However, since Windows does support the C++ stdlib, most Windows drivers are written in (a limited subset of) C++.
Also, when firmware is written in ASM it is usually because either (A) the platform it is executing on does not have a C++ compiler or (B) there are extreme speed or size constraints.
Note that (B) hasn't generally been an issue since the early 2000's.
Code in the kernel runs in a very different environment than in user space. There is no process separation, so errors are a lot harder to recover from; exceptions are pretty much out of the question. There are different memory allocators, so it can be harder to get new and delete to work properly in a kernel context. There is less of the standard library available, making it a lot harder to use a language like C++ effectively.
Windows allows the use of a very limited subset of C++ in kernel drivers; essentially, those things which could be trivially translated to C, such as variable declarations in places besides the beginning of blocks. They recommend against use of new and delete, and do not have support for RTTI or most of the C++ standard library.
Mac OS X uses I/O Kit, a framework based on a limited subset of C++, though as far as I can tell more complete than the one allowed on Windows. It is essentially C++ without exceptions and RTTI.
Most Unix-like operating systems (Linux, the BSDs) are written in C, and I think that no one has ever really seen the benefit of adding C++ support to the kernel, given that C++ in the kernel is generally so limited.
1) "Because it's always been that way" - this actually explains more than you think - given that the APIs on pretty much all current systems were originally written to a C or ASM based model, and given that a lot of prior code exists in C and ASM, it's often easier to 'go with the flow' than to figure out how to take advantage of C++.
2) Environment - To use all of C++'s features, you need quite a runtime environment, some of which is just a pain to provide to a driver. It's easier to do if you limit your feature set, but among other things, memory management can get very interesting in C++, if you don't have much of a heap. Exceptions are also very interesting to consider in this environment, as is RTTI.
3) "I can't see what it does". It is possible for any reasonably skilled programmer to look at a line of C and have a good idea of what happens at a machine code level to implement that line. Obviously optimization changes that somewhat, but for the most part, you can tell what's going on. In C++, given operator overloading, constructors, destructors, exception, etc, it gets really hard to have any idea of what's going to happen on a given line of code. When writing device drivers, this can be deadly, because you often MUST know whether you are going to interact with the memory manager, or if the line of code affects (or depends on) interrupt levels or masking.
It is entirely possible to write device drivers under Windows using C++ - I've done it myself. The caveat is that you have to be careful about which C++ features you use, and where you use them from.
Except for wider tool support and hardware portability, I don't think there's a compelling reason to limit yourself to C anymore. I often see complicated hand-coded stuff done in C that can be more naturally done in C++:
The grouping into "modules" of functions (non-general purpose) that work only on the same data structure (often called "object") -> Use C++ classes.
Use of a "handle" pointer so that module functions can work with "instances" of data structures -> Use C++ classes.
File scope static functions that are not part of a module's API -> C++ private member functions, anonymous namespaces, or "detail" namespaces.
Use of function-like macros -> C++ templates and inline/constexpr functions
Different runtime behavior depending on a type ID with either hand-made vtable ("descriptor") or dispatched with a switch statement -> C++ polymorphism
Error-prone pointer arithmetic for marshalling/demarshalling data from/to a communications port, or use of non-portable structures -> C++ stream concept (not necessarily std::iostream)
Prefixing the hell out of everything to avoid name clashes -> C++ namespaces
Macros as compile-time constants -> C++11 constexpr constants
Forgetting to close resources before handles go out of scope -> C++ RAII
None of the C++ features described above cost more than the hand-written C implementations. I'm probably missing some more. I think the inertia of C in this area has more to do with C being mostly used.
Of course, you may not be able to use STL liberally (or at all) in a constrained environment, but that doesn't mean you can't use C++ as a "better C".
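For instance, here are the first two items in that list side by side (Timer is an invented example; the two versions live in separate headers):

/* C version (timer.h): a module with an opaque "handle" */
typedef struct Timer Timer;
Timer* Timer_Create(unsigned period_ms);
void   Timer_Start(Timer* self);
void   Timer_Destroy(Timer* self);

// C++ version (timer.hpp) expressing the same thing directly
class Timer {
public:
    explicit Timer(unsigned period_ms) : period_ms_(period_ms) {}
    void start();
private:
    unsigned period_ms_;   // what the C version hides behind the handle
};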
The comments I run into as why a shop is using C for an embedded system versus C++ are:
C++ produces code bloat.
C++ exceptions take up too much room.
C++ polymorphism and virtual tables use too much memory or execution time.
The people in the shop don't know the C++ language.
The only valid reason may be the last. I've seen C language programs that incorporate OOP, function objects and virtual functions. It gets very ugly very fast and bloats the code.
Exception handling in C, when implemented correctly, takes up a lot of room. I would say about the same as in C++. The benefit of C++ exceptions: they are in the language, and programmers don't have to reinvent the wheel.
The reason I prefer C++ to C in embedded systems is that C++ is a more strongly typed language. More issues can be found at compile time, which reduces development time. Also, C++ is an easier language than C in which to implement object-oriented concepts.
Most of the reasons against C++ are around design concepts rather than the actual language.
The biggest reason C is used instead of say extremely guarded Java is that it is very easy to keep sight of what memory is used for a given operation. C is very addressing oriented. Of key concern in writing kernel code is avoiding referencing memory that might cause a page fault at an inconvenient moment.
C++ can be used but only if the run-time is specially adapted to reference only internal tables in fixed memory (not pageable) when the run-time machinery is invoked implicitly eg using a vtable when calling virtual functions. This special adaptation does not come "out of the box" most of the time.
Integrating C with a platform is much easier to do as it is easy to strip C of its standard library and keep control of memory accesses utterly explicit. So what with it also being a well-known language it is often the choice of kernel tools designers.
Edit: Removed reference to new and delete calls (this was wrong/misleading); replaced with more general "run-time machinery" phrase.
The reason that C, not C++ is used is NOT:
Because C++ is slower
Or because the C runtime is already present.
It IS because C++ uses exceptions.
Most implementations of C++ language exceptions are unusable in driver code because drivers are invoked when the OS is responding to hardware interrupts. During a hardware interrupt, driver code is NOT allowed to use exceptions, as that would/could cause recursive interrupts. Also, the stack space available to code while in the context of an interrupt is typically very small (and non-growable, as a consequence of the no-exceptions rule).
You can of course use new(std::nothrow), but because exceptions in C++ are now ubiquitous, that means you cannot rely on any library code to use std::nothrow semantics.
It IS also because C++ gave up a few features of C:
In drivers, code placement is important. Device drivers need to be able to respond to interrupts. Interrupt code MUST be placed in code segments that are "non paged", or permanently mapped into memory, as, if the code was in paged memory, it might be paged out when called upon, which will cause an exception, which is banned.
In C compilers that are used for driver development, there are #pragma directives that can control which type of memory functions end up on.
As non-paged pool is a very limited resource, you do NOT want to mark your entire driver as non-paged. C++, however, generates a lot of implicit code (default constructors, for example). There is no way to bracket C++ implicitly generated code to control its placement, and because conversion operators are automatically called, there is no way for code audits to guarantee that there are no side effects calling out to paged code.
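A sketch of what that placement control looks like, assuming an MSVC/WDK-style toolchain (the exact directive spellings vary by environment):

#pragma code_seg("PAGE")        // functions below go into pageable memory
void SlowConfigPath(void) { /* may block; never runs at interrupt time */ }
#pragma code_seg()              // back to the default, non-paged text segment
void InterruptService(void) { /* must stay resident in non-paged memory */ }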
So, to summarise: the reason C, not C++, is used for driver development is that drivers written in C++ would either consume unreasonable amounts of non-paged memory or crash the OS kernel.
C is very close to a machine independent assembly language. Most OS-type programming is down at the "bare metal" level. With C, the code you read is the actual code. C++ can hide things that C cannot.
This is just my opinion, but I've spent a lot of time in my life debugging device drivers and OS related things. Often by looking at assembly language. Keep it simple at the low level and let the application level get fancy.
Windows drivers are written in C++.
Linux drivers are written in C because the kernel is written in C.
Probably because C is still often faster, smaller when compiled, and more consistent in compilation between different OS versions, and has fewer dependencies. Also, as C++ is really built on C, the question is: do you need what it provides?
There is probably something to the fact that people who write drivers and firmware are usually used to working at the OS level (or lower), which is in C, and therefore are used to using C for this type of problem.
The reason that drivers and firmware are mostly written in C or ASM is that there is no dependency on the actual runtime libraries. Imagine this hypothetical driver written in C:
#include <stdio.h>

#define OS_VER 5.10
#define DRIVER_VER "1.2.3"

/* driverstructinfo is an imaginary type supplied by the imaginary OS */
void fooDispatch(driverstructinfo *dsi);   /* declare before first use */

int drivermain(driverstructinfo **dsi){
    if ((*dsi)->version > OS_VER){
        (*dsi)->InitDriver();
        printf("FooBar Driver Loaded\n");
        printf("Version: %s", DRIVER_VER);
        (*dsi)->Dispatch = fooDispatch;
    }else{
        (*dsi)->Exit(0);
    }
    return 0;   /* report success to the loader */
}

void fooDispatch(driverstructinfo *dsi){
    printf("Dispatched %d\n", dsi->GetDispatchId());
}
Notice that runtime library support would have to be pulled in and linked in at compile/link time. It would not work, because the runtime environment is not fully set up while the operating system is in its load/initialize phase, so there would be no clue how to printf. It would probably sound the death knell of the operating system (a kernel panic for Linux, a blue screen for Windows), as there is no reference for how to execute the function.
Put another way: driver code has the privilege to execute along with the kernel code, sharing the same space. Ring 0 is the ultimate code execution privilege (all instructions allowed); ring 3 is where the front end of the operating system runs (limited execution privilege). In other words, ring 3 code cannot use an instruction that is reserved for ring 0; the kernel will kill the code by trapping it, as if to say 'Hey, you have no privilege to tread on ring 0's domain'.
The other reason it is written in assembler is mainly code size and raw native speed. This could be the case for, say, a serial port driver, where input/output is critical to the function in relation to timing, latency, and buffering.
Most device drivers (in the case of Windows) are built with a special compiler toolchain (the WinDDK) which can compile C code but has no linkage to the normal standard C runtime libraries.
There is one toolkit that can enable you to build a driver within Visual Studio: VisualDDK. By all means, building a driver is not for the faint of heart; you will get stress-induced activity from staring at blue screens and kernel panics, wondering why, debugging drivers, and so on.
The debugging side is harder too: ring 0 code is not easily accessible from ring 3 code, as the doors to it are shut. Requests go through the kernel's trap door (for want of a better word): if asked politely, the door still stays shut while the kernel delegates the task to a handler residing in ring 0, executes it, and passes whatever results are returned back out to the ring 3 code, the door staying firmly shut. That is the analogy for how userland code can execute privileged code in ring 0.
Furthermore, this privileged code, can easily trample over the kernel's memory space and corrupt something hence the kernel panic/bluescreens...
Hope this helps.
Perhaps because a driver doesn't require object oriented features, while the fact that C still has somewhat more mature compilers would make a difference.
There are many styles of programming, such as procedural, functional, object-oriented, etc. Object-oriented programming is more suited to modeling the real world.
I would use object orientation for device drivers if it suited them. But most of the time, when programming device drivers, you don't need the advantages provided by C++, such as abstraction, polymorphism, code reuse, etc.
Well, IOKit drivers for Mac OS X are written in a C++ subset (no exceptions, templates, or multiple inheritance). And it is even possible to write Linux kernel modules in Haskell.
Otherwise, C, being a portable assembly language, perfectly captures the von Neumann architecture and computation model, allowing direct control over all its peculiarities and drawbacks (such as the "von Neumann bottleneck"). C does exactly what it was designed for and captures its target abstraction model completely and flawlessly (well, except for the implicit assumption of a single control flow, which could have been generalized to cover the reality of hardware threads), and this is why I think it is a beautiful language. Restricting the expressive power of the language to such basics eliminates most of the unpredictable transformation details when different computational models are applied to this de facto standard. In other words, C makes you stick to basics and allows pretty much direct control over what you are doing; for example, when modeling behavioral commonality with virtual functions, you control exactly how the function pointer tables get stored and used, in contrast to C++'s implicit vtbl allocation and management. This is in fact helpful when considering caches.
Having said that, the object-based paradigm is very useful for representing physical objects and their dependencies. Adding inheritance, we get the object-oriented paradigm, which in turn is very useful for representing physical objects' structure and behavior hierarchies. Nothing stops anyone from using it and expressing it in C, again allowing full control over exactly how your objects will be created, stored, destroyed and copied. In fact, that is the approach taken in the Linux device model. They got "objects" to represent devices, an object implementation hierarchy to model power management dependencies, and hacked-up inheritance functionality to represent device families, all done in C.
Because at the system level, drivers need to control every bit of every byte of memory. Higher-level languages cannot do that, or cannot do it natively; only C/ASM can.

Learning C when you already know C++?

I think I have an advanced knowledge of C++, and I'd like to learn C.
There are a lot of resources to help people going from C to C++, but I've not found anything useful to do the opposite of that.
Specifically:
Are there widely used general purpose libraries every C programmer should know about (like boost for C++) ?
What are the most important C idioms (like RAII for C++) ?
Should I learn C99 and use it, or stick to C89 ?
Any pitfalls/traps for a C++ developer ?
Anything else useful to know ?
There's a lot here already, so maybe this is just a minor addition but here's what I find to be the biggest differences.
Library:
I put this first because, in my opinion, this is the biggest difference in practice. The C standard library is very(!) sparse: it offers a bare minimum of services, namely file I/O, some very basic string functions, and math. For everything else you have to roll your own or find a library to use (and many people do). I find I miss extended containers (especially maps) heavily when moving from C++ to C, but there are a lot of other ones.
Idioms:
Both languages have manual memory (resource) management, but C++ gives you some tools to hide the need. In C you will find yourself tracking resources by hand much more often, and you have to get used to that. Particular examples are arrays and strings (C++ vector and string save you a lot of work), smart pointers (you can't really do "smart pointers" as such in C. You can do reference counting, but you have to up and down the reference counts yourself, which is very error prone -- the reason smart pointers were added to C++ in the first place), and the lack of RAII generally which you will notice everywhere if you are used to the modern style of C++ programming.
You have to be explicit about construction and destruction. You can argue about the merits of flaws of this, but there's a lot more explicit code as a result.
Error handling. C++ exceptions can be tricky to get right so not everyone uses them, but if you do use them you will find you have to pay a lot of attention to how you do error notification. Needing to check for return values on all important calls (some would argue all calls) takes a lot of discipline and a lot of C code out there doesn't do it.
Strings (and arrays in general) don't carry their sizes around. You have to pass a lot of extra parameters in C to deal with this (see the sketch after this list).
Without namespaces you have to manage your global namespace carefully.
There's no explicit tying of functions to types as there is with class in C++. You have to maintain a convention of prefixing everything you want associated with a type.
You will see a lot more macros. Macros are used in C in many places where C++ has language features to do the same, especially symbolic constants (C has enum but lots of older code uses #define instead), and for generics (where C++ uses templates).
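A sketch of two of those points together: the length travels with the buffer, and public names carry a module prefix (str_ here is an invented convention):

#include <stdio.h>

/* hypothetical helper: the caller supplies the buffer and its capacity */
int str_format_greeting(char *out, size_t out_size, const char *name) {
    int n = snprintf(out, out_size, "Hello, %s!", name);
    return (n >= 0 && (size_t)n < out_size) ? 0 : -1;  /* flag truncation */
}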
Advice:
Consider finding an extended library for general use. Take a look at GLib or APR.
Even if you don't want a full library consider finding a map / dictionary / hashtable for general use. Also consider bundling up a bare bones "string" type that contains a size.
Get used to putting module or "class" prefixes on all public names. This is a little tedious but it will save you a lot of headaches.
Make heavy use of forward declaration to make types opaque. Where in C++ you might have private data in a header and rely on private is preventing access, in C you want to push implementation details into the source files as much as possible. (You actually want to do this in C++ too in my opinion, but C makes it easier, so more people do it.)
C++ reveals the implementation in the header, even though it technically hides it from access outside the class.
// C.hh
class C
{
public:
void method1();
int method2();
private:
int value1;
char * value2;
};
C pushes the 'class' definition into the source file. The header is all forward declarations.
// C.h
typedef struct C C; // forward declaration
void c_method1(C *);
int c_method2(C *);
// C.c
struct C
{
int value1;
char * value2;
};
GLib is a good starting point for modern C and gets you used to concepts like opaque types and semi-object orientation, which are common stylistically in modern C. On the other end of the spectrum, the standard POSIX APIs are kind of "classical" C.
The biggest gap in going from C++ to C isn't syntax, it's idiom, and there, as in C++, there are different schools of programming. You'll write fairly different C if you're doing a device driver vs., say, an XML parser.
Q5. Anything else useful to know?
Buy a copy of K&R2 and read it through. On a cost per page basis it'll probably be the most expensive book on computing you'll ever buy with your own money but it will give you a deep appreciation for C and the thought processes that went into it. Doing the exercises will also hone your skills and get you used to what is available in the language as opposed to C++.
Taking your questions in order:
Unfortunately, there's nothing like Boost for C.
Nothing that's really on the order of RAII either.
The only compiler that tries to implement C99 is Comeau.
Lots of them all over the place, I'm afraid.
Quite a bit. C takes quite a different mindset than C++.
Some of those may seem rather terse, but such is life. There are some good libraries for C, but no one place like Boost where they've been collected together or given a relatively uniform interface, as Boost has done for C++.
There are lots of idioms, but many of them are in how you edit your code, such as sort of imitating RAII by writing an fopen() and a matching fclose() in quick succession, and only afterwards writing the code in between to process the data.
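For example, the fopen()/fclose() idiom just described, with the release written before the middle is filled in:

#include <stdio.h>
#include <stdlib.h>

int process_file(const char *path) {
    FILE *f = fopen(path, "r");              /* acquire...            */
    if (!f) return -1;
    char *buf = malloc(4096);
    if (!buf) { fclose(f); return -1; }      /* every exit path frees */
    /* ... process the data here ... */
    free(buf);
    fclose(f);                               /* ...matching release   */
    return 0;
}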
The pitfalls/traps that wait around every corner mostly stem from the lack of dynamic data structures like string and vector, so you frequently have to write such things yourself. Without operator overloading, constructors, etc., it's considerably more difficult to make them really general-purpose. Lots of libraries have them, but you end up rolling your own anyway because: the library doesn't do quite what you want, or using the library is more work than it's worth.
The difference in mindset is almost certainly the biggest thing, at least for me. When I'm writing C++, I concentrate almost all my real effort on designing the cleanest possible interfaces, and I tend to treat the implementation of an interface as almost throwaway code. For the most part, I don't plan on making minor tweaks to that part of the code -- as long as the interface is good, replacing the entire implementation is usually easy enough that I don't worry about it much.
In C, it seems (at least to me) much more difficult to separate the interface from the implementation nearly as thoroughly or cleanly. As such, I tend to spend a lot more time trying to implement every part of the code as cleanly as possible, because later changes tend to be more difficult and throwing away and replacing pieces that aren't very good is substantially less likely to work out very well.
Edit (since people have raised questions about C99 support): While my statement about lack of C99 support may seem harsh, the fact is that it's true.
MS VC++: supports C95, and has a couple C99 features (e.g. C++ style comment delimiters), mostly because C99 standardized what they'd previously had as an extension.
Gnu: According to C99 Features Status page, the most recent iteration of gcc (4.4) has some C99 features, but some (including VLAs) are characterized as "broken", and others as "missing". Some of the missing "features" are really whole areas, not individual features.
PCC: The PCC site claims C99 conformance only as a goal for the future, not as a present reality.
Embarcadero Technologies (nee Borland) don't seem to say anything about conformance with C99 at all -- from the looks of things, the last time they worked on the C compiler may well have been before C99 was even released.
Microsoft openly states that they have no current plans for supporting C99, and they're not going to even consider it until VS 2010 is released. Though I can't find any public statements about it, Embarcadero appears about the same: no hint of a current plan, and nor even that they're going to consider working on it anytime soon.
While gcc and pcc both seem to have plans, they're currently just that: plans. They both openly admit that at the present time, they aren't really even very close to conforming with C99.
Here's a quick reference of some of the major things you'll want to know.
This is advice you didn't ask for: I think most potential employers take it as a given that if you know C++, you know C. Learning the finer points of C, while an interesting academic exercise, will IMO not earn you a lot of eligibility points.
If you ever end up in a position of needing to do C, you'll catch on to the differences quickly enough.
But don't listen to me. I was too lazy and stupid to learn C++ :)
Anything else useful to know ?
C99 is not a subset of any revision of C++; it is a separate language.
Just about the biggest shock I had when I went back to C was that variables must be declared at the top of a block (a function body, if statement, or for loop); in C89 you can't introduce them mid-block or in a for-loop initializer.
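For example (valid C89):

void count_to_ten(void)
{
    int i;                        /* C89: declarations at the top of the block */
    /* int j = 0;  <- mid-block declarations only arrived with C99 */
    for (i = 0; i < 10; i++) {    /* C89: no 'for (int i = ...)' either */
        /* ... */
    }
}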
Except for very few cases, any C code is valid C++, so there isn't actually anything new you should learn.
It's more a matter of unlearning.
Not using new, not using classes, defining variables at the beginning of a code block, etc.
In a strict sense, C++ is not object-oriented, but it's still procedural with support for classes. That said, you are actually using procedural programming in C++ already, the most shocking change will be not having classes, inheritance, polymorphism, etc.
As C++ is almost a superset of C89, you should know just about all of C89 already. You probably want to concentrate on the differences between C89 and C99.

To write a bootloader in C or C++?

I am writing a program, more specifically a bootloader, for an embedded system. I am going to use a C library to interact with some of the hardware components and I have the choice of writing it either in C or C++. Is there any reason I should choose one over the other? I do not need the object oriented features of C++ but it does have a stronger type system. Could it have other language features that would make the program more robust? I know some people avoid C++ because it can (but not always) generate large firmware images.
This isn't a particularly straightforward question to answer. It depends on a number of factors including:
How you prefer to layout your code.
Whether there's a C++ compiler available for your target (and any other targets you may wish to use the bootloader on).
How critical the code size is for your application (we're talking about 10% extra maybe, not MB as suggested by another answer).
Personally, I really like classes as a way of laying out my code. Even when writing C code, I'll tend to keep everything in modular files with file-scope static functions "simulating" member functions and (a few) file-scope static variables to "simulate" member variables. Having said that, most of my existing embedded projects (all of which are relatively small scale, up to a maximum of 128kB flash including bootloader, but usually less) have tended to be written in C. Now that I have a C++ compiler though, I'm certainly considering moving to C++.
There are considerable benefits to C++ from simply using references, overloading and templates, even if you don't go as far as classes. Certainly, I'd stop short of using a lot of more advanced features, including the use of dynamic memory allocation (new). Then again, I'd avoid dynamic memory allocation (malloc etc) in embedded C as well if possible.
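For instance, a template and an inline accessor can replace function-like macros even in a freestanding image with no runtime support (the register address below is made up for illustration):

// type-safe replacement for #define MIN(a,b) ((a)<(b)?(a):(b))
template <typename T>
inline const T& min_of(const T& a, const T& b) { return a < b ? a : b; }

// checked, named access to a memory-mapped register (hypothetical address)
inline volatile unsigned long& uart_status() {
    return *reinterpret_cast<volatile unsigned long*>(0x4000C000UL);
}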
If you have a C++ compiler (even if it's only g++), it is worth running your code through it just for the additional type checking so that you can reduce the number of problems in your code. The C++ compiler can pick up on a few things that even static analysis tools won't spot.
For a good discussion on many invalid reasons people reject C++, see Dan Saks' article on Embedded.com.
For a boot-loader the obvious choice is C, especially on an embedded system. The generated code will need to be close to the metal, and very easy to debug, likely by dropping into assembly, which quickly becomes difficult without care in C++. Also C tool-chains are far more ubiquitous than C++ tool-chains, allowing your boot-loader to be used on more platforms. Lastly, generated binaries are typically smaller, and use less memory when written C style.
If you don't need to use object orientation, use C. Simple choice there. It's simpler and easier, whilst accomplishing the same task.
Some die hards will disagree, but OO is what makes C++ > C, and vice versa in a lot of circumstances.
I would use C unless there is a specific reason to use C++. For a Bootloader you are not really going to need OO.
Use the simplest tool that will accomplish the job.
Writing programs in C is not the same as writing them in C++. If you only know how to do it in C++, then your choice is C++. For writing a bootloader it is better to minimize code, so you will probably have to disable the standard C++ library. If you know how to write in C, then you should use C; it is the more common choice for this kind of task.
Most of the previous answers assume that your bootloader is small and simple which is typically the case; however, if it becomes more complex (i.e. you need to be able to load from an Ethernet port, a USB port, or a serial port...you need to validate the code that is being loaded before you wipe out your existing code, etc.) you may want to consider C++.
I have also found that the bootloader and the application typically share some amount of common code so you may also want to consider using the same language as your application to facilitate the code sharing.
The C language is substantially easier to parse than C++. This means a program that is both valid C and valid C++ will compile faster as a C program. Probably not a major concern, but it is just another reason why C++ is probably overkill.
Go with C++ and choose only the language features you need. You still have full control of the output object code as long as you understand the C++ abstractions that you're using.
Use of OO can still perform well if you avoid virtual functions. Avoid immutable object types that require a lot of copying in order to pass values, like std::string. But you can still use features like templates without any real impact on runtime performance.
Use C with µClibc. It will make your code simpler and reduce its footprint. It can be found at www.uclibc.org.