Kernel mode programming using simplistic C++?

I am about to delve into kernel land. My question relates to the programming language. I have seen most tutorials to be written in C. I currently program in C++ and Assembly. I also studied C before C++, but I didn't use it a lot. Would it be possible to program in kernel mode using simplistic C++ without using any advanced constructs? Basically I am trying to avoid the minor differences that exist between the two languages (like no bool in C, no automatic returning of 0 from main, really minor differences). I won't be using templates, classes and the like. So would it be possible to program in kernel mode using simplistic C++ without any major annoyances?

Even if not officially supported, you can use C++ for Windows kernel development.
You should be aware of the following things:
You MUST define the new and delete operators to map to ExAllocatePoolWithTag and ExFreePool (a minimal sketch follows this list).
Try to avoid virtual functions. It does not seem possible to control the location of an object's vtable, and this may have unexpected results if it ends up in a pageable section and your code is called at IRQL >= DISPATCH_LEVEL.
If you still need virtual method tables, then lock the .rdata section before using it at IRQL >= DISPATCH_LEVEL.
Apart from these kinds of limitations, you can use C++ for your driver development.
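For reference, a minimal sketch of the new/delete mapping mentioned above (the pool type and the tag are illustrative choices, not fixed requirements):

#include <ntddk.h>

// Route global new/delete through the kernel pool allocator.
// With C++ exceptions unavailable in the kernel, returning nullptr
// on failure is the usual convention; callers must check for it.
void* __cdecl operator new(size_t size)
{
    return ExAllocatePoolWithTag(NonPagedPool, size, 'pcXE');
}

void __cdecl operator delete(void* p)
{
    if (p != nullptr)
        ExFreePool(p);
}

// Sized delete, which newer compilers may emit calls to.
void __cdecl operator delete(void* p, size_t)
{
    if (p != nullptr)
        ExFreePool(p);
}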

Adding two links if you want to do C++ in the WDK. It's a one-time setup effort.
The NT Insider:Guest Article: C++ in an NT Driver
The NT Insider:Global Relief Effort - C++ Runtime Support for the NT DDK
I have seen kernel code use lots of auto-locks/smart pointers; although they make the code neat, I feel there is a learning curve for beginners to fully understand them, and if abused, lots of constructor/destructor calls slow things down.
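For illustration, a minimal sketch of the auto-lock idea (a hypothetical guard, not from any particular codebase):

#include <ntddk.h>

// RAII wrapper around a KSPIN_LOCK: the constructor acquires,
// the destructor releases, so the unlock cannot be forgotten.
class SpinLockGuard
{
    KSPIN_LOCK& m_lock;
    KIRQL m_oldIrql;
public:
    explicit SpinLockGuard(KSPIN_LOCK& lock) : m_lock(lock)
    {
        KeAcquireSpinLock(&m_lock, &m_oldIrql);  // raises IRQL to DISPATCH_LEVEL
    }
    ~SpinLockGuard()
    {
        KeReleaseSpinLock(&m_lock, m_oldIrql);   // restores the previous IRQL
    }
    SpinLockGuard(const SpinLockGuard&) = delete;
    SpinLockGuard& operator=(const SpinLockGuard&) = delete;
};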

If you write your code carefully, knowing exactly what stands behind each definition, operator, call, etc., then there should be no problem writing kernel code in C++. The Microsoft document mentioned in the comments above is good reading precisely because it describes situations in which C++ isn't as transparent as C or doesn't provide similarly important guarantees, and from that you know what to avoid.

Microsoft has written a guide. Basically they tell us to steer clear of anything but using C++'s relaxed rules of variable declarations... sigh. Anything else and you're on your own. Anyway, it can't be all that bad, but here are some examples of what you need to remember:
Memory allocated in the paged pool can get paged out. If you try to access it at IRQL >= DISPATCH_LEVEL you're screwed (or at least you will be every once in a while, when your customer complains about your driver BSODding their system)! Test your driver on a low memory system under load!
The non-paged pool is limited; you most likely cannot allocate all your needs from it.
The stack is much smaller than in user mode, ~12-24 KB.
Anything involving floating point math in the kernel must be protected by KeSaveFloatingPointState and KeRestoreFloatingPointState (see the sketch below)
C++ exceptions: No
Read the guide for more. Now if you can make sure that the generated code follows the rules, go ahead and use C++.
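As an illustration of the floating point rule, a hedged sketch (ComputeWithFloats and its computation are placeholders):

NTSTATUS ComputeWithFloats()
{
    KFLOATING_SAVE save;
    // Save the FP state before touching any float math in the kernel.
    NTSTATUS status = KeSaveFloatingPointState(&save);
    if (!NT_SUCCESS(status))
        return status;               // FP state not available at this IRQL

    volatile float result = 3.0f * 0.5f;  // stand-in for the real FP work
    (void)result;

    KeRestoreFloatingPointState(&save);
    return STATUS_SUCCESS;
}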


gcc - how to detect pointer-based memory access

I'm focusing on micropython, specifically the branch dynamic-native-modules.
This feature will, in the future, allow you to compile a C/C++ function into a native .obj and package it together with a .py interface for a huge speed boost.
Awesome! But the issue is that if you're using an RTOS, which doesn't have virtual memory, then any executing native code can access any part of the address space, including peripherals, the RTOS's state, etc.
You don't want the user to be able to do something like this:
void user_func()
{
    /* point to arbitrary memory, potentially the reset registers,
       flash erase . . . you get the point */
    int * a = (int*)0x1234;
    *a = 0x10110000; // DESTROY!!!
}
Even the following should be disallowed:
void user_func()
{
    int a;
    *(int*)(&a - 1000) = 0x10010111;
}
SOLUTIONS?
Create my own version of gcc (for each binary format)
Decompile .obj files and detect use of pointers (for each binary format)
FEEDBACK TO COMMENTS
I get that it may be impossible to stop a malicious user, but that's not the #1 worry. We want to stop well-meaning but accidental code. If it's not possible to stop every single case, that's OK.
If we can prohibit/detect explicit pointer accesses and simply provide warnings regarding array use, that is still very valuable.
WARNING: YOU'RE USING AN ARRAY! MAKE SURE YOU DON'T GO OUT-OF-BOUNDS
Your best chance is a GCC plugin which looks at the frontend-generated GENERIC or GIMPLE IRs and implements the policies you want. Depending on the policies and the source code you want to accept, this could be a lot of work and very difficult.
If you want a purely syntax-based or type-based approach (simply rejecting all pointer arithmetic), Clang with its ASTs is easier to work with than GCC.
There could be a way of doing what you want - as long as you protect from honest mistakes, not deliberate attempts to break things.
First, you embrace C++ Core Guidelines and use support from the tools: https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#S-tools
For your purposes, it will stop your code from using raw pointers and pointer arithmetic in non-library code. Period. Core guidelines do more than that, but this is what is related to your question.
Then you do a simple grep on the code to make sure it is not using pointers outside of what the Guidelines bless - this would be a more or less simple grep.
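For illustration, a hedged sketch of the style that survives such a grep (gsl::span comes from the Microsoft GSL support library; the function itself is hypothetical):

#include <gsl/gsl>

// A bounds-checked view instead of a raw pointer + length pair.
int sum(gsl::span<const int> values)
{
    int total = 0;
    for (int v : values)    // no pointer arithmetic anywhere
        total += v;
    return total;
}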
In practice, it is not reasonably possible.
You might consider changing the compilation (e.g. with a GCC plugin, as mentioned in Florian Weimer's answer) to check every access to arrays, but that makes the generated code significantly slower. Your native hardware is slow enough; you don't want to make it even slower.
And Python is not exactly the right source language. Its dynamic typing would make it quite slow already.
Perhaps you might consider a statically typed language, like OCaml (combined with a JIT or AOT compilation library like GCCJIT, etc.). The type system (and its inference) increases the safety (and the speed) of the generated code, and with a lot of hard work (several years, probably worth a PhD since you'll need to do new research) you might improve the type inference to even infer the occurrences where the array index is never out of bounds (so that an index bounds check is not even needed).
In most cases, upgrading the hardware (to a Raspberry Pi class board, with an MMU, and probably the ability to run a real OS with some virtual memory) is probably the most pragmatic approach.
PS. Be aware of Rice's theorem. Most static source analysis cannot work reliably and well (be sound and complete, in technical terms).

Atmel Studio 6 no new and delete operators for C++

I am using an ATmega32 microcontroller with Atmel Studio 6.
I have some code that contains the new operator.
When I try to use it, it says that it is not defined and I don't know why.
I searched on Google but didn't find anything relevant yet. All I could find were pieces of code that define the new and delete operators, but I really don't want to work that way. Also, nullptr is missing.
Any other solutions?
It is perfectly possible to use dynamic memory management in embedded situations - you just need to be careful how you do it. In this case, using malloc() and free() is likely easier, though to keep C++ code compatible you may want to wrap them in operator new and operator delete (a sketch follows). A good source of information on the topic is over at AVR Freaks.
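For illustration, a minimal sketch of that wrapping (the usual pattern seen in AVR forum posts; adjust for your toolchain):

#include <stdlib.h>

// Route the C++ operators through the C allocator AVR-GCC does provide.
void* operator new(size_t size)   { return malloc(size); }
void  operator delete(void* p)    { free(p); }
void* operator new[](size_t size) { return malloc(size); }
void  operator delete[](void* p)  { free(p); }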
The reason these operators don't exist is simply that AVR-GCC does not fully support C++, only parts of it. This is partly due to the nature of embedded programming - some of the more advanced C++ features can rapidly chew up flash memory and RAM. The C vs. C++ on embedded platforms argument is an old and often heated one, but it usually comes down to the situation. Here is another forum topic on the subject.
It sounds like embedded stuff. It's not a rare case that you can use only C, not C++.
Anyway, new/delete - and malloc()/free() as well - are not a good idea in the embedded world. Your program must work in all circumstances; there is no way to fail. It's simply not applicable. You have no console or log file to write a message to, or if you have one, no one will check it and no one will handle the error. You can flash a red LED indicator, but in most cases it's not permitted, e.g. pressing the brake pedal should operate the brake, not a red LED.
You should set up fixed-size pools instead of dynamically allocating/deallocating items, and you have no option not to handle every possible input. Your code will be full of "MAX_..." defines (a sketch follows).
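A minimal sketch of the fixed-size pool idea (Connection and MAX_CONNECTIONS are hypothetical names):

#define MAX_CONNECTIONS 8

struct Connection { int id; bool inUse; };

static Connection g_pool[MAX_CONNECTIONS];

Connection* acquireConnection()
{
    for (int i = 0; i < MAX_CONNECTIONS; ++i)
        if (!g_pool[i].inUse) { g_pool[i].inUse = true; return &g_pool[i]; }
    return 0;   // pool exhausted: the caller must handle this, always
}

void releaseConnection(Connection* c) { c->inUse = false; }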

How to overcome C++'s lack of tool support for embedded systems? [closed]

The question is NOT about the Linux kernel. It is NOT a C vs. C++ debate either.
I did some research, and it seems to me that C++ lacks tool support when it comes to exception handling and memory allocation for embedded systems:
Why is the linux kernel not implemented in C++?
Beside the accepted answer see also Ben Collins' answer.
Linus Torvalds on C++:
"[...] anybody who designs his kernel modules for C++ is [...]
(b) a C++ bigot that can't see what he is writing is really just C anyway"
" - the whole C++ exception handling thing is fundamentally broken. It's especially broken for kernels.
- any compiler or language that likes to hide things like memory allocations behind your back just isn't a good choice for a kernel."
JOINT STRIKE FIGHTER AIR VEHICLE C++ CODING STANDARDS:
"AV Rule 208 C++ exceptions shall not be used"
Are the exception handling and the memory allocation the only points where C++ apparently lacks tool support (in this context)?
To fix the exception handling issue, does one have to provide a bound on the time until the exception is caught after it is thrown?
Could you please explain why the memory allocation is an issue? How can one overcome this issue; what has to be done?
As I see this, in both cases one has to provide an upper bound at compile time on something nontrivial that happens and depends on things at run-time.
Answer:
No, dynamic casts were also an issue but it has been solved.
Basically yes. The time needed to handle exceptions has to be bounded by analyzing all the throw paths.
See solution on slides "How to live without new" in Embedded systems programming. In short: pre-allocate (global objects, stacks, pools).
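For concreteness, a hedged sketch of the pre-allocation idea using placement new into static storage (Motor is a hypothetical class):

#include <new>   // placement new

struct Motor
{
    explicit Motor(int pin) : pin_(pin) {}
    int pin_;
};

// Reserve the memory at compile time; no heap is ever touched.
alignas(Motor) static unsigned char g_motorStorage[sizeof(Motor)];

Motor* initMotor(int pin)
{
    return new (g_motorStorage) Motor(pin);   // construct in-place
}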
Well, there are a couple of things. First, you have to remember that the STL is completely built on OS routines, the C standard library, and dynamic allocation. When you're writing a kernel, there is no dynamic memory allocation for you (you're providing it), there is no C standard library (you have to provide one built on top of your kernel), and you are providing the system calls. Then there is the fact that C interops very well and easily with assembly, whereas C++ is very difficult to interface with assembly because the ABI isn't necessarily constant, nor are the names. Because of name mangling, you get a whole new level of complication.
Then, you have to remember that when you are building an OS, you need to know and control every aspect of the memory used by the kernel. In C++, there are quite a few hidden structures that you have no control over (vtables, RTTI, exceptions) that would severely interfere with your work.
In other words, what Linus is saying is that with C, you can easily understand the assembly being generated, and it is simple enough to run directly on the machine. Although C++ can too, you will always have to set up quite a bit of context and still write some C to interface the assembly and the C++. Another reason is that in systems programming, you need to know exactly how methods are being called. C has the very well documented C calling conventions, but in C++ you have the this pointer to deal with, name mangling, etc.
In short, it's because C++ does things without you asking.
Per Josh's comment below, another thing C++ does behind your back is constructors and destructors. They add overhead to entering and exiting stack frames, and most of all, make assembly interop even harder: when you destroy a C++ stack frame, you have to call the destructor of every object in it. This gets ugly quickly.
Why do certain kernels refuse C++ code in their code base? Politics and preference, but I digress.
Some parts of modern OS kernels are written in certain subsets of C++. In these subsets mainly exceptions and RTTI are disabled (sometimes multiple inheritance and templates are disallowed, too).
This is the case too in C. Certain features should not be used in a kernel environment (e.g. VLAs).
Outside of exceptions and RTTI, certain features in C++ are heavily critiqued, when we are talking about kernel code (or embedded code).
These are vtables and constructors/destructors. They bring a bit of code under the hood, and that seems to be deemed 'bad'. If you don't want a constructor, then don't implement one. If you worry about using a class with a constructor, then worry too about a function you have to call to initialize a struct. The upside in C++ is that you cannot really forget to run a dtor, short of forgetting to deallocate the memory.
But what about vtables?
When you implement an object which contains extension points (e.g. a Linux filesystem driver), you implement something like a class with virtual methods. So why is it sooo bad to have a vtable? You have to control the placement of this vtable when you have certain requirements on which pages the vtable resides in. As far as I recall, this is irrelevant for Linux, but under Windows, code pages can be paged out, and when you call a paged-out function from too high an IRQL, you crash. But you really have to watch which functions you call when you are at a high IRQL, whatever functions they are. And you don't need to worry if you don't use a virtual call in this context. In embedded software this could be worse, because (very seldom) you need to directly control which code page your code goes into, but even there you can influence what your linker does.
So why are so many people so adamant of 'use C in kernel'?
Because they either got burned by a toolchain problem, or got burned by overenthusiastic developers using the latest stuff in kernel mode.
Maybe the kernel-mode developers are rather conservative, and C++ is a too newfangled thing ...
Why are exceptions not used in kernel-mode code?
Because they need to generate some code per function, they introduce complexity into a code path, and not handling an exception is bad for a kernel mode component, because it kills the system.
In C++, when an exception is thrown, the stack must be unwound and the corresponding destructors must be called. This involves at least a bit of overhead. This is mostly negligible, but it does incur a cost, which may not be something you want. (Note I do not know how much table-based unwinding actually costs; I think I read that there is no cost while no exception is in flight, but I guess I have to look it up.)
A code path which cannot throw exceptions can be reasoned about much more easily than one which may.
So:
int f( int a )
{
    if( a == 0 )
        return -1;
    if( g() < 0 )
        return -2;
    f3();
    return h();
}
We can reason about every exit path in this function, because we can easily see all the returns, but when exceptions are enabled, the called functions may throw and we cannot guarantee what the actual path is that the function takes. This is exactly the point: the code may do something we cannot see at a glance. (This is bad C++ code when exceptions are enabled.)
The third point is that you want user mode applications to crash when something unexpected occurs (e.g. when memory runs out), after freeing resources, so as to allow the developer to debug the problem or at least get a good error message. You should never have an uncaught exception in a kernel mode module.
Note that all this can be overcome; there are SEH exceptions in the Windows kernel, so points 2 and 3 are not really good points for the NT kernel.
There are no memory management problems with C++ in the kernel. E.g. the NT kernel headers provide overloads for new and delete, which let you specify the pool type of your allocation, but are otherwise exactly the same as the new and delete in a user mode application.
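For illustration, a hedged sketch of what such a pool-aware overload looks like (the real NT header version differs in detail; the tag is an arbitrary choice, and ntddk.h is assumed to be included):

void* operator new(size_t size, POOL_TYPE pool)
{
    return ExAllocatePoolWithTag(pool, size, 'wNpC');
}

// Usage: construct the object directly in non-paged memory.
//     MyClass* p = new (NonPagedPool) MyClass();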
I don't really like language wars, and have voted to close this again. But anyway...
Well, there are a couple of things. First, you have to remember that the STL is completely built on OS routines, the C standard library, and dynamic allocation. When you're writing a kernel, there is no dynamic memory allocation for you (you're providing it), there is no C standard library (you have to provide one built on top of your kernel), and you are providing the system calls. Then there is the fact that C interops very well and easily with assembly, whereas C++ is very difficult to interface with assembly because the ABI isn't necessarily constant, nor are the names. Because of name mangling, you get a whole new level of complication.
No, with C++ you can declare functions having an extern "C" (or optionally extern "assembly") calling convention. That makes the names compatible with everything else on the same platform.
Then, you have to remember that when you are building an OS, you need to know and control every aspect of the memory used by the kernel. In C++, there are quite a few hidden structures that you have no control over (vtables, RTTI, exceptions) that would severely interfere with your work.
You have to be careful when coding kernel features, but that is not limited to C++. Sure, you cannot use std::vector<byte> as the base for your memory allocation, but neither can you use malloc for that. You don't have to use virtual functions, multiple inheritance, and dynamic allocation for all C++ classes, do you?
In other words, what Linus is saying is that with C, you can easily understand the assembly being generated, and it is simple enough to run directly on the machine. Although C++ can too, you will always have to set up quite a bit of context and still write some C to interface the assembly and the C++. Another reason is that in systems programming, you need to know exactly how methods are being called. C has the very well documented C calling conventions, but in C++ you have the this pointer to deal with, name mangling, etc.
Linus is possibly claiming that he can spot every call to f(x) and immediately see that it is calling g(x), h(x), and q(x) 20 levels deep. Still MyClass M(x); is a great mystery, as it might be calling some unknown code behind his back. Lost me there.
In short, it's because C++ does things without you asking.
How? If I write a constructor and a destructor for a class, it is because I am asking for the code to be executed. Don't tell me that C can magically copy an object without executing some code!
Per Josh's comment below, another thing C++ does behind your back is constructors and destructors. They add overhead to entering and exiting stack frames, and most of all, make assembly interop even harder: when you destroy a C++ stack frame, you have to call the destructor of every object in it. This gets ugly quickly.
Constructors and destructors do not add code behind your back; they are only there if needed. Destructors are called only when it is required, like when dynamic memory needs to be deallocated. Don't tell me that C code works without this.
One reason for the lack of C++ support in both Linux and Windows is that a lot of the guys working on the kernels have been doing this since long before C++ was available. I have seen posts from the Windows kernel developers arguing that C++ support isn't really needed, as there are very few device drivers written in C++. Catch-22!
Are the exception handling and the memory allocation the only points where C++ apparently lacks tool support (in this context)?
In places where this is not properly handled, just don't use it. You don't have to use multiple inheritance, dynamic allocation, and throwing exceptions everywhere. If returning an error code works, fine. Do that!
To fix the exception handling issue, does one have to provide a bound on the time until the exception is caught after it is thrown?
No, but you just cannot use application-level features in the kernel. Implementing dynamic memory using a std::vector<byte> isn't a good idea, but who would really try that?
Could you please explain why the memory allocation is an issue? How can one overcome this issue; what has to be done?
Using standard library features depending on memory allocation on a layer below the functions implementing the memory management would be a problem. Implementing malloc using calls to malloc would be just as silly. But who would try that?

Suitability of D for writing a Tracing JIT Compiler?

I'd like to write an interpreter and tracing JIT for a programming language I'm designing. I already have many years of experience programming in C++, but I've been wondering if perhaps newer alternatives might be better. One of the things I found most frustrating, back in my C++ days, was having to use header files to deal with the clunky one-pass compiler model. The problem is that not all languages are equally suited for this purpose. For my tracing JIT, I need to be able to write executable code into memory and have the interpreter call to that code. I will also need the generated code to be able to call back into host functions.
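For concreteness, this is the kind of thing I mean, shown in C++ on POSIX (the bytes are x86-64 for "mov eax, 42; ret"; a production JIT would map the page writable first and mprotect it to executable afterwards):

#include <sys/mman.h>
#include <cstring>
#include <cstdio>

int main()
{
    unsigned char code[] = { 0xB8, 42, 0, 0, 0, 0xC3 };  // mov eax, 42; ret

    void* mem = mmap(nullptr, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED)
        return 1;

    std::memcpy(mem, code, sizeof code);
    int result = reinterpret_cast<int (*)()>(mem)();     // call the generated code
    std::printf("%d\n", result);

    munmap(mem, 4096);
    return 0;
}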
I started looking at Go and saw that the language had pointers but no pointer arithmetic. This immediately struck me as a huge issue. I may well want to write my own allocator and garbage collector. I will need to closely control the way my language objects are laid out in memory and be able to get the address of specific fields and write to them. Unless there's ways to deal with this, it kind of seems like Go fails to be low-level enough for my purposes.
The D language seems promising. It has pointer arithmetic and a clear outline of the ABI needed to call in and out of D. I've heard lots of good things about it. It also has garbage collection which is nice for compiler writing, but I still have a few things I'm not sure about:
Does D have standard libs that will allow me to mark chunks of memory as executable?
If I allocate a big chunk of memory that I want to manage myself, with my own GC, and have a bunch of pointers going into there, will this pose problems with D's garbage collector?
How well does D interoperate with C code, in your experience? Is loading C dynamic libraries and calling into them fairly easy?
Finally, there's the whole support aspect. For those who have used D on Linux here, how good is the toolchain? Any issues? Has anyone written a JIT compiler in D, and if so, how was the experience?
I believe so, see core.memory.GC if I remember right.
No, it shouldn't. Just call malloc or whatever you need, and make sure the GC doesn't see it.
Yes, it's pretty easy to interoperate with C code.
Caveat: You probably don't want to rely on the GC either, since it's not 'precise' (i.e. it can and does leak memory if you're unlucky). But for small blocks of data it's usually fine.
Go does allow pointer arithmetic, but you must import the unsafe package to do so (or use a C function). Pointer arithmetic is a common source of bugs, and Go has other mechanisms, like slices, which provide safe ways to do some of the same activities that require pointer arithmetic in C. With unsafe you can cast any pointer to a uintptr and back, and uintptr is an ordinary numeric type, which allows you to do arithmetic.
I started looking at Go and saw that the language had pointers but no pointer arithmetic. This immediately struck me as a huge issue.
You, obviously, haven't tried the language. It works pretty well without any "pointer arithmetic". If you really need to bend the rules, there is always the "unsafe" package that will allow you to do anything.
I may well want to write my own allocator and garbage collector. I will need to closely control the way my language objects are laid out in memory and be able to get the address of specific fields and write to them.
I haven't written an allocator or garbage collector myself, but you can take the address of a field of a structure. All Go data structures are simple and easy to control and reason about. See http://research.swtch.com/godata for a short introduction. Also, size and alignment guarantees are part of the language: http://golang.org/ref/spec#Size_and_alignment_guarantees. If nothing else, you could always jump into C or asm.
IMHO, you should try to implement some small task to see if Go fits your requirements. Feel free to ask questions at http://groups.google.com/group/golang-nuts.
Alex
There is already a JIT compiler, a very serious one, done in D. I highly recommend taking a look at http://lycus.org/ , more specifically the pages about the MCI project - http://github.com/lycus/mci . The MCI documentation will give you some more information. As you will see, MCI is more than just a JIT; it has its own (better than anything else I have seen) IR, optimizer, verifier, etc...

debugging C++ when compared to debugging C

Hi,
I am normally a C programmer.
I regularly debug C programs in a Unix environment using tools like gdb and dbx.
I have never debugged big C++ applications.
Is that much different from how we debug in C?
Theoretically I am quite good in C++ but have never had a chance to debug C++ programs.
I am also not sure what kind of technical problems we face in C++ that would lead a developer to switch on the debugger to find the problem.
What are the common issues we face in C++ that make us reach for the debugger?
What are the challenges a C programmer might face while debugging a C++ program?
Is it difficult and complex when compared to C?
It is basically the same.
Just remember when setting breakpoints manually you need to fully qualify the method name with both the namespace(s) and the class. (As a result I sometimes find it easier to use line numbers to define breakpoints; see the sample session below.)
Don't forget that calls to destructors are invisible in the source, but you can still step into them at the end of a block.
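For example, a sample gdb session for such breakpoints (the names are hypothetical):

(gdb) break 'mylib::Parser::parse(int)'
(gdb) break parser.cpp:128

The first form needs the fully qualified name (the leading single quote helps gdb with completion); the second uses a file and line number instead.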
A few minor differences:
When typing a fully-qualified symbol such as foo::bar::fum(args) in the gdb shell, you have to start with a single quote for gdb to recognize it and calculate completions.
As others have said, library templates expose their internals in the debugger. You can poke around in std::vector pretty easily, but poking through std::map may not be a wise way to spend your time.
The aggressive and abundant inlining common in C++ programs can make a single line of code have seemingly endless steps. Things like shared_ptr can be particularly annoying because every access to the pointer expands inline into the template internals. You never really get used to it.
If you've got a ton of overloaded symbol names, selecting which one you want from the readline completion can be unpleasant. (Which "foo" did you want? All of them? Just these two?)
GDB can be used to debug C++ as well, so if you have an understanding of how C++ works (and understand problems that can stem from the object-oriented side of things), then you shouldn't have all that much trouble (at least, not much more than you would debugging a C program). I think...
Quite a few issues really, but it also depends on the debugger you are using, its version, etc.:
Accessing individual members of a templatized class is not easy
Exception handling is a problem - I have seen debuggers do a better job with setjmp/longjmp
Setting breakpoints with conditions like obj1 == obj2, where these are not POD types, may not work
The good thing that I like about debuggers is that to access private/protected class members I don't have to call getter routines; just [obj-name].[var-name] is good enough.
Arpan
GDB has had a rocky past with regard to debugging C++. For a while it couldn't efficiently break inside constructors/destructors.
Also, STL containers were notoriously difficult to inspect in gdb. std::string was painful but generally workable; std::map was so difficult that I generally added print statements unless there was no other way.
The constructor/destructor problem has been fixed for a few years now.
The STL support got fixed in gdb 7.0.
You might still have issues with Boost's libraries. At times I had difficulty getting gdb to give me access to the contents of a shared_ptr.
So I guess debugging your own C++ isn't really that difficult, it's debugging 3rd party classes and template code that could be a problem.
C++ objects can sometimes be harder to analyze. Also, as data is sometimes nested in several classes (across several layers), it can take some time to "unfold" it (as already said by others in this thread). It's hard to say in general, as it depends very much on the C++ features used, the programming style, and the complexity of the problem to analyze (actually, that is language independent).
IMO: if someone finds himself needing to debug very often, he should reconsider his programming style.
Usually for me it is all about error handling in the end. If a program behaves unexpectedly, your error logs should contain enough information to reconstruct what happened at any stage.
This also gives you the benefit that you can "debug" problems offline later once your program gets shipped to end users.