What is LLVM's GenericValue for?

I’m trying to figure out the data structures used for partial interpretation of LLVM intermediate code, and I notice that GenericValue is a container for a value of arbitrary type. But as I understand it, the same could also be said of ConstantExpr. What exactly does GenericValue do that ConstantExpr does not? The closest I have found to existing discussion of the matter is "How to convert a genericValue to Value in LLVM?", the answer to which describes a couple of situations in which you do not need to use GenericValue, but none in which you do.

A ConstantExpr is a constant expression that exists within the program being compiled (or, in a JIT, the VM where the code is run). It's distant from the code in the compiler — you need quite a bit of code if you want to check whether a ConstantExpr is ≥ 42, but emitting code that checks ≥ 42 is dead easy.
A GenericValue, on the other hand, is a compile-time thing, living in the compiler's own process. Testing whether it's ≥ 42 is easy (and gives you a definite answer right away), but on the other hand generating code to check ≥ 42 at runtime is hard.
There's a lot of this in compiler work. Some things exist within the compiler, others exist within the output, and even though the two can be confusingly similar, there's a world of difference. Trying to share code or concepts between the two universes always leads to pain; there are always little details that spoil everything at the last minute. (Just think about how you'd prevent sandbox escapes if the compiler's address space were shared with untrusted JITted code.)
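To make the two universes concrete, here is a minimal sketch, assuming LLVM's interpreter libraries are linked in; header names and minor API details (such as getArg) vary across LLVM versions:

#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/GenericValue.h"
#include "llvm/ExecutionEngine/Interpreter.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include <memory>

using namespace llvm;

int main() {
    LLVMContext Ctx;
    auto Owner = std::make_unique<Module>("demo", Ctx);
    Module *M = Owner.get();

    // Program side: emitting IR that checks >=42 at runtime is one
    // IRBuilder call.
    auto *I32 = Type::getInt32Ty(Ctx);
    auto *F = Function::Create(
        FunctionType::get(Type::getInt1Ty(Ctx), {I32}, false),
        Function::ExternalLinkage, "atLeast42", M);
    IRBuilder<> B(BasicBlock::Create(Ctx, "entry", F));
    B.CreateRet(B.CreateICmpSGE(F->getArg(0), ConstantInt::get(I32, 42)));

    // Compiler side: a GenericValue carries a concrete value into and
    // out of the interpreted code, and testing it here is plain C++.
    ExecutionEngine *EE = EngineBuilder(std::move(Owner))
                              .setEngineKind(EngineKind::Interpreter)
                              .create();
    GenericValue Arg;
    Arg.IntVal = APInt(32, 100);
    GenericValue Res = EE->runFunction(F, {Arg});
    bool AtLeast42 = Res.IntVal.getBoolValue();  // a definite host-side answer
    delete EE;
    return AtLeast42 ? 0 : 1;
}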

Related

Is undefined behavior only an issue if you are deploying on several platforms?

Most of the conversations around undefined behavior (UB) talk about how there are some platforms that can do this, or some compilers do that.
What if you are only interested in one platform and only one compiler (same version) and you know you will be using them for years?
Nothing is changing but the code, and the UB is not implementation-defined.
Once the UB has manifested for that architecture and that compiler and you have tested, can't you assume that from then on whatever the compiler did with the UB the first time, it will do that every time?
Note: I know undefined behavior is very, very bad, but when I pointed out UB in code written by somebody in this situation, they asked this, and I didn't have anything better to say than: if you ever have to upgrade or port, all the UB will be very expensive to fix.
It seems there are different categories of behavior:
Defined - behavior documented to work by the standards.
Supported - behavior documented to be supported, a.k.a. implementation-defined.
Extensions - documented additions; support for low-level bit operations like popcount, and branch hints, fall into this category.
Constant - while not documented, these are behaviors that are likely to be consistent on a given platform; things like endianness and sizeof(int), while not portable, are likely not to change.
Reasonable - generally safe and usually legacy: casting from unsigned to signed, using the low bit of a pointer as temp space.
Dangerous - reading uninitialized or unallocated memory, returning a temporary variable, using memcpy on a non-POD class.
It would seem that Constant might be invariant within a patch version on one platform. The line between Reasonable and Dangerous seems to be moving more and more behavior towards Dangerous as compilers become more aggressive in their optimizations.
OS changes, innocuous system changes (different hardware version!), or compiler changes can all cause previously "working" UB to not work.
But it is worse than that.
Sometimes a change to an unrelated compilation unit, or far-away code in the same compilation unit, can cause previously "working" UB to not work; as an example, two inline functions or methods with different definitions but the same signature. One is silently discarded during linking, and completely innocuous code changes can change which one is discarded.
The code that is working in one context can suddenly stop working with the same compiler, OS and hardware when you use it in a different context. An example of this is violating strict aliasing; the compiled code might work when called at spot A, but when inlined (possibly at link-time!) the code can change meaning.
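A minimal sketch of such an aliasing violation, with illustrative function names; the memcpy version is the well-defined alternative:

#include <cstdint>
#include <cstring>

int32_t bitsOf(float f) {
    int32_t i = 0;
    *reinterpret_cast<float*>(&i) = f;  // violates strict aliasing: UB
    return i;                           // may "work" until inlining changes it
}

int32_t bitsOfSafe(float f) {           // well-defined alternative
    int32_t i;
    std::memcpy(&i, &f, sizeof i);      // copy the representation instead
    return i;
}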
Your code, if part of a larger project, could conditionally call some 3rd party code (say, a shell extension that previews an image type in a file open dialog) that changes the state of some flags (floating point precision, locale, integer overflow flags, division by zero behavior, etc). Your code, which worked fine before, now exhibits completely different behavior.
Next, many kinds of undefined behavior are inherently non-deterministic. Accessing the contents of a pointer after it is freed (even writing to it) might be safe 99/100, but 1/100 the page was swapped out, or something else was written there before you got to it. Now you have memory corruption. It passes all your tests, but you lacked complete knowledge of what can go wrong.
By using undefined behavior, you commit yourself to a complete understanding of the C++ standard, everything your compiler can do in that situation, and every way the runtime environment can react. You have to audit the produced assembly, not the C++ source, possibly for the entire program, every time you build it! You also commit everyone who reads that code, or who modifies that code, to that level of knowledge.
It is sometimes still worth it.
Fastest Possible Delegates uses UB and knowledge about calling conventions to be a really fast non-owning std::function-like type.
Impossibly Fast Delegates competes. It is faster in some situations, slower in others, and is compliant with the C++ standard.
Using the UB might be worth it, for the performance boost. It is rare that you gain something other than performance (speed or memory usage) from such UB hackery.
Another example I've seen is when we had to register a callback with a poor C API that just took a function pointer. We'd create a function (compiled without optimization), copy it to another page, modify a pointer constant within that function, then mark that page as executable, allowing us to secretly pass a pointer along with the function pointer to the callback.
An alternative implementation would be to have some fixed-size set of functions (10? 100? 1000? 1 million?) all of which look up a std::function in a global array and invoke it. This would put a limit on how many such callbacks we could install at any one time, but in practice it was sufficient.
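A minimal sketch of that fixed-size-table alternative; the names are illustrative, c_api_register is hypothetical, and real code would generate many more slots:

#include <functional>

static std::function<void()> g_slots[3];   // global lookup array
extern "C" void slot0() { g_slots[0](); }  // plain functions whose
extern "C" void slot1() { g_slots[1](); }  // addresses the C API
extern "C" void slot2() { g_slots[2](); }  // can accept
void (*g_table[3])() = { slot0, slot1, slot2 };

// usage sketch: store the callable, then hand g_table[i] to the C API
// g_slots[1] = [] { /* ... */ };
// c_api_register(g_table[1]);  // c_api_register is hypothetical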
No, that's not safe. First of all, you will have to fix everything, not only the compiler version. I do not have particular examples, but I guess that a different (upgraded) OS, or even an upgraded processor might change UB results.
Moreover, even having different data input to your program can change UB behavior. For example, an out-of-bounds array access (at least without optimizations) usually depends on whatever is in memory after the array.
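For example (a sketch; what you read back is whatever happens to sit after the array):

int a[4] = {1, 2, 3, 4};
int x = a[4];  // out of bounds: the result, if the program survives at all,
               // depends on whatever sits in memory after the array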
UPD: see a great answer by Yakk for more discussion on this.
And a bigger problem is optimization and other compiler flags. UB may manifest itself in different ways depending on optimization flags, and it's quite difficult to imagine somebody always using the same optimization flags (at least you'll use different flags for debug and release).
UPD: just noticed that you never mentioned fixing a compiler version, only fixing a compiler itself. Then everything is even more unsafe: new compiler versions can definitely change UB behavior. From this series of blog posts:
The important and scary thing to realize is that just about any optimization based on undefined behavior can start being triggered on buggy code at any time in the future. Inlining, loop unrolling, memory promotion and other optimizations will keep getting better, and a significant part of their reason for existing is to expose secondary optimizations like the ones above.
This is basically a question about a specific C++ implementation. "Can I assume that a specific behavior, undefined by the standard, will continue to be handled by ($CXX) on platform XYZ in the same way under circumstances UVW?"
I think you should clarify exactly which compiler and platform you are working with, and then consult their documentation to see if they make any guarantees; otherwise the question is fundamentally unanswerable.
The whole point of undefined behavior is that the C++ standard doesn't specify what happens, so if you are looking for some kind of guarantee from the standard that it's "ok" you aren't going to find it. If you are asking whether the "community at large" considers it safe, that's primarily opinion based.
Once the UB has manifested for that architecture and that compiler and you have tested, can't you assume that from then on whatever the compiler did with the UB the first time, it will do that every time?
Only if the compiler makers guarantee that you can do this, otherwise, no, it's wishful thinking.
Let me try to answer again in a slightly different way.
As we all know, in normal software engineering, and engineering at large, programmers / engineers are taught to do things according to a standard, the compiler writers / parts manufacturers produce parts / tools that meet a standard, and at the end you produce something where "under the assumptions of the standards, my engineering work shows that this product will work", and then you test it and ship it.
Suppose you had a crazy uncle Jimbo, and one day he got all his tools out along with a whole bunch of two-by-fours, and worked for weeks and made a makeshift roller coaster in your backyard. And then you run it, and sure enough it doesn't crash. You even run it ten times, and it doesn't crash. Now Jimbo is not an engineer, so this is not made according to standards. But if it didn't crash after even ten times, that means it's safe and you can start charging admission to the public, right?
To a large extent what's safe and what isn't is a sociological question. But if you want to just make it a simple question of "when can I reasonably assume that no one would get hurt by me charging admission, when I can't really assume anything about the product", this is how I would do it. Suppose I estimate that, if I start charging admission to the public, I'll run it for X years, and in that time, maybe 100,000 people will ride it. If it's basically a biased coin flip whether it breaks or not, then what I would want to see is something like, "this device has been run a million times with crash dummies, and it never crashed or showed hints of breaking." Then I could quite reasonably believe that if I start charging admission to the public, the odds that anyone will ever get hurt are quite low, even though there are no rigorous engineering standards involved. That would just be based on a general knowledge of statistics and mechanics.
In relation to your question, I would say, if you are shipping code with undefined behavior, which no one, either the standard, the compiler maker, or anyone else will support, that's basically "crazy uncle jimbo" engineering, and it's only "okay" if you do vastly increased amounts of testing to verify that it meets your needs, based on a general knowledge of statistics and computers.
What you are referring to is more likely implementation-defined and not undefined behavior. The former is when the standard doesn't tell you what will happen, but it should work the same if you are using the same compiler and the same platform. An example of this is assuming that an int is 4 bytes long. UB is something more serious: there the standard doesn't say anything. It is possible that for a given compiler and platform it works, but it is also possible that it works only in some of the cases.
An example is using uninitialized values. If you use an uninitialized bool in an if, you may get true or false, and it may happen that it is always what you want, but the code will break in several surprising ways.
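A minimal sketch of that trap; doA and doB are hypothetical callees:

void doA();  // hypothetical
void doB();  // hypothetical
void decide() {
    bool flag;  // never initialized: reading it is undefined behavior
    if (flag)   // may act as true, false, or not even consistently
        doA();
    else
        doB();
}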
Another example is dereferencing a null pointer. While it will probably result in a segfault in most cases, the standard doesn't even require the program to produce the same results every time it is run.
In summary, if you are doing something that is implementation-defined, then you are safe if you are only developing for one platform and you have tested that it works. If you are doing something that is undefined behavior, then you are probably not safe in any case: it may work, but nothing guarantees it.
Think about it a different way.
Undefined behavior is ALWAYS bad, and should never be used, because you never know what you will get.
However, you can temper that with
Behavior can be defined by parties other than just the language specification
Thus you should never rely on UB, ever, but you can find alternate sources which state that a certain behavior is DEFINED behavior for your compiler in your circumstances.
Yakk gave great examples regarding the fast delegate classes. In those cases, the author explicitly claims that they are engaging in undefined behavior, according to the spec. However, they then go on to explain a business reason why the behavior is better defined than that. For example, they declare that the memory layout of a member function pointer is unlikely to change in Visual Studio because there would be rampant business costs due to incompatibilities, which are distasteful to Microsoft. Thus they declare that the behavior is "de facto defined behavior."
Similar behavior can be seen in the typical Linux implementation of pthreads (to be compiled by gcc). There are cases where they make assumptions about what optimizations a compiler is allowed to invoke in multithreaded scenarios. Those assumptions are stated plainly in comments in the source code. How is this "de facto defined behavior"? Well, pthreads and gcc go kind of hand in hand. It would be considered unacceptable to add an optimization to gcc which broke pthreads, so nobody will ever do it.
However, you cannot make the same assumption. You may say "pthreads does it, so I should be able to as well." Then, someone makes an optimization, and updates gcc to work with it (perhaps using __sync calls instead of relying on volatile). Now pthreads keeps functioning... but your code doesn't anymore.
Also consider the case of MySQL (or was it Postgres?) where they found a buffer overflow error. The overflow had actually been caught in the code, but the check relied on undefined behavior, so the latest gcc started optimizing the entire check out.
So, in all, look for an alternate source of defining the behavior, rather than using it while it is undefined. It is totally legit to find a reason why you know 1.0/0.0 equals infinity, rather than causing a floating point trap to occur. But never use that assumption without first proving that it is a valid definition of behavior for you and your compiler.
And please oh please oh please remember that we upgrade compilers every now and then.
Historically, C compilers have generally tended to act in somewhat-predictable fashion even when not required to do so by the Standard. On most platforms, for example, a comparison between a null pointer and a pointer to a dead object will simply report that they are not equal (useful if code wishes to safely assert that the pointer is null and trap if it isn't). The Standard does not require compilers to do these things, but historically compilers which could do them easily have done so.
Unfortunately, some compiler writers have gotten the idea that if such a comparison could not be reached while the pointer was validly non-null, the compiler should omit the assertion code. Worse, if it can also determine that certain input would cause the code to be reached with an invalid non-null pointer, it should assume that such input will never be received, and omit all code which would handle such input.
Hopefully such compiler behavior will turn out to be a short-lived fad. Supposedly, it's driven by a desire to "optimize" code, but for most applications robustness is more important than speed, and having compilers mess with code that would have limited the damage caused by errant inputs or errant program behavior is a recipe for disaster.
Until then, however, one must read compiler documentation very carefully, since there's no guarantee that a compiler writer won't have decided it was less important to support useful behaviors which, though widely supported, aren't mandated by the Standard (such as being able to safely check whether two arbitrary objects overlap) than to exploit every opportunity to eliminate code which the Standard doesn't require it to execute.
Undefined behavior can be altered by things such as the ambient temperature, which causes rotating hard disk latencies to change, which causes thread scheduling to change, which in turn changes the contents of the random garbage that's getting evaluated.
In short, not safe unless the compiler or the OS specifies the behavior (since the language standard didn't).
There is a fundamental problem with undefined behavior of any kind: it is diagnosed by sanitizers and exploited by optimizers. A compiler can silently change how it treats a given instance of UB from one version to another (e.g. by expanding its repertoire), and suddenly you'll have some untraceable error in your program. This should be avoided.
There is undefined behavior that is made "defined" by your particular implementation, though. A left shift by a negative amount of bits can be defined by your machine, and it would be safe to use it there, as breaking changes of documented features occur quite rarely. One more common example is strict aliasing: GCC can disable this restriction with -fno-strict-aliasing.
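A minimal sketch of both points; whether any of this is actually documented is up to your particular compiler and CPU:

int shiftByNegative(int v) {
    int n = -2;
    return v << n;  // UB per the standard; a specific machine/compiler
                    // pair may document a stable meaning for it
}
// strict aliasing, by contrast, can be switched off per translation unit:
//   g++ -fno-strict-aliasing file.cpp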
While I agree with the answers that say that it's not safe even if you don't target multiple platforms, every rule can have exceptions.
I would like to present two examples where I'm confident that allowing undefined / implementation-defined behavior was the right choice.
A single-shot program. It's not a program which is intended to be used by anyone, but a small and quickly written program created to calculate or generate something right now. In such a case a "quick and dirty" solution can be the right choice; for example, if I know the endianness of my system, I don't want to bother with writing code which works with the other endianness. For example, I only needed it to perform a mathematical proof to know whether I'd be able to use a specific formula in my other, user-oriented program or not.
Very small embedded devices. The cheapest microcontrollers have memory measured in a few hundred bytes. If you develop a small toy with blinking LEDs or a musical postcard, etc., every penny counts, because it will be produced in the millions with a very low profit per unit. Neither the processor nor the code ever changes, and if you have to use a different processor for the next generation of your product, you will probably have to rewrite your code anyway. A good example of undefined behavior in this case is that there are microcontrollers which guarantee a value of zero (or 255) for every memory location at power-up. In this case you can skip the initialization of your variables. If your microcontroller has only 256 bytes of memory, this can make the difference between a program which fits into the memory and one which doesn't.
Anyone who disagrees with point 2, please imagine what would happen if you told something like this to your boss:
"I know the hardware costs only $ 0.40 and we plan selling it for $ 0.50. However, the program with 40 lines of code I've written for it only works for this very specific type of processor, so if in the distant future we ever change to a different processor, the code will not be usable and I'll have to throw it out and write a new one. A standard-conforming program which works for every type of processor will not fit into our $ 0.40 processor. Therefore I request to use a processor which costs $ 0.60, because I refuse to write a program which is not portable."
"Software that doesn't change, isn't being used."
If you are doing something unusual with pointers, there's probably a way to use casts to get defined behavior for what you want. Because of their nature, the results will then not be "whatever the compiler did with the UB the first time".
For example, when you read memory pointed at by an uninitialized pointer, you get a random address that is different every time you run the program.
Undefined behavior generally means you are doing something tricky, and you would be better off doing the task another way.
For instance, this is undefined:
printf("%d %d", ++i, ++i);
It's hard to know what the intent would even be here, and should be re-thought.
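A sketch of a well-defined rewrite, assuming the intent was two successive increments:

int a = ++i;
int b = ++i;
printf("%d %d", a, b);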
Changing the code without breaking it requires reading and understanding the current code. Relying on undefined behavior hurts readability: If I can't look it up, how am I supposed to know what the code does?
While portability of the program might not be an issue, portability of the programmers might be. If you need to hire someone to maintain the program, you'll want to be able to look simply for a '<language x> developer with experience in <application domain>' that fits well into your team rather than having to find a capable '<language x> developer with experience in <application domain> knowing (or willing to learn) all the undefined behavior intrinsics of version x.y.z on platform foo when used in combination with bar while having baz on the furbleblawup'.
Nothing is changing but the code, and the UB is not implementation-defined.
Changing the code is sufficient to trigger different behavior from the optimizer with respect to undefined behavior, and so code that may have worked can easily break due to seemingly minor changes that expose more optimization opportunities, for example a change that allows a function to be inlined. This is covered well in What Every C Programmer Should Know About Undefined Behavior #2/3, which says:
While this is intentionally a simple and contrived example, this sort of thing happens all the time with inlining: inlining a function often exposes a number of secondary optimization opportunities. This means that if the optimizer decides to inline a function, a variety of local optimizations can kick in, which change the behavior of the code. This is both perfectly valid according to the standard, and important for performance in practice.
Compiler vendors have become very aggressive with optimizations around undefined behavior and upgrades can expose previously unexploited code:
The important and scary thing to realize is that just about any optimization based on undefined behavior can start being triggered on buggy code at any time in the future. Inlining, loop unrolling, memory promotion and other optimizations will keep getting better, and a significant part of their reason for existing is to expose secondary optimizations like the ones above.

When should you not use [[carries_dependency]]?

I've found questions (like this one) asking what [[carries_dependency]] does, and that's not what I'm asking here.
I want to know when you shouldn't use it, because the answers I've read all make it sound like you can plaster this code everywhere and magically you'd get equal or faster code. One comment said the code can be equal or slower, but the poster didn't elaborate.
I imagine the appropriate places to use this are on any function return or parameter that is a pointer or reference and that will be passed or returned within the calling thread, and that it shouldn't be used on callbacks or thread entry points.
Can someone comment on my understanding and elaborate on the subject in general, of when and when not to use it?
EDIT: I know there's this tome on the subject, should any other reader be interested; it may contain my answer, but I haven't had the chance to read through it yet.
In modern C++ you should generally not use std::memory_order_consume or [[carries_dependency]] at all. They're essentially deprecated while the committee comes up with a better mechanism that compilers can practically implement.
And that hopefully doesn't require sprinkling [[carries_dependency]] and kill_dependency all over the place.
2016-06 P0371R1: Temporarily discourage memory_order_consume
It is widely accepted that the current definition of memory_order_consume in the standard is not useful. All current compilers essentially map it to memory_order_acquire. The difficulties appear to stem both from the high implementation complexity, from the fact that the current definition uses a fairly general definition of "dependency", thus requiring frequent and inconvenient use of the kill_dependency call, and from the frequent need for [[carries_dependency]] annotations. Details can be found in e.g. P0098R0.
Note that in C++, x - x still carries a dependency, yet most compilers would naturally break the dependency and replace that expression with a constant 0. Compilers also sometimes turn data dependencies into control dependencies if they can prove something about value-ranges after a branch.
On modern compilers that just promote mo_consume to mo_acquire, fully aggressive optimizations can always happen; there's never anything to gain from [[carries_dependency]] and kill_dependency even in code that uses mo_consume, let alone in other code.
This strengthening to mo_acquire has potentially-significant performance cost (an extra barrier) for real use-cases like RCU on weakly-ordered ISAs like POWER and ARM. See this video of Paul E. McKenney's CppCon 2015 talk C++ Atomics: The Sad Story of memory_order_consume. (Link includes a summary).
If you want real dependency-ordering performance for reads, you have to "roll your own", e.g. by using mo_relaxed and checking the asm to verify it compiled to asm with a dependency. (Avoid doing anything "weird" with such a value, like passing it across functions.) DEC Alpha is basically dead, and all other ISAs provide dependency ordering in asm without barriers, as long as the asm itself has a data dependency.
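A minimal sketch of that "roll your own" pattern; the names are illustrative, and the whole point is that you must inspect the generated asm to confirm the address dependency survived:

#include <atomic>

struct Node { int payload; };
std::atomic<Node*> head{nullptr};

int readPayload() {
    // Relaxed load; on ARM/POWER the hardware orders the dependent load
    // of n->payload after the load of head, provided the compiler kept
    // the address dependency in the emitted code. C++ does not guarantee
    // this, so the asm must be checked after every build.
    Node* n = head.load(std::memory_order_relaxed);
    return n ? n->payload : -1;
}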
If you don't want to roll your own and live dangerously, it might not hurt to keep using mo_consume in "simple" use-cases where it should be able to work; perhaps some future mo_consume implementation will have the same name and work in a way that's compatible with C++11.
There is ongoing work on making a new consume, e.g. 2018's http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0750r1.html
because the answers I've read all make it sound like you can plaster this code everywhere and magically you'd get equal or faster code
The only way you can get faster code is when that annotation allows the omission of a fence.
So the only case where it could possibly be useful is:
your program uses consume ordering on an atomic load operation, in important, frequently executed code;
the "consume value" isn't just used immediately and locally, but also passed to other functions;
the target CPU gives specific guarantees for consuming operations (as strong as a given fence before that operation, just for that operation);
the compiler writers take their job seriously: they manage to translate high-level-language consumption of a value into CPU-level consumption, to get the benefit of the CPU guarantees.
That's a bunch of necessary conditions to possibly get measurably faster code.
(And the latest trend in the C++ community is to give up inventing a proper compiling scheme that's safe in all cases and to come up with a completely different way for the user to instruct the compiler to produce code that "consumes" values, with much more explicit, naively translatable, C++ code.)
One comment said the code can be equal or slower, but the poster didn't elaborate.
Of course, annotations of the kind that you can randomly put on programs simply cannot make code more efficient in general! That would be too easy, and also self-contradictory.
Either an annotation specifies a constraint on your code, that is, a promise to the compiler. You can't put such an annotation anywhere it doesn't correspond to a guarantee in the code (like noexcept in C++ or restrict in C), or it will break the code in various ways: an exception in a noexcept function stops the program, and aliasing of restricted pointers can cause funny miscompilation and bad behavior (formally, the behavior is undefined in that case). In exchange, the compiler can use the annotation to optimize the code in specific ways.
Or the annotation doesn't constrain the code in any way; then the compiler can't count on anything, and the annotation does not create any new optimization opportunities.
If you get more efficient code in some cases, at no cost of breaking the program, with an annotation, then you must potentially get less efficient code in other cases. That's true in general, and specifically true with consume semantics, which impose the previously described constraints on the translation of C++ constructs.
I imagine appropriate places to use this is on any function return or parameter that is a pointer or reference and that will be passed or returned within the calling thread
No, the one and only case where it might be useful is when the intended calling function will probably use consume memory order.

C++: Using '.' operator on expressions and function calls

I was wondering if it is good practice to use the member operator . like this:
someVector = (segment.getFirst() - segment.getSecond()).normalize().normalCCW();
Just made that up to show the two different things I was wondering about, namely whether using (expression).member/function() and foo.getBar().getmoreBar() were in keeping with the spirit of readability and maintainability. In all the C++ code and books I learned from, I've never seen it used in this way, yet it's intoxicatingly easy to use it as such. I don't want to develop any bad habits, though.
Probably more (or less) important than that, I was also wondering if there would be any performance gains/losses by using it in this fashion, or unforeseen pitfalls that would introduce bugs into the program.
Thank you in advance!
or unforeseen pitfalls that would introduce bugs into the program
Well, the possible pitfalls would be
Harder to debug. You won't be able to view the results of each function call, so if one of them is returning something unexpected you will need to break it up into smaller segments to see what is going on. Also, any call in the chain may fail completely, so again, you may have to break it up to find out which call is failing.
Harder to read (sometimes). Chaining function calls can make the code harder to read. It depends on the situation, there's no hard and fast rule here. If the expression is even somewhat complex it can make things hard to follow. I don't have any problem reading your specific example.
It ultimately comes down to personal preference. I don't strive to fit as much as possible on a single line, and I have been bitten enough times by chaining where I shouldn't that I tend to break things up a bit. However, for simple expressions which are not likely to fail, chaining is fine.
Yes, this is perfectly acceptable and in fact would be completely unreadable in a lot of contexts if you were to NOT do this.
It's called method chaining.
There MIGHT be some performance gain in that you're not creating temporary variables. But any competent compiler will optimise it anyway.
It is perfectly valid to use it the way you showed. It is used in the named parameter idiom described in the C++ FAQ Lite, for example.
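A minimal sketch of that idiom, with illustrative names; each setter returns *this so the calls chain:

#include <string>
#include <utility>

class WindowOptions {
public:
    WindowOptions& title(std::string t) { title_ = std::move(t); return *this; }
    WindowOptions& width(int w)         { width_ = w; return *this; }
private:
    std::string title_ = "untitled";
    int width_ = 640;
    friend class Window;  // hypothetical consumer of the options
};

// usage: the named "setters" chain in a single expression
// Window w(WindowOptions().title("Demo").width(800));  // Window is hypothetical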
One reason it is not always used is when you have to store an intermediate result for performance reasons (if normalize is costly and you have to use its result more than once, it is better to store that result in a variable) or for readability.
my2c
Using a variable to hold intermediate results can sometimes enhance readability, especially if you use good variable names. Excessive chaining can make it hard to understand what is happening. You have to use your judgement to decide if it's worthwhile to break down chains using variables. The example you present above is not excessive to me. Performance shouldn't differ much one way or the other if you enable optimization.
someVector = (segment.getFirst() - segment.getSecond()).normalize().normalCCW();
Not an answer to your question, but I should tell you that
the behavior of the expression (segment.getFirst() - segment.getSecond()) is not fully determined by the C++ Standard: the order in which the two operands are evaluated is unspecified, which matters if the calls have side effects!
Also, see this related topic : Is this code well-defined?
I suppose what you are doing is less readable; however, on the other hand, too many temporary variables can also become unreadable.
As far as performance goes, I'm sure there is a little overhead from making temporary variables, but the compiler could optimize that out.
There's no big problem with using it in this way - some APIs benefit greatly from method chaining. Plus, it can be misleading to create a variable and then only use it once. When someone reads the next line, they don't have to think about all those variables that you didn't keep around.
It depends on what you're doing.
For readability you should try to use intermediate variables.
Assign calculation results to named variables, and then use them.

How do I find how C++ compiler implements something except inspecting emitted machine code?

Suppose I crafted a set of classes to abstract something and now I worry whether my C++ compiler will be able to peel off those wrappings and emit really clean, concise and fast code. How do I find out what the compiler decided to do?
The only way I know is to inspect the disassembly. This works well for simple code, but there are two drawbacks - the compiler might do it differently when it compiles the same code again, and machine code analysis is not trivial, so it takes effort.
How else can I find how the compiler decided to implement what I coded in C++?
I'm afraid you're out of luck on this one. You're trying to find out "what the compiler did". What the compiler did is to produce machine code. The disassembly is simply a more readable form of the machine code, but it can't add information that isn't there. You can't figure out how a meat grinder works by looking at a hamburger.
I was actually wondering about that.
I have been quite interested, for the last few months, in the Clang project.
One of Clang's particularly interesting aspects, with regard to optimization, is that you can emit the optimized LLVM IR code instead of machine code. The IR is a high-level assembly language, with the notion of structure and type.
Most of the optimization passes in the Clang compiler suite are indeed performed on the IR (the last round is of course architecture-specific and performed by the backend, depending on the available operations), which means you can actually see, right in the IR, whether the object creation (as in your linked question) was optimized out or not.
I know it is still assembly (though of a higher level), but it does seem more readable to me:
far less opcodes
typed objects / pointers
no "register" things or "magic" knowledge required
Would that suit you :) ?
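For example, to dump the optimized IR for a translation unit (real Clang flags; the exact output varies by version):

clang++ -S -emit-llvm -O2 foo.cpp -o foo.ll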
Timing the code will directly measure its speed and can avoid looking at the disassembly entirely. This will detect when compiler, code modifications or subtle configuration changes have affected the performance (either for better or worse). In that way it's better than the disassembly which is only an indirect measure.
Things like code size can also serve as possible indicators of problems. At the very least they suggest that something has changed. It can also point out unexpected code bloat when the compiler should have boiled down a bunch of templates (or whatever) into a concise series of instructions.
Of course, looking at the disassembly is an excellent technique for developing the code and helping decide if the compiler is doing a sufficiently good translation. You can see if you're getting your money's worth, as it were.
In other words, measure what you expect and then dive in if you think the compiler is "cheating" you.
You want to know if the compiler produced "clean, concise and fast code".
"Clean" has little meaning here. Clean code is code which promotes readability and maintainability -- by human beings. Thus, this property relates to what the programmer sees, i.e. the source code. There is no notion of cleanliness for binary code produced by a compiler that will be looked at by the CPU only. If you wrote a nice set of classes to abstract your problem, then your code is as clean as it can get.
"Concise code" has two meanings. For source code, this is about saving the scarce programmer eye and brain resources, but, as I pointed out above, this does not apply to compiler output, since there is no human involved at that point. The other meaning is about code which is compact, thus having lower storage cost. This can have an impact on execution speed, because RAM is slow, and thus you really want the innermost loops of your code to fit in the CPU level 1 cache. The size of the functions produced by the compiler can be obtained with some developer tools; on systems which use GNU binutils, you can use the size command to get the total code and data sizes in an object file (a compiled .o), and objdump to get more information. In particular, objdump -x will give the size of each individual function.
"Fast" is something to be measured. If you want to know whether your code is fast or not, then benchmark it. If the code turns out to be too slow for your problem at hand (this does not happen often) and you have some compelling theoretical reason to believe that the hardware could do much better (e.g. because you estimated the number of involved operations, delved into the CPU manuals, and mastered all the memory bandwidth and cache issues), then (and only then) is it time to have a look at what the compiler did with your code. Barring these conditions, cleanliness of source code is a much more important issue.
All that being said, it can help quite a lot if you have a priori notions of what a compiler can do. This requires some training. I suggest that you have a look at the classic dragon book; otherwise you will have to spend some time compiling example code and looking at the assembly output. C++ is not the easiest language for this; you may want to begin with plain C. Ideally, once you know enough to be able to write your own compiler, you know what a compiler can do, and you can guess what it will do with a given piece of code.
You might find a compiler that has an option to dump a post-optimisation AST/representation - how readable it would be is another matter. If you're using GCC, there's a chance it wouldn't be too hard, and someone might have already done it - GCCXML does something vaguely similar. It's of little use, though, if the compiler you want to build your production code with can't do it.
After that, some compilers (e.g. gcc with -S) can output assembly language, which might be usefully clearer than reading a disassembly: for example, some compilers alternate high-level source as comments with the corresponding assembly.
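For example, with GCC (-fverbose-asm is a real flag that adds source-level comments to the assembly output):

g++ -S -O2 -fverbose-asm foo.cpp -o foo.s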
As for the drawbacks you mentioned:
the compiler might do it different when it compiles the same code again
absolutely - only the compiler docs and/or source code can tell you the chances of that, though you can put some performance checks into nightly test runs so you'll get alerted if performance suddenly changes
and also machine code analysis is not trivial, so it takes effort.
Which raises the question: what would be better? I can imagine some process where you run the compiler over your code and it records when variables are cached in registers at points of use, which function calls are inlined, even the maximum number of CPU cycles an instruction might take (where knowable at compile time) etc., and produces some record thereof, then a source viewer/editor that colour-codes and annotates the source correspondingly. Is that the kind of thing you have in mind? Would it be useful? Perhaps some of it more than the rest - e.g. black-and-white info on register usage ignores the utility of the various levels of CPU cache (and utilisation at run-time); the compiler probably doesn't even try to model that anyway. Knowing where inlining was really being done would give me a warm fuzzy feeling. But profiling seems more promising and useful generally. I fear the benefits are more intuitively real than actual, and compiler writers are better off pursuing C++0x features, run-time instrumentation, introspection, or writing D "on the side" ;-).
The answer to your question was pretty much nailed by Karl. If you want to see what the compiler did, you have to start going through the assembly code it produced--elbow grease is required. As to discovering the "why" behind the "how" of how it implemented your code: every compiler (and every build, potentially), as you mentioned, is different. There are different approaches, different optimizations, etc. However, I wouldn't worry about whether it's emitting clean, concise machine code--cleanliness and concision should be left to the source code. Speed, on the other hand, is pretty much the programmer's responsibility (profiling ftw). More interesting concerns are correctness, maintainability, readability, etc. If you want to see if it made a specific optimization, the compiler docs might help (if they're available for your compiler). You can also just try searching to see if the compiler implements a known technique for optimizing whatever. If those approaches fail, though, you're right back to reading assembly code. Keep in mind that the code that you're checking out might have little to no impact on performance or executable size--grab some hard data before diving into any of this stuff.
Actually, there is a way to get what you want, if you can get your compiler to produce DWARF debugging information. There will be a DWARF description for each out-of-line function, and within that description there will (hopefully) be entries for each inlined function. It's not trivial to read DWARF, and sometimes compilers don't produce complete or accurate DWARF, but it can be a useful source of information about what the compiler actually did that's not tied to any one compiler or CPU. Once you have a DWARF reading library there are all sorts of useful tools you can build around it.
Don't expect to use it with Visual C++, as that uses a different debugging format. (But you might be able to do similar queries through the debug helper library that comes with it.)
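For example, with the LLVM DWARF dumper (the tag name is standard DWARF; the exact output format varies by tool and version):

llvm-dwarfdump foo.o | grep -B2 DW_TAG_inlined_subroutine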
If your compiler manages to translate your wrappings into "really clean, concise and fast code", the effort to follow up on the emitted code should be reasonable.
Contrary to another answer, I feel that emitted assembly code may well be "clean" if it maps (relatively) easily to the original source code, doesn't consist of calls all over the place, and its system of jumps is not too complex. With code scheduling and re-ordering, optimized machine code which is also readable is, alas, a thing of the past.

How to update old C code? [closed]

I have been working on some 10 year old C code at my job this week, and after implementing a few changes, I went to the boss and asked if he needed anything else done. That's when he dropped the bomb. My next task was to go through the 7000 or so lines and understand more of the code, and to modularize the code somewhat. I asked him how he would like the source code modularized, and he said to start putting the old C code into C++ classes.
Being a good worker, I nodded my head yes, and went back to my desk, where I sit now, wondering how in the world to take this code and "modularize" it. It's already in 20 source files, each with its own purpose and function. In addition, there are three "main" structs. Each of these structures has 30-plus fields, many of them being other, smaller structs. It's a complete mess to try to understand, but almost every single function in the program is passed a pointer to one of the structs and uses the struct heavily.
Is there any clean way for me to shoehorn this into classes? I am resolved to do it if it can be done, I just have no idea how to begin.
First, you are fortunate to have a boss who recognizes that code refactoring can be a long-term cost-saving strategy.
I've done this many times, that is, converting old C code to C++. The benefits may surprise you. The final code may be half the original size when you're done, and much simpler to read. Plus, you will likely uncover tricky C bugs along the way. Here are the steps I would take in your case. Small steps are important because you can't jump from A to Z when refactoring a large body of code. You have to go through small, intermediate steps which may never be deployed, but which can be validated and tagged in whatever RCS you are using.
Create a regression/test suite. You will run the test suite each time you complete a batch of changes to the code. You should have this already, and it will be useful for more than just this refactoring task. Take the time to make it comprehensive. The exercise of creating the test suite will get you familiar with the code.
Branch the project in your revision control system of choice. Armed with a test suite and playground branch, you will be empowered to make large modifications to the code. You won't be afraid to break some eggs.
Make those struct fields private. This step requires very few code changes, but can have a big payoff. Proceed one field at a time. Try to make each field private (yes, or protected), then isolate the code which accesses that field. The simplest, most non-intrusive conversion would be to make that code a friend function. Consider also making that code a method (see the sketch after this list). Converting the code to be a method is simple, but you will have to convert all of the call sites as well. One is not necessarily better than the other.
Narrow the parameters to each function. It's unlikely that any function requires access to all 30 fields of the struct passed as its argument. Instead of passing the entire struct, pass only the components needed. If a function does in fact seem to require access to many different fields of the struct, then this may be a good candidate to be converted to an instance method.
Const-ify as many variables, parameters, and methods as possible. A lot of old C code fails to use const liberally. Sweeping through from the bottom up (bottom of the call graph, that is), you will add stronger guarantees to the code, and you will be able to identify the mutators from the non-mutators.
Replace pointers with references where sensible. The purpose of this step has nothing to do with being more C++-like just for the sake of being more C++-like. The purpose is to identify parameters that are never NULL and which can never be re-assigned. Think of a reference as a compile-time assertion which says, this is an alias to a valid object and represents the same object throughout the current scope.
Replace char* with std::string. This step should be obvious. You might dramatically reduce the lines of code. Plus, it's fun to replace 10 lines of code with a single line. Sometimes you can eliminate entire functions whose purpose was to perform C string operations that are standard in C++.
Convert C arrays to std::vector or std::array. Again, this step should be obvious. This conversion is much simpler than the conversion from char* to std::string because the interfaces of std::vector and std::array are designed to match the C array syntax. One of the benefits is that you can eliminate that extra length variable passed to every function alongside the array.
Convert malloc/free to new/delete. The main purpose of this step is to prepare for future refactoring. Merely changing C code from malloc to new doesn't directly gain you much. This conversion allows you to add constructors and destructors to those structs, and to use built-in C++ automatic memory tools.
Replace localized new/delete operations with the std::auto_ptr family (std::unique_ptr in modern C++). The purpose of this step is to make your code exception-safe.
Throw exceptions wherever return codes are handled by bubbling them up. If the C code handles errors by checking for special error codes then returning the error code to its caller, and so on, bubbling the error code up the call chain, then that C code is probably a candidate for using exceptions instead. This conversion is actually trivial. Simply throw the return code (C++ allows you to throw any type you want) at the lowest level. Insert a try{} catch(){} statement at the place in the code which handles the error. If no suitable place exists to handle the error, consider wrapping the body of main() in a try{} catch(){} statement and logging it.
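A minimal sketch of the "private fields", "const-ify" and "std::string" steps applied to a hypothetical struct; all names here are illustrative, not from any real code base:

#include <cstdio>
#include <string>
#include <utility>

// Before (C): struct Widget { char *name; int len; ... };
class Widget {
public:
    // fields made private: access now goes through methods
    const std::string& name() const { return name_; }
    void setName(std::string n) { name_ = std::move(n); }
    // const-ified: print() is a non-mutator, and the compiler enforces it
    int print() const { return std::printf("%s\n", name_.c_str()); }
private:
    std::string name_;  // std::string replaces the char* plus length field
};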
Now step back and look how much you've improved the code, without converting anything to classes. (Yes, yes, technically, your structs are classes already.) But you haven't scratched the surface of OO, yet managed to greatly simplify and solidify the original C code.
Should you convert the code to use classes, with polymorphism and an inheritance graph? I say no. The C code probably does not have an overall design which lends itself to an OO model. Notice that the goal of each step above has nothing to do with injecting OO principles into your C code. The goal was to improve the existing code by enforcing as many compile-time constraints as possible, and by eliminating or simplifying the code.
One final step.
Consider adding benchmarks so you can show them to your boss when you're done. Not just performance benchmarks. Compare lines of code, memory usage, number of functions, etc.
Really, 7000 lines of code is not very much. For such a small amount of code a complete rewrite may be in order. But how is this code going to be called? Presumably the callers expect a C API? Or is this not a library?
Anyway, rewrite or not, before you start, make sure you have a suite of tests which you can run easily, with no human intervention, on the existing code. Then with every change you make, run the tests on the new code.
This shoehorning into C++ seems arbitrary. Ask your boss why he needs it done and figure out if you can meet the same goal less painfully; see if you can prototype a subset in the new, less painful way, then demo it to your boss and recommend following the less painful way.
First, tell your boss you're not continuing until you have:
http://www.amazon.com/Refactoring-Improving-Design-Existing-Code/dp/0201485672
and to a lesser extent:
http://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052
Secondly, there is no way of modularising code by shoe-horning it into C++ classes. This is a huge task, and you need to communicate the complexity of refactoring highly procedural code to your boss.
It boils down to making a small change (extract method, move method to class, etc.) and then testing - there are no shortcuts with this.
I do feel your pain though...
I guess that the thinking here is that increasing modularity will isolate pieces of code, such that future changes are facilitated. We have confidence in changing one piece because we know it cannot affect other pieces.
I see two nightmare scenarios:
You have nicely structured C code, it will easily transform to C++ classes. In which case it probably already is pretty darn modular, and you've probably done nothing useful.
It's a rat's nest of interconnected stuff. In which case it's going to be really tough to disentangle it. Increasing modularity would be good, but it's going to be a long hard slog.
However, maybe there's a happy medium. Could there be pieces of logic that are important and conceptually isolated, but which are currently brittle because of a lack of data-hiding etc.? (Yes, good C doesn't suffer from this, but we don't have good C here, otherwise we would leave well alone.)
Pulling out a class to own that logic and its data, encapsulating that piece, could be useful. Whether it's better to do it with C or C++ is open to question. (The cynic in me says "I'm a C programmer; great, C++ is a chance to learn something new!")
So: I'd treat this as an elephant to be eaten. First decide if it should be eaten at all; bad elephant is just no fun, and well-structured C should be left alone. Second, find a suitable first bite. And I'd echo Neil's comments: if you don't have a good automated test suite, you are doomed.
I think a better approach could be to totally rewrite the code, but you should ask your boss for what purpose he wants you "to start putting the old C code into C++ classes".
You should ask for more details
Surely it can be done - the question is at what cost? It is a huge task, even for 7K LOC. Your boss must understand that it's gonna take a lot of time, while you can't work on shiny new features etc. If he doesn't fully understand this, and/or is not willing to support you, there is no point starting.
As @David already suggested, the Refactoring book is a must.
From your description it sounds like a large part of the code is already "class methods", where the function gets a pointer to a struct instance and works on that instance. So it could be fairly easily converted into C++ code. Granted, this won't make the code much easier to understand or better modularized, but if this is your boss' prime desire, it can be done.
Note also, that this part of the refactoring is a fairly simple, mechanical process, so it could be done fairly safely without unit tests (with hyperaware editing of course). But for anything more you need unit tests to make sure your changes don't break anything.
It's very unlikely that anything will be gained by this exercise. Good C code is already more modular than C++ typically can be - the use of pointers to structs allows compilation units to be independent in the same way as pImpl does in C++ - in C you don't have to expose the data inside a struct to expose its interface. So if you turn each C function
// Foo.h
typedef struct Foo_s Foo;
int foo_wizz (const Foo* foo, ... );
into a C++ class with
// Foo.hxx
class Foo {
    // struct Foo members copied from Foo.c
    int wizz (...) const;
};
you will have reduced the modularity of the system compared with the C code - every client of Foo now needs rebuilding if any private implementation functions or member variables are added to the Foo type.
There are many things classes in C++ do give you, but modularity is not one of them.
Ask your boss what the business goals are being achieved by this exercise.
Note on terminology:
A module in a system is a component with a well-defined interface which can be replaced with another module with the same interface without affecting the rest of the system. A system composed of such modules is modular.
For both languages, the interface to a module is by convention a header file. Consider string.h and string as defining the interfaces to simple string processing modules in C and C++. If there is a bug in the implementation of string.h, a new libc.so is installed. This new module has the same interface, and anything dynamically linked to it immediately gets the benefit of the new implementation. Conversely, if there is a bug in string handling in std::string, then every project which uses it needs to be rebuilt. C++ introduces a very large amount of coupling into systems, which the language does nothing to mitigate - in fact, the better uses of C++ which fully exploit its features are often a lot more tightly coupled than the equivalent C code.
If you try and make C++ modular, you typically end up with something like COM, where every object has to have both an interface (a pure virtual base class) and an implementation, and you substitute an indirection for efficient template generated code.
If you don't care about whether your system is composed of replaceable modules, then you don't need to perform actions to make it modular, and you can use some of the features of C++ such as classes and templates which, suitably applied, can improve cohesion within a module. If your project is to produce a single, statically linked application then you don't have a modular system, and you can afford not to care at all about modularity. If you want to create something like anti-grain geometry, which is a beautiful example of using templates to couple together different algorithms and data structures, then you need to do that in C++ - pretty well nothing else widespread is as powerful.
So be very careful what your manager means by 'modularise'.
If every file already has "its own purpose and function" and "every single function in the program is passed a pointer to one of the structs" then the only difference made in changing it into classes would be to replace the pointer to the struct with the implicit this pointer. That would have no effect on how modularised the system is, in fact (if the struct is only defined in the C file rather than in the header) it will reduce modularity.
With “just” 7000 lines of C code, it will probably be easier to rewrite the code from scratch, without even trying to understand the current code.
And there is no automated way to do or even assist the modularization and refactoring that you envisage.
7000 LOC may sound like much but a lot of this will be boilerplate.
Try and see if you can simplify the code before changing it to c++. Basically though I think he just wants you to convert functions into class methods and convert structs into class data members (if they don't contain function pointers, if they do then convert these to actual methods). Can you get in touch with the original coder(s) of this program? They could help you get some understanding done but mainly I would be searching for that piece of code that is the "engine" of the whole thing and base the new software from there. Also, my boss told me that sometimes it is better to simply rewrite the whole thing, but the existing program is a very good reference to mimic the run time behavior of. Of course specialized algorithms are hard to recode. One thing I can assure you of is that if this code is not the best it could be then you are going to have alot of problems later on. I would go up to your boss and promote the fact that you need to redo from scratch parts of the program. I have just been there and I am really happy my supervisor gave me the ability to rewrite. Now the 2.0 version is light years ahead of the original version.
I read an article titled "Make bad code good" from http://www.javaworld.com/javaworld/jw-03-2001/jw-0323-badcode.html?page=7 . It's directed at Java users, but all of its ideas are pretty applicable to your case, I think. Though the title makes it sound like it is only for bad code, I think the article is for maintenance engineers in general.
To summarize Dr. Farrell's ideas, he says:
Start with the easy things.
Fix the comments
Fix the formatting
Follow project conventions
Write automated tests
Break up big files/functions
Rewrite code you don't understand
I think after following everyone else's advice this might be a good article to read when you have some free time.
Good luck!