What does the setCleanup method do on a LandingPad? - llvm

What does SetCleanup(true) actually do when applied to a landing pad, and what is the effect of not doing it?
I have this code
LLVM.SetCleanup(pad, true);
(which is LLVMSharp) and this works fine for catching and handling exceptions, but if I remove it, the code still works fine. So what does this actually do, and what is the effect of not doing it? The LLVM docs just say "Indicate that this landingpad instruction is a cleanup", but not what its effect is. Or, in other words, what is an example of a landingpad that is not a cleanup?

Cleanup landing pads can be used for things like calling destructors when unwinding the stack. Generally, cleanup clauses are always executed, whereas non-cleanup landing pads would be used for actually handling exceptions, and only entered if the type of the exception matches the type specified by the landing pad.
What the cleanup clauses actually do will depend entirely on the target language.
See this document for a bit more context https://llvm.org/docs/ExceptionHandling.html#cleanups
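One way to see the difference from the C++ side (a sketch only; the function names are made up, and you would inspect the output with something like clang++ -S -emit-llvm): a landing pad that only has to run destructors while unwinding is marked cleanup, while a landing pad belonging to a try/catch carries a catch clause for the handled type instead.

    #include <cstdio>

    struct Guard {
        ~Guard() { std::puts("cleanup"); }  // must run even when unwinding past this frame
    };

    void mayThrow();  // assumed to be defined elsewhere and to potentially throw

    // No try/catch: the landing pad for the call only destroys 'g' and resumes
    // unwinding, so the front end marks it as a cleanup (SetCleanup(pad, true)).
    void runsDestructorOnly() {
        Guard g;
        mayThrow();
    }

    // try/catch: the landing pad carries a catch clause for the int type-info,
    // i.e. it is a landingpad that is not (only) a cleanup.
    void handlesException() {
        try {
            mayThrow();
        } catch (int) {
            std::puts("caught int");
        }
    }

Roughly speaking, the cleanup flag tells the unwinder to enter the landing pad even when none of its catch clauses match; if your landing pads always have a matching clause anyway, dropping the flag will not visibly change behaviour, which matches what you are seeing.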

Related

How to handle failed methods: by using exceptions or making the methods return bool?

How to handle failed methods:
using exceptions
making the methods return bool
The first approach is to throw an exception when something goes wrong. But then the problematic code needs to be placed in a try block, and you need to write the catch block. With the second approach, you need to check the return value from the method and then do something.
So basically, isn't it the same mechanism? You have two parts: detecting that something went wrong, and then doing something about it. So does it matter which approach I use?
The main benefit with exceptions is that they are non-local. You can catch an exception several invocation layers away from where it was thrown. That way, code in between doesn't have to care about exceptions (except ensuring proper cleanup during unwinding, i.e. being exception safe), which makes it less likely that an exceptional situation gets forgotten. But this benefit comes at a price: stack unwinding is more complicated than simply returning a value. In terms of performance, the return value approach is usually simpler.
So I'd use these to choose: if for some reason the only reasonable place to deal with a problem is directly at the location where the function was called, and if you are fairly certain that every caller will include some kind of error handling code in any case, and is not likely to forget doing so, then a return value would be best. Otherwise, I'd go for an exception.
Basically you can reach the same behavior with both approaches, but exceptions give two added values:
1) You don't have to handle the error in the exact calling method; it can be handled anywhere up the call stack. This removes the if (!doSomething()) return false; boilerplate when you just want to pass the error up (sketched below).
2) They allow you to write a block of code under one try and handle all the errors raised inside it in one catch block.
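A rough sketch of both points (all function names here are invented for illustration):

    #include <exception>
    #include <string>

    // Return-code style: every intermediate layer forwards the failure by hand.
    bool loadConfig(const std::string& path);   // assumed helpers, declared only
    bool parseConfig();                         // for the sake of the sketch
    bool applyConfig();

    bool startWithCodes() {
        if (!loadConfig("app.cfg")) return false;  // boilerplate just to pass the error up
        if (!parseConfig())         return false;
        if (!applyConfig())         return false;
        return true;
    }

    // Exception style: the intermediate layer ignores errors entirely; a single
    // catch a few call levels up handles everything thrown below it.
    void loadConfigOrThrow(const std::string& path);  // assumed to throw on failure
    void parseConfigOrThrow();
    void applyConfigOrThrow();

    void startWithExceptions() {
        loadConfigOrThrow("app.cfg");
        parseConfigOrThrow();
        applyConfigOrThrow();
    }

    void run() {
        try {
            startWithExceptions();
        } catch (const std::exception&) {
            // all three steps funnel their failures into this one handler
        }
    }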
There is no simple answer. For instance, here is the conclusion of the article C++ Exceptions: Pros and Cons
There is no simple answer to the "exceptions or error codes" question. The decision needs to be made based on a specific situation that a development team faces. Some rough guidelines may be:
If you have a good development process and code standards that are actually being followed, if you are writing modern-style C++ code that relies on RAII to clean up resources for you, if your code base is modular, using exceptions may be a good idea.
If you are working with code that was not written with exception safety in mind, if you feel there is a lack of discipline in your development team, or if you are developing hard real-time systems, you should probably not use exceptions.
My personal rule is to raise an exception only when something exceptional occurs, i.e. when the problem might not have appeared at all. Otherwise I use a return value (most of the time).
For example, when searching for a file that MUST exist, not finding it raises an exception. But if the file may or may not exist, not finding it is not exceptional, so there is no need for an exception.
There's no answer for all situations. Both approaches have strengths and weaknesses:
Exceptions:
are slightly more verbose to handle locally
can simply be ignored if the error can't be handled locally
can carry as much information as you like about the error, both statically (in the exception type) and dynamically (in the thrown object)
require a handler somewhere to avoid terminating the program
may have more runtime overhead (but may have less when nothing is thrown, depending on how they're implemented)
require the code to be exception safe
Return values:
must be manually passed up the stack if not handled locally: prone to bugs if you forget
have a fixed type, limiting how much information they can carry (although you could return a pointer to a polymorphic type, and deal with the associated lifetime management issues)
are awkward to use if the function also needs to return something on success (see the sketch just after this list)
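For that last point, a common workaround is a small result type that bundles the success value with the error information (a sketch, not tied to any particular library; C++17's std::from_chars is used here just to avoid exceptions in the example itself):

    #include <charconv>
    #include <string>

    // Either a value or an error message; 'ok' tells you which one is meaningful.
    struct ParseResult {
        bool        ok;
        int         value;   // only meaningful when ok == true
        std::string error;   // only meaningful when ok == false
    };

    ParseResult parseInt(const std::string& text) {
        int out = 0;
        auto [ptr, ec] = std::from_chars(text.data(), text.data() + text.size(), out);
        if (ec != std::errc{} || ptr != text.data() + text.size())
            return {false, 0, "not a valid integer: " + text};
        return {true, out, {}};
    }

    // The caller still has to remember to check 'ok' - nothing forces it to,
    // which is exactly the "prone to bugs if you forget" point above.
    int parseOrDefault(const std::string& text, int fallback) {
        ParseResult r = parseInt(text);
        return r.ok ? r.value : fallback;
    }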
There are two main differences: (a) it is easier for the calling code to just silently ignore the boolean status code. (b) Exceptions provide more context than mere false. You can distinguish business-logic errors from I/O errors from input validation errors etc.
I prefer bools. I'd say it's personal preference; I find them easier to read.

function attribute returns_twice

I was just looking up function attributes for gcc
(http://gcc.gnu.org/onlinedocs/gcc-4.7.2/gcc/Function-Attributes.html)
and came across the returns_twice attribute.
I am absolutely clueless about the cases in which a function can return twice... I quickly looked up the mentioned vfork() and setjmp(), but I still have no idea what an applicable scenario looks like - has anyone of you used it, or can you explain a bit?
The setjmp function is analogous to creating a label (in the goto sense), as such you will first return from setjmp when you set the label, and then each time that you actually jump to it.
If it seems weird, rest assured, you should not be using setjmp in your daily programming. Or actually... you should probably not be using it at all. It is a very low-level facility that breaks the expected execution flow (much like goto) and, especially in C++, most of the invariants you could expect.
When you call setjmp, it establishes that as a return point, then execution continues at the code immediately following the setjmp call.
At some point later in the code, calling longjmp (with the jump buffer initialized by the previous call to setjmp) returns execution to that same point again (i.e., the code immediately following the call to setjmp).
Therefore, the original call returns normally, then at arbitrary later times, execution returns (or at least may return) to the same point again.
The attribute simply warns the compiler of that fact.
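A minimal, self-contained illustration of the "returns twice" behaviour (using <csetjmp>; note that in C++ longjmp bypasses destructors, so treat this purely as a sketch):

    #include <csetjmp>
    #include <cstdio>

    static std::jmp_buf point;

    void fail() {
        // Transfers control back to the setjmp() call below; setjmp then
        // "returns" a second time, this time with the value 42 instead of 0.
        std::longjmp(point, 42);
    }

    int main() {
        // First "return": setjmp merely records the execution context and yields 0.
        // Second "return": after longjmp, execution resumes right here with 42.
        switch (setjmp(point)) {
        case 0:
            std::puts("first time through");
            fail();                      // never returns normally
            break;
        default:
            std::puts("back again via longjmp");
            break;
        }
        return 0;
    }

Roughly, the returns_twice attribute tells the compiler that calls to the function may return more than once, so it must not keep values in registers across the call in ways that would be invalid after the second return.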

Does try-catch block decrease performance [duplicate]

This question already has answers here:
In what ways do C++ exceptions slow down code when there are no exceptions thrown?
(6 answers)
Closed 9 years ago.
This link states,
To catch exceptions we must place a portion of code under exception
inspection. This is done by enclosing that portion of code in a try
block. When an exceptional circumstance arises within that block, an
exception is thrown that transfers the control to the exception
handler. If no exception is thrown, the code continues normally and
all handlers are ignored.
Does it mean that having a try block reduces performance due to the extra task of "inspection" during run time?
TL;DR: No, exceptions are usually faster on the non-exceptional path than error-code handling.
Well, the obvious remark is: compared to what?
Compared to not handling the error at all, it obviously decreases performance; but is performance worth the lack of correctness? I would argue it is not, so let us suppose that you meant compared to an error code checked with an if statement.
In this case, it depends. There are multiple mechanisms used to implement exceptions. Actually, they can be implemented with a mechanism so close to an if statement that they end up having the same cost (or slightly higher).
In C++ though, all major compilers (gcc introduced it in the 4.x series, MSVC uses it for 64-bit code) now use the Zero-Cost Exception Model. If you read the technical paper that Need4Sleep linked to, it is listed as the table-driven approach. The idea is that for each point of the program that may throw, you register in a side table some bits and pieces that will allow you to find the right catch clause. Honestly, it is a tad more complicated implementation-wise than the older strategies; however, the Zero Cost name comes from the fact that it is free if no exception is thrown. Contrast this with the cost of a branch misprediction on a CPU. On the other hand, when an exception is thrown, the penalty is huge, because the table is stored in a cold zone (so it likely requires a round-trip to RAM, or worse)... but exceptions are exceptional, right?
To sum up, with modern C++ compilers exceptions are faster than error codes, at the cost of larger binaries (due to the static tables).
For the sake of exhaustiveness, there is a third alternative: aborting. Instead of propagating an error via either a status code or an exception, it is possible to abort the process instead. This is only suitable in a restricted number of situations, but it optimizes better than either alternative.
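If you want to check the "free when nothing is thrown" claim on your own compiler, a deliberately naive measurement sketch could look like this (the numbers will vary wildly with compiler, flags and CPU, so treat it as a starting point only):

    #include <chrono>
    #include <cstdio>

    volatile int sink = 0;  // opaque to the optimizer so the work is not folded away

    bool withCode(int x) {          // error-code version of a trivial operation
        if (x < 0) return false;    // "error" branch, never taken for this input
        sink = sink + x;
        return true;
    }

    void withException(int x) {     // exception version of the same operation
        if (x < 0) throw x;         // never thrown for this input
        sink = sink + x;
    }

    template <class F>
    long long timeIt(F f) {
        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < 100000000; ++i) f(i);
        auto stop = std::chrono::steady_clock::now();
        return std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
    }

    int main() {
        long long codes = timeIt([](int i) { if (!withCode(i)) sink = -1; });
        long long excs  = timeIt([](int i) { try { withException(i); } catch (int) { sink = -1; } });
        std::printf("error codes: %lld ms\nexceptions : %lld ms\n", codes, excs);
    }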
Take a look at Section 5.4 of the draft Technical Report on C++ Performance, which is specifically about the overhead of try-catch statements in C++.
A little excerpt from the section:
5.4.1.1.2 Time Overhead of the “Code” Approach
• On entry to each try-block
♦ Commit changes to variables enclosing the try-block
♦ Stack the execution context
♦ Stack the associated catch clauses
• On exit from each try-block
♦ Remove the associated catch clauses
♦ Remove the stacked execution context
• When calling regular functions
♦ If a function has an exception-specification, register it for checking
• As local and temporary objects are created
♦ Register each one with the current exception context as it is created
• On throw or re-throw
♦ Locate the corresponding catch clause (if any) – this involves some runtime check (possibly resembling RTTI checks)
If found, then:
destroy the registered local objects
check the exception-specifications of the functions called in-between
use the associated execution context of the catch clause
Otherwise:
call the terminate_handler
It depends. For exception handling, the compiler has to do something - otherwise it could not do stack unwinding and such. That means yes, exception handling decreases performance - even if the exception is not thrown. How much depends on your compiler's implementation.
On the other hand, you have to ask yourself: if you wrote your error handling code yourself, would it really be faster? (Measure it - don't guess.) Can it do the same as exceptions? (Exceptions can't be ignored by the client - error codes can - and exceptions perform stack unwinding, which error codes can't.) Additionally, the code can be written to be more maintainable with exceptions.
Short: unless your code is very, very time critical, use exceptions. Even if you decide against them: measure first. One exception to the rule: it's a bad idea to throw exceptions across module boundaries or to let one escape from a destructor.
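For the destructor case, the usual workaround is to catch inside the destructor rather than let anything escape (a sketch; enabling stream exceptions here is just an assumption so that there is something that can actually throw):

    #include <cstdio>
    #include <fstream>

    // Destructors are implicitly noexcept in C++11 and later, so letting an
    // exception escape one calls std::terminate. The usual pattern is to catch
    // inside the destructor and degrade gracefully (log, set a flag, ...).
    class LogFile {
        std::ofstream out_;
    public:
        explicit LogFile(const char* path) : out_(path) {
            // Assumption for this sketch: make the stream report write errors
            // by throwing, so the destructor has something it must guard against.
            out_.exceptions(std::ofstream::badbit);
        }

        ~LogFile() {
            try {
                out_.flush();   // may throw with the exception mask set above
            } catch (...) {
                std::fputs("LogFile: flush failed during cleanup\n", stderr);
                // swallowed: nothing is allowed to propagate out of a destructor
            }
        }
    };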
It really depends on the specific compiler.
If the compiler prefers to treat exception throwing as a really exceptional condition, then it can implement a scheme where the no-exception case has zero overhead, but that, in turn, costs more time when an exception is thrown and/or more code size.
To implement a zero-overhead approach, note that in C++ you cannot change code dynamically, so once the stack frame and the return address are known, it is also fixed which objects must be destroyed during unwinding and whether there is an exception-handling code section. The code that throws an exception can therefore consult a global table of all function call sites to decide what should be done.
On the other side, you can make exception throwing faster by preparing the list of objects to be explicitly destroyed and the address of the exception-handling code during normal execution. This makes regular code slower, but exception handling faster and, I'd say, the code a bit smaller.
Unfortunately there is no standard way in C++ to completely give up exception support, so something has to be paid for this possibility: the standard library throws exceptions, and any code that calls unknown code (e.g. through a function pointer or a virtual method) must be prepared to handle an exception.
I would recommend adding try-catch in functions which do memory allocation, deletion, calls to other complex functions, etc. Performance-wise, try-catch adds a little bit of overhead.
But considering the merit of catching unknown exceptions, it is very helpful.
Good programming practices always recommend adding some kind of exception handling to your code, unless you are an exceptional programmer.
I wonder why you are more concerned about a small performance issue than about the entire program being brought down by an unhandled exception.

Alternative to exceptions for methods that return objects

I have a class that can perform many types of transformations to a given object.
All data is supplied by the users (via hand-crafted command files), but for the most part there is no way to tell whether the file's commands are valid until we start transforming the object (if we checked everything first, we'd end up doing the same amount of work as the transformations themselves).
These transformation methods are called on the object and return a newly transformed object. However, if there is a problem we throw an exception, since returning an object (even a blank object) could be confused with success, whereas an exception sends a clear signal that there was a problem and an object cannot be returned. It also means we don't need to call a "get last error" type function to find out exactly what went wrong (error code, message, statement, etc.).
However, from reading numerous answers on SO it seems this is incorrect, since invalid user input is not an exceptional circumstance, and due to the complexity of this thing it's not uncommon for the user to submit incorrect command files.
Performance is not really an issue here because if we hit an error we have to stop and go back to the user.
If I use return codes I'd have to take the output object as a parameter, and then to check the return codes I'd need long nested if blocks (i.e. the way you check HRESULT from COM).
How would I go about avoiding exceptions in this case while still keeping the code reasonably tidy?
The design of your problem really lends itself to exceptions. The fact that program execution will halt or be invalid once the user supplies a bad input file is a sign of an "exceptional circumstance". Sure, a lot of program runs will end in an exception being thrown (and caught), but that is once per execution of the program.
The things you are reading about exceptions being slow when used for every other circumstance, is only valid if the program can recover, and has to recover often (e.g. a compiler that fails to find a header in a search directory, and has to look at the next directory in the list of search directories, which really happens a lot).
Use exceptions. When in doubt, measure if this is killing performance. Cleaner code >> following what people say on SO. But then again, that would be a reason to ignore what I just said.
Well, I wouldn't (avoid exceptions). If you really need an "error reporting" mechanism (and you do), there is not much besides return values and exceptions. I think the whole argument about what is exceptional enough (and therefore deserves exceptions) and what is not shouldn't keep you from solving your problem, especially since performance isn't an issue. But if you REALLY want to avoid exceptions, you could use some kind of global queue to hold your error information. But that is utterly ugly, if you ask me.
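If you do decide to avoid exceptions here, one approach that stays tidier than HRESULT-style nested ifs is to return a small "value or error" object from each transformation (purely a sketch: TransformError, Object, rotate and scale are invented names, and C++23's std::expected covers the same idea out of the box):

    #include <string>
    #include <utility>

    struct TransformError {            // invented for this sketch
        int         code;
        std::string message;
    };

    // Minimal "value or error" wrapper in the spirit of C++23's std::expected.
    template <class T>
    class Result {
        bool           ok_;
        T              value_;
        TransformError error_;
    public:
        Result(T value) : ok_(true), value_(std::move(value)), error_{} {}
        Result(TransformError error) : ok_(false), value_{}, error_(std::move(error)) {}

        explicit operator bool() const { return ok_; }
        const T& value() const { return value_; }
        const TransformError& error() const { return error_; }
    };

    struct Object { std::string data; };      // stand-in for the real object type

    Result<Object> rotate(const Object& in);  // hypothetical transformation steps,
    Result<Object> scale(const Object& in);   // declared only for the sketch

    // The checks stay flat, and each error still carries a code and a message.
    Result<Object> transformAll(const Object& in) {
        Result<Object> r = rotate(in);
        if (!r) return r.error();
        Result<Object> s = scale(r.value());
        if (!s) return s.error();
        return s;
    }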

Exception handling - what happens after it leaves catch

So imagine you've got an exception you're catching, and then in the catch you write to a log file that some exception occurred. Then you want your program to continue, so you have to make sure that certain invariants are still in a good state. However, what actually occurs in the system after the exception was "handled" by a catch?
The stack has been unwound at that point, so how does it get to restore its state?
"Stack unwinding" means that all scopes between throw and the matching catch clause are left, calling destructors for all automatic objects in those scopes, pretty much in the same way function scopes are left when you return from a function.
Nothing else "special" is done, the scope of a catch clause is a normal scope, and leaving it is no different from leaving the scope of an else clause.
If you need to make sure certain invariants still hold, you need to write the code that changes them in an exception-safe manner. Dave Abrahams wrote a classic on the different levels of exception safety; you might want to read that. Basically, you will have to consistently employ RAII in order to be on the safe side when exceptions are thrown.
Only objects created inside the try will have been destroyed during unwinding. It's up to you to write the program in such a way that, if an exception occurs, the program state stays consistent - that's called exception safety.
C++ doesn't care - it unwinds stack, then passes control into an appropriate catch, then control flow continues normally.
It is up to you to ensure that the application is recovered into a stable state after catching the exception. Usually it is achieved by "forgetting" whatever operation or change(s) produced the exception, and starting afresh on a higher level.
This includes ensuring that any resources allocated during the chain of events leading to the exception gets properly released. In C++, the standard idiom to ensure this is RAII.
Update
For example, if an error occurs while processing a request in a web server, it generates an exception in some lower-level function, which gets caught in a higher-level class (possibly right in the top-level request handler). Usually the best thing to do is to roll back any changes made and free any resources allocated so far for the actual request, and return an appropriate error message to the client. Changes may include DB transactions, file writes, etc. - one must implement all of these in an exception-safe manner. Databases typically have built-in transactions to deal with this; with other resources it may be trickier.
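A common way to get the "roll back unless everything succeeded" behaviour is a small RAII guard (a sketch; beginDbWrite, undoDbWrite, processRequest and sendReply are hypothetical placeholders for whatever your request and transaction machinery provides):

    #include <functional>
    #include <utility>

    // Runs the rollback action on destruction unless commit() was called first,
    // so an exception unwinding through the scope undoes the change automatically.
    class ScopeRollback {
        std::function<void()> rollback_;
        bool                  committed_ = false;
    public:
        explicit ScopeRollback(std::function<void()> rollback)
            : rollback_(std::move(rollback)) {}

        ~ScopeRollback() {
            if (!committed_ && rollback_) rollback_();
        }

        void commit() { committed_ = true; }

        ScopeRollback(const ScopeRollback&) = delete;
        ScopeRollback& operator=(const ScopeRollback&) = delete;
    };

    void beginDbWrite();    // hypothetical helpers standing in for the real
    void undoDbWrite();     // transaction / request-handling machinery
    void processRequest();
    void sendReply();

    void handleRequest() {
        beginDbWrite();
        ScopeRollback guard([] { undoDbWrite(); });

        processRequest();   // may throw: the guard then undoes the DB write
        sendReply();

        guard.commit();     // everything succeeded, keep the change
    }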
This is up to the application. There are several levels of exception-safety. The level you describe is hard to achieve for the whole application.
Certain pieces of code, however, can be made "failure transparent" by using techniques like RAII and by smartly ordering the sequence of actions. I could imagine a piece of code querying several URLs for data, for instance: when one URL 'throws', the rest of the URLs can still be handled. Or it can be retried...
If you have exception handling in every function you can resume at the next higher level, but it's rather complicated. In fact, I use exceptions mainly to detect errors as close to the source as possible, but I don't use them for resuming execution.
If, on the other hand, there are errors that are predictable, one can devise schemes to handle them; but for me exceptions are meant for exceptional situations, so I tend to exit gracefully instead, with a good hint in the log file about where it happened. JM2CW
It can't. Exceptions aren't resumable in C++. Nor in most modern languages; some of the first languages to support exceptions did support resumable exceptions, and found that it wasn't a good idea.
If you want to be able to resume from some specific point, you have to put your try/catch block there. If you just want to log and continue, don't throw the exception in the first place.
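Concretely, "put your try/catch block there" often means wrapping each unit of work, so that one failure is logged and the loop simply carries on (processItem and log are placeholders here):

    #include <exception>
    #include <string>
    #include <vector>

    void processItem(const std::string& item);   // placeholder: may throw
    void log(const std::string& message);        // placeholder logger

    void processAll(const std::vector<std::string>& items) {
        for (const auto& item : items) {
            try {
                processItem(item);               // a single item failing...
            } catch (const std::exception& e) {
                log("skipping '" + item + "': " + e.what());  // ...is logged...
            }                                     // ...and the loop resumes here
        }
    }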