Can an OCL post condition be inside an if-then statement?

I'm new to OCL and I have some doubts about how pre- and postconditions work.
Can a post condition be placed inside an if-then statement?
For example, is the following piece of code valid, or am I just mixing concepts?
Context [some context here]
if (
... some conditions...
) then (
result = 1
post: self.isComplete() -- for example
)
endif
Thank you very much for your help

I would rewrite it as:
context MyContext::someOperation() : Integer
post:
if <some condition>
then
result = 1
else
true -- an OCL if-expression must have an else branch
endif
If you need more conditions, you can nest them:
context MyContext::someOperation() : Integer
post:
if <some condition>
then
-- Another condition
if self.isComplete()
then
result = 1
else
result = 0
endif
else
result = 0
endif

You could use implies operator like below:
context K
inv: (self.count = 0) implies (self.status = 'nothing')

Simple answer. No. The Complete OCL post: declaration is an independent complement to an Operation.
A post-condition may fail if the actual model data is not compatible with the OCL programmer's expectations. So if you want a failure, then you get into the difficult area of how OCL fails reliably and usefully. See [1]. Alternatively if you just want a more complex control flow, rewrite as indicated in the earlier answer.
Since your internal post-condition could be reified as an external post-condition on a helper operation, there is clearly an opportunity for an OCL syntax enhancement to allow something along the lines of what you suggested.
Pre- and post-conditions are poorly supported by current tooling. USE has an option to execute them; Eclipse OCL doesn't, they are only parsed for syntax validity. Executing post-conditions at run-time can be very expensive and difficult, since the most pathological usage of @pre may require a copy of the whole system state.
The expensive run-time execution may be acceptable for a few test runs, but do you want to inflict it on your end users and what are those users to do if the post-condition fails?
Work described in [1] uses symbolic analysis to migrate the evaluation to compile-time ensuring that the conditions are respected and would not fail at run-time if the extra time was used.
However no symbolic analysis can accommodate all possible control flow so an oclAssert(expr, invariant) wrapper is needed to help the symbolic analysis by asserting that the local invariant is true for the returned expr. A similar wrapper could support your usage except that you want to provoke a run-time failure rather than promise that a hazard is not possible.
[1] http://www.eclipse.org/modeling/mdt/ocl/docs/publications/OCL2021Validity/OCLValidity.pdf

Related

Should I use assert to verify a third party function?

I use a third-party function which, based on the filter, returns a specified number of objects:
// void GetObjects(std::vector<T>&, Filter, int /*objectsNumber*/)
GetObjects(vec, filter, 1);
if (vec.empty())
{
    throw ObjectNotFound();
}
assert(vec.size() == 1);
Should I use assert like above? Is it a typical assert scenario?
How to handle errors in your program depends on you and on your program's nature.
In a production environment you usually try not to assert, because it means your application dies. In some setups the process that launched your program will notice that it died and restart it.
If it's just for learning/training, asserting with a proper message is a good way to find your problem easily and fast.
Bottom line - it's really up to you. There's no right or wrong here.
If you do want to assert, usually you do it only when some very basic invariant/condition is not met and your program just cannot know how to proceed from that point.
Well, assert() is a macro whose checks can be compiled out of production code, so it's a good custom to use assert to ensure that the foreign function complies with its interface contract.
Anyway, using mock instances and a unit-testing framework will give you better results, as it lets you verify the interface contract with finer exposure to foreign mistakes. I recommend both, and I'm happy to see those ideas circulating in these environments :)
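For example, here is a minimal sketch of that idea using googletest; Object, Filter, ObjectNotFound, and fetch_one are hypothetical stand-ins for the types in the question, and the fakes injected below play the role of the third-party GetObjects:
#include <cassert>
#include <stdexcept>
#include <vector>
#include <gtest/gtest.h>

// Hypothetical stand-ins for the types in the question.
struct Object {};
struct Filter {};
struct ObjectNotFound : std::runtime_error {
    ObjectNotFound() : std::runtime_error("object not found") {}
};

// The wrapper under test, parameterized on the getter so tests can
// inject a fake in place of the third-party GetObjects.
template <typename Getter>
Object fetch_one(Getter get_objects, const Filter& filter)
{
    std::vector<Object> vec;
    get_objects(vec, filter, 1);
    if (vec.empty())
        throw ObjectNotFound();  // expected runtime condition
    assert(vec.size() == 1);     // contract violation = bug in the library
    return vec.front();
}

TEST(FetchOne, ThrowsWhenNothingMatches)
{
    auto fake = [](std::vector<Object>&, const Filter&, int) {};
    EXPECT_THROW(fetch_one(fake, Filter{}), ObjectNotFound);
}

TEST(FetchOne, ReturnsTheSingleMatch)
{
    auto fake = [](std::vector<Object>& v, const Filter&, int n) {
        v.assign(static_cast<std::size_t>(n), Object{});
    };
    EXPECT_NO_THROW(fetch_one(fake, Filter{}));
}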

Assertions in Fortran

Does Fortran have a standard function/keyword equivalent for C assert?
I could not find assert mentioned in the Fortran 2003 standard I have. I have found a few ways to use the pre-processor, but in this answer it is suggested to write your own assertions. Is it possible to create such a user function/subroutine without using the pre-processor?
I expect that these assertions are disabled for release builds.
Conditional compilation has never really caught on in Fortran and there isn't a standard pre-processor. If getting your pre-processor to switch in and out a dummy assert routine isn't something you want to tackle you could ...
Define a global parameter such as:
logical, parameter :: debugging = .true.
If you are of a nervous disposition you could put this into a module and use-associate it into every scope where it is needed; to my mind using a global parameter seems a reasonable approach here.
Then write guarded tests such as
if (debugging) call assert(...)
Once you want to release the code, set the value of debugging to .false. I expect (though I haven't tested this, so you may care to) that any current Fortran compiler can remove dead code when it encounters an expression equivalent to
if (.false.) call assert(...)
and that your released code will pay no penalty for a dummy call to an assert routine.
Another approach might be to create a module, let's call it assertions, along these lines:
module assertions
contains
    subroutine assert_prd(condition, message)
        logical, intent(in) :: condition
        character(*), intent(in) :: message
        ! release version: deliberately does nothing
    end subroutine
    subroutine assert_dbg(condition, message)
        logical, intent(in) :: condition
        character(*), intent(in) :: message
        ! debug version: do some assertion checking and error raising
        if (.not. condition) then
            write (*, *) 'Assertion failed: ', message
            stop 1
        end if
    end subroutine
end module assertions
You could then rename the subroutines when you use-associate them, such as:
use, non_intrinsic :: assertions, assert=>assert_dbg
and change that to assert=>assert_prd when you want to turn off assertion checking. I suspect though that compilers may not eliminate the call to the empty subroutine entirely and that your production code might pay a small penalty for every assertion it encounters.
Beyond this, see the paper by Arjen Markus that @AlexanderVogt has referred you to.
To my knowledge, there is no such statement or function/subroutine in standard Fortran. But, as you have said, you can use your own subroutines/functions and/or OOP in Fortran to achieve this goal. See Arjen Markus' excellent paper on this topic.
For fixed-form sources, one can use -fd-lines-as-comments for release builds and -fd-lines-as-code for debug builds (Intel Fortran: -d-lines), and use a custom assert:
D     if (assert_condition) then
D        write(*,*) 'assert message'
D        call exit(1)
D     endif

Assert() - what is it good for ?

I don't understand the purpose of assert().
My lecturer says that the purpose of assert is to find bugs.
For example:
double divide(int a, int b)
{
    assert(0 != b);
    return a/b;
}
Is the above assert justified? I think that the answer is yes, because if my program isn't supposed to work with 0 (the number zero), but somehow a zero does find its way into the b variable, then something is wrong with the code.
Am I correct?
Can you show me some examples of a justified assert()?
Regards
assert is used to validate things that should always be true if the program is correct. Whether assert is justified in your example depends on the specification of divide: if b != 0 is a precondition, then the assert is usually the preferred way of verifying it: if someone calls the function without fulfilling the preconditions, it is a programming error, and you should terminate the program with extreme prejudice, doing as little additional work as possible. (Usually. There are applications where this is not the case, and where it is better to throw an exception and stumble along, hoping for the best.)
If, however, the specification of divide defines some behavior when b == 0 (e.g. return +/-Inf), then you should implement this instead of using assert.
Also, it's possible to turn the assert off if it turns out that it takes too much runtime. Generally, however, this should only be done in critical sections of code, and only if the profiler shows that you really need it.
FWIW: not related to your question, but the code you've posted will return 0.0 for divide(1, 3), because the integer division happens before the conversion to double. Somehow, I don't think that this is what you wanted.
Another aspect of assertions:
They are also a kind of documentation.
Instead of comments like
// ptr is never NULL
// vec has now n elements
better write
assert(ptr != 0);
assert(vec.size() == n);
Comments may become outdated over time and will cause confusion.
But assertions are verified all the time.
Comments can be ignored. Assertions cannot.
You're pretty much spot-on in your assessment of assert, except that you typically use assert during the debug phase: you don't want an assert to trigger in production code. Throwing exceptions (and properly handling them) is the proper method for run-time error management in production-level code.
In general though, assert is used for testing an assumption. If an assumed condition is not met in the code during the debugging phase, especially when you are getting values that are out-of-bound for the desired input, you want your program to bail out at the point that the error is encountered so you can fix it. For instance, suppose you were calling a function that returned a pointer, and that function should never return a NULL pointer value. In other words returning a NULL value is not just some indicator of an error-condition, but it means that the assumption of how you imagine your code works is wrong. That is a good place to use assert ... you assume your program will work one way, and if it doesn't then you don't want that error propagating to cause some crazy hard-to-find bug somewhere else ... you want to nix it right when it occurs.
Finally, you can combine built in macros with assert such as __LINE__ and __FILE__ that will give you the file and line number in the code where the assert took place to help you quickly identify the problem area.
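For instance, a custom assert-style macro along these lines is a minimal sketch of that idea (MY_ASSERT is a hypothetical name; note that the standard assert() already reports the file and line on its own):
#include <cstdio>
#include <cstdlib>

// Hypothetical custom assert: print the failing expression with its
// location, then abort.
#define MY_ASSERT(expr)                                              \
    do {                                                             \
        if (!(expr)) {                                               \
            std::fprintf(stderr, "Assertion '%s' failed at %s:%d\n", \
                         #expr, __FILE__, __LINE__);                 \
            std::abort();                                            \
        }                                                            \
    } while (0)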
The purpose of an assert is to signal unexpected behavior during debugging (as it's only available in a debug build). Your example is a justified use of assert. The next line would probably crash, but with the assert there you have the option to break execution right before the line is hit and do some debugging.
This is usually done in parallel with exceptions - you assert to signal that something is wrong, and throw an exception to treat the case gracefully (even exiting the program):
double divide(int a, int b)
{
    assert(0 != b);
    if (b)
        return a/b;
    throw division_by_0_exception();
}
There are cases where you want to continue execution, but still want to signal that something went wrong.
Assert is used to test assumptions about your code in a debug environment. Asserts generally have no effect on your final build.
Whether or not it is a valid test is another matter entirely. We can't answer that without intimate knowledge of your application.
Asserts should never fail. If you see any possibility that the assertion could fail, then you need an if statement instead to handle those cases where the condition is not true. Assertions are only for conditions that you believe will never fail.
Asserts are used to check invariants during code execution; these are conditions that the programmer assumes always hold, and if they differ from the assumptions then there is a bug in the code.
Asserts can also be used for checking preconditions and postconditions: the former is checked before some code block and verifies that the provided data/state is correct; the latter checks whether the outcome of some calculation is correct. This helps narrow down where problems/bugs might be located:
assert( /*preconditions*/ );
/*here some algorithm - and maybe more asserts checking invariants*/
assert( /*postconditions*/ );
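As a minimal concrete sketch of this pattern (the function and its contract are made up for illustration):
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical example: sort a vector and drop duplicates, with the
// contract checked by asserts at entry and exit.
void sort_unique(std::vector<int>& v)
{
    assert(!v.empty());                          // precondition
    std::sort(v.begin(), v.end());
    v.erase(std::unique(v.begin(), v.end()), v.end());
    assert(std::is_sorted(v.begin(), v.end()));  // postcondition
}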
Some examples of justified asserts:
Checking a function's return value, for example if you call some external API function and you know that it returns some error value only in case of a programming error:
the WinAPI Thread32First function requires that the provided LPTHREADENTRY32 structure has a properly assigned dwSize field, and fails otherwise. This failure should be caught by an assert.
If a function accepts a pointer to some data, add an assert at the start of the function to verify that it is non-null. This makes sense if the function cannot work on a null pointer.
If you acquire a lock on a mutex with a timeout, and the timeout expires, you can use an assert to indicate a possible race condition / deadlock.
... and many many more
Nice trick with asserts is to add some info inside, ex.:
assert(false && "Reason for this assert");
"Reason for this assert" will show up to you in a message box
You might also want to know that we also have static asserts that indicate errors during compilation.
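A minimal sketch with C++11's static_assert (the names and conditions are illustrative):
#include <climits>
#include <type_traits>

// Compile-time checks: a violation stops the build instead of the
// running program.
static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");

template <typename T>
T half(T value)
{
    static_assert(std::is_integral<T>::value,
                  "half() requires an integral type");
    return value / 2;
}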

Should it be "Arrange-Assert-Act-Assert"?

Regarding the classic test pattern of Arrange-Act-Assert, I frequently find myself adding a counter-assertion that precedes Act. This way I know that the passing assertion is really passing as the result of the action.
I think of it as analogous to the red in red-green-refactor, where only if I've seen the red bar in the course of my testing do I know that the green bar means I've written code that makes a difference. If I write a passing test, then any code will satisfy it; similarly, with respect to Arrange-Assert-Act-Assert, if my first assertion fails, I know that any Act would have passed the final Assert - so that it wasn't actually verifying anything about the Act.
Do your tests follow this pattern? Why or why not?
Update Clarification: the initial assertion is essentially the opposite of the final assertion. It's not an assertion that Arrange worked; it's an assertion that Act hasn't yet worked.
This is not the most common thing to do, but still common enough to have its own name. This technique is called Guard Assertion. You can find a detailed description of it on page 490 in the excellent book xUnit Test Patterns by Gerard Meszaros (highly recommended).
Normally, I don't use this pattern myself, since I find it more correct to write a specific test that validates whatever precondition I feel the need to ensure. Such a test should always fail if the precondition fails, and this means that I don't need it embedded in all the other tests. This gives a better isolation of concerns, since one test case only verifies one thing.
There may be many preconditions that need to be satisfied for a given test case, so you may need more than one Guard Assertion. Instead of repeating those in all tests, having one (and only one) test for each precondition keeps your test code more maintainable, since you will have less repetition that way.
It could also be specified as Arrange-Assume-Act-Assert.
There is a technical handle for this in NUnit, as in the example here:
http://nunit.org/index.php?p=theory&r=2.5.7
Here's an example.
public void testEncompass() throws Exception {
    Range range = new Range(0, 5);
    assertFalse(range.includes(7));
    range.encompass(7);
    assertTrue(range.includes(7));
}
It could be that I wrote Range.includes() to simply return true. I didn't, but I can imagine that I might have. Or I could have written it wrong in any number of other ways. I would hope and expect that with TDD I actually got it right - that includes() just works - but maybe I didn't. So the first assertion is a sanity check, to ensure that the second assertion is really meaningful.
Read by itself, assertTrue(range.includes(7)); is saying: "assert that the modified range includes 7". Read in the context of the first assertion, it's saying: "assert that invoking encompass() causes it to include 7". And since encompass is the unit we're testing, I think that's of some (small) value.
I'm accepting my own answer; a lot of the others misconstrued my question to be about testing the setup. I think this is slightly different.
An Arrange-Assert-Act-Assert test can always be refactored into two tests:
1. Arrange-Assert
and
2. Arrange-Act-Assert
The first test will only assert on that which was set up in the Arrange phase, and the second test will only assert for that which happened in the Act phase.
This has the benefit of giving more precise feedback on whether it's the Arrange or the Act phase that failed, while in the original Arrange-Assert-Act-Assert these are conflated and you would have to dig deeper and examine exactly what assertion failed and why it failed in order to know if it was the Arrange or Act that failed.
It also satisfies the intention of unit testing better, as you are separating your test into smaller independent units.
I am now doing this. A-A-A-A of a different kind
Arrange - setup
Act - what is being tested
Assemble - what is optionally needed to perform the assert
Assert - the actual assertions
Example of an update test:
Arrange:
New object as NewObject
Set properties of NewObject
Save the NewObject
Read the object as ReadObject
Act:
Change the ReadObject
Save the ReadObject
Assemble:
Read the object as ReadUpdated
Assert:
Compare ReadUpdated with ReadObject properties
The reason the ACT does not contain the reading of ReadUpdated is that the read is not part of the act; the act is only changing and saving. So rather than ARRANGE ReadUpdated for the assertion, I call it ASSEMBLE for the assertion. This is to prevent confusing the ARRANGE section.
ASSERT should only contain assertions. That leaves ASSEMBLE between ACT and ASSERT, which sets up the assert.
Lastly, if you are failing in the Arrange, your tests are not correct, because you should have other tests to prevent/find these trivial bugs. For the scenario I present, there should already be other tests which test READ and CREATE. If you create a "Guard Assertion", you may be breaking DRY and creating maintenance work.
I don't use that pattern, because I think doing something like:
Arrange
Assert-Not
Act
Assert
May be pointless, because supposedly you know your Arrange part works correctly, which means that whatever is in the Arrange part must be tested as well or be simple enough to not need tests.
Using your answer's example:
public void testEncompass() throws Exception {
    Range range = new Range(0, 5);
    assertFalse(range.includes(7)); // <-- Pointless and against DRY if there
                                    //     are unit tests for Range(int, int)
    range.encompass(7);
    assertTrue(range.includes(7));
}
Tossing in a "sanity check" assertion to verify state before you perform the action you're testing is an old technique. I usually write them as test scaffolding to prove to myself that the test does what I expect, and remove them later to avoid cluttering tests with test scaffolding. Sometimes, leaving the scaffolding in helps the test serve as narrative.
I've already read about this technique - possibly from you btw - but I do not use it; mostly because I'm used to the triple A form for my unit tests.
Now I'm getting curious, and have some questions: how do you write your test? Do you cause this assertion to fail, following a red-green-red-green-refactor cycle, or do you add it afterwards?
Do you ever see it fail, perhaps after you refactor the code? What does that tell you? Perhaps you could share an example where it helped. Thanks.
I have done this before when investigating a test that failed.
After considerable head scratching, I determined that the cause was that the methods called during "Arrange" were not working correctly. The test failure was misleading. I added an Assert after the Arrange. This made the test fail in a place which highlighted the actual problem.
I think there is also a code smell here if the Arrange part of the test is too long and complicated.
In general, I like "Arrange, Act, Assert" very much and use it as my personal standard. The one thing it fails to remind me to do, however, is to dis-arrange what I have arranged when the assertions are done. In most cases this doesn't cause much annoyance, as most things auto-magically go away via garbage collection, etc. If you have established connections to external resources, however, you will probably want to close those connections when you're done with your assertions, or you may have a server or expensive resource out there somewhere holding on to connections or vital resources that it should be able to give away to someone else. This is particularly important if you're one of those developers who does not use TearDown or TestFixtureTearDown to clean up after one or more tests. Of course, "Arrange, Act, Assert" is not responsible for my failure to close what I open; I only mention this "gotcha" because I have not yet found a good "A-word" synonym for "dispose" to recommend! Any suggestions?
Have a look at Wikipedia's entry on Design by Contract. The Arrange-Act-Assert holy trinity is an attempt to encode some of the same concepts and is about proving program correctness. From the article:
The notion of a contract extends down to the method/procedure level; the
contract for each method will normally contain the following pieces of
information:
Acceptable and unacceptable input values or types, and their meanings
Return values or types, and their meanings
Error and exception condition values or types that can occur, and their meanings
Side effects
Preconditions
Postconditions
Invariants
(more rarely) Performance guarantees, e.g. for time or space used
There is a tradeoff between the amount of effort spent on setting this up and the value it adds. A-A-A is a useful reminder for the minimum steps required but shouldn't discourage anyone from creating additional steps.
Depends on your testing environment/language, but usually if something in the Arrange part fails, an exception is thrown and the test fails displaying it, instead of starting the Act part. So no, I usually don't use a second Assert part.
Also, in the case that your Arrange part is quite complex and doesn't always throw an exception, you might consider wrapping it in some method and writing a separate test for it, so you can be sure it won't fail (without throwing an exception).
If you really want to test everything in the example, try more tests... like:
public void testIncludes7() throws Exception {
    Range range = new Range(0, 5);
    assertFalse(range.includes(7));
}

public void testIncludes5() throws Exception {
    Range range = new Range(0, 5);
    assertTrue(range.includes(5));
}

public void testIncludes0() throws Exception {
    Range range = new Range(0, 5);
    assertTrue(range.includes(0));
}

public void testEncompassInc7() throws Exception {
    Range range = new Range(0, 5);
    range.encompass(7);
    assertTrue(range.includes(7));
}

public void testEncompassInc5() throws Exception {
    Range range = new Range(0, 5);
    range.encompass(7);
    assertTrue(range.includes(5));
}

public void testEncompassInc0() throws Exception {
    Range range = new Range(0, 5);
    range.encompass(7);
    assertTrue(range.includes(0));
}
Because otherwise you are missing so many possibilities for error... e.g. after encompass, the range only includes 7, etc.
There are also tests for length of range (to ensure it didn't also encompass a random value), and another set of tests entirely for trying to encompass 5 in the range... what would we expect - an exception in encompass, or the range to be unaltered?
Anyway, the point is if there are any assumptions in the act that you want to test, put them in their own test, yes?
I use:
1. Setup
2. Act
3. Assert
4. Teardown
Because a clean setup is very important.

What are assertions? And why would you use them?

How are assertions done in C++? Example code is appreciated.
Asserts are a way of explicitly checking the assumptions that your code makes, which helps you track down lots of bugs by narrowing down what the possible problems could be. They are typically only evaluated in a special "debug" build of your application, so they won't slow down the final release version.
Let's say you wrote a function that took a pointer as an argument. There's a good chance that your code will assume that the pointer is non-NULL, so why not explicitly check that with an assertion? Here's how:
#include <assert.h>

void function(int* pointer_arg)
{
    assert(pointer_arg != NULL);
    ...
}
An important thing to note is that the expressions you assert must never have side effects, since they won't be present in the release build. So never do something like this:
assert(a++ == 5);
Some people also like to add little messages to their assertions to help give them meaning. Since a string literal always evaluates to true, you could write this:
assert((a == 5) && "a has the wrong value!!");
Assertions are boolean expressions which should typically always be true.
They are used to ensure that what you expected is also what happens.
#include <cassert>

void some_function(int age)
{
    assert(age > 0);
}
You wrote the function to deal with ages, and you also 'know' for sure you're always passing sensible arguments; then you use an assert. It's like saying "I know this can never go wrong, but if it does, I want to know", because, well, everyone makes mistakes.
So it's not for checking sensible user input; if there are scenarios where something can go wrong, don't use an assert. Do real checks and deal with the errors.
Asserts are typically only for debug builds, so don't put code with side effects in asserts.
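A minimal sketch of that split (the function names and the valid age range are made up for illustration):
#include <cassert>
#include <stdexcept>

// Real check: user input can legitimately be wrong, so validate it and
// report the error instead of asserting.
int parse_age(int raw_input)
{
    if (raw_input <= 0 || raw_input > 150)
        throw std::invalid_argument("age out of range");
    return raw_input;
}

// Assert: by the time we get here the value must already be validated,
// so a non-positive age means a bug in the caller, not bad user input.
void record_age(int age)
{
    assert(age > 0);
    // ...
}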
Assertions are used to verify design assumptions, usually in terms of input parameters and return results. For example:
// Given customer and product details for a sale, generate an invoice
Invoice ProcessOrder(Customer Cust, Product Prod)
{
    assert(IsValid(Cust));
    assert(IsValid(Prod));
    // ... build RetInvoice ...
    assert(IsValid(RetInvoice));
    return RetInvoice;
}
The assert statements aren't required for the code to run, but they check the validity of the input and output. If the input is invalid, there is a bug in the calling function. If the input is valid and output is invalid, there is a bug in this code. See design by contract for more details of this use of asserts.
Edit: As pointed out in other posts, the default implementation of assert is not included in the release run-time. A common practice that many use, including myself, is to replace it with a version that is included in the release build but is only called in a diagnostics mode. This enables proper regression testing on release builds with full assertion checking. My version is as follows:
extern int InDiagnostics;
extern void _my_assert(const char *expr, const char *file, unsigned line);

#define myassert(exp)                               \
    do {                                            \
        if (InDiagnostics && !(exp))                \
            _my_assert(#exp, __FILE__, __LINE__);   \
    } while (0)
There is a small runtime overhead in this technique, but it makes tracking any bugs that have made it into the field much easier.
Use assertions to check for "can't happen" situations.
Typical usage: check against invalid/impossible arguments at the top of a function.
Seldom seen, but still useful: loop invariants and postconditions.
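Loop invariants in particular are easy to check with a plain assert. A minimal sketch (the function is made up for illustration, and the invariant check is itself O(n), so it is strictly a debug aid):
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical example: find the maximum, with the loop invariant
// ("best is the largest element seen so far") checked on every pass.
int max_of(const std::vector<int>& v)
{
    assert(!v.empty());  // precondition: a "can't happen" for valid callers
    int best = v[0];
    for (std::size_t i = 1; i < v.size(); ++i) {
        // invariant: best is the maximum of v[0..i-1]
        assert(best == *std::max_element(v.begin(), v.begin() + i));
        if (v[i] > best)
            best = v[i];
    }
    assert(best == *std::max_element(v.begin(), v.end()));  // postcondition
    return best;
}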
Assertions are statements allowing you to test any assumptions you might have in your program. This is especially useful to document your program logic (preconditions and postconditions). Assertions that fail usually throw runtime errors, and are signs that something is VERY wrong with your program - your assertion failed because something you assumed to be true was not. The usual reasons are: there is a flaw in your function's logic, or the caller of your function passed you bad data.
An assertion is something you add to your program that causes the program to stop immediately if a condition is met, and display an error message. You generally use them for things which you believe can never happen in your code.
This doesn't address the assert facility which has come down to us from early C days, but you should also be aware of Boost.StaticAssert functionality, in the event that your projects can use Boost.
The standard C/C++ assert works at runtime. The Boost.StaticAssert facility enables you to make some classes of assertions at compile time, catching logic errors and the like even earlier.
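A minimal sketch (the condition is illustrative):
#include <boost/static_assert.hpp>

// The build fails if the condition is false; nothing happens at runtime.
BOOST_STATIC_ASSERT(sizeof(int) >= 4);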
Here is a definition of what an assertion is, and here is some sample code. In a nutshell, an assertion is a way for a developer to test his (or her) assumptions about the state of the code at any given point. For example, if you were writing the following code:
mypointer->myfunct();
you probably want to assert that mypointer is not NULL, because that's your assumption: that mypointer will never be NULL before the call.
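As a sketch, the check sits immediately before the call (MyType and use() are hypothetical names wrapping the snippet above):
#include <assert.h>
#include <stddef.h>

// Hypothetical type matching the snippet above.
struct MyType { void myfunct() {} };

void use(MyType* mypointer)
{
    assert(mypointer != NULL); // the assumption, made explicit
    mypointer->myfunct();      // safe to dereference after the assert
}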