In Z3, how can we write a program to get the result of an evaluation? By default, model.eval(expression) returns another expression holding the evaluation result. How can I assign the result to a variable of a specific type? Below is what I want to do in my program.
int a = model.eval(x + 1); // compiler error
Sometimes models are not complete. For instance, when nothing depends on the value of x, Z3 may not assign any value to it at all, i.e., you're free to choose whatever value suits you. The eval function has a second argument which, when set to true, enables model completion, i.e., eval will substitute those don't-cares with some legal value (often 0).
Z3 integers are mathematical integers of arbitrary size, not fixed-width machine integers like a 32-bit C/C++ int, so the conversion is not performed automatically. If you know that in your application the value will always fit, and that eval will always return a numeral, then you can use Z3_get_numeral_int to perform that conversion.
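For example, here is a minimal sketch using the C++ API (z3++.h), where get_numeral_int() is the C++ wrapper over Z3_get_numeral_int and throws if the value doesn't fit in a machine int:

#include <z3++.h>
#include <iostream>

int main() {
    z3::context ctx;
    z3::expr x = ctx.int_const("x");
    z3::solver s(ctx);
    s.add(x > 1);

    if (s.check() == z3::sat) {
        z3::model m = s.get_model();
        z3::expr v = m.eval(x + 1, true);  // true enables model completion
        if (v.is_numeral()) {
            int a = v.get_numeral_int();   // throws if the value doesn't fit
            std::cout << "x + 1 = " << a << "\n";
        }
    }
    return 0;
}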
In C++11 is there a way to implement a sqrt function that works for both positive and negative double input values? I would like the return type to be std::complex<double> if the input is negative and a double if it is positive. I realize the simple solution is to just always return std::complex<double> but this is not what I am looking for.
Below I have an example of my first attempt; however, this does not compile due to the use of the parameter a in the return type:
inline decltype((a > 0)?(double):(std::complex<double>)) sqrt(const double& a)
{
    if(a > 0)
    {
        return std::sqrt(a);
    }
    else
    {
        return ((std::complex<double>(0.0,1.0))*std::sqrt(-a));
    }
}
No, this is not possible.
Types are compile-time constructs, and your function's type is fixed at compile-time.
If the argument were provided as a constexpr constant expression, you might have used templates to solve your problem. But with dynamic inputs, this is simply not possible.
You could return a boost::variant<double, std::complex<double>> (since the decision has to be made at runtime), but that seems like adding a lot of complexity to your code for no real gain.
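For illustration, a minimal sketch of that approach with boost::variant (std::variant is the C++17 equivalent); note the caller still has to branch on the runtime type:

#include <boost/variant.hpp>
#include <cmath>
#include <complex>

boost::variant<double, std::complex<double>> my_sqrt(double a)
{
    if (a >= 0.0)
        return std::sqrt(a);                          // double alternative
    return std::complex<double>(0.0, std::sqrt(-a));  // complex alternative
}

// At the call site, e.g.:
//   auto r = my_sqrt(-4.0);
//   if (double *d = boost::get<double>(&r)) { /* real result */ }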
It's not possible, and more importantly, it probably wouldn't be useful even if you could do it.
Since the function can return a complex number, the calling code needs to be prepared to process a complex number. If your code needs to be able to process complex numbers in any case, it almost certainly makes more sense to always return a complex number, and if the input was positive it'll be of the form X + 0i (i.e., the imaginary part will equal 0).
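A minimal sketch of that always-complex approach; std::sqrt already has an overload for std::complex, which yields the principal square root for negative inputs:

#include <cmath>
#include <complex>

std::complex<double> sqrt_c(double a)
{
    return std::sqrt(std::complex<double>(a, 0.0));
}
// sqrt_c(4.0)  yields (2,0) - the imaginary part is 0 for non-negative input
// sqrt_c(-4.0) yields (0,2)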
In theory, you could do this in a language that supported dynamic typing -- but even with most of them, sqrt is basically statically typed. If the input you provide is a real number, the result will also be a real number, and if the input was negative the result will typically be a NaN or something on that order. If (and only if) you provide the input as a complex number will you receive a complex number as the result.
I'm trying to replace all instances of an address with a constant.
I'm getting and testing the operands of the store with the following (i is an instruction):
// already know it's a store instruction at this point
llvm::Value *addy = i->getOperand(0);
if (llvm::ConstantInt *c = llvm::dyn_cast<llvm::ConstantInt>(addy)) {
    // replace all uses of the address with the constant;
    // operand(1) is the address the constant would be stored at
    i->getOperand(1)->replaceAllUsesWith(c);
}
I'd think this would work, but I'm getting the assertion failure
New->getType() == getType() && "replaceAllUses of value with new value of different type!"
and I'm not sure why. My understanding of replaceAllUsesWith is that it would replace all uses of the address (i->getOperand(1)) with the constant?
The error message is pretty straightforward: the type of the new value is not identical to the type of the old value that you are replacing.
LLVM IR is strongly typed, and as you can see in the language reference, every instruction has a specific type it expects as each operand. For example, store requires that the address's type will always be a pointer to the type of the value being stored.
As a result, whenever you replace the usage of a value, you must first ensure that both values have the same type - replaceAllUsesWith actually has an assert to verify it, as you can see, and you failed it. It's also simple to see why: operand 1 of a store instruction is always of some pointer type, while a ConstantInt always has some integer type, so the two can never match.
What exactly are you trying to achieve? Perhaps you want to replace each load from that store's address with a usage of the constant? In that case, you'll have to find all the loads that use that address and, for each of the loads (not the addresses), call replaceAllUsesWith with the constant, as sketched below. There are standard LLVM passes that can do those things for you, by the way - check out the pass list. I'm guessing mem2reg followed by some constant propagation pass will take care of this.
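A rough sketch of that manual replacement (LLVM C++ API; iterator and method names vary somewhat across LLVM versions, so treat this as an outline rather than drop-in code):

// addr is the store's pointer operand, c is the ConstantInt being stored.
llvm::Value *addr = i->getOperand(1);
for (auto it = addr->user_begin(), e = addr->user_end(); it != e; ) {
    llvm::User *u = *it++;                      // advance first: we may erase u
    if (auto *load = llvm::dyn_cast<llvm::LoadInst>(u)) {
        load->replaceAllUsesWith(c);            // types match: a load from i32*
        load->eraseFromParent();                // yields i32, like the constant
    }
}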
My program requires several floats to be set to a default number when the program launches. As the program runs, these variables will be set to their true values, which can be any real number. My program will be constantly checking these variables to see whether their value has changed from the default.
For example, let's say I have variables A, B, C. All of them are set to a default value at the start (say -1). Then, as the program progresses, A and B are set to 3 and 2 respectively. Since C still holds the default value, the program can conclude that C hasn't been assigned a non-default value yet.
The problem arises when trying to find a unique default value. Since the variables can be set to anything, if a true value happens to be identical to the default, my program can't tell whether the variable still holds the default or was genuinely assigned that value.
I considered NULL as a default value, but NULL is equal to 0 in C++, leading to the same problem!
I could create a whole object consisting of a bool and a float as members, where the bool indicates whether the float has been assigned its own value yet. This however seems like overkill. Is there a default value I can set my floats to such that the value isn't identical to any other possible value? (Examples would be infinity or i.)
I am asking for C/C++ solutions.
I could create a whole object consisting of a bool and an integer as members, where the bool indicates whether the number has been assigned its own value yet. This however seems like overkill.
What you described is called a "nullable type" in .NET. A C++ implementation is boost::optional:
#include <boost/optional.hpp>

boost::optional<int> A;  // starts out empty - no magic default value needed
if (A)                   // true only after a value has been assigned
    do_something(*A);    // *A accesses the contained value
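A small sketch of the questioner's A/B/C scenario with this approach (std::optional from C++17 behaves the same way):

#include <boost/optional.hpp>
#include <iostream>

int main() {
    boost::optional<float> A, B, C;  // all three start out unassigned
    A = 3.0f;
    B = 2.0f;
    if (!C)
        std::cout << "C has not been assigned yet\n";
    return 0;
}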
On a two's complement machine there's an integer value that is less useful than the others: INT_MIN. You can't make a valid positive value by negating it. Since it's the least useful value in the integer range, it makes a good choice for a marker value. It also has an easily recognizable hex value, 0x80000000.
There is no bit pattern you can assign to an int that isn't an actual int. You need to keep separate flags if you really have no integer values that are out of bounds.
If the domain of valid int values is unlimited, the only choice is a management bit indicating whether it is assigned or not.
But are you sure the full range, INT_MAX included, is actually needed?
There is no way to guarantee that a value you assign to an int is not going to be equal to some other legitimate int. The only way to ensure what you want is to keep a separate bool that tracks whether the value has been changed.
No, you will have to create your own data type which contains the information about whether it has been assigned or not.
If, as you say, no integer value is off limits, then you cannot reserve one as a default "uninitialised" value. Just use a struct with an int and a bool, as you suggest in your question.
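Such a struct can be as small as this (a minimal sketch; the names are illustrative):

struct TrackedInt {
    int  value;     // the payload; every int value remains usable
    bool assigned;  // false until the first real assignment
};

TrackedInt c = { 0, false };
// assignment:  c.value = 42; c.assigned = true;
// checking:    if (!c.assigned) { /* still at the default */ }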
I could create a whole object consisting of a bool and an integer as members, where the bool indicates whether the number has been assigned its own value yet. This however seems like overkill.
My first guess would be to do exactly that: use a flag to mark each variable. But this is not your only choice, of course.
You can use pointers (which can be NULL) and allocate the memory dynamically. Not very convenient.
You can pick a custom value which is almost never used and define it to be the default. Of course, from time to time you will need to legitimately assign this value to your floats, but that case won't happen often and you just need to keep track of those variables. Given how rarely such a case occurs, a simple linked list should do.
I am reading Modern C++ Design. It mentions the sizeof operator with the following description, explained from a generic-programming point of view.
There is a surprising amount of power in sizeof: You can apply sizeof to any expression, no matter how complex, and sizeof returns its size without actually evaluating that expression at runtime. This means that sizeof is aware of overloading, template instantiation, conversion rules—everything that can take part in a C++ expression. In fact, sizeof conceals a complete facility for deducing the type of an expression; eventually, sizeof throws away the expression and returns only the size of its result.
My question is: what does the author mean by "sizeof returns its size without actually evaluating the expression at runtime"? The last line also says that sizeof throws away the expression. I'd appreciate help understanding these statements, ideally with an example.
Thanks
what does the author mean by "sizeof returns its size without actually evaluating the expression at runtime"
It means that sizeof(1/0) will yield sizeof(int), even though 1/0 would normally abort the program, because division by zero is a runtime error. Also, for any p declared as T* p, sizeof(*p) will yield sizeof(T) no matter what value is stored in p, even if p is dangling or not initialized at all.
sizeof is evaluated at compile time: the compiler computes the type of the expression that follows the sizeof operator. This is done once and for all by the compiler, hence the sentence “without actually evaluating that expression at runtime”.
The compiler computes the type, then it is able to deduce the size of the expression from the type, and then, still at compile time, the whole sizeof expression is replaced by the calculated size. So the expression itself does not make it into the executable code. That's what the sentence “sizeof throws away the expression and returns only the size of its result” means.
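A small runnable illustration of both points (the compiler may warn about the intentional 1/0, but the operand is never evaluated):

#include <cstdio>

int main() {
    int *p = nullptr;
    std::printf("%zu\n", sizeof(1 / 0));  // prints sizeof(int); no division occurs
    std::printf("%zu\n", sizeof(*p));     // prints sizeof(int); p is never dereferenced
    return 0;
}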
The following gives you the sizeof of the type that i++ has, which is int (an int is usually 4 bytes on common platforms, so this will likely give you the value 4). However, since the expression is not evaluated, no runtime action is performed for it.
int i = 0;
sizeof(i++); // yields sizeof(int); i++ is not evaluated and i remains 0
Evaluating an expression basically means executing its side effects (e.g. incrementing a variable) or reading values from memory or registers at runtime. So in some sense sizeof "throws away" its operand, since it does not perform the runtime operation the operand specifies (the value of i will still be zero).
The compiler needs to calculate the sizes of types/structs/classes for various operations. The sizeof operator makes these sizes available to your program as a constant. So for example, if you do sizeof(int) the compiler knows how big an int is (in bytes) and will insert that value instead. The same applies for more complex things like sizeof(myVariable) with myVariable being of type MyClass: the compiler does know how much space MyClass takes up and thus can insert that value.
The point is that this evaluation takes place at compile-time: the result is a number. During runtime, the evaluation does not need to be done again.
It means that int j = sizeof(int); would be compiled to int j = 4; (on a platform with 4-byte ints).
I have read the compiled assembly: there is no actual calculation during execution!
Is there a reason why zero is used as a "default" function return value? I noticed that several functions from the stdlib and almost everywhere else, when not returning a proper number (e.g., pow(), strcpy()) or an error (negative numbers), simply return zero.
I just became curious after seeing several tests performed with negated logic. Very confusing.
Why not return 1, or 0xff, or any positive number for that matter?
The rationale is that you want to distinguish the set of all the possible (negative) return values corresponding to different errors from the only situation in which all went OK. The simplest, most concise and most C-ish way to pursue such distinction is a logical test, and since in C all integers are "true" except for zero, you want to return zero to mean "the only situation", i.e. you want zero as the "good" value.
The same line of reasoning applies to the return values of Unix programs, but indeed in the tests within Unix shell scripts the logic is inverted: a return value of 0 means "true" (for example, look at the return value of /bin/true).
Originally, C did not have "void". If a function didn't return anything, you just left the return type in the declaration blank. But that meant that it returned an int.
So everything returned something, even if it didn't mean anything. And if you didn't specifically provide a return value, whatever value happened to be in the register the compiler used for return values became the function's return value.
/* Perfectly good K&R C code. */
NoReturn()
{
    /* do stuff */
    return;
}
int unknownValue = NoReturn();
People took to clearing that to zero to avoid problems.
In shell scripting, 0 represents true, while another number typically represents an error code. Returning 0 from a main application means everything went successfully. The same logic may have been applied to the library code.
It could also just be that they return nothing, which is interpreted as 0. (Essentially the same concept.)
Another (minor) reason has to do with machine-level speed and code size.
In most processors, arithmetic and logical operations that produce a zero result automatically set the zero flag, and jumping on that flag is a very cheap operation.
In other words, if the last machine operation produced a zero, all we need is a jump-on-zero or a jump-not-zero.
On the other hand, if we test things against some other value, then we have to move that value into the register, run a compare operation that essentially subtracts the two numbers, and equality results in our zero.
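As an illustration (the assembly in the comments is representative x86 output, not taken from any particular compiler):

extern int somefunc();

void check() {
    if (somefunc() != 0) {   // call somefunc / test eax, eax / je skip
        // handle error      // comparing with zero needs no immediate operand
    }
    if (somefunc() == 42) {  // call somefunc / cmp eax, 42 / jne skip
        // handle error      // the constant must be encoded in the instruction
    }
}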
Because Bash and most other UNIX shell environments regard 0 as success, and -x as a system error, and x as a user-defined error.
There's probably a bunch of forgotten history dating back to the days when everything was written in asm. In general it is much easier to test for zero than for other specific values.
I may be wrong about this, but I think that it's mainly for historical reasons (hysterical raisins?). I believe that K&R C (pre-ANSI) didn't have a void type, so the logical way to write a function that didn't return anything interesting was to have it return 0.
Surely somebody here will correct me if I'm wrong... :)
My understanding is that it was related to the behaviour of system calls.
Consider the open() system call; if it is successful, it returns a non-negative integer, which is the file descriptor that was created. However, down at the assembler level (where there's a special, non-C instruction that traps into the kernel), when an error is returned, it is returned as a negative value. When it detects an error return, the C code wrapper around the system call stores the negated value into errno (so errno has a positive value), and the function returns -1.
For some other system calls, the negative return code at the assembler level is still negated and placed into errno and -1 is returned. However, these system calls have no special value to return, so zero was chosen to indicate success. Clearly, there is a large variety of system calls, but most manage to fit these conventions. For example, stat() and its relatives return a structure, but a pointer to that structure is passed as an input parameter, and the return value is a 0 or -1 status. Even signal() manages it: SIG_DFL was 0, SIG_IGN was 1, other values were function pointers, and -1 (SIG_ERR) indicated an error. There are a few system calls with no error return - getpid(), getuid() and so on.
This zero-indicates-success mechanism was then emulated by other functions which were not actually system calls.
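The classic pattern at the call site looks like this (POSIX; a minimal sketch):

#include <cerrno>
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <unistd.h>

int main() {
    int fd = open("example.txt", O_RDONLY);  // success: a non-negative descriptor
    if (fd == -1) {                          // failure: -1, details in errno
        std::fprintf(stderr, "open failed: %s\n", std::strerror(errno));
        return 1;                            // non-zero exit reports the failure
    }
    close(fd);
    return 0;                                // zero exit: success
}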
Conventionally, a return code of 0 specifies that your program has ended normally and all is well. (You can remember this as "zero errors", although for technical reasons you cannot use the number of errors your program found as the return code.) A return code other than 0 indicates that some sort of error has occurred. If your code terminates when it encounters an error, use exit, and specify a non-zero return code.
Because 0 is false and null in C/C++, and you can make handy shortcuts when that happens.
It is because when used from a UNIX shell a command that returns 0 indicates success.
Any other value indicates a failure.
As Paul Betts indicates, positive and negative values delimit where the error probably originated, but this is only a convention, not an absolute. A user application may return a negative value without any bad consequence (other than indicating to the shell that the application failed).
Besides all the fine points made by previous posters, it also cleans up the code considerably when a function returns 0 on success.
Consider:
if ( somefunc() ) {
    // handle error
}
is much cleaner than:
if ( !somefunc() ) {
    // handle error
}
or:
if ( somefunc() == somevalue ) {
    // handle error
}