if statement and the maximal value in Pascal

Looking at this code:
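(The snippet itself didn't survive in this copy; judging from the answers below, it was presumably something along these lines:)
program MaxOfThree;
var
  x, y, z, max: integer;
begin
  readln(x, y, z);
  if x > y then
    max := x
  else
    max := y;
  if z > max then
    max := z;
  writeln(max)
end.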
I thought that the program wouldn't give the right maximal value if z > x and y > x, but, to my surprise, it did. Why is that? Did the program compare y and z and give the biggest value without me ordering it to do so?

After the first if statement, max holds the maximum of x and y. This maximum is then compared with z in the second if statement. You don't need to compare y and z directly, thanks to the transitivity of the > operator.

Knowing that (z > y) and (y > x) makes it certain that z > x, so you don't have to compare every pair of values to conclude which one is the maximum.
Speaking of tooling, I can suggest Sublime Text as a good editor for Object Pascal; all you have to do is install FPC and add it as a build system. Or you can simply use MyPascal, which seems very helpful, since the one you're using looks old-fashioned.

How to optimize a nonlinear function with a constraint in C++

I want to find, in C++, the values of the variables that maximize a given nonlinear formula under a constraint. That is, I want to calculate the maximum value of the formula below subject to the given constraint. Using a library (e.g. NLopt) is fine.
formula: ln(1 + a*x + b*y + c*z)
a, b, c are numbers input by the user
x, y, z are the variables to be derived
the constraint is that x, y, z are positive and x + y + z <= 1
This can actually be transformed into a linear optimization problem: since ln is strictly increasing, maximizing the logarithm is the same as maximizing its argument.
max ln(1 + a*x + b*y + c*z) <--> max (a*x + b*y + c*z) s.t. a*x + b*y + c*z > -1
This means it is a linear optimization problem (with one more constraint) that you can easily handle with whatever C++ method you like, together with your convex-optimization knowledge.
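In fact, because the feasible set here is (the closure of) a simplex, the resulting LP has a closed-form vertex solution, so no solver is needed at all. A minimal sketch of that observation (treating the constraints as non-strict; variable names are mine):
#include <algorithm>
#include <cmath>
#include <iostream>

int main()
{
    // A linear objective a*x + b*y + c*z over the simplex
    // x, y, z >= 0, x + y + z <= 1 attains its maximum at a vertex:
    // the origin or one of the three unit points. So the best
    // objective value is max(0, a, b, c); this also covers
    // negative user inputs.
    double a, b, c;
    std::cin >> a >> b >> c;

    double best = std::max({0.0, a, b, c});
    // best >= 0, so the extra constraint a*x + b*y + c*z > -1
    // is automatically satisfied at the optimum.
    std::cout << "max ln(1 + a*x + b*y + c*z) = " << std::log(1.0 + best) << '\n';
}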
Reminders for writing good code:
Check the validity of the input values.
Since the input values can be negative, you need to take that case into account, as it can yield different results.
P.S.
This problem seems to be Off Topic on SO.
If it is your homework, it is for your own good to write the code yourself. Besides, we do not have enough time to write that for you.
This should have been a comment if I had more reputation.

Is it always safe to return limit numbers (e.g. INT_MAX) to signify false?

Say, I have the following (member) function that returns the nearest left or right side of a rect, if the rect has been hit:
int hit(int x)
{
    if (rc.left <= x && x < rc.right)
    {
        if (x <= (rc.left + size.width / 2)) return rc.left;
        else return rc.right;
    }
    else return INT_MAX;
}
My concern is that INT_MAX, which is just a macro for a number on my machine, might not be representable on the machine on which the code will run. INT_MAX is not a runtime thing, so I have some doubts.
This pattern you're using is a sentinel value, a value that is taken out from the possible value range of your data type and given a specific meaning. This can be a valid approach, but does require that all of the code is aware of that value and its meaning. If you're not pressed for the potential small performance gain, you can instead use a more expressive type:
#include <optional>

std::optional<int> hit(int x)
{
    if (rc.left <= x && x < rc.right)
    {
        if (x <= (rc.left + size.width / 2)) return rc.left;
        else return rc.right;
    }
    else return {};
}
This way you can return either a vanilla int with no special casing, or nothing.
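For callers this reads naturally; a self-contained sketch (the rect type is invented here, and the midpoint is derived from the rect itself, just to keep the example compilable):
#include <iostream>
#include <optional>

struct Rect { int left, right; };

// Same idea as above, reduced to a free function for the sketch.
std::optional<int> hit(const Rect& rc, int x)
{
    if (rc.left <= x && x < rc.right)
        return x <= rc.left + (rc.right - rc.left) / 2 ? rc.left : rc.right;
    return std::nullopt;
}

int main()
{
    Rect rc{0, 10};
    if (auto side = hit(rc, 3))           // engaged only on a hit
        std::cout << "nearest side: " << *side << '\n';
    else
        std::cout << "miss\n";
}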
My concern is that INT_MAX, which is just a macro of a number on my machine, can't be represented on machine on which the code will run.
INT_MAX is not a fixed value. Yes, it is a macro, but different machines will define it differently. It is per machine and per compiler, just like almost everything else. The compiler will ensure that INT_MAX fits in int. And yes, you cannot just run a binary on architecture A when it was compiled for architecture B. Even if it is technically doable for some architectures (because one extends the other, e.g. x86 vs x64), recompilation is always safer. But if both architectures are the same, there should be no problem.
else return INT_MAX;
The real problem with your code is that INT_MAX may be a valid value. And even if it is not, it looks like a value, which makes this an error-prone approach (you force the caller to do a numerical check; what if they forget?). Values and errors are different things, and thus it would be better to represent them differently as well.
You should use some other way to signal errors, e.g. std::optional or exceptions. Or maybe divide the function into two functions: check and calculate. Either way you will get rid of INT_MAX as a bonus.
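A sketch of that check/calculate split (the surrounding struct is invented here just so the fragment stands alone):
struct Rect { int left, right; };
struct Size { int width; };

struct Box
{
    Rect rc;
    Size size;

    // First ask whether x hits the rect at all...
    bool hits(int x) const { return rc.left <= x && x < rc.right; }

    // ...then ask for the nearer side; hits(x) is a precondition.
    int nearestSide(int x) const
    {
        return x <= rc.left + size.width / 2 ? rc.left : rc.right;
    }
};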
Here I am talking about typical values as of the time of this post; as of today, most systems have a 32-bit int.
For 32-bit signed integers, INT_MAX represents the number 2147483647. The macro exists for the users' convenience, so that they don't have to remember, look up, or calculate this number whenever they need it.
INT_MAX is a C++ thing, not something specific to your machine: a proper C++ compiler on any machine, including the one the code will run on, will define it so that it fits in that implementation's int. If you are sure that rc.left and rc.right can never equal INT_MAX, and you use the same macro at the place where you check the function's output, it should be fine.
I would also like to know which machine it will run on, and how. If you mean run by simply executing the binary file you compiled, there shouldn't be a problem. If you're compiling it there too (I'm assuming you're submitting an assignment to some grader), you can read the specifications of their compiler to find out what the limits are on the machine where your code will be compiled.
The main point is that since INT_MAX is a C++ thing, it can always be represented on the machine, regardless of what the limit actually is.
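A quick way to see the actual limits on any given implementation (both lines print the same value):
#include <climits>
#include <iostream>
#include <limits>

int main()
{
    std::cout << INT_MAX << '\n';                          // C macro
    std::cout << std::numeric_limits<int>::max() << '\n';  // C++ style
}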

Integration of 1/(1-x) with SymPy gives wrong sign inside the logarithm

I'd like to integrate the following easy function in SymPy.
import sympy
x = sympy.symbols('x')
e_a = 1/(1-x)
u_x = sympy.integrate(e_a,x)
print(u_x)
sympy.plot(u_x)
My calculus memory suggests I should get -log(1-x) as a result, while SymPy returns -log(x - 1). I can't understand what's wrong with the code...
Both -log(1-x) and -log(x-1) are valid answers. This was discussed on the issue tracker, so I quote from there.
-log(x-1) = -(log(1-x) + log(-1)) by log rules, and log(-1) = i*pi (since e**(i*pi) = -1). So -log(x-1) is the same as -log(1-x) - i*pi, meaning the expressions differ only by a constant, which makes no difference when taking the derivative of the expression.
and
This is indeed correct behavior. SymPy doesn't return log(abs(x)) for integrate(1/x) because it isn't valid for complex numbers. Instead, the answer is correct up to an integration constant (which may be complex). All SymPy operations assume that variables are complex by default.
There are some workarounds suggested at the end of the thread.
But the bottom line is that -log(x-1) is correct, and it is the desired form of the answer when x is greater than 1. SymPy does not know whether you mean for x to be less than 1 or greater than 1.
To get a specific antiderivative, integrate from a given initial point. For example, integration starting from 0 gives the antiderivative that is 0 when x=0.
x, t = sympy.symbols('x t')
e_a = 1/(1-x)
# integrate from 0 to t, then rename t back to x; this selects
# the antiderivative that is 0 at x = 0
u_x = sympy.integrate(e_a, (x, 0, t)).subs(t, x)
sympy.plot(u_x)

Optimal (Time paradigm) solution to check variable within boundary

Sorry if the question is very naive.
I will have to check the below condition in my code
0 < x < y
i.e. code similar to if (x > 0 && x < y)
The basic problem at the system level is that, for every call (telecom-domain terminology), my existing code is hit many times. So performance is very, very critical. Now I need to add a boundary check at many locations, with a different boundary comparison at each one.
Viewed as ordinary code, the comparison above looks perfectly harmless. However, spread across my statistics module (which is consulted many times), it will drag performance down.
So I would like to know the best possible way to handle this scenario (an optimal technique for limits checking). For example, does a bit-level comparison work better than a normal comparison, or can both comparisons be evaluated in a shorter time span?
Other Info
x is an unsigned integer (which must be checked to be greater than 0 and less than y).
y is an unsigned integer.
y is non-const and varies for every comparison.
Time is the constraint here, not space.
Language - C++.
If I later need to change y to a float/double, would there be another way to optimize the check (i.e. would the suggested optimal technique for integers become non-optimal once y is a float/double)?
Thanks in advance for any input.
PS: The OSes used are SUSE 10 64-bit x86_64, AIX 5.3 64-bit, and HP-UX 11.1 A 64-bit.
As always, profile first, optimize later. But, given that this is actually an issue, these could be things to look into:
"Unsigned and greater than zero" is the same as "not equal to zero", which is usually about as fast as a comparison gets. So a first optimization would be to do x != 0 && x < y.
Make sure that the comparison most likely to fail comes first, to maximize the gain from short-circuiting.
If possible, use compiler directives to tell the compiler about the most likely code path, which helps instruction prefetching etc. For GCC, look at what the Linux kernel does (a sketch of those kernel-style macros follows this list).
I don't think tricks with subtraction and comparison against zero, etc. will be of any gain. If that is the most effective way to do a less-than comparison, you can be sure your compiler already knows about it.
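The kernel-style macros referenced above, roughly (a sketch built on GCC's __builtin_expect; the macro names match the kernel's, the surrounding function is mine):
#include <cstdio>

// Hint that a condition is almost always true / almost always false.
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

void count(unsigned x, unsigned y)
{
    if (likely(x != 0 && x < y))    // the common, in-range case
        std::puts("in range");
    else
        std::puts("out of range");  // placed on the cold path
}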
This eliminates a compare and a branch at the expense of two subtractions; it should be faster:
(x-1) < (y-1)
It works (with unsigned arithmetic, where x-1 wraps around to UINT_MAX when x is 0) as long as y is guaranteed non-zero.
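A minimal sketch of that trick with the wraparound spelled out (the function name is mine):
#include <cassert>

// Checks 0 < x < y with a single comparison. If x == 0, x - 1 wraps
// around to UINT_MAX, which can never be less than y - 1. Requires
// y > 0, otherwise y - 1 wraps around too.
inline bool in_range(unsigned x, unsigned y)
{
    return (x - 1) < (y - 1);
}

int main()
{
    assert(!in_range(0, 5));   // x == 0 is rejected
    assert( in_range(3, 5));   // 0 < 3 < 5 holds
    assert(!in_range(5, 5));   // x >= y is rejected
}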
You probably don't need to change y to a float or a double; try to stay in integers as much as you can. Instead of representing y as seconds, try microseconds or milliseconds (depending on the resolution you need).
Anyway, I suspect you can change
if (x > 0 && x < y)
    ;
to
if ((unsigned int)x < (unsigned int)y)
    ;
(though with x already unsigned, this quietly drops the x > 0 part), and it's probably not going to actually speed anything up. Checking against zero is often one or two instructions (depending on the ISA), so the read from memory is certainly the bottleneck here.
After you've profiled your code and determined that this is actually where the performance problems are, you could investigate tweaking the branch predictor, since that's somewhere a lot of time can be wasted if it regularly mispredicts. Different compilers do it differently, but some have an intrinsic for it, like GCC's __builtin_expect, which tells the compiler which outcome of a condition to treat as the usual case.

How does this C++ function use memoization?

#include <vector>

std::vector<long int> as;

long int a(size_t n)
{
    if (n == 1) return 1;
    if (n == 2) return -2;
    if (as.size() < n + 1)
        as.resize(n + 1);
    if (as[n] <= 0)
    {
        as[n] = -4*a(n-1) - 4*a(n-2);
    }
    return mod(as[n], 65535);
}
The above code sample uses memoization to calculate a recursive formula based on some input n. I know that it uses memoization, because I have written a purely recursive function using the same formula, and this one is much, much faster for large values of n. I've never used vectors before, but I've done some research and I understand the concept. I understand that memoization is supposed to store each calculated value, so that instead of performing the same calculations over again, it can simply retrieve the ones that have already been calculated.
My question is: how is this memoization, and how does it work? I can't see at which point in the code it checks whether a value for n already exists. Also, I don't understand the purpose of the if(as[n]<=0). The formula can yield positive and negative values, so I'm not sure what this check is looking for.
Thank you, I think I'm close to understanding how this works; it's actually a bit simpler than I thought.
I do not think the values in the sequence can ever be 0, so this should work for me, as n starts at 1.
However, if zero were a viable number in my sequence, how else could I solve it? For example, if five could never appear, would I just need to fill my vector with fives?
Edit: Wow, I got a lot of other responses while checking code and typing this one. Thanks for the help everyone, I think I understand it now.
if (as[n] <= 0) is the check. If valid values can be negative like you say, then you need a different sentinel to check against. Can valid values ever be zero? If not, then just make the test if (as[n] == 0). This makes your code easier to write, because by default vectors of ints are filled with zeroes.
The code appears to be incorrectly checking if (as[n] <= 0), and recalculates the negative values of the function (which appear to be roughly every other value). Even so, the work scales linearly with n instead of 2^n as with the purely recursive solution, so it runs a lot faster.
Still, a better check would be to test if (as[n] == 0), which appears to run 3x faster on my system. Even if the function can return 0, a 0 value just means it will take slightly longer to compute (although if 0 is a frequent return value, you might want to consider a separate vector that flags whether each value has been computed, instead of using a single vector to store both the function's value and whether it has been computed).
If the formula can yield both positive and negative values, then this function has a serious bug. The check if(as[n]<=0) is supposed to determine whether this value has already been computed and cached. But if the formula can be negative, this function recalculates the cached value a lot...
What it really probably wanted was a vector<pair<bool, unsigned> >, where the bool says if the value has been calculated or not.
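A sketch of that flagged-cache variant (mod here is a hypothetical stand-in for whatever helper the question's code uses; it also folds in the fix suggested in the next answer of caching the already-reduced value):
#include <cstddef>
#include <utility>
#include <vector>

// (computed?, value) pairs, so 0 and negative results are cached too.
std::vector<std::pair<bool, long int>> as;

long int mod(long int v, long int m)  // assumed behavior of the helper
{
    return ((v % m) + m) % m;
}

long int a(std::size_t n)             // n >= 1, as in the question
{
    if (n == 1) return 1;
    if (n == 2) return -2;
    if (as.size() < n + 1)
        as.resize(n + 1);             // new entries default to {false, 0}
    if (!as[n].first)                 // the explicit "already computed?" check
    {
        as[n].second = mod(-4*a(n-1) - 4*a(n-2), 65535);
        as[n].first = true;
    }
    return as[n].second;
}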
The code, as posted, only memoizes about 40% of the time (precisely when the remembered value is positive). As Chris Jester-Young pointed out, a correct implementation would instead check if(as[n]==0). Alternatively, one can change the memoization line itself to read as[n] = mod(-4*a(n-1) - 4*a(n-2), 65535);
(Even the ==0 check would spend effort when the memoized value was 0. Luckily, in your case, this never happens!)
There's a bug in this code: it will keep recalculating as[n] whenever as[n] <= 0, and only memoizes the values of a that turn out to be positive. It still works a lot faster than the code without memoization because there are enough positive values in as[] that the recursion terminates quickly. You could improve this by using a value greater than 65535 as the sentinel; note that new elements of the vector are initialized to zero when it expands, so you would have to pass an explicit fill value to resize.