Formula for PI regulation (Proportional-Integral) algorithm - C++

I've been reading this website: http://www.csimn.com/CSI_pages/PIDforDummies.html and I'm confused about the proportional integral part. Here's what it says.
Proportional control
Here’s a diagram of the controller when we have enabled only P control:
In Proportional Only mode, the controller simply multiplies the Error by the Proportional Gain (Kp) to get the controller output.
The Proportional Gain is the setting that we tune to get our desired performance from a “P only” controller.
A match made in heaven: The P + I Controller
If we put Proportional and Integral Action together, we get the humble PI controller. The Diagram below shows how the algorithm in a PI controller is calculated.
The tricky thing about Integral Action is that it will really screw up your process unless you know exactly how much Integral action to apply.
A good PID Tuning technique will calculate exactly how much Integral to apply for your specific process - but how is the Integral Action adjusted in the first place?
As you can see, the proportional part is easy to understand: it says that you multiply the error by a tuning variable. The part that I don't get is where the P and I come from in the second part, and what mathematical operation you do with them. I don't have a degree in mathematics or advanced calculus knowledge, so I would appreciate it if you would try to keep it at algebra level.

There is a big part missing from the text: the actual physical system that turns the control output into the process, and the actual physical variable.
Think of the integral as some kind of averaging operation that filters out small oscillations in the PV input. It also represents some kind of memory of the immediate past of the process.
A moving exponential average, for instance, can be thought of as a mix of integral and proportional action.
Staying with the car-driving example: if you come to a curve where you need the steering wheel in a certain position to go in a circle, you don't just yank the wheel to that position, you move it gradually (most of the time). Exactly such ramp-up and ramp-down actions are effects of using the integral action part.
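To make that exponential-moving-average picture concrete, here is a tiny C++ sketch (the value of alpha and the samples are made up; it only illustrates the mix of "react to the newest sample" and "remember the past"):

#include <cstdio>

int main()
{
    const double alpha = 0.1;   // smoothing factor: small = long memory (integral-like),
                                // close to 1 = mostly the newest sample (proportional-like)
    double ema = 0.0;           // filtered process variable
    const double samples[] = {0.0, 1.0, 1.0, 1.0, 1.0, 1.0};   // a step in the PV
    for (double pv : samples) {
        ema += alpha * (pv - ema);    // blend the new sample into the memory
        std::printf("%f\n", ema);
    }
}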

The integral part is just a summation, also multiplied by some constant.
Analogue integration is done with an amplifier and a capacitor (an integrator circuit).
Digital integration of first order is just:
output += input * dt;
Second order is:
temp   += input * dt;
output += temp * dt;
where dt is the duration of one iteration of the loop (measured with a timer or whatever you use).
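To make the P and I parts concrete, here is a minimal discrete PI loop in C++ (a sketch only; Kp, Ki, dt, the setpoint and the fake process model are made-up values you would replace with your own):

#include <cstdio>

int main()
{
    const double Kp = 2.0;        // proportional gain (made-up tuning value)
    const double Ki = 0.5;        // integral gain (made-up tuning value)
    const double dt = 0.01;       // duration of one loop iteration in seconds
    const double setpoint = 1.0;  // desired value of the process variable

    double integral = 0.0;        // running sum of error*dt (the I part)
    double pv = 0.0;              // process variable

    for (int step = 0; step < 1000; ++step) {
        double error = setpoint - pv;
        integral += error * dt;                       // digital integration, as above
        double output = Kp * error + Ki * integral;   // the PI control law

        pv += (output - pv) * dt;   // stand-in for the real process responding to output
    }
    std::printf("final pv = %f\n", pv);
}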
Do not forget that a PI regulator can have a more complicated response:
i1 += input * dt;
i2 += i1 * dt;
i3 += i2 * dt;
output = a0*input + a1*i1 + a2*i2 + a3*i3 + ...;
where a0 is the P part and a1, a2, a3, ... weight the successive integrals.
Now, the I regulator keeps adding to the control value until the controlled value matches the preset (setpoint) value; the longer the mismatch lasts, the harder it pushes. This creates fast oscillations around the preset value in comparison to a P regulator with the same gain, but on average the control time is smaller than with a pure P regulator. Therefore the I gain is usually much, much smaller, which creates the memory and smoothing effect LutzL mentioned (while the regulation time stays similar to or smaller than for P regulation alone).
The controlled device has its own response, which can be represented by a differential equation. There is a lot of theory in control engineering (cybernetics) about obtaining the right regulator response to match your process needs, such as quality of control, reaction times, maximum oscillation amplitude, and stability. But for all of that you need differential mathematics, like solving systems of differential equations of arbitrary order. I strongly recommend the use of the Laplace transform, but many people use the Z transform instead (it is the natural choice for sampled, discrete-time systems).
So the I regulator adds speed to the regulation, but it also creates bigger oscillations, and when it does not match the regulated system properly it also creates instability. Integration adds overflow risks to the regulation (analogue integration is very sensitive to this); a clamping sketch is shown below. Also keep in mind that you can subtract the I part from the control value instead, which does exactly the opposite. Sometimes a combination of several I parts is used to match the desired regulation response shape.
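Regarding the overflow/wind-up risk just mentioned, a common safeguard is to clamp the integral term. This is a generic sketch with arbitrary limits, not something from the quoted article:

#include <algorithm>
#include <cstdio>

int main()
{
    const double Kp = 2.0, Ki = 0.5, dt = 0.01, setpoint = 1.0;
    const double i_min = -10.0, i_max = 10.0;   // arbitrary anti-windup limits

    double integral = 0.0, pv = 0.0;
    for (int step = 0; step < 1000; ++step) {
        double error = setpoint - pv;
        integral += error * dt;
        integral = std::clamp(integral, i_min, i_max);   // keep the I term bounded
        double output = Kp * error + Ki * integral;
        pv += (output - pv) * dt;   // same fake process as in the earlier sketch
    }
    std::printf("final pv = %f\n", pv);
}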

Related

Exploding gradient for gpflow SVGP

When optimizing an SVGP with a Poisson likelihood for a big data set, I see what I think are exploding gradients.
After a few epochs I see a sharp drop of the ELBO, which then recovers very slowly, losing all the progress made before.
Roughly 21 iterations correspond to an epoch.
This spike (at least the second one) resulted in a complete shift of the parameters (for vectors of parameters I just plotted the norm to see the changes).
How can I deal with that? My first approach would be to clip the gradient, but that seems to require digging around the gpflow code.
My Setup:
Training works via Natural Gradients for the variational parameters and ADAM for the rest, with a slowly (linearly) increasing schedule for the Natural Gradient Gamma.
The batch and inducing point sizes are as large as possible for my setup
(both 2^12, with the data set consisting of ~88k samples). I include 1e-5 jitter and initialize the inducing points with kmeans.
I use a combined kernel, consisting of a combination of RBF, Matern52, a periodic and a linear kernel on a total of 95 features (a lot of them due to a one-hot encoding), all learnable.
The lengthscales are transformed with gpflow.transforms.
import numpy as np
import gpflow
from gpflow.kernels import Matern52, Periodic, Linear, RBF

# X, Y, Z, kernel_idxs, MB_SIZE and NAME are defined elsewhere
with gpflow.defer_build():
    # kernels acting on different feature groups
    k1 = Matern52(input_dim=len(kernel_idxs["coords"]), active_dims=kernel_idxs["coords"], ARD=False)
    k2 = Periodic(input_dim=len(kernel_idxs["wday"]), active_dims=kernel_idxs["wday"])
    k3 = Linear(input_dim=len(kernel_idxs["onehot"]), active_dims=kernel_idxs["onehot"], ARD=True)
    k4 = RBF(input_dim=len(kernel_idxs["rest"]), active_dims=kernel_idxs["rest"], ARD=True)

    # constrain positive parameters with an exponential transform
    k1.lengthscales.transform = gpflow.transforms.Exp()
    k2.lengthscales.transform = gpflow.transforms.Exp()
    k3.variance.transform = gpflow.transforms.Exp()
    k4.lengthscales.transform = gpflow.transforms.Exp()

    m = gpflow.models.SVGP(X, Y, k1 + k2 + k3 + k4, gpflow.likelihoods.Poisson(), Z,
                           mean_function=gpflow.mean_functions.Constant(c=np.ones(1)),
                           minibatch_size=MB_SIZE, name=NAME)
    m.mean_function.set_trainable(False)

m.compile()
UPDATE: Using only ADAM
Following the suggestion by Mark, I switched to ADAM only,
which helped me get rid of that sudden explosion. However, I still initialized with an epoch of natgrad only, which seems to save a lot of time.
In addition, the variational parameters seem to change much less abruptly (in terms of their norm, at least). I guess they'll converge much more slowly now, but at least it's stable.
Just to add to Mark's answer above: when using nat grads in non-conjugate models it can take a bit of tuning to get the best performance, and instability is potentially a problem. As Mark points out, the large steps that provide potentially faster convergence can also lead to the parameters ending up in bad regions of the parameter space. When the variational approximation is good (i.e. the true and approximate posterior are close) then there is good reason to expect that the nat grad will perform well, but unfortunately there is no silver bullet in the general case. See https://arxiv.org/abs/1903.02984 for some intuition.
This is very interesting. Perhaps trying to not use natgrads is a good idea as well. Clipping gradients indeed seems like a hack that could work. And yes, this would require digging around in the GPflow code a bit. One tip that can help towards this, is by not using the GPflow optimisers directly. The model._likelihood_tensor contains the TF tensor that should be optimised. Perhaps with some manual TensorFlow magic, you can do the gradient clipping on here before running an optimiser.
In general, I think this sounds like you've stumbled on an actual research problem. Usually these large gradients have a good reason in the model, which can be addressed with careful thought. Is it variance in some Monte Carlo estimate? Is the objective function behaving badly?
Regarding why not using natural gradients helps. Natural gradients use the Fisher matrix as a preconditioner to perform second order optimisation. Doing so can result in quite aggressive moves in parameter space. In certain cases (when there are usable conjugacy relations) these aggressive moves can make optimisation much faster. This case, with the Poisson likelihood, is not one where there are conjugacy relations that will necessarily help optimisation. In fact, the Fisher preconditioner can often be detrimental, particularly when variational parameters are not near the optimum.
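For a bit of extra intuition on why those moves can be so aggressive, the natural gradient step is roughly (in loose notation):

theta_new = theta_old - gamma * F^(-1) * dL/dtheta

where F is the Fisher information matrix of the variational distribution and gamma is the step size. When F is badly conditioned, or a poor match for the non-conjugate likelihood, the F^(-1) factor can turn a modest gradient into a very large parameter update, which fits the sudden parameter shifts described above.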

Why does skipgram model take more time than CBOW

Why does the skip-gram model take more time to train than the CBOW model? I train the models with the same parameters (vector size and window size).
The skip-gram approach involves more calculations.
Specifically, consider a single 'target word' with a context-window of 4 words on either side.
In CBOW, the vectors for all 8 nearby words are averaged together, then used as the input for the algorithm's prediction neural-network. The network is run forward, and its success at predicting the target word is checked. Then back-propagation occurs: all neural-network connection values – including the 8 contributing word-vectors – are nudged to make the prediction slightly better.
Note, though, that the 8-word-window and one-target-word only require one forward-propagation, and one-backward-propagation – and the initial averaging-of-8-values and final distribution-of-error-correction-over-8-vectors are each relatively quick/simple operations.
Now consider skip-gram instead. Each of the 8 context-window words is in turn individually provided as input to the neural network, forward-checked for how well the target word is predicted, then backward-corrected. Though the averaging/splitting is not done, there are 8 times as many neural-network operations, hence much more net computation and more run-time.
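As a purely schematic illustration of that difference (a hypothetical helper, not a real word2vec implementation), here is a sketch that just counts the forward/backward passes per target word for a 4-word window on each side:

#include <cstdio>

static int passes = 0;
void forward_backward() { ++passes; }   // stand-in for one forward + one backward pass

int main()
{
    const int window = 4;               // 4 context words on each side -> 8 in total

    passes = 0;
    forward_backward();                 // CBOW: a single pass on the averaged context
    std::printf("CBOW passes per target word:      %d\n", passes);

    passes = 0;
    for (int i = 0; i < 2 * window; ++i)
        forward_backward();             // skip-gram: one pass per context word
    std::printf("Skip-gram passes per target word: %d\n", passes);
}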
Note the extra effort/time may pay itself back by improving vector quality on your final evaluations. Whether and to what extent depends on your specific goals and corpus.

Another way to calculate double-type variables in C++?

Short version of the question: overflow or timeout with the current settings when calculating with large int64_t and double values; is there any way to avoid these?
Test case:
If the only demand is 80,000,000,000, it is solved with the correct result. But if it's 800,000,000,000, an incorrect 0 is returned.
If the input has two or more demands (meaning more inequalities need to be calculated), smaller values will also cause incorrect results; e.g., three equal demands of 20,000,000,000 will trigger the problem.
I'm using the COIN-OR CLP linear programming solver to solve some network flow problems. I use int64_t to represent the link bandwidth, but CLP uses double most of the time and cannot easily be switched to other types.
When the values of the variables are not that large (typically smaller than 10,000,000,000) and the constraints (inequalities) are relatively few, it gives the solution I want. But if either of those factors increases, the tool stops and returns a 0-value solution. I think the reason is that the computation exceeds some internal limit, so the program breaks off at some point (it uses the LP simplex method).
The inequality is some kind of:
totalFlowSum <= usePercentage * demand
I changed it to
totalFlowSum - usePercentage * demand <= 0
Since totalFlowSum and demand are very large int64_t values and usePercentage is a double, if there are too many constraints like this (several or even more), or if the demand is larger than 100,000,000,000, the returned solution will be wrong.
Is there any way to correct this, like increase the break threshold or avoid this level of calculation magnitude?
Losing some accuracy is acceptable. One possible solution is to make the inputs 1,000 times smaller and the outputs 1,000 times larger, as sketched below, but this is kind of naïve and may require too many code changes in the program.
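A rough C++ sketch of that scaling idea (all names here are hypothetical and the actual CLP calls are omitted):

#include <cstdint>
#include <cstdio>
#include <vector>

int main()
{
    const double SCALE = 1000.0;   // chosen scale factor
    std::vector<int64_t> demands = {800000000000LL, 20000000000LL};

    // scale the data down before handing it to the solver as doubles
    std::vector<double> scaled_demands;
    for (int64_t d : demands)
        scaled_demands.push_back(static_cast<double>(d) / SCALE);

    // ... build constraints like  totalFlowSum/SCALE - usePercentage * (demand/SCALE) <= 0
    // ... and solve with CLP here (omitted)

    // scale flow-valued results back up; usePercentage itself is dimensionless
    double solved_flow_scaled = 123.456;   // placeholder for a value returned by the solver
    double solved_flow = solved_flow_scaled * SCALE;
    std::printf("recovered flow: %f\n", solved_flow);
}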
Update:
I have changed the formulation to
totalFlowSum / demand - usePercentage <= 0
but the problem still exists.
Update 2:
I divided usePercentage by 1000, changing its coefficient from 1 to 0.001, and it worked. But if I also divide totalFlowSum/demand by 1000 at the same time, there is still no result; I don't know why...
I changed the right-hand side of the inequalities from 0 to 0.1, and the problem was solved! Since the inputs are very large, a 0.1 offset won't affect the solution at all.
I think the reason is that the previous coefficients were badly scaled, so the solver failed to find an exact answer.

odeint and ad hoc change of state variable

I just implemented the numerical integration for a set of coupled ODEs
from a discretized PDE using the odeint C++ library. It works nicely and
is lightning fast, but there is one issue:
My system of ODEs has so-called absorbing boundary conditions: the time
derivative of my state variable n, which is a vector of N doubles
(a population density), gets calculated in the system function, but before that happens
(or after the time integration) I would like to set:
n[N]=n[N-2];
n[N-1]=n[N-2];
However, of course this doesn't work, because the state variable in the system
function is declared as const, and it looks as if this cannot be changed
other than by meddling with the library... is there any way around this?
I should mention that setting dndt[N] and dndt[N-1] to zero might look like a
solution, but it doesn't really help, as it defies the concept of absorbing boundary
conditions (n[N] and n[N-1] would then always keep the values they had at t=0, rather
than the value of n[N-2] at any point in time), so I'd really prefer to change n.
Thanks for any help!
Regards,
Michael
Usually an absorbing boundary condition manifests itself in the equations of motion. Since n[N] = n[N-1] = n[N-2], you can insert n[N] = n[N-2] and n[N-1] = n[N-2] into the equation for dndt[N-2].
For example, the discrete Laplacian Lx[i] = x[i+1] - 2*x[i] + x[i-1] with absorbing boundary x[n] = x[n-1] can be written at the boundary as Lx[n-1] = x[n-2] - x[n-1]. The equation for x[n] can then be omitted.
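For illustration, here is a minimal sketch of how that folding can look in an odeint system function. The diffusion constant, grid spacing, left boundary treatment and initial condition are made-up choices; the point is only that the absorbing condition is folded into the right-hand side instead of modifying the const state:

#include <cstdio>
#include <vector>
#include <boost/numeric/odeint.hpp>

using state_type = std::vector<double>;

// Discretized diffusion with the absorbing boundary folded into the last cells.
void rhs(const state_type &n, state_type &dndt, double /*t*/)
{
    const double D = 1.0, dx = 0.1;          // made-up constants
    const std::size_t N = n.size();

    for (std::size_t i = 1; i + 1 < N; ++i)  // interior: standard discrete Laplacian
        dndt[i] = D * (n[i+1] - 2.0*n[i] + n[i-1]) / (dx*dx);

    dndt[0] = D * (n[1] - n[0]) / (dx*dx);   // left boundary (reflecting, for this example)

    // absorbing boundary on the right: with n[N-1] == n[N-2] the Laplacian at the
    // last interior cell reduces to n[N-3] - n[N-2]; n[N-1] just follows its neighbour
    dndt[N-2] = D * (n[N-3] - n[N-2]) / (dx*dx);
    dndt[N-1] = dndt[N-2];
}

int main()
{
    namespace odeint = boost::numeric::odeint;
    state_type n(100, 0.0);
    n[50] = 1.0;                                             // an initial density bump
    odeint::runge_kutta4<state_type> stepper;
    odeint::integrate_const(stepper, rhs, n, 0.0, 1.0, 0.01);
    std::printf("n.back() = %f\n", n.back());
}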

GSL interpolation error: x values must be monotonically increasing

Hi, my problem is that my data set should be monotonically increasing, but towards the end of the data some consecutive values are equal (x[i-1] = x[i]), as shown below. This causes GSL to raise an error because the values are not strictly monotonically increasing. Is there a solution, fix, or workaround for this problem?
The values are already double precision; this particular data set starts at 9.86553e-06 and ends at 0.999999.
Would the only solution be to offset every value in a for loop?
0.999981
0.999981
0.999981
0.999982
0.999982
0.999983
0.999983
0.999983
0.999984
0.999984
0.999985
0.999985
0.999985
I had a similar issue. I removed the duplicates with a simple conditional (an if statement), and this did not affect the final result (checked against MATLAB). This might be a bit problem-specific, though. A sketch of that approach is shown below.
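A minimal sketch of that dedup-then-interpolate approach with GSL (the data here are made up; it assumes dropping the duplicated points is acceptable for your analysis):

#include <cstdio>
#include <vector>
#include <gsl/gsl_spline.h>

int main()
{
    std::vector<double> x = {0.10, 0.20, 0.20, 0.30, 0.999981, 0.999981, 0.999982};
    std::vector<double> y = {1.0,  2.0,  2.1,  3.0,  4.0,      4.0,      5.0};

    // keep only points whose x value strictly increases
    std::vector<double> xu, yu;
    for (std::size_t i = 0; i < x.size(); ++i) {
        if (xu.empty() || x[i] > xu.back()) {
            xu.push_back(x[i]);
            yu.push_back(y[i]);
        }
    }

    gsl_interp_accel *acc = gsl_interp_accel_alloc();
    gsl_spline *spline = gsl_spline_alloc(gsl_interp_cspline, xu.size());
    gsl_spline_init(spline, xu.data(), yu.data(), xu.size());

    std::printf("interpolated value at 0.25: %f\n", gsl_spline_eval(spline, 0.25, acc));

    gsl_spline_free(spline);
    gsl_interp_accel_free(acc);
}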
If you've genuinely reached the limits of what double precision allows--your delta is < machine epsilon--then there is nothing you can do with the data as they are. The x data aren't monotonically increasing. Rather you'll have to go back to where they are generated and apply some kind of transform to them to make the differences bigger at the tails. Or you could multiply by a scalar factor and then interpolate between the x values on the fly; and then divide the factor back out when you are done.
Edit: tr(x) = (x-0.5)^3 might do reasonably well to space things out, or tr(x) = tan( (x-0.5)*pi ). Have to watch out for extreme values in the latter case though. And of course, these transformations might screw up the analysis you're trying to do so a scalar factor might be the answer--has to be a transformation under which your analysis is invariant, obviously. Adding a constant is also likely possible.