Automatic Gain Control click & pop bursts - C++

This is my Automatic Gain Control method. It works, but I get a quick burst on the first impulse. How can I scale it down so it doesn't go past 0 dBFS? Also, a rate of 1e-4 somewhat works, but it's too slow.
double AGC(double x)
{
    double ref = pow(10.0, -18.0 / 10.0); // -18 dBFS converted to linear (power domain, compared against x^2 below)
    double rate = 1.0;                    // coefficient when increasing/decreasing gain
    x = x * m_Gain;                       // scale input (x)
    m_Gain += (ref - (fabs(x) * fabs(x))) * rate;
    return x;
}
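One common cause of that first-impulse burst is the gain winding up without bound during silence: while |x| is 0, ref - |x|^2 stays positive every sample, so m_Gain grows until the first real impulse hits. A sketch of a clamped variant with separate attack/release rates follows; this is my own variation rather than a fixed reference implementation, and the clamp ceiling and rate values are placeholders to tune:

#include <cmath>

// Sketch: clamped AGC with asymmetric rates. Assumes m_Gain is a class
// member initialised low (e.g. 0.0) so the gain ramps up instead of bursting.
double AGC(double x)
{
    const double ref     = std::pow(10.0, -18.0 / 10.0); // -18 dBFS target, power domain
    const double attack  = 1e-2;  // fast gain reduction when the signal is too hot
    const double release = 1e-4;  // slow gain recovery when the signal is quiet
    const double maxGain = 8.0;   // placeholder ceiling: stops wind-up during silence

    double y   = x * m_Gain;
    double err = ref - y * y;     // positive => output below target power
    m_Gain += err * (err < 0.0 ? attack : release);
    if (m_Gain < 0.0)     m_Gain = 0.0;
    if (m_Gain > maxGain) m_Gain = maxGain;
    return y;
}

Clamping alone removes the burst; the asymmetric rates address the "too slow" complaint by letting the loop back off quickly when the output approaches 0 dBFS while still recovering gently.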

Related

PID Control: Is adding a delay before the next loop a good idea?

I am implementing PID control in C++ to make a differential drive robot turn an accurate number of degrees, but I am having many issues.
Exiting control loop early due to fast loop runtime
If the robot measures its error to be less than .5 degrees, it exits the control loop and considers the turn "finished" (the .5 is an arbitrary value that I might change at some point). It appears that the control loop is running so quickly that the robot can turn at a very high speed, pass the setpoint, and exit the loop/cut motor power because it was at the setpoint for a brief instant. I know that the entire purpose of PID control is to reach the setpoint accurately without overshooting, but this problem is making it very difficult to tune the PID constants. For example, I try to find a value of kp such that there is steady oscillation, but there is never any oscillation because the robot thinks it has "finished" once it passes the setpoint. To fix this, I have implemented a system where the robot has to stay at the setpoint for a certain period of time before exiting, and this has been effective, allowing oscillation to occur, but the issue of exiting the loop early seems like an unusual problem and my solution may be incorrect.
D term has no effect due to fast runtime
Once I had the robot oscillating in a controlled manner using only P, I tried to add D to prevent overshoot. However, it had no effect most of the time, because the control loop runs so quickly that in 19 loops out of 20 the rate of change of error is 0: the robot did not move, or did not move enough for it to be measured, in that time. I printed the change in error and the derivative term each loop to confirm this, and I could see that both would be 0 for around 20 loop cycles before taking a reasonable value and then dropping back to 0 for another 20 cycles. As I said, I think this is because the loop cycles are so quick that the robot literally hasn't moved enough for any noticeable change in error. This was a big problem because it meant that the D term had essentially no effect on robot movement, since it was almost always 0. To fix this, I tried using the last non-zero value of the derivative in place of any 0 values, but this didn't work well, and the robot would oscillate erratically when the last derivative didn't represent the current rate of change of error.
Note: I am also using a small feedforward term for the static coefficient of friction, and I call this feedforward "f".
Should I add a delay?
I think the source of both of these issues is the loop running very quickly, so one thing I thought of was adding a wait statement at the end of the loop. However, intentionally slowing down a loop seems like a bad solution overall. Is this a good idea?
void turnHeading(double finalAngle, double kp, double ki, double kd, double f)
{
    std::clock_t timer;
    timer = std::clock();
    double pastTime = 0;
    double currentTime = ((std::clock() - timer) / (double)CLOCKS_PER_SEC);
    const double initialHeading = getHeading();
    finalAngle = angleWrapDeg(finalAngle);
    const double initialAngleDiff = initialHeading - finalAngle;
    double error = angleDiff(getHeading(), finalAngle);
    double pastError = error;
    double firstTimeAtSetpoint = 0;
    double timeAtSetPoint = 0;
    bool atSetpoint = false;
    double integral = 0;
    double derivative = 0;
    double lastNonZeroD = 0;
    while (timeAtSetPoint < .05)
    {
        updatePos(encoderL.read(), encoderR.read());
        error = angleDiff(getHeading(), finalAngle);
        currentTime = ((std::clock() - timer) / (double)CLOCKS_PER_SEC);
        double dt = currentTime - pastTime;
        double proportional = error / fabs(initialAngleDiff);
        integral += dt * ((error + pastError) / 2.0);
        if (dt > 0) // guard against dt == 0 from the coarse resolution of std::clock
        {
            derivative = (error - pastError) / dt;
        }
        //FAILED METHOD OF USING LAST NON-0 VALUE OF DERIVATIVE
        // if(epsilonEquals(derivative, 0))
        // {
        //     derivative = lastNonZeroD;
        // }
        // else
        // {
        //     lastNonZeroD = derivative;
        // }
        double power = kp * proportional + ki * integral + kd * derivative;
        if (power > 0)
        {
            setMotorPowers(-power - f, power + f);
        }
        else
        {
            setMotorPowers(-power + f, power - f);
        }
        if (fabs(error) < 2)
        {
            if (!atSetpoint)
            {
                atSetpoint = true;
                firstTimeAtSetpoint = currentTime;
            }
            else //at setpoint
            {
                timeAtSetPoint = currentTime - firstTimeAtSetpoint;
            }
        }
        else //no longer at setpoint
        {
            atSetpoint = false;
            timeAtSetPoint = 0;
        }
        pastTime = currentTime;
        pastError = error;
    }
    setMotorPowers(0, 0);
}
turnHeading(90, .37, 0, .00004, .12);
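For reference, the usual way to add such a delay is to pace the loop at a fixed period rather than sleeping an ad-hoc amount. A C++11 sketch of the idea, with the 10 ms period and the loop body as placeholders:

#include <chrono>
#include <thread>

// Run the control body at a fixed period so dt is consistent and the
// encoders have time to register movement between iterations.
void pacedControlLoop()
{
    using clock = std::chrono::steady_clock;
    const auto period = std::chrono::milliseconds(10); // placeholder: 100 Hz
    auto next = clock::now() + period;
    bool done = false;
    while (!done)
    {
        // ... read sensors, compute the PID output, set motor powers,
        //     and set 'done' when the exit condition is met ...
        std::this_thread::sleep_until(next); // sleep out the rest of the period
        next += period;
    }
}

With a fixed period, dt is effectively constant, the derivative term sees a measurable change in error each cycle, and the tuning constants stop depending on how fast the hardware happens to spin the loop.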

How to measure the rate of rise of a variable

I am reading in a temperature value every 1 second/minute (the exact rate is not crucial). I want to monitor this temperature so that if it begins to rise rapidly, or exceeds a certain threshold, I perform an action.
If the temperature rises above 30 degrees (at any rate), I increase the fan speed.
I think I must do something like save the old temperature, and each time the loop runs set the old temp to the current temp of the engine. But I am not sure if I need to use arrays for the engine temp or not.
Of course you can store just one old sample and then check the difference, as in:
bool isHot(int sample) {
    static int oldSample = sample;  // initialised on the first call only
    const int threshold = 5;        // example rise-per-sample threshold (placeholder)
    bool hot = (sample > 30) || (sample - oldSample > threshold);
    oldSample = sample;             // remember this sample for the next call
    return hot;
}
It's OK from a C point of view, but very bad from a metrology point of view. You should consider some conditioning of your signal (in this case, the temperature) to smooth out any spikes.
Of course you can add signal conditioning later on. For an easy example, look at the Simple Moving Average: https://en.wikipedia.org/wiki/Moving_average
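A minimal sketch of such a filter (my own illustration, with the window size as a placeholder):

#include <cstddef>

// Simple moving average over the last N samples using a circular buffer.
template <std::size_t N>
class MovingAverage {
public:
    double add(double sample) {
        sum_ += sample - buf_[idx_]; // drop the oldest sample from the running sum
        buf_[idx_] = sample;
        idx_ = (idx_ + 1) % N;
        if (count_ < N) ++count_;
        return sum_ / count_;        // average over the samples seen so far
    }
private:
    double buf_[N] = {0};
    double sum_ = 0.0;
    std::size_t idx_ = 0;
    std::size_t count_ = 0;
};

Feeding each raw temperature sample through add() and comparing the smoothed values keeps a single spike from triggering the fan.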
If you want to control the fan speed the "right way", you should consider learning a bit about PID controllers: https://en.wikipedia.org/wiki/PID_controller
Simple discrete PID:
PidController.h:
class PidController
{
public:
    PidController();
    double sim(double y);
    void UpdateParams(double kp, double ki, double kd);
    void setSP(double setPoint) { m_setPoint = setPoint; } //set current value of r(t)
private:
    double m_setPoint; //current value of r(t)
    double m_kp;
    double m_ki;
    double m_kd;
    double m_outPrev;
    double m_errPrev[2];
};
PidController.cpp:
#include "PidController.h"

PidController::PidController()
{
    m_setPoint = 0;
    m_kp = 0;
    m_ki = 0;
    m_kd = 0;
    m_errPrev[0] = 0;
    m_errPrev[1] = 0;
    m_outPrev = 0;
}

void PidController::UpdateParams(double kp, double ki, double kd)
{
    m_kp = kp;
    m_ki = ki;
    m_kd = kd;
}

//calculates PID output (velocity form: the new output is the previous
//output plus increments from the P, I and D terms)
//y - sample of y(t)
//returns sample of u(t)
double PidController::sim(double y)
{
    double out; //u(t) sample
    double e = m_setPoint - y; //error
    out = m_outPrev
        + m_kp * (e - m_errPrev[0])
        + m_ki * e
        + m_kd * (e - 2 * m_errPrev[0] + m_errPrev[1]);
    m_outPrev = out; //store previous output
    //store previous errors
    m_errPrev[1] = m_errPrev[0];
    m_errPrev[0] = e;
    return out;
}
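A usage sketch for the fan case (my own illustration: readTemperature() and setFanSpeed() are hypothetical hardware hooks, and the gains are placeholders to tune):

#include <chrono>
#include <thread>
#include "PidController.h"

double readTemperature();      // hypothetical sensor read, degrees
void setFanSpeed(double duty); // hypothetical actuator, 0..100

int main()
{
    PidController pid;
    pid.UpdateParams(2.0, 0.1, 0.0); // placeholder gains; tune on the real system
    pid.setSP(30.0);                 // regulate around 30 degrees

    while (true) {
        double temp = readTemperature();
        // More fan speed lowers the temperature, so the plant gain is
        // negative: drive the fan with the negated controller output.
        double duty = -pid.sim(temp);
        if (duty < 0) duty = 0;      // fan cannot run backwards
        if (duty > 100) duty = 100;  // clamp to full speed
        setFanSpeed(duty);
        std::this_thread::sleep_for(std::chrono::seconds(1)); // 1 s sample period
    }
}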

Linear regression poor gradient descent performance

I have implemented a simple linear regression (univariate for now) example in C++ to help me get my head around the concepts. I'm pretty sure the key algorithm is right, but my performance is terrible.
This is the method which actually performs the gradient descent:
void LinearRegression::BatchGradientDescent(std::vector<std::pair<int,int>>& data, float& theta1, float& theta2)
{
    float weight = (1.0f / static_cast<float>(data.size()));
    float theta1Res = 0.0f;
    float theta2Res = 0.0f;
    for (auto p : data)
    {
        // 'cost' here is really the residual: prediction minus observation.
        float cost = Hypothesis(p.first, theta1, theta2) - p.second;
        theta1Res += cost;
        theta2Res += cost * p.first;
    }
    // Simultaneous update of both parameters along the negative gradient.
    theta1 = theta1 - (m_LearningRate * weight * theta1Res);
    theta2 = theta2 - (m_LearningRate * weight * theta2Res);
}
With the other key functions given as:
float LinearRegression::Hypothesis(float x, float theta1, float theta2) const
{
    return theta1 + x * theta2;
}

float LinearRegression::CostFunction(std::vector<std::pair<int,int>>& data,
                                     float theta1,
                                     float theta2) const
{
    float error = 0.0f;
    for (auto p : data)
    {
        float prediction = (Hypothesis(p.first, theta1, theta2) - p.second);
        error += prediction * prediction;
    }
    error *= 1.0f / (data.size() * 2.0f);
    return error;
}

void LinearRegression::Regress(std::vector<std::pair<int,int>>& data)
{
    for (unsigned int itr = 0; itr < MAX_ITERATIONS; ++itr)
    {
        BatchGradientDescent(data, m_Theta1, m_Theta2);
        //Some visualisation code
    }
}
Now the issue is that if the learning rate is greater than around 0.000001, the value of the cost function after gradient descent is higher than it was before. That is to say, the algorithm is working in reverse. The line forms into a straight line through the origin pretty quickly, but then takes millions of iterations to actually reach a reasonably well-fit line.
With a learning rate of 0.01, after six iterations the output is (where difference is costAfter - costBefore):
Cost before 102901.945312, cost after 517539430400.000000, difference 517539332096.000000
Cost before 517539430400.000000, cost after 3131945127824588800.000000, difference 3131944578068774912.000000
Cost before 3131945127824588800.000000, cost after 18953312418560698826620928.000000, difference 18953308959796185006080000.000000
Cost before 18953312418560698826620928.000000, cost after 114697949347691988409089177681920.000000, difference 114697930004878874575022382383104.000000
Cost before 114697949347691988409089177681920.000000, cost after inf, difference inf
Cost before inf, cost after inf, difference nan
In the run that actually converges, the thetas are set to zero, the learning rate to 0.000001, and there are 8,000,000 iterations! The visualisation code only updates the graph after every 100,000 iterations.
Function which creates the data points:
static void SetupRegressionData(std::vector<std::pair<int,int>>& data)
{
    srand(time(NULL));
    for (int x = 50; x < 750; x += 3)
    {
        data.push_back(std::pair<int,int>(x + (rand() % 100), 400 + (rand() % 100)));
    }
}
In short, if my learning rate is too high the gradient descent algorithm effectively runs backwards and tends to infinity, and if it is lowered to the point where it actually converges towards a minimum, the number of iterations required to do so is unacceptably high.
Have I missed anything/made a mistake in the core algorithm?
It looks like everything is behaving as expected, but you are having problems selecting a reasonable learning rate. That's not a totally trivial problem, and there are many approaches, ranging from pre-defined schedules that progressively reduce the learning rate (see e.g. this paper) to adaptive methods such as AdaGrad or AdaDelta.
For your vanilla implementation with a fixed learning rate, you should make your life easier by normalising the data to zero mean and unit standard deviation before you feed it into the gradient descent algorithm. That way you will be able to reason about the learning rate more easily. Then you can just rescale your predictions accordingly.
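A minimal sketch of that normalisation step (my own illustration, assuming the x values are stored as floats rather than the ints in the question; NormaliseX is a hypothetical helper, not part of the poster's class):

#include <cmath>
#include <utility>
#include <vector>

// Shift and scale the x values to zero mean and unit standard deviation.
// Returns the mean and standard deviation so predictions on new inputs can
// be made via Hypothesis((x - mean) / stdDev, theta1, theta2).
static void NormaliseX(std::vector<std::pair<float,float>>& data,
                       float& mean, float& stdDev)
{
    mean = 0.0f;
    for (const auto& p : data) mean += p.first;
    mean /= data.size();

    float variance = 0.0f;
    for (const auto& p : data) variance += (p.first - mean) * (p.first - mean);
    stdDev = std::sqrt(variance / data.size());

    for (auto& p : data) p.first = (p.first - mean) / stdDev;
}

With inputs on that scale, a learning rate somewhere around 0.01 to 0.1 is usually a sane starting point for a problem like this, rather than 0.000001.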

How do I keep the jump height the same when using delta time?

I'm using delta time so I can make my program frame rate independent.
However, I can't get the jump height to be the same; the character always jumps higher at a lower frame rate.
Variables:
const float gravity = 0.0000000014f;
const float jumpVel = 0.00000046f;
const float terminalVel = 0.05f;
bool readyToJump = false;
float verticalVel = 0.00f;
Logic code:
if(input.isKeyDown(sf::Keyboard::Space)){
    if(readyToJump){
        verticalVel = -jumpVel * delta;
        readyToJump = false;
    }
}
verticalVel += gravity * delta;
y += verticalVel * delta;
I'm sure the delta time is correct because the character moves horizontally fine.
How do I get my character to jump the same no matter the frame rate?
The formula for calculating the new position is:
position = initial_position + velocity * time
Taking into account gravity, which changes the velocity according to the function:
velocity = initial_velocity + gravity * time
NOTE: gravity here is an acceleration in the game's own units, not the physical constant g.
The final formula then becomes:
position = initial_position + (initial_velocity + gravity * time) * time
As you can see from the above equation, initial_position and initial_velocity are not affected by time. But in your case, you actually set the initial velocity equal to -jumpVel * delta.
The lower the frame rate, the larger the value of delta will be, and therefore the character will jump higher. The solution is to change
if(readyToJump){
    verticalVel = -jumpVel * delta;
    readyToJump = false;
}
to
if(readyToJump){
    verticalVel = -jumpVel;
    readyToJump = false;
}
EDIT:
The above should give a pretty good estimation, but it is not entirely correct. Assuming that p(t) is the position (in this case, height) after time t, the velocity is given by v(t) = p'(t) and the acceleration by a(t) = v'(t) = p''(t). Since we know that the acceleration is constant, i.e. gravity, we get the following:
a(t) = g
v(t) = v0 + g*t
p(t) = p0 + v0*t + 1/2*g*t^2
If we now calculate p(t+delta) - p(t), i.e. the change in position from one instant in time to another, we get the following:
p(t+delta)-p(t) = p0 + v0*(t+delta) + 1/2*g*(t+delta)^2 - (p0 + v0*t + 1/2*g*t^2)
= v0*delta + 1/2*g*delta^2 + g*delta*t
The original code does not take into account the squaring of delta or the extra term g*delta*t. A more accurate approach is to accumulate the elapsed jump time and then use the formula for p(t) given above.
Sample code:
const float gravity = 0.0000000014f;
const float jumpVel = 0.00000046f;
const float limit = ...; // limit for when to stop jumping
bool isJumping = false;
float jumpTime;

if(input.isKeyDown(sf::Keyboard::Space)){
    if(!isJumping){
        jumpTime = 0;
        isJumping = true;
    }
    else {
        jumpTime += delta;
        // p(t) = v0*t + 1/2*g*t^2 with v0 = -jumpVel (up is negative y)
        y = -jumpVel*jumpTime + 0.5f*gravity*jumpTime*jumpTime;
        // stop the jump once the displacement returns to ground level
        if(y >= 0.0f) {
            y = 0.0f;
            isJumping = false;
        }
    }
}
NOTE: I have not compiled or tested the code above.
By "delta time" do you mean variable time steps? As in, at every frame, you compute a time step that can be completely different from the previous?
If so, DON'T.
Read this: http://gafferongames.com/game-physics/fix-your-timestep/
TL;DR: use fixed time steps for the internal state; interpolate frames if needed.

Fixed timestep loop in C++

I am trying to implement a fixed timestep loop so that the game refreshes at a constant rate. I found a great article at http://gafferongames.com/game-physics/fix-your-timestep/ but am having trouble translating it into my own 2D engine.
The specific place I am referring to is the function in the last part, "The Final Touch", which is what most people recommend. This is his function:
double t = 0.0;
const double dt = 0.01;

double currentTime = hires_time_in_seconds();
double accumulator = 0.0;

State previous;
State current;

while ( !quit )
{
    double newTime = hires_time_in_seconds();
    double frameTime = newTime - currentTime;
    if ( frameTime > 0.25 )
        frameTime = 0.25; // note: max frame time to avoid spiral of death
    currentTime = newTime;

    accumulator += frameTime;

    while ( accumulator >= dt )
    {
        previous = current;
        integrate( current, t, dt );
        t += dt;
        accumulator -= dt;
    }

    const double alpha = accumulator / dt;

    State state = current * alpha + previous * ( 1.0 - alpha );

    render( state );
}
For myself, I am just moving a player across the screen, keeping track of an x and y location as well as velocity, rather than doing calculus integration. I am confused as to what I would apply to the updating of the player's location (dt or t?). Can someone break this down and explain it further?
The second part is the interpolation, which I understand, as the formula provided makes sense and I could simply interpolate between the current and previous x, y player positions.
Also, I realize I need to get a more accurate timer.
If you can get at least microsecond accuracy, try this:
long int start = 0, end = timeAsMicro(); // prime 'end' so the first delta isn't huge
double delta = 0;
double ns = 1000000.0 / 60.0; // microseconds per update; syncs updates at 60 per second (59 - 61)
while (!quit) {
    start = timeAsMicro();
    delta += (double)(start - end) / ns; // accumulate elapsed time in units of one update
    end = start;
    while (delta >= 1.0) {
        doUpdates();
        delta -= 1.0;
    }
}
See:
http://en.wikipedia.org/wiki/Integral
&
http://en.wikipedia.org/wiki/Numerical_integration
The function is a numerical technique (2nd link) for approximating the integral function (1st link).
You should review your high school physics. Simply put, velocity is the change in distance over the change in time (dx/dt), and acceleration is the change in velocity over the change in time (dv/dt). If you know dx/dt, you can get the distance by integrating it with respect to t; if you know dv/dt, you can get the velocity by integrating it with respect to t. Obviously this is a very simple explanation, but there are tons of references out there with more details if you so desire.
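To make that concrete for the player-movement case, here is a sketch (my own, not from the article) of what integrate() might do with a simple position/velocity state; the PlayerState type and the acceleration value are illustrative assumptions:

struct PlayerState {
    double x = 0, y = 0;   // position in pixels
    double vx = 0, vy = 0; // velocity in pixels per second
};

// One fixed step of semi-implicit Euler: advance velocity first, then
// position, each by the fixed dt. The running total t is only needed if
// forces depend on absolute time; plain movement uses only dt.
void integrate(PlayerState& s, double t, double dt)
{
    const double ax = 0.0, ay = 980.0; // illustrative acceleration (gravity)
    (void)t;                           // unused for time-independent forces
    s.vx += ax * dt;
    s.vy += ay * dt;
    s.x += s.vx * dt;
    s.y += s.vy * dt;
}

So the answer to "dt or t?" is dt: each pass through the inner loop advances the state by one fixed dt, and the alpha blend from the article then interpolates between the previous and current states for rendering.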