How could I estimate the instantaneous throughput? For example, something like what the browser shows while downloading a file: not just a mean throughput, but an instantaneous estimate, perhaps with a moving average. I'm looking for the algorithm, but you can express it in C++. Ideally it would not involve a thread (i.e., being continuously refreshed, say every second) but would only be evaluated when the value is asked for.
You can use an exponential moving average, as explained here, but I'll repeat the formula:
accumulator = (alpha * new_value) + (1.0 - alpha) * accumulator
To get such an estimate, suppose you intend to query the computation every second, but you want an average over the last minute. Then here would be one way to get that estimate:
struct AvgBps {
    double rate_;   // The average rate (bytes per second)
    double last_;   // Accumulates bytes added until the average is computed
    time_t prev_;   // Time of previous update

    AvgBps () : rate_(0), last_(0), prev_(time(0)) {}

    void add (unsigned bytes) {
        time_t now = time(0);
        if (now - prev_ < 60) {          // The update is within the last minute
            last_ += bytes;              // Accumulate bytes into last_
            if (now > prev_) {           // At least a second elapsed since previous update
                // exponential moving average:
                // the more time that has elapsed between updates, the more
                // weight is assigned to the newly accumulated bytes
                double alpha = (now - prev_) / 60.0;
                rate_ = alpha * (last_ / double(now - prev_)) + (1 - alpha) * rate_;
                last_ = 0;               // Reset last_ (it has been averaged in)
                prev_ = now;             // Update prev_ to current time
            }
        } else {                         // The update is longer than a minute ago
            rate_ = bytes;               // Start over: current update becomes the rate
            last_ = 0;                   // Reset last_
            prev_ = now;                 // Update prev_
        }
    }

    double rate () {
        add(0);        // Compute rate by doing an update of 0 bytes
        return rate_;  // Return computed rate
    }
};
You should actually use a monotonic clock (e.g., std::chrono::steady_clock) instead of time(), so that system clock adjustments cannot make the elapsed time jump or go backwards.
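For illustration, here is a minimal sketch of the same accumulator driven by std::chrono::steady_clock; the class name AvgBpsSteady, the once-per-second update and the 60-second window are just assumptions carried over from the example above:

#include <algorithm>
#include <chrono>

// Sketch only: same EMA accumulator as above, but driven by a monotonic clock
// so that wall-clock adjustments cannot make the elapsed time go backwards.
struct AvgBpsSteady {
    using clock = std::chrono::steady_clock;
    double rate_ = 0;                    // smoothed rate in bytes per second
    double last_ = 0;                    // bytes accumulated since the previous update
    clock::time_point prev_ = clock::now();

    void add(unsigned bytes) {
        clock::time_point now = clock::now();
        double elapsed = std::chrono::duration<double>(now - prev_).count();
        last_ += bytes;
        if (elapsed >= 1.0) {                             // update at most once per second
            double alpha = std::min(elapsed / 60.0, 1.0); // weight grows with the gap
            rate_ = alpha * (last_ / elapsed) + (1.0 - alpha) * rate_;
            last_ = 0;
            prev_ = now;
        }
    }

    double rate() { add(0); return rate_; }
};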
You probably want a boxcar average.
Just keep the last n values and average them. For each subsequent block, subtract out the oldest value and add in the most recent. Note that for floating-point values you may accumulate rounding error, in which case you might want to recalculate the total from scratch every m values; for integer values, of course, you don't need to.
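A minimal sketch of that idea, assuming a fixed window of n samples and a periodic re-sum to shed floating-point error (the class name and the rebuild interval of 1000 updates are illustrative choices):

#include <cstddef>
#include <vector>

// Sketch of a boxcar (simple moving) average over the last n values.
// The oldest sample is subtracted from the running total and the newest added,
// so each update is O(1) regardless of the window size.
class BoxcarAvg {
public:
    explicit BoxcarAvg(std::size_t n) : buf_(n, 0.0) {}   // n must be > 0

    double add(double value) {
        total_ -= buf_[next_];          // drop the oldest value
        buf_[next_] = value;            // overwrite it with the newest
        total_ += value;
        next_ = (next_ + 1) % buf_.size();
        if (count_ < buf_.size()) ++count_;
        if (++sinceRebuild_ >= 1000) rebuild();   // occasionally re-sum to shed FP error
        return average();
    }

    double average() const { return count_ ? total_ / count_ : 0.0; }

private:
    void rebuild() {                    // recompute the total from scratch
        total_ = 0.0;
        for (double v : buf_) total_ += v;
        sinceRebuild_ = 0;
    }

    std::vector<double> buf_;
    double total_ = 0.0;
    std::size_t next_ = 0, count_ = 0, sinceRebuild_ = 0;
};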
Related
I am implementing PID control in C++ to make a differential drive robot turn an accurate number of degrees, but I am having several issues.
Exiting control loop early due to fast loop runtime
If the robot measures its error to be less than 0.5 degrees, it exits the control loop and considers the turn "finished" (the 0.5 is an arbitrary value that I might change at some point). It appears that the control loop runs so quickly that the robot can turn at a very high speed, pass the setpoint, and exit the loop/cut motor power, because it was at the setpoint for a brief instant. I know that the entire purpose of PID control is to reach the setpoint accurately without overshooting, but this problem makes it very difficult to tune the PID constants. For example, I try to find a value of kp such that there is steady oscillation, but there is never any oscillation because the robot thinks it has "finished" once it passes the setpoint. To fix this, I have implemented a requirement that the robot be at the setpoint for a certain period of time before exiting, and this has been effective in allowing oscillation to occur, but exiting the loop early seems like an unusual problem and my solution may be wrong.
D term has no effect due to fast runtime
Once I had the robot oscillating in a controlled manner using only P, I tried to add D to prevent overshoot. However, it had no effect most of the time, because the control loop runs so quickly that in 19 loops out of 20 the rate of change of error is 0: the robot did not move, or did not move enough to be measured, in that time. I printed the change in error and the derivative term each loop to confirm this, and I could see that both would be 0 for around 20 loop cycles, then take a reasonable value, then go back to 0 for another 20 cycles. As I said, I think this is because the loop cycles are so quick that the robot literally hasn't moved enough for any noticeable change in error. This was a big problem because it meant the D term had essentially no effect on the robot's movement, since it was almost always 0. To fix it, I tried using the last non-zero value of the derivative in place of any 0 values, but this didn't work well, and the robot would oscillate erratically whenever the last derivative no longer represented the current rate of change of error.
Note: I am also using a small feedforward for the static coefficient of friction, and I call this feedforward "f"
Should I add a delay?
I think the source of both of these issues is that the loop runs very quickly, so one thing I considered was adding a wait at the end of the loop (a sketch of what I mean is at the end of this post). However, intentionally slowing down a loop seems like a bad solution in general. Is this a good idea?
void turnHeading(double finalAngle, double kp, double ki, double kd, double f)
{
    std::clock_t timer;
    timer = std::clock();
    double pastTime = 0;
    double currentTime = ((std::clock() - timer) / (double)CLOCKS_PER_SEC);
    const double initialHeading = getHeading();
    finalAngle = angleWrapDeg(finalAngle);
    const double initialAngleDiff = initialHeading - finalAngle;
    double error = angleDiff(getHeading(), finalAngle);
    double pastError = error;
    double firstTimeAtSetpoint = 0;
    double timeAtSetPoint = 0;
    bool atSetpoint = false;
    double integral = 0;
    double derivative = 0;
    double lastNonZeroD = 0;

    while (timeAtSetPoint < .05)
    {
        updatePos(encoderL.read(), encoderR.read());
        error = angleDiff(getHeading(), finalAngle);
        currentTime = ((std::clock() - timer) / (double)CLOCKS_PER_SEC);
        double dt = currentTime - pastTime;

        double proportional = error / fabs(initialAngleDiff);
        integral += dt * ((error + pastError) / 2.0);
        double derivative = (error - pastError) / dt;

        //FAILED METHOD OF USING LAST NON-0 VALUE OF DERIVATIVE
        // if(epsilonEquals(derivative, 0))
        // {
        //     derivative = lastNonZeroD;
        // }
        // else
        // {
        //     lastNonZeroD = derivative;
        // }

        double power = kp * proportional + ki * integral + kd * derivative;

        if (power > 0)
        {
            setMotorPowers(-power - f, power + f);
        }
        else
        {
            setMotorPowers(-power + f, power - f);
        }

        if (fabs(error) < 2)
        {
            if (!atSetpoint)
            {
                atSetpoint = true;
                firstTimeAtSetpoint = currentTime;
            }
            else //at setpoint
            {
                timeAtSetPoint = currentTime - firstTimeAtSetpoint;
            }
        }
        else //no longer at setpoint
        {
            atSetpoint = false;
            timeAtSetPoint = 0;
        }

        pastTime = currentTime;
        pastError = error;
    }
    setMotorPowers(0, 0);
}
turnHeading(90, .37, 0, .00004, .12);
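Here is a rough sketch of what I mean by pacing the loop to a fixed minimum period, assuming a sleep-based wait is acceptable on my platform; runPaced and the 10 ms default are just illustrative names/values, not part of the code above:

#include <chrono>
#include <thread>

// Illustration only: run one PID iteration per call to step(), then sleep out
// the remainder of a fixed period so that dt is always large enough for the
// encoders to register a change between iterations.
template <typename StepFn>   // StepFn returns true when the turn is finished
void runPaced(StepFn step,
              std::chrono::milliseconds period = std::chrono::milliseconds(10))
{
    auto nextTick = std::chrono::steady_clock::now();
    while (!step())                                  // one PID iteration per call
    {
        nextTick += period;
        std::this_thread::sleep_until(nextTick);     // wait out the rest of the period
    }
}

// Usage idea (pseudocode):
// runPaced([&] { /* one iteration of the loop body above */ return timeAtSetPoint >= .05; });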
We have two places in the same gen~ codebox object where phasor is used:
wander = phasor(in8/dense);
...some code later...
phas = (triangle(phasor(freq), sharp)*len-rot_x/(2*pi))%1;
I understand that phasor() produces a rising sawtooth, outputting values of 0 to 1. I understand the argument of phasor() is frequency. What I don't understand is how phasor() can output a value of 0 to 1 given frequency, when you would need frequency over time to produce a value other than 0. It would seem that phasor(frequency) should always output 0 unless somehow phasor() is keeping track of time and its own phase.
If phasor is keeping track of time/phase, how can we call phasor() twice in the same gen code box? It would seem impossible that we could have two time values. Unless...
...we have one time/phase value shared between all calls to phasor() but it is the last call to phasor() that sets the final frequency before phasor() increments its phase, which happens at the end of the code block.
Am I correct?
Edit: No, that can't be; otherwise why would you ever put a frequency into phasor twice? Under my logic it wouldn't change the output.
From my tests, phasor is indeed a sawtooth oscillator object, and each call to phasor is a unique oscillator, so calling phasor twice in the same codebox will instantiate two objects.
#include <cmath>

class Phasor
{
public:
    double getSample()
    {
        double ret = phase / TAU;                 // normalize phase to 0..1
        phase = fmod(phase + phase_inc, TAU);     // increment phase
        return ret;
    }
    void setSampleRate(double v) { sampleRate = v; calculateIncrement(); }
    void setFrequency(double v)  { frequency = v; calculateIncrement(); }
    void reset() { phase = 0.0; }
protected:
    void calculateIncrement() { phase_inc = TAU * frequency / sampleRate; }
    double sampleRate = 44100.0;
    double frequency = 1.0;
    double phase = 0.0;
    double phase_inc = 0.0;
    const double PI = 3.14159265358979323846;
    const double TAU = 2 * PI;
};
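For example (assuming the class above), two instances keep completely independent phase, which is the behaviour I observed with two phasor() calls in one codebox:

#include <iostream>

int main()
{
    Phasor a, b;                   // two independent oscillators
    a.setFrequency(1.0);           // 1 Hz
    b.setFrequency(440.0);         // 440 Hz
    for (int i = 0; i < 4; ++i)    // each call advances only its own phase
        std::cout << a.getSample() << '\t' << b.getSample() << '\n';
}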
I have a rendering function that runs hundreds of times per second, and it tells me how many milliseconds each frame takes to draw.
I made a function to calculate the current average render speed over all the frames, and it uses a std::vector to hold all the previous frame times.
However, every time I run my program the vector that stores the frame times becomes huge and takes up an increasing amount of memory, and it slows my program down by almost 10 times (in draw speed).
Averaging function (please note I am a C++ beginner):
double average(std::vector<double> input_vector)
{
    double total = 0;
    for (unsigned int i = 0; i < input_vector.size(); i++)
    {
        total += input_vector.at(i);
    }
    return (total / (double)input_vector.size());
}
Can someone help me fix this?
Thank you
Given that the arithmetic mean is sum(n) / count(n), you don't need to store every value of n in order to recompute the running mean; you only need the current sum and the current count, like so:
double runningMean(double newValue) {
    static double sum = 0;
    static double count = 0;
    count++;
    sum += newValue;
    return sum / count;
}
No vector needed at all.
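One caveat: the statics make this a single shared accumulator for the whole program. If you want independent averages (or to reset between runs), the same idea fits in a small value type; a minimal sketch, with an illustrative name:

#include <cstddef>

// Same running mean, but as an object you can create, copy and reset.
struct RunningMean {
    double sum = 0;
    std::size_t count = 0;
    double add(double v) { sum += v; ++count; return sum / count; }
    void reset() { sum = 0; count = 0; }
};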
I have a vector of structures that stores a stream of values that arrive at different intervals. The structure consists of two elements: one for the value and one recording the time when the value arrived.
struct Data {
    Time timeOfArrival;
    Time value;
} cd;
Let's say that in another thread I want to calculate the moving average of the values that arrived in the last 10 seconds (say). So one thread populates the vector and another calculates the moving average.
Here is how I would do it. For simplicity's sake I redefined Data as follows:
struct Data
{
    int timeOfArrival;
    int value;
};
One way to do what you asked is to use a circular buffer where you store just the amount of data that you need for your moving average.
enum { MOVING_AVG_SIZE = 64, };  // number of elements that you use for your moving average

std::vector<Data> buffer(MOVING_AVG_SIZE);
std::vector<Data>::iterator insertIt = buffer.begin();

// saving to circular buffer
Data newData;
++insertIt;
if (insertIt == buffer.end()) insertIt = buffer.begin();
*insertIt = newData;

// average
int sum = 0;
for (std::vector<Data>::const_iterator it = buffer.begin(); it != buffer.end(); ++it)
{
    sum += it->value;
}
float avg = sum / (float)buffer.size();
If you don't have a circular buffer and you just keep adding values to your vector, then you can just get the last number of elements needed for calculating your moving average.
// saving to the vector (no circular buffer, just keep appending)
Data newData;
buffer.push_back(newData);

// average
// this algorithm calculates the moving average even if there are not enough
// samples in the buffer to cover the last "10 s"
std::vector<Data>::const_reverse_iterator it = buffer.rbegin();
int i;
int sum = 0;
for (i = 0; i < MOVING_AVG_SIZE && it != buffer.rend(); ++i, ++it)
{
    sum += it->value;
}
float avg = sum / (float)i;
Since you have already decided that one thread will always populate the vector and another thread will compute the moving average, you can do this (a sketch follows after the steps):
Keep a structure with two elements: the running sum and the number of items currently in the window.
Write a loop which removes elements older than 10 seconds and deducts their values from the running sum. All elements in the vector are sorted on timeOfArrival, so you need not iterate the whole vector.
Add to the sum the values of new elements which have not yet been added.
You need a way to differentiate items that have already been added (used in the sum) from those that have not yet been summed. You can use a boolean for this, or put them in a new data structure internal to your class.
Keep count of the number of elements in the window and calculate the average.
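A rough sketch of that approach, assuming a 10-second window and a mutex shared by the producer and the consumer; the class and member names are illustrative, and a std::deque is used instead of a plain vector so old samples can be popped from the front cheaply:

#include <chrono>
#include <deque>
#include <mutex>

// Sketch: one thread calls add(), another calls average(). Samples older than
// the window are dropped and subtracted from the running sum, so the average
// never has to walk the whole history.
class WindowedAverage {
public:
    explicit WindowedAverage(std::chrono::seconds window) : window_(window) {}

    void add(double value) {                       // producer thread
        std::lock_guard<std::mutex> lock(mtx_);
        samples_.push_back({std::chrono::steady_clock::now(), value});
        sum_ += value;
        evict();
    }

    double average() {                             // consumer thread
        std::lock_guard<std::mutex> lock(mtx_);
        evict();
        return samples_.empty() ? 0.0 : sum_ / samples_.size();
    }

private:
    struct Sample { std::chrono::steady_clock::time_point t; double v; };

    void evict() {                                 // drop samples older than the window
        auto cutoff = std::chrono::steady_clock::now() - window_;
        while (!samples_.empty() && samples_.front().t < cutoff) {
            sum_ -= samples_.front().v;
            samples_.pop_front();
        }
    }

    std::chrono::seconds window_;
    std::deque<Sample> samples_;
    double sum_ = 0;
    std::mutex mtx_;
};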
long long delta;
auto oldTime = std::chrono::high_resolution_clock::now();
std::vector<int> list;
for (int i = 0; i < 100; i++)
{
    auto x = i * i / std::pow((double)i / 50, 2);
    list.push_back(x);
    auto now = std::chrono::high_resolution_clock::now();
    delta = std::chrono::duration_cast<std::chrono::nanoseconds>(now - oldTime).count();
    oldTime = now;
    std::cout << delta << std::endl;
}
delta is supposed to show how many nanoseconds it took to compute, insert, and print the result of a simple equation; however, some of the results are 0. How can an entire iteration take 0 ns?
There are a few errors with this.
You cannot reliably measure a single iteration at the nanosecond scale; the clock's actual tick resolution on most systems is much coarser than 1 ns, so anything shorter than one tick reports as 0. Measure in milliseconds (or at least microseconds) instead, since that IS supported by all modern computers.
Printing data to the console is a slow process. Since you are printing inside the loop, it slows each iteration way down, by a good 6-20 ms per loop, instead of being almost instantaneous. Move the print call outside the loop for the most accurate results, and just divide the total time by the number of iterations to see the time required for each loop.
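For example, something along these lines (just a sketch: it uses std::chrono::steady_clock, starts the loop at 1 so the example expression's divisor is non-zero, and prints the last value so the compiler cannot discard the work):

#include <chrono>
#include <cmath>
#include <iostream>
#include <vector>

int main()
{
    const int iterations = 100;
    std::vector<double> list;
    list.reserve(iterations);

    auto start = std::chrono::steady_clock::now();
    for (int i = 1; i <= iterations; i++)
    {
        auto x = i * i / std::pow((double)i / 50, 2);
        list.push_back(x);
    }
    auto total = std::chrono::steady_clock::now() - start;

    // Print once, outside the timed region, and report the per-iteration cost.
    auto totalNs = std::chrono::duration_cast<std::chrono::nanoseconds>(total).count();
    std::cout << "last value: " << list.back() << "\n";
    std::cout << "total: " << totalNs << " ns, per iteration: "
              << totalNs / iterations << " ns\n";
}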