Calculating shipping cost based on weight in C++

Part of a program that I'm working on implements a function that takes in the package weight as an argument and calculates the shipping cost based on that weight. The criteria for the cost/lb is as follows:
Package Weight    Cost
--------------    ----
25 lbs & under    $5.00 (flat rate)
26 - 50 lbs       above rate + $0.10/lb over 25
50+ lbs           above rate + $0.07/lb over 50
I used an if / else if / else to make the calculations, but it feels a bit repetitive:
const int TIER_2_WEIGHT = 25;
const int TIER_3_WEIGHT = 50;
const float TIER_1_RATE = 5.00;
const float TIER_2_RATE = 0.10;
const float TIER_3_RATE = 0.07;
float shipPriceF;
if (shipWeightF <= TIER_2_WEIGHT)
{
    shipPriceF = TIER_1_RATE;
}
else if (shipWeightF <= TIER_3_WEIGHT)
{
    shipPriceF = ((shipWeightF - TIER_2_WEIGHT) * TIER_2_RATE) +
                 TIER_1_RATE;
}
else
{
    shipPriceF = ((shipWeightF - TIER_3_WEIGHT) * TIER_3_RATE) +
                 ((TIER_3_WEIGHT - TIER_2_WEIGHT) * TIER_2_RATE) +
                 TIER_1_RATE;
}
return shipPriceF;
So, the question is... is this the best way to accomplish this task, or should I be looking for a different solution?

First at all, you code looks clear and ok as it is.
Of course, you could deduplicate the redundant parts of the formulas by using a cumulative approach:
float shipPriceF = TIER_1_RATE;   // to be paid anyway
if (shipWeightF > TIER_2_WEIGHT)  // add the tier 2 part if necessary
{
    shipPriceF += (std::min(shipWeightF, (float)TIER_3_WEIGHT) - TIER_2_WEIGHT) * TIER_2_RATE;
}
if (shipWeightF > TIER_3_WEIGHT)  // add the tier 3 part if really necessary
{
    shipPriceF += (shipWeightF - TIER_3_WEIGHT) * TIER_3_RATE;
}
Well, this could even be simplified further:
float shipPriceF = TIER_1_RATE
    + std::max(std::min(shipWeightF, (float)TIER_3_WEIGHT) - TIER_2_WEIGHT, 0.0f) * TIER_2_RATE
    + std::max(shipWeightF - TIER_3_WEIGHT, 0.0f) * TIER_3_RATE;
For 3 tiers, this compact formula is probably fine. If you want more flexibility, however, you could think of iterating through a vector of rates instead of using constants. This would allow for a variable number of tiers. If you're sure that the pricing is always progressive (e.g. "above rate + new unit price for what's exceeding"), then use the cumulative approach.
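For instance, a minimal sketch of that vector-driven cumulative idea could look like this (the Tier struct and the function name are mine; the boundaries and rates are just the values from the question):

#include <algorithm>
#include <limits>
#include <vector>

struct Tier { float upTo; float ratePerLb; };   // upper weight bound of the tier and its price per lb

float shipPrice(float weight)
{
    const float baseRate = 5.00f;               // flat rate covering the first 25 lbs
    const std::vector<Tier> tiers = {
        { 50.0f,                               0.10f },
        { std::numeric_limits<float>::max(),   0.07f }
    };

    float price = baseRate;
    float lower = 25.0f;                        // weight already covered so far
    for (const Tier& t : tiers)
    {
        if (weight <= lower) break;             // nothing left to charge
        price += (std::min(weight, t.upTo) - lower) * t.ratePerLb;
        lower = t.upTo;                         // the next tier starts where this one ends
    }
    return price;
}

Adding a fourth tier is then just one more row in the vector.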

I think there are a lot of nearly identical lines in the code, but no real duplicates. If you add more rates, you can easily copy the wrong constant definitions or mix up values from the wrong rate.
My code removes the if/else repetition and avoids the need to pick the correct global constant. If you add a new rate, you simply add a row to the table.
Only to give an idea what else can be done:
#include <algorithm>
#include <functional>
#include <iostream>
#include <limits>
// First we define an entry of the table. Each entry contains the limit up to which the rate is valid and
// a function which calculates the price for that part of the weight.
struct RateTableEntry
{
    double max;
    std::function<double(double, double)> func;
};
// only to shrink the table width :-)
constexpr double MAX = std::numeric_limits<double>::max();
// and we define a table with the limits and the functions which calculate the price
RateTableEntry table[] =
{
    // first is the flat rate up to 25
    { 25,  [](double,     double       ) -> double { double ret = 5.00;                         return ret; } },
    // next, up to 50, the rate of 0.10 (std::min charges only the weight up to the next limit)
    { 50,  [](double max, double weight) -> double { double ret = std::min(weight, max) * 0.10; return ret; } },
    // the same for the next rate; std::min is not needed because it is the last entry
    { MAX, [](double,     double weight) -> double { double ret = weight * 0.07;                return ret; } }
};
double CalcRate(double weight)
{
    std::cout << "Price for " << weight;
    double price = 0;
    double offset = 0;
    for (auto& step : table)
    {
        // call each step until there is no weight left that must be charged for
        price += step.func(step.max - offset, weight);
        // reduce the weight by the amount that has already been charged for
        weight -= step.max - offset;
        // remember the limit handled so far; this keeps the table readable,
        // because it stores absolute limits rather than per-step amounts
        offset = step.max;
        if (weight <= 0) break;   // stop if all the weight has been paid for
    }
    std::cout << " is " << price << std::endl;
    return price;
}
int main()
{
    CalcRate( 10 );
    CalcRate( 26 );
    CalcRate( 50 );
    CalcRate( 51 );
    CalcRate( 52 );
    CalcRate( 53 );
}
If C++11 is not available, you can also use normal functions and function pointers instead of lambdas and std::function.
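Just to sketch what that might look like (my own function and type names, not part of the answer above; the CalcRate loop stays the same apart from iterating over the array with an index):

#include <limits>

// one plain function per tier, with the same (max, weight) signature as the lambdas above
double flatRate (double /*max*/, double /*weight*/) { return 5.00; }
double tier2Rate(double max,     double weight)     { return (weight < max ? weight : max) * 0.10; }
double tier3Rate(double /*max*/, double weight)     { return weight * 0.07; }

struct RateTableEntry98
{
    double max;
    double (*func)(double, double);   // plain function pointer instead of std::function
};

const RateTableEntry98 table98[] =
{
    { 25.0,                                flatRate  },
    { 50.0,                                tier2Rate },
    { std::numeric_limits<double>::max(),  tier3Rate }
};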

Related

Difference between logspace generators

Looking through ncmpcpp's spectrum visualizer code, I found a method that generates a "logspace," a vector used to group frequencies into log-scaled bins after applying an FFT.
Here is the (isolated) code:
// Lowest frequency in display
const double HZ_MIN = 20;
// Highest frequency in display
const double HZ_MAX = 20000;
// Number of bars in spectrum
const size_t width = 100;
std::vector<double> dft_logspace;
void GenLogspace() {
    // Calculate number of extra bins needed between 0 HZ and HZ_MIN
    const size_t left_bins = (log10(HZ_MIN) - width*log10(HZ_MIN)) / (log10(HZ_MIN) - log10(HZ_MAX));
    // Generate logspaced frequencies
    dft_logspace.resize(width);
    const double log_scale = log10(HZ_MAX) / (left_bins + dft_logspace.size() - 1);
    for (size_t i = left_bins; i < dft_logspace.size() + left_bins; ++i) {
        dft_logspace[i - left_bins] = pow(10, i * log_scale);
    }
}
I spent a while trying to understand how this works... and it seems to be an awfully complicated way to get the same result as the following function, which works the way you'd expect:
Given limits a and b so that a < b, divide the interval [log10(a), log10(b)] into equal subintervals and exponential-map your way back.
// a = HZ_MIN, and
// b = HZ_MAX
void my_GenLogspace() {
    dft_logspace.resize(width);
    // Generate log-scaled frequency bins between HZ_MAX and HZ_MIN
    for (size_t i = 0; i < width; i++) {
        dft_logspace[i] = HZ_MIN * pow((HZ_MAX/HZ_MIN), ((double) i/(width-1)));
    }
}
I'm fairly sure that these are mathematically identical.
Are they? Is there any reason to use the original method over my rewrite? Does the author of the commit that introduced this code know something I don't?
Edit: (width-1), per Bob__'s suggestion
Got it. If anyone happens to need this later...
// Generate log-scaled vector of frequencies from HZ_MIN to HZ_MAX
void GenLogspace() {
    // Prepare vector
    dft_logspace.resize(width);
    // Calculate number of extra bins needed between 0 HZ and HZ_MIN
    // In logspace, divide the region between MAX and MIN into
    // w - 1 equal segments (by fencepost, this gives us w separators)
    const double d = (
        (log10(HZ_MAX) - log10(HZ_MIN))
        /
        (width - 1)
    );
    // Count how many of these segments will fit between
    // 0 and MIN (note that we're still in logspace).
    // This is how many log-scaled intervals are outside
    // our desired range of frequencies.
    const size_t skip_bins = log10(HZ_MIN) / d;
    // Calculate log scale size.
    // We can't use the value of d here, because d is "anchored" to both MIN and MAX.
    // The last bin should be equal to MAX, but there may not be a bin that is equal to MIN.
    //
    // So, we re-partition our logspace:
    // Divide the distance between 0 and MAX into equal partitions.
    const double log_scale = log10(HZ_MAX) / (skip_bins + width - 1);
    // Exponential-map bins out of logspace, skipping those that are outside our range.
    // Note that the first (skipped) bin is ALWAYS 1, since 10^0 = 1.
    // The last bin ALWAYS equals MAX.
    for (size_t i = skip_bins; i < width + skip_bins; ++i) {
        dft_logspace[i - skip_bins] = pow(10, i * log_scale);
    }
}
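As a quick sanity check (my own little test harness, not part of ncmpcpp), printing the endpoints shows the last bin landing exactly on HZ_MAX, while the first is only approximately HZ_MIN, as the comments above point out:

#include <cstdio>

int main() {
    GenLogspace();
    std::printf("first bin: %f Hz\n", dft_logspace.front());   // close to HZ_MIN, not necessarily equal
    std::printf("last  bin: %f Hz\n", dft_logspace.back());    // exactly HZ_MAX
}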

Calculate p value of a t - statistic using the student_t_distribution

I wanted to calculate p-values of a t-statistic for a two-tailed test with a 5% level of significance, and I wanted to do this with the standard library. I was wondering if this is possible using the student_t_distribution from the <random> header.
My code currently is as following
#include <iostream>

int main() {
    double t_stat = 0.0267;   // t-statistic
    double alpha_los = 0.05;  // level of significance
    double dof = 30;          // degrees of freedom
    // calculate P > |t| and compare with alpha_los
    return 0;
}
Thank you
The <random> header just provides you with the ability to get random numbers from different distributions.
If you are able to use boost you can do the following:
#include <boost/math/distributions/students_t.hpp>

int main() {
    double t_stat = 0.0267;   // t-statistic
    double alpha_los = 0.05;  // level of significance
    double dof = 30;          // degrees of freedom

    boost::math::students_t dist(dof);
    double P_x_greater_t = 1.0 - boost::math::cdf(dist, t_stat);
    double P_x_smaller_negative_t = boost::math::cdf(dist, -t_stat);
    if (P_x_greater_t + P_x_smaller_negative_t < alpha_los) {
    } else {
    }
}
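To actually report something, note that the t-distribution is symmetric, so the two-tailed p-value is just twice the upper-tail probability. One possible completion of the example above (the printed messages are my own choice):

#include <boost/math/distributions/students_t.hpp>
#include <cmath>
#include <iostream>

int main() {
    double t_stat = 0.0267;   // t-statistic
    double alpha_los = 0.05;  // level of significance
    double dof = 30;          // degrees of freedom

    boost::math::students_t dist(dof);
    // P(|X| > |t|) = 2 * P(X > |t|); cdf(complement(...)) is Boost's way to get the upper tail
    double p_two_tailed = 2.0 * boost::math::cdf(boost::math::complement(dist, std::fabs(t_stat)));

    std::cout << "two-tailed p-value: " << p_two_tailed << "\n";
    if (p_two_tailed < alpha_los)
        std::cout << "reject H0 at the 5% level\n";
    else
        std::cout << "fail to reject H0 at the 5% level\n";
}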

How to measure the rate of rise of a variable

I am reading in a temperature value every 1 second/minute (this rate is not crucial). I want to measure this temperature so that if it begins to rise rapidly above a certain threshold I perform an action.
If the temperature rises above 30 degrees ( at any rate ) I increase the fan speed.
I think I must do something like set old temperature to new temp and then each time it loops set old temp to the current temp of the engine. But I am not sure if I need to use arrays for the engine temp or not.
Of course you can store just one old sample and then check the difference, as in:
bool isHot(int sample) {
    static int oldSample = sample;   // previous reading (initialised on the first call)
    int rise = sample - oldSample;   // change since the previous reading
    oldSample = sample;              // remember this reading for the next call
    // threshold: maximum allowed rise per sample, defined elsewhere
    return (sample > 30) || (rise > threshold);
}
It's OK from the C point of view, but very bad from a metrology point of view. You should consider some conditioning of your signal (in this case the temperature) to smooth out any spikes.
Of course you can add signal conditioning later on. For an (easy) example, look at the Simple Moving Average: https://en.wikipedia.org/wiki/Moving_average
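A minimal sketch of such a moving average (the window size and the variable names below are my own choice):

// simple moving average over the last N samples, kept in a circular buffer
const int N = 8;
int samples[N] = {0};
int pos = 0;
long total = 0;   // running sum of the buffer contents
int count = 0;    // number of valid samples so far (ramps up to N)

int smoothTemperature(int newSample) {
    total -= samples[pos];        // drop the oldest sample from the sum
    samples[pos] = newSample;     // overwrite it with the new reading
    total += newSample;
    pos = (pos + 1) % N;
    if (count < N) ++count;
    return (int)(total / count);  // average of the samples seen so far
}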
If you want to control the fan speed the "right way", you should consider learning a bit about PID controllers: https://en.wikipedia.org/wiki/PID_controller
Simple discrete PID:
PidController.h:
class PidController
{
public:
    PidController();
    double sim(double y);
    void UpdateParams(double kp, double ki, double kd);
    void setSP(double setPoint) { m_setPoint = setPoint; }   // set current value of r(t)
private:
    double m_setPoint;   // current value of r(t)
    double m_kp;
    double m_ki;
    double m_kd;
    double m_outPrev;
    double m_errPrev[2];
};
PidController.cpp
#include "PidController.h"
PidController::PidController():ControllerObject()
{
m_errPrev[0] = 0;
m_errPrev[1] = 0;
m_outPrev = 0;
}
void PidController::UpdateParams(double kp, double ki, double kd)
{
m_kp = kp;
m_ki = ki;
m_kd = kd;
}
//calculates PID output
//y - sample of y(t)
//returns sample of u(t)
double PidController::sim(double y)
{
double out; //u(t) sample
double e = m_setPoint - y; //error
out = m_outPrev + m_kp * (e - m_errPrev[0] + m_kd * (e - 2 * m_errPrev[0] + m_errPrev[1]) + m_ki * e);
m_outPrev = out; //store previous output
//store previous errors
m_errPrev[1] = m_errPrev[0];
m_errPrev[0] = e;
return out;
}
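A rough sketch of how it could be wired to the fan (the gains, the sensor/actuator functions and the output mapping are placeholders I made up; they have to be tuned and implemented for the real system):

#include <algorithm>
#include "PidController.h"

double readTemperature();    // hypothetical: read the sensor, provided elsewhere
void setFanSpeed(int pwm);   // hypothetical: drive the fan, provided elsewhere
void waitOneSecond();        // hypothetical: sleep for the sample period

int main()
{
    PidController pid;
    pid.UpdateParams(2.0, 0.05, 0.1);   // kp, ki, kd - placeholder gains
    pid.setSP(30.0);                    // target temperature in degrees

    while (true)
    {
        double temperature = readTemperature();
        double u = pid.sim(temperature);
        // the error is setpoint - temperature, so u goes negative as it gets too hot;
        // map -u into a 0..255 fan duty cycle
        int fanSpeed = (int)std::min(255.0, std::max(0.0, -u));
        setFanSpeed(fanSpeed);
        waitOneSecond();
    }
}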

Fast percentile in C++

My program calculates a Monte Carlo simulation for the value-at-risk metric. To simplify as much as possible, I have:
1/ simulated daily cashflows
2/ to get a sample of a possible 1-year cashflow,
I need to draw 365 random daily cashflows and sum them
Hence, the daily cashflows are an empirically given distribution function to be sampled 365 times. For this, I
1/ sort the daily cashflows into an array called this->distro
2/ calculate 365 percentiles corresponding to random probabilities
I need to do this simulation of a yearly cashflow, say, 10K times to get a population of simulated yearly cashflows to work with. Having the distribution function of daily cashflows prepared, I do the sampling like...
for ( unsigned int idxSim = 0; idxSim < _g.xSimulationCount; idxSim++ )
{
    generatedVal = 0.0;
    for ( register unsigned int idxDay = 0; idxDay < 365; idxDay ++ )
    {
        prob = (FLT_TYPE)fastrand();      // prob [0,1]
        dIdx = prob * dMaxDistroIndex;    // scale prob to distro function size
                                          // to get an index into distro array
        _floor = ((FLT_TYPE)(long)dIdx);  // fast version of floor
        _ceil = _floor + 1.0f;            // 'fast' ceil:)
        iIdx1 = (unsigned int)( _floor );
        iIdx2 = iIdx1 + 1;
        // interpolation per se
        generatedVal += this->distro[iIdx1]*(_ceil - dIdx );
        generatedVal += this->distro[iIdx2]*(dIdx - _floor);
    }
    this->yearlyCashflows[idxSim] = generatedVal ;
}
The code inside both for loops does linear interpolation. If, say, USD 1000 corresponds to prob = 0.01 and USD 10000 corresponds to prob = 0.1, then if I don't have an empirical number for p = 0.05 I want to get USD 5000 by interpolation.
The question: this code runs correctly, but the profiler says that the program spends about 60% of its runtime on the interpolation per se. So my question is, how can I make this task faster? Sample runtimes reported by VTune for specific lines are as follows:
prob = (FLT_TYPE)fastrand(); // 0.727s
dIdx = prob * dMaxDistroIndex; // 1.435s
_floor = ((FLT_TYPE)(long)dIdx); // 0.718s
_ceil = _floor + 1.0f; // -
iIdx1 = (unsigned int)( _floor ); // 4.949s
iIdx2 = iIdx1 + 1; // -
// interpolation per se
generatedVal += this->distro[iIdx1]*(_ceil - dIdx ); // -
generatedVal += this->distro[iIdx2]*(dIdx - _floor); // 12.704s
Dashes mean the profiler reports no runtimes for those lines.
Any hint will be greatly appreciated.
Daniel
EDIT:
Both c.fogelklou and MSalters have pointed out great enhancements. The best code in line with what c.fogelklou said is
converter = distroDimension / (FLT_TYPE)(RAND_MAX + 1)
for ( unsigned int idxSim = 0; idxSim < _g.xSimulationCount; idxSim++ )
{
    generatedVal = 0.0;
    for ( register unsigned int idxDay = 0; idxDay < 365; idxDay ++ )
    {
        dIdx = (FLT_TYPE)fastrand() * converter;
        iIdx1 = (unsigned long)dIdx;
        _floor = (FLT_TYPE)iIdx1;
        generatedVal += this->distro[iIdx1] + this->diffs[iIdx1] * (dIdx - _floor);
    }
}
and the best I have along MSalters' lines is
normalizer = 1.0/(FLT_TYPE)(RAND_MAX + 1);
for ( unsigned int idxSim = 0; idxSim < _g.xSimulationCount; idxSim++ )
{
    generatedVal = 0.0;
    for ( register unsigned int idxDay = 0; idxDay < 365; idxDay ++ )
    {
        dIdx = (FLT_TYPE)fastrand() * normalizer;
        iIdx1 = fastrand() % _g.xDayCount;
        generatedVal += this->distro[iIdx1];
        generatedVal += this->diffs[iIdx1] * dIdx;
    }
}
The second version is approx. 30 percent faster. Now, of 95 s of total runtime, the last line consumes 68 s. The second-to-last line consumes only 3.2 s, hence the double*double multiplication must be the devil. I thought of SSE - saving the last three operands into an array and then carrying out a vector multiplication of this->diffs[i]*dIdx[i] and adding this to this->distro[i] - but this code ran 50 percent slower. Hence, I think I hit the wall.
Many thanks to all.
D.
This is a proposal for a small optimization, removing the need for ceil, two casts, and one of the multiplies. If you are running on a fixed point processor, that would explain why the multiplies and casts between float and int are taking so long. In that case, try using fixed point optimizations or turning on floating point in your compiler if the CPU supports it!
for ( unsigned int idxSim = 0; idxSim < _g.xSimulationCount; idxSim++ )
{
    generatedVal = 0.0;
    for ( register unsigned int idxDay = 0; idxDay < 365; idxDay ++ )
    {
        prob = (FLT_TYPE)fastrand();    // prob [0,1]
        dIdx = prob * dMaxDistroIndex;  // scale prob to distro function size
                                        // to get an index into distro array
        iIdx1 = (long)dIdx;
        _floor = (FLT_TYPE)iIdx1;       // fast version of floor
        iIdx2 = iIdx1 + 1;
        // interpolation per se
        {
            const FLT_TYPE diff = this->distro[iIdx2] - this->distro[iIdx1];
            const FLT_TYPE interp = this->distro[iIdx1] + diff * (dIdx - _floor);
            generatedVal += interp;
        }
    }
    this->yearlyCashflows[idxSim] = generatedVal ;
}
I would recommend fixing fastrand. Floating-point code isn't the fastest in the world, but what is especially slow is the switching between floating-point and integer code. Since you need an integer index, use an integer random function.
It may even be advantageous to pre-generate all 365 random values in a loop. Since you need only log2(dMaxDistroIndex) bits of randomness per value, you may be able to reduce the number of RNG calls.
You would subsequently pick a random number between 0 and 1 for the interpolation fraction.
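A minimal sketch of that combination, reusing the names from the question (and assuming fastrand() returns a non-negative int no larger than RAND_MAX) - essentially what the edit in the question already arrived at:

for ( unsigned int idxSim = 0; idxSim < _g.xSimulationCount; idxSim++ )
{
    generatedVal = 0.0;
    for ( unsigned int idxDay = 0; idxDay < 365; idxDay++ )
    {
        // integer random index: no float-to-int conversion on the index path
        iIdx1 = (unsigned int)fastrand() % _g.xDayCount;
        // separate random fraction in [0,1) used only for the interpolation
        dIdx = (FLT_TYPE)fastrand() / ((FLT_TYPE)RAND_MAX + 1.0f);
        generatedVal += this->distro[iIdx1] + this->diffs[iIdx1] * dIdx;
    }
    this->yearlyCashflows[idxSim] = generatedVal;
}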

Arduino mega queue

I wrote this simple code which reads a distance from the Sharp infrared sensor and presents the average distance in cm over serial.
When I run this code on the Arduino Mega board, the Arduino just blinks the LED on pin 13 and the program does nothing. Where is the bug in this code?
#include <QueueList.h>

const int ANALOG_SHARP = 0;  // Set pin data from sharp.
QueueList<float> queuea;
float cm;
float qu1;
float qu2;
float qu3;
float qu4;
float qu5;

void setup() {
    Serial.begin(9600);
}

void loop() {
    cm = read_gp2d12_range(ANALOG_SHARP);  // Convert to cm (unit).
    queuea.push(cm);  // Add item to queue; when I add only this line the Arduino crashes.
    if ( 5 <= queuea.peek()) {
        Serial.println(average());
    }
}

float read_gp2d12_range(byte pin) {  // Function converting to cm (unit).
    int tmp;
    tmp = analogRead(pin);
    if (tmp < 3)
        return -1;  // Invalid value.
    return (6787.0 /((float)tmp - 3.0)) - 4.0;
}

float average() {  // Calculate average length
    qu1 += queuea.pop();
    qu2 += queuea.pop();
    qu3 += queuea.pop();
    qu4 += queuea.pop();
    qu5 += queuea.pop();
    float aver = ((qu1+qu2+qu3+qu4+qu5)/5);
    return aver;
}
I agree with the peek() -> count() error listed by vhallac. But I'll also point out that you should consider averaging by powers of 2 unless there is a strong case to do otherwise.
The reason is that on microcontrollers, division is slow. By averaging over a power of 2 (2,4,8,16,etc.) you can simply calculate the sum and then bitshift it.
To calculate the average of 2: (v1 + v2) >> 1
To calculate the average of 4: (v1 + v2 + v3 + v4) >> 2
To calculate the average of n values (where n is a power of 2) just right bitshift the sum right by [log2(n)].
As long as the datatype for your sum variable is big enough and won't overflow, this is much easier and much faster.
Note: this won't work for floats in general. In fact, microcontrollers aren't optimized for floats. You should consider converting from int (which I'm assuming your ADC is reading) to float at the end, after the averaging, rather than before.
By converting from int to float and then averaging floats, you lose more precision than by averaging ints and then converting the result to a float.
Other:
You're using the += operator without initializing the variables (qu1, qu2, etc.) -- it's good practice to initialize them if you're going to use += but it looks as if = would work fine.
For floats, I'd have written the average function as:
float average(QueueList<float> & q, int n)
{
    float sum = 0;
    for (int i = 0; i < n; i++)
    {
        sum += q.pop();
    }
    return (sum / (float) n);
}
And call it like this: average(queuea, 5);
You could use this to average any number of sensor readings, and later use the same code to average floats in a completely different QueueList. Passing the number of readings to average as a parameter will really come in handy if you ever need to tweak it.
TL;DR:
Here's how I would have done it:
#include <QueueList.h>

const int ANALOG_SHARP = 0;  // set pin data from sharp
const int AvgPower = 2;      // 1 for 2 readings, 2 for 4 readings, 3 for 8, etc.
const int AvgCount = pow(2, AvgPower);
QueueList<int> SensorReadings;

void setup() {
    Serial.begin(9600);
}

void loop()
{
    int reading = analogRead(ANALOG_SHARP);
    SensorReadings.push(reading);
    if (SensorReadings.count() > AvgCount)
    {
        int avg = average2(SensorReadings, AvgPower);
        Serial.println(gp2d12_to_cm(avg));
    }
}

float gp2d12_to_cm(int reading)
{
    if (reading <= 3) { return -1; }
    return ((6787.0 / ((float)reading - 3.0)) - 4.0);
}

int average2(QueueList<int> & q, int AvgPower)
{
    int AvgCount = pow(2, AvgPower);
    long sum = 0;
    for (int i = 0; i < AvgCount; i++)
    {
        sum += q.pop();
    }
    return (sum >> AvgPower);
}
You are using queuea.peek() to obtain the count, but peek() only returns an element of the queue, not the number of elements. You should use queuea.count() instead.
Also you might consider changing the condition tmp < 3 to tmp <= 3. If tmp is 3, you divide by zero.
Great improvement, jedwards. However, the first question I have is: why use a QueueList instead of an int array?
As an example I would do the following:
int average(int analog_reading)
{
    #define NUM_OF_AVG 5
    static int readings[NUM_OF_AVG];
    static int next_position;
    int sum = 0;

    if (++next_position >= NUM_OF_AVG)
    {
        next_position = 0;
    }
    readings[next_position] = analog_reading;

    for (int i = 0; i < NUM_OF_AVG; i++)
    {
        sum += readings[i];
    }
    return sum / NUM_OF_AVG;
}
Now I compute a new rolling average with every reading, and it eliminates all the issues related to dynamic memory allocation (memory fragmentation, no available memory, memory leaks) in an embedded device.
I appreciate and understand the use of shifting for a division by 2, 4 or 8; however, I would stay away from that technique for two reasons.
First, I think readability and maintainability of the source code are more important than saving a little bit of time with a shift instead of a divide, unless you can test and verify that the divide is a bottleneck.
Second, I believe most current optimizing compilers will emit a shift if possible; I know GCC does.
I will leave refactoring out the for loop for the next guy.