Generating a random number from a lognormal distribution - C++

This shouldn't be too difficult, but for some reason sampling random numbers from a distribution is really tripping me up.
I know the best options for generating random numbers from a distribution are the Boost or C++11 libraries. Unfortunately, I can't get this code to compile with c++0x, and in any case I'd prefer to keep compatibility with a server I'm also using, which runs gcc 4.1.2 - ancient, I know, and it doesn't support the newer C++. Frustrations. And as always, a time crunch means I need to do the best I can with a quick fix.
Taking the exponent of a random number generated with the Box-Muller equations is my next option, but I'm not getting a lognormal distribution with the parameters I specify, and I don't understand why it is not working.
Any help would be hugely appreciated!
void testRNG() {
    int mean = 5000;
    int std = 50;
    ofstream out("./Output/normal_samples.out");

    RunningStats normal;
    for (int i = 0; i < 2000; ++i) {
        double sample = randomSample(mean, std, NORMAL); // Box-Muller transformation returns a number from a normal distribution
        out << sample << endl;
        normal.Push(sample); // keep a running average of the sampled numbers
    }
    cout << "Normal Mean = " << normal.Mean() << endl;
    cout << "Normal Std = " << normal.StandardDeviation() << endl;

    RunningStats lognormal;
    for (int i = 0; i < 2000; ++i) {
        double sample = randomSample(mean, std, LOGNORMAL);
        out << sample << endl;
        lognormal.Push(sample);
    }
    cout << "Lognormal Mean = " << lognormal.Mean() << endl;
    cout << "Lognormal Std = " << lognormal.StandardDeviation() << endl;
}
The sampling functions, which I didn't write, first go through a switch statement in randomSample() and then call:
EDIT - I noticed that it actually does call a function to compute the lognormal parameters. I've added it in.
double randNormal(double mean, double stdev) {
    static long numSamples = 0;
    static double Z2;

    if ((numSamples++ & 1) == 0) {
        double Z1, U1, U2;
        do { U1 = randUniform(0, 1); } while (U1 <= 0 || U1 >= 1);
        do { U2 = randUniform(0, 1); } while (U1 <= 0 || U1 >= 1);
        Z1 = sqrt(-2 * log(U1)) * cos(6.28318531 * U2);
        Z2 = sqrt(-2 * log(U1)) * sin(6.28318531 * U2);
        return mean + stdev * Z1;
    } else {
        return mean + stdev * Z2;
    }
}

double randLognormal(double mu, double sigma) {
    return exp(randNormal(mu, sigma));
}

double randLognormalMeanStdev(double mean, double stdev) {
    return randLognormal(log(mean) - 0.5 * log(1 + (stdev * stdev) / (mean * mean)),
                         log(1 + (stdev * stdev) / (mean * mean)));
}
So the output I get is:
Normal Mean = 4998.72 //I said 5000
Normal Std = 49.7054 //I said 50
Lognormal Mean = 4999.74
Lognormal Std = 0.492766 //this is the part that is not working
What am I missing to get the lognormal std to be what I want?
Other options would also be appreciated - maybe there is something else I am missing.
Thanks in advance!
Edit - I realized I should have made it clear that I need to sample from a lognormal distribution.
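For what it's worth, the usual conversion from a target lognormal mean m and standard deviation s to the parameters of the underlying normal is sigma^2 = log(1 + s^2/m^2) and mu = log(m) - 0.5 * sigma^2. It looks like the posted randLognormalMeanStdev passes log(1 + s^2/m^2), i.e. a variance, as the second argument, while randNormal expects a standard deviation; with mean 5000 and stdev 50 that gives an underlying sigma of about 1e-4 instead of 0.01, and a lognormal std of roughly 5000 * 1e-4, about 0.5, which matches the output above. A minimal sketch of the conversion with the square root applied (the function name is illustrative, and it reuses randLognormal from the question):

double randLognormalMeanStdevFixed(double mean, double stdev) {
    // variance of the underlying normal
    double sigma2 = log(1 + (stdev * stdev) / (mean * mean));
    // mean of the underlying normal
    double mu = log(mean) - 0.5 * sigma2;
    // pass the standard deviation, not the variance
    return randLognormal(mu, sqrt(sigma2));
}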

Related

C++ boost library to generate negative binomial random variables

I'm new to C++ and I'm using the boost library to generate random variables. I want to generate random variables from a negative binomial distribution.
The first parameter of boost::random::negative_binomial_distribution<int> freq_nb(r, p); has to be an integer. I want to extend that to real values, so I would like to use a Poisson-gamma mixture, but I'm failing to get it to work.
Here's an excerpt from my code:
int nr_sim = 1000000;
double mean = 2.0;
double variance = 15.0;
double r = mean * mean / (variance - mean);
double p = mean / variance;
double beta = (1 - p) / p;
typedef boost::mt19937 RNGType;
RNGType rng(5);
boost::random::gamma_distribution<double> my_gamma(r, beta);
boost::random::poisson_distribution<int> my_poi(my_gamma(rng));
int simulated_mean = 0;
for (int i = 0; i < nr_sim; i++) {
    simulated_mean += my_poi(rng);
}
double my_result = (double)simulated_mean / (double)nr_sim;
With my_result == 0.5 there is definitely something wrong. Is it my_poi(my_gamma(rng))? If so, what is the correct way to solve that problem?
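One likely issue is that my_poi(my_gamma(rng)) fixes the Poisson mean to a single gamma draw made once, before the loop, so all one million samples come from one Poisson distribution rather than from the Poisson-gamma mixture. A sketch of redrawing the gamma rate for every Poisson sample (same Boost distributions as the posted code; treat this as an illustration rather than a drop-in fix):

#include <boost/random/mersenne_twister.hpp>
#include <boost/random/gamma_distribution.hpp>
#include <boost/random/poisson_distribution.hpp>
#include <iostream>

int main() {
    const int nr_sim = 1000000;
    const double mean = 2.0, variance = 15.0;
    const double r = mean * mean / (variance - mean);  // gamma shape
    const double p = mean / variance;
    const double beta = (1 - p) / p;                   // gamma scale

    boost::mt19937 rng(5);
    boost::random::gamma_distribution<double> my_gamma(r, beta);

    long long total = 0;
    for (int i = 0; i < nr_sim; ++i) {
        // redraw the Poisson rate from the gamma for each sample
        boost::random::poisson_distribution<int> my_poi(my_gamma(rng));
        total += my_poi(rng);
    }
    std::cout << static_cast<double>(total) / nr_sim << std::endl;  // should be close to 2.0
    return 0;
}

With the rate redrawn inside the loop, the sample mean should come out near the requested mean of 2.0.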

Applying a peak detection algorithm to realtime data

I have a function to detect the peak of real-time data. The algorithm is mentioned in this thread and looks like this:
std::vector<int> smoothedZScore(std::vector<float> input)
{
    // lag 5 for the smoothing functions
    int lag = 5;
    // 3.5 standard deviations for signal
    float threshold = 3.5;
    // between 0 and 1, where 1 is normal influence, 0.5 is half
    float influence = .5;

    if (input.size() <= lag + 2)
    {
        std::vector<int> emptyVec;
        return emptyVec;
    }

    // Initialise variables
    std::vector<int> signal(input.size(), 0.0);
    std::vector<float> filteredY(input.size(), 0.0);
    std::vector<float> avgFilter(input.size(), 0.0);
    std::vector<float> stdFilter(input.size(), 0.0);

    std::vector<float> subVecStart(input.begin(), input.begin() + lag);
    double sum = std::accumulate(std::begin(subVecStart), std::end(subVecStart), 0.0);
    double mean = sum / subVecStart.size();

    double accum = 0.0;
    std::for_each(std::begin(subVecStart), std::end(subVecStart), [&](const double d) {
        accum += (d - mean) * (d - mean);
    });
    double stdev = sqrt(accum / (subVecStart.size() - 1));

    // avgFilter[lag] = mean(subVecStart);
    avgFilter[lag] = mean;
    // stdFilter[lag] = stdDev(subVecStart);
    stdFilter[lag] = stdev;

    for (size_t i = lag + 1; i < input.size(); i++)
    {
        if (std::abs(input[i] - avgFilter[i - 1]) > threshold * stdFilter[i - 1])
        {
            if (input[i] > avgFilter[i - 1])
            {
                signal[i] = 1;  // # Positive signal
            }
            else
            {
                signal[i] = -1; // # Negative signal
            }
            // Make influence lower
            filteredY[i] = influence * input[i] + (1 - influence) * filteredY[i - 1];
        }
        else
        {
            signal[i] = 0;      // # No signal
            filteredY[i] = input[i];
        }
        // Adjust the filters
        std::vector<float> subVec(filteredY.begin() + i - lag, filteredY.begin() + i);
        // avgFilter[i] = mean(subVec);
        // stdFilter[i] = stdDev(subVec);
    }
    return signal;
}
In my code, I'm reading realtime 3-axis accelerometer values from an IMU sensor and displaying them as a graph. I need to detect the peaks of the signal using the above algorithm, so I added the function to my code.
Let's say the realtime values are the following:
double x = sample->acceleration_g[0];
double y = sample->acceleration_g[1];
double z = sample->acceleration_g[2];
How do I pass these values to the above function and detect the peaks?
I tried calling this:
smoothedZScore(x)
but it gives me an error:
settings.cpp:230:40: error: no matching function for call to 'smoothedZScore'
settings.cpp:92:18: note: candidate function not viable: no known conversion from 'double' to 'std::vector<float>' for 1st argument
EDIT
The algorithm needs a minimum of 7 samples to feed in, so I guess I need to store my realtime data in a buffer.
But I'm having difficulty understanding how to store the samples in a buffer and apply the peak detection algorithm to them.
Can you show me a possible solution to this?
You will need to rewrite the algorithm. Your problem isn't just a realtime problem; you also need a causal solution, and the function you have is not causal.
Practically speaking, you will need a class, and that class will need to incrementally calculate the standard deviation.
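As a concrete illustration of that last point, here is a minimal sketch of a running-statistics class using Welford's algorithm (the class and member names are illustrative, not from the question's code). Samples are pushed one at a time, so it can be fed realtime accelerometer values as they arrive:

#include <cmath>
#include <cstddef>

class RunningStat {
public:
    void push(double x) {
        ++n_;
        double delta = x - mean_;
        mean_ += delta / n_;
        m2_ += delta * (x - mean_);  // uses the updated mean
    }
    std::size_t count() const { return n_; }
    double mean() const { return mean_; }
    double variance() const { return n_ > 1 ? m2_ / (n_ - 1) : 0.0; }
    double stddev() const { return std::sqrt(variance()); }
private:
    std::size_t n_ = 0;
    double mean_ = 0.0;
    double m2_ = 0.0;
};

For the z-score filter above you would additionally keep the last lag filtered values in a small buffer (for example a std::deque) so the mean and standard deviation reflect only the most recent window.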

Getting "-nan(ind)" when trying to generate random variates

I am trying to generate random variates by generating two standard normal variates r1 and r2 using polar coordinates, along with a mean and sigma value. However, when I run my code, I keep getting "-nan(ind)" as my output.
What am I doing wrong here? The code is as follows:
static double saveNormal;
static int NumNormals = 0;
static double PI = 3.1415927;

double fRand(double fMin, double fMax)
{
    double f = (double)rand() / RAND_MAX;
    return fMin + f * (fMax - fMin);
}

static double normal(double r, double mean, double sigma) {
    double returnNormal;
    if (NumNormals == 0) {
        // to get next double value
        double r1 = fRand(0, 20);
        double r2 = fRand(0, 20);
        returnNormal = sqrt(-2 * log(r1)) * cos(2 * PI * r2);
        saveNormal   = sqrt(-2 * log(r1)) * sin(2 * PI * r2);
    }
    else {
        NumNormals = 0;
        returnNormal = saveNormal;
    }
    return returnNormal * sigma + mean;
}
So, you're using the Box–Muller method to pseudo randomly sample a normal random variate. For this transform to work, r1 and r2 must be uniformly distributed independent variates in [0,1].
Instead, your r1/r2 have support on [0,20], which yields a negative sqrt argument whenever r1 > 1; this gives you NaNs. Replace them with
double r1 = fRand(0, 1);
double r2 = fRand(0, 1);
You should also use the C++11 <random> facilities for better pseudorandom number generation; as it stands, your fRand has poor quality due to the rand()-to-double conversion and possible spurious correlations between adjacent calls. Furthermore, your function lacks basic error checking, depends on global variables, and is inherently thread-unsafe.
FYI, this is what a C++11 version might look like:
#include <random>
#include <iostream>

int main()
{
    auto engine  = std::default_random_engine{ std::random_device{}() };
    auto variate = std::normal_distribution<>{ /*mean*/ 0., /*stddev*/ 1. };

    while (true) // a lot of normal samples ...
        std::cout << variate(engine) << std::endl;
}
r1 can be zero, making log(r1) undefined.
Furthermore, don't use rand() except when you need your numbers to look random to a human in a hurry. Use <random> instead.
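If you do keep a hand-rolled Box-Muller, here is a minimal sketch of drawing the uniforms with <random> and rejecting a zero u1 (the function name and the engine parameter are illustrative):

#include <cmath>
#include <random>

double normalSample(double mean, double sigma, std::mt19937& gen) {
    const double pi = 3.14159265358979323846;
    std::uniform_real_distribution<double> unif(0.0, 1.0);  // yields values in [0, 1)
    double u1;
    do { u1 = unif(gen); } while (u1 <= 0.0);  // log(0) is undefined, so reject 0
    double u2 = unif(gen);
    double z = std::sqrt(-2.0 * std::log(u1)) * std::cos(2.0 * pi * u2);
    return mean + sigma * z;
}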

Ineffective "Peel/Remainder" Loop in my code

I have this function:
bool interpolate(const Mat &im, float ofsx, float ofsy, float a11, float a12, float a21, float a22, Mat &res)
{
    bool ret = false;
    // input size (-1 for the safe bilinear interpolation)
    const int width  = im.cols - 1;
    const int height = im.rows - 1;
    // output size
    const int halfWidth  = res.cols >> 1;
    const int halfHeight = res.rows >> 1;

    float *out = res.ptr<float>(0);
    const float *imptr = im.ptr<float>(0);

    for (int j = -halfHeight; j <= halfHeight; ++j)
    {
        const float rx = ofsx + j * a12;
        const float ry = ofsy + j * a22;
        #pragma omp simd
        for (int i = -halfWidth; i <= halfWidth; ++i, out++)
        {
            float wx = rx + i * a11;
            float wy = ry + i * a21;
            const int x = (int) floor(wx);
            const int y = (int) floor(wy);
            if (x >= 0 && y >= 0 && x < width && y < height)
            {
                // compute weights
                wx -= x; wy -= y;
                int rowOffset  = y * im.cols;
                int rowOffset1 = (y + 1) * im.cols;
                // bilinear interpolation
                *out =
                    (1.0f - wy) * ((1.0f - wx) * imptr[rowOffset + x]  + wx * imptr[rowOffset + x + 1]) +
                    (       wy) * ((1.0f - wx) * imptr[rowOffset1 + x] + wx * imptr[rowOffset1 + x + 1]);
            } else {
                *out = 0;
                ret = true; // touching boundary of the input
            }
        }
    }
    return ret;
}
halfWidth varies a lot: it can be 9, 84, 20, 95, 111... I'm only trying to optimize this code; I don't understand it in detail.
As you can see, the inner for loop has already been vectorized, but Intel Advisor suggests this:
And this is the Trip Count analysis result:
To my understanding, this means that:
The vector length is 8, so 8 floats can be processed at the same time in each loop pass. This would mean (if I'm not wrong) that the data are 32-byte aligned (even though, as I explain here, the compiler seems to think the data are not aligned).
On average, 2 loop passes are fully vectorized, while 3 go through the remainder loop. The same goes for Min and Max (otherwise I don't understand what the ";" means).
Now my question is: how can I follow Intel Advisor's first suggestion? It says to "increase the size of objects and add iterations so the trip count is a multiple of vector length". OK, so it's simply saying "make halfWidth*2+1 (since the loop goes from -halfWidth to +halfWidth) a multiple of 8". But how can I do that? If I add arbitrary iterations, I will obviously break the algorithm!
The only solution that came to my mind is to add "fake" iterations like this:
const int vectorLength = 8;
const int iterations = halfWidth * 2 + 1;
const int remainder  = iterations % vectorLength;
for (int i = 0; i < iterations + vectorLength - remainder; i++) {
    // this iteration was not supposed to exist, skip it!
    if (i > halfWidth)
        continue;
}
Of course this code would not work as written, since the real loop goes from -halfWidth to +halfWidth, but it should make my strategy of "fake" iterations clear.
About the second option ("Increase the size of static and automatic objects, and use a compiler option to add data padding"), I have no idea how to implement it.
First, you have to check the Advisor Vectorization Efficiency metric, as well as the relative time spent in the Loop Remainder compared to the Loop Body (see the hotspots list in Advisor). If the efficiency is close to 100% (or the time spent in the remainder is very small), then it is not worth the effort (and money, as MSalters mentioned in the comments).
If it is well below 100% (and there are no other penalties reported by the tool), then you can either refactor the code to "add fake iterations" (few users can afford that), or try #pragma loop_count for the most typical iteration counts (depending on the typical halfWidth values), as sketched below.
If halfWidth is totally random (with no common or average values), then there is nothing you can really do about this issue.
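For reference, the #pragma loop_count hint mentioned above would look roughly like this on the inner loop (syntax as documented for the Intel compiler; the min/max/avg values below are placeholders derived from the halfWidth examples in the question and should be replaced with whatever is typical of the real workload):

// hint typical trip counts so the compiler can tune the body/remainder split
#pragma loop_count min(19), max(223), avg(100)
#pragma omp simd
for (int i = -halfWidth; i <= halfWidth; ++i, out++) {
    // ... same body as in interpolate() above ...
}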

Can anyone look over some simple gradient descent code?

I'm trying to implement a very simple one-dimensional gradient descent algorithm. The code I have does not work at all. Basically, depending on my alpha value, the end parameters are either wildly huge (like ~70 digits) or essentially zero (~0.000). I feel like gradient descent should not be nearly this sensitive to alpha (I'm generating small data in [0.0, 1.0], but I think the gradient itself should account for the scale of the data, no?).
Here's the code:
#include <cstdio>
#include <cstdlib>
#include <ctime>
#include <vector>

using namespace std;

double a, b;
double theta0 = 0.0, theta1 = 0.0;

double myrand() {
    return double(rand()) / RAND_MAX;
}

double f(double x) {
    double y = a * x + b;
    y *= 0.1 * (myrand() - 0.5); // +/- 5% noise
    return y;
}

double h(double x) {
    return theta1 * x + theta0;
}

int main() {
    srand(time(NULL));
    a = myrand();
    b = myrand();
    printf("set parameters: a = %lf, b = %lf\n", a, b);

    int N = 100;
    vector<double> xs(N);
    vector<double> ys(N);
    for (int i = 0; i < N; ++i) {
        xs[i] = myrand();
        ys[i] = f(xs[i]);
    }

    double sensitivity = 0.008;
    double d0, d1;
    for (int n = 0; n < 100; ++n) {
        d0 = d1 = 0.0;
        for (int i = 0; i < N; ++i) {
            d0 += h(xs[i]) - ys[i];
            d1 += (h(xs[i]) - ys[i]) * xs[i];
        }
        theta0 -= sensitivity * d0;
        theta1 -= sensitivity * d1;
        printf("theta0: %lf, theta1: %lf\n", theta0, theta1);
    }
    return 0;
}
Changing the value of alpha can cause the algorithm to diverge, so that may be one of the causes of what is happening. You can check by computing the error in each iteration and seeing whether it is increasing or decreasing (see the sketch after this answer).
In addition, it is recommended to initialize the values of theta randomly at the beginning instead of setting them to zero.
Apart from that, you should divide by N when you update the value of theta as follows:
theta0 -= sensitivity * d0/N;
theta1 -= sensitivity * d1/N;
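To act on the earlier suggestion of monitoring the error, one could compute the mean squared error after each update and watch whether it shrinks or grows (a sketch against the question's code, reusing its xs, ys, h, and N):

// inside the outer loop, after updating theta0 and theta1
double mse = 0.0;
for (int i = 0; i < N; ++i) {
    double err = h(xs[i]) - ys[i];
    mse += err * err;
}
mse /= N;
printf("iteration %d: mse = %lf\n", n, mse);  // a steadily growing mse indicates divergence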
I had a quick look at your implementation and it looks fine to me.
The code I have does not work at all.
I wouldn't say that. It seems to behave correctly for small enough values of sensitivity, which is a value that you just have to "guess", and that is how gradient descent is supposed to work.
I feel like gradient descent should not be nearly this sensitive to alpha
If you struggle to visualize that, remember that you are using gradient descent to find the minimum of the cost function of linear regression, which is a quadratic function. If you plot the cost function, you will see why the learning rate is so sensitive in these cases: intuitively, if the parabola is narrow, the algorithm converges more quickly, which is good, but then the learning rate is more "sensitive" and the algorithm can easily diverge if you are not careful.