I need to implement a real-time synchronous quadrature detector. The detector receives a stream of input data (from a PCI ADC) and returns the amplitude of the harmonic at frequency w. Here is simplified C++ code:
// LowFreqFilter is a simple exponential moving-average (single-pole IIR) filter;
// 'a' and 'avg' are members, with 0 < a < 1.
double LowFreqFilter::process(double in)
{
    avg = avg * a + in * (1 - a);
    return avg;
}
class QuadroDetect
{
    double wt;
    const double wdt;
    LowFreqFilter lf1;
    LowFreqFilter lf2;

public:
    QuadroDetect(const double w, const double dt) : wt(0), wdt(w * dt)
    {}

    inline double process(const double in)
    {
        double f1 = lf1.process(in * sin(wt));
        double f2 = lf2.process(in * cos(wt));
        double out = sqrt(f1 * f1 + f2 * f2);
        wt += wdt;
        return out;
    }
};
My problem is that calculating sin and cos takes too much time. I was advised to use pre-calculated sin and cos tables, but the available ADC sampling frequencies are not multiples of w, so there is a fragment-stitching problem. Are there any fast alternatives to sin and cos calculations? I would be grateful for any advice on how to improve the performance of this code.
UPD
Unfortunately, I made a mistake in the code: with the filtering calls removed, the code lost its meaning. Thanks, Eric Postpischil.
I know a solution that may suit you. Recall the school formulas for the sine and cosine of a sum of angles:
sin(a + b) = sin(a) * cos(b) + cos(a) * sin(b)
cos(a + b) = cos(a) * cos(b) - sin(a) * sin(b)
Suppose that wdt is a small increment of the wt angle; then we get recursive formulas that compute the sin and cos for the next time step:
sin(wt + wdt) = sin(wt) * cos(wdt) + cos(wt) * sin(wdt)
cos(wt + wdt) = cos(wt) * cos(wdt) - sin(wt) * sin(wdt)
We need to calculate the sin(wdt) and cos(wdt) values only once. The remaining computations need only addition and multiplication operations. The recursion can be restarted from any moment in time, so we can periodically replace the recursive values with exactly calculated ones to avoid unbounded error accumulation.
Here is the final code:
class QuadroDetect
{
    const double sinwdt;
    const double coswdt;
    const double wdt;

    double sinwt = 0;
    double coswt = 1;
    double wt = 0;

    LowFreqFilter lf1;
    LowFreqFilter lf2;

public:
    QuadroDetect(double w, double dt) :
        sinwdt(sin(w * dt)),
        coswdt(cos(w * dt)),
        wdt(w * dt)
    {}

    inline double process(const double in)
    {
        double f1 = lf1.process(in * sinwt);
        double f2 = lf2.process(in * coswt);
        double out = sqrt(f1 * f1 + f2 * f2);

        double tmp = sinwt;
        sinwt = sinwt * coswdt + coswt * sinwdt;
        coswt = coswt * coswdt - tmp * sinwdt;

        // Periodically recompute sinwt and coswt exactly to avoid unbounded error accumulation
        if (wt > 2 * M_PI)
        {
            wt -= 2 * M_PI;
            sinwt = sin(wt);
            coswt = cos(wt);
        }
        wt += wdt;
        return out;
    }
};
Please note that such recursive calculations give less accurate results than computing sin(wt) and cos(wt) directly, but I have used this approach and it worked well.
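To get a sense of the accuracy loss, here is a minimal test sketch of my own (the frequency and sampling period are assumed values) that measures the drift of the recursion against std::sin over one wrap of wt, the longest stretch between exact resyncs:

#include <algorithm>
#include <cmath>
#include <cstdio>

int main()
{
    const double w = 2 * M_PI * 50.0; // assumed signal frequency, rad/s
    const double dt = 1e-5;           // assumed sampling period, s
    const double sinwdt = std::sin(w * dt), coswdt = std::cos(w * dt);

    double sinwt = 0, coswt = 1, wt = 0, max_err = 0;
    while (wt <= 2 * M_PI)
    {
        // The same recursion as in the answer above
        double tmp = sinwt;
        sinwt = sinwt * coswdt + coswt * sinwdt;
        coswt = coswt * coswdt - tmp * sinwdt;
        wt += w * dt;
        max_err = std::max(max_err, std::fabs(sinwt - std::sin(wt)));
    }
    std::printf("max |error| over one period: %g\n", max_err);
}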
If you can use std::complex, the implementation becomes much simpler. Technically it is the same solution as the one from @Dmytro Dadyka, since complex numbers work this way. If the optimizer does its job well, it should run in about the same time.
class QuadroDetect
{
public:
    std::complex<double> wt;
    std::complex<double> wdt;

    LowFreqFilter lf1;
    LowFreqFilter lf2;

    QuadroDetect(const double w, const double dt)
        : wt(1.0, 0.0)
        , wdt(std::polar(1.0, w * dt))
    {
    }

    inline double process(const double in)
    {
        auto f = in * wt;
        f.imag(lf1.process(f.imag()));
        f.real(lf2.process(f.real()));
        wt *= wdt;
        return std::abs(f);
    }
};
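For illustration, a minimal usage sketch of my own: feed a unit-amplitude 50 Hz test tone and read back the detected amplitude. The frequency, sampling period, and filter constant are assumed values, and this stand-in LowFreqFilter (matching the question's) must be defined before the QuadroDetect class above:

#include <cmath>
#include <complex>
#include <cstdio>

struct LowFreqFilter
{
    double a = 0.999; // assumed smoothing constant
    double avg = 0;
    double process(double in) { avg = avg * a + in * (1 - a); return avg; }
};

// ... the QuadroDetect class from above goes here ...

int main()
{
    const double w = 2 * M_PI * 50.0; // assumed signal frequency, rad/s
    const double dt = 1e-4;           // assumed sampling period, s
    QuadroDetect detect(w, dt);

    double out = 0;
    for (int i = 0; i < 20000; ++i)            // 2 s of signal
        out = detect.process(sin(w * i * dt)); // unit-amplitude test tone

    // Mixing with sin/cos halves the amplitude and the filters remove the
    // 2w ripple, so the detected value settles near 0.5 for a unit input.
    std::printf("detected amplitude: %f\n", out);
}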
I am using GATE (which uses Geant4) to do MC studies on dosimetric output. I am using a cylindrical cobalt source at 80 cm SAD to measure the PDD in a water phantom and the dose at a depth of 10 cm.
I now want to simulate a smaller source (say, r/2 and h/2) and compare the dosimetric output at a depth of 10 cm. Besides the geometry, I see that I am able to control the number of particles and the time of the simulation. What would be the best way to change these two parameters to mimic the lower output from a smaller source? Or is there any other parameter that can be changed to mimic a smaller source? I am trying to calculate the output factor of the smaller source w.r.t. the original source.
Not sure if it helps, but this is the cylindrical source with Co-60:
Source::Source():
    _particleGun{nullptr},
    _sourceMessenger{nullptr},
    _radius{-1.0},
    _halfz{-1.0},
    _nof_particles{10}
{
    _particleGun = new G4ParticleGun( 1 );
    G4ParticleTable* particleTable = G4ParticleTable::GetParticleTable();
    G4String particleName = "gamma"; // "geantino"
    _particleGun->SetParticleDefinition(particleTable->FindParticle(particleName));
    _particleGun->SetParticlePosition(G4ThreeVector(0., 0., 0.));
    _particleGun->SetParticleMomentumDirection(G4ThreeVector(0., 0., 1.));
    _particleGun->SetParticleEnergy(1000.0*MeV);
    _sourceMessenger = new SourceMessenger(this);
}

Source::~Source()
{
    delete _particleGun;
    delete _sourceMessenger;
}
// troika holds the three direction cosines (_wx, _wy, _wz)
troika Source::sample_direction()
{
    double phi = 2.0 * M_PI * G4UniformRand();
    double cos_z = 2.0 * G4UniformRand() - 1.0;
    double sin_z = sqrt( (1.0 - cos_z) * (1.0 + cos_z) );
    return troika{ sin_z * cos(phi), sin_z * sin(phi), cos_z };
}
double Source::sample_energy()
{
    // P_lo is the probability of the lower line; E_lo/E_hi are the two
    // Co-60 gamma energies (1.17 and 1.33 MeV), defined elsewhere
    return (G4UniformRand() < P_lo) ? E_lo : E_hi;
}
void Source::GeneratePrimaries(G4Event* anEvent)
{
    for(int k = 0; k != _nof_particles; ++k) // we generate _nof_particles at once
    {
        // here we sample the spatial decay vertex uniformly in the cylinder
        double z = _halfz * ( 2.0*G4UniformRand() - 1.0 );
        double phi = 2.0 * M_PI * G4UniformRand();
        double r = _radius * sqrt(G4UniformRand());
        auto x = r * cos(phi);
        auto y = r * sin(phi);
        _particleGun->SetParticlePosition(G4ThreeVector(x, y, z));

        // now a uniform-on-the-sphere direction
        auto dir = sample_direction();
        _particleGun->SetParticleMomentumDirection(G4ThreeVector(dir._wx, dir._wy, dir._wz));

        // energy: 50/50 split between 1.17 and 1.33 MeV
        auto e = sample_energy();
        _particleGun->SetParticleEnergy(e);

        // all together in a vertex
        _particleGun->GeneratePrimaryVertex(anEvent);
    }
}
I don't know much about multi-threading, and I have no idea why this is happening, so I'll just get to the point.
I'm processing an image and divide it into 4 parts, passing each part to a thread (essentially, I pass the indices of the first and last pixel rows of each part). For example, if the image has 1000 rows, each thread will process 250 of them. I can go into detail about my implementation and what I'm trying to achieve in case it can help you. For now I provide the code executed by the threads in case you can detect why this is happening. I don't know if it's relevant, but in both cases (1 thread or 4 threads) the process takes around 15 ms. pfUMap and pbUMap are unordered maps.
void jacobiansThread(int start, int end, vector<float> &sJT, vector<float> &sJTJ) {
    uchar* rgbPointer;
    float* depthPointer;
    float* sdfPointer;
    float* dfdxPointer;
    float* dfdyPointer;
    float fov = radians(45.0);
    float aspect = 4.0 / 3.0;
    float focal = 1 / (glm::tan(fov / 2));
    float fu = focal * cols / 2 / aspect;
    float fv = focal * rows / 2;
    float strictFu = focal / aspect;
    float strictFv = focal;
    vector<float> pixelJacobi(6, 0);
    for (int y = start; y < end; y++) {
        rgbPointer = sceneImage.ptr<uchar>(y);
        depthPointer = depthBuffer.ptr<float>(y);
        dfdxPointer = dfdx.ptr<float>(y);
        dfdyPointer = dfdy.ptr<float>(y);
        sdfPointer = sdf.ptr<float>(y);
        for (int x = roiX.x; x < roiX.y; x++) {
            float deltaTerm; // = deltaPointer[x];
            float raw = sdfPointer[x];
            if (raw > 8.0) continue;
            float dirac = (1.0f / float(CV_PI)) * (1.2f / (raw * 1.44f * raw + 1.0f));
            deltaTerm = dirac;
            vec3 rgb(rgbPointer[x * 3], rgbPointer[x * 3 + 1], rgbPointer[x * 3 + 2]);
            vec3 bin = rgbToBin(rgb, numberOfBins);
            int indexOfColor = bin.x * numberOfBins * numberOfBins + bin.y * numberOfBins + bin.z;
            float s3 = glfwGetTime();
            float pF = pfUMap[indexOfColor];
            float pB = pbUMap[indexOfColor];
            float heavisideTerm;
            heavisideTerm = HEAVISIDE(raw);
            float denominator = (heavisideTerm * pF + (1 - heavisideTerm) * pB) + 0.000001;
            float commonFirstTerm = -(pF - pB) / denominator * deltaTerm;
            if (pF == pB) continue;
            vec3 pixel(x, y, depthPointer[x]);
            float dfdxTerm = dfdxPointer[x];
            float dfdyTerm = -dfdyPointer[x];
            if (pixel.z == 1) {
                cv::Point c = findClosestContourPoint(cv::Point(x, y), dfdxTerm, -dfdyTerm, abs(raw));
                if (c.x == -1) continue;
                pixel = vec3(c.x, c.y, depthBuffer.at<float>(cv::Point(c.x, c.y)));
            }
            vec3 point3D = pixel;
            pixelToViewFast(point3D, cols, rows, strictFu, strictFv);
            float Xc = point3D.x; float Xc2 = Xc * Xc;
            float Yc = point3D.y; float Yc2 = Yc * Yc;
            float Zc = point3D.z; float Zc2 = Zc * Zc;
            pixelJacobi[0] = dfdyTerm * ((fv * Yc2) / Zc2 + fv) + (dfdxTerm * fu * Xc * Yc) / Zc2;
            pixelJacobi[1] = -dfdxTerm * ((fu * Xc2) / Zc2 + fu) - (dfdyTerm * fv * Xc * Yc) / Zc2;
            pixelJacobi[2] = -(dfdyTerm * fv * Xc) / Zc + (dfdxTerm * fu * Yc) / Zc;
            pixelJacobi[3] = -(dfdxTerm * fu) / Zc;
            pixelJacobi[4] = -(dfdyTerm * fv) / Zc;
            pixelJacobi[5] = (dfdyTerm * fv * Yc) / Zc2 + (dfdxTerm * fu * Xc) / Zc2;
            float weightingTerm = -1.0 / log(denominator);
            for (int i = 0; i < 6; i++) {
                pixelJacobi[i] *= commonFirstTerm;
                sJT[i] += pixelJacobi[i];
            }
            for (int i = 0; i < 6; i++) {
                for (int j = i; j < 6; j++) {
                    sJTJ[i * 6 + j] += weightingTerm * pixelJacobi[i] * pixelJacobi[j];
                }
            }
        }
    }
}
This is the part where I call each thread:
vector<std::thread> myThreads;
float step = (roiY.y - roiY.x) / numberOfThreads;
vector<vector<float>> tsJT(numberOfThreads, vector<float>(6, 0));
vector<vector<float>> tsJTJ(numberOfThreads, vector<float>(36, 0));
for (int i = 0; i < numberOfThreads; i++) {
    int start = roiY.x + i * step;
    int end = start + step;
    if (end > roiY.y) end = roiY.y;
    myThreads.push_back(std::thread(&pwp3dV2::jacobiansThread, this, start, end, std::ref(tsJT[i]), std::ref(tsJTJ[i])));
}
vector<float> sJT(6, 0);
vector<float> sJTJ(36, 0);
for (int i = 0; i < numberOfThreads; i++) myThreads[i].join();
Other Notes
To measure time I used glfwGetTime() before and right after the second code snippet. The measurements vary, but the average is about 15 ms, as I mentioned, for both implementations.
Starting a thread has significant overhead, which might not be worth the time if you have only 15 milliseconds worth of work.
The common solution is to keep threads running in the background and send them data when you need them, instead of calling the std::thread constructor to create a new thread every time you have some work to do.
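For illustration, here is a minimal sketch of such a persistent worker pool (my own; a production version would also need a way to wait for submitted jobs to finish, e.g. futures or a pending-job counter):

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Threads are started once and then wait for jobs, so per-frame work
// does not pay the thread-creation cost every time.
class WorkerPool {
public:
    explicit WorkerPool(unsigned n) {
        for (unsigned i = 0; i < n; ++i)
            workers_.emplace_back([this] { run(); });
    }
    ~WorkerPool() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_all();
        for (auto &t : workers_) t.join();
    }
    void submit(std::function<void()> job) {
        { std::lock_guard<std::mutex> lk(m_); jobs_.push(std::move(job)); }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return done_ || !jobs_.empty(); });
                if (done_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job(); // run outside the lock so other workers can dequeue
        }
    }
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> jobs_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
};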
Pure speculation, but two things might be preventing full parallelization:
Processing speed is limited by the memory bus. Cores will wait until data is loaded before continuing.
Data sharing between cores. Some caches are core-specific. If memory is shared between cores, data must traverse down to the shared cache before loading.
On Linux you can use Perf to check for cache misses, e.g. perf stat -e cache-references,cache-misses ./your_program.
If you want better times, you need to decouple the loop iterations from a shared counter, which takes some preprocessing: something fast, like building an array of structures with a header for each segment. If you can't think of anything better, you can simply fill a vector<int> with the counter values and run for_each(std::execution::par, ...) over it (a minimal sketch follows the timing snippet below). That is much faster.
For timings there is:
auto t1 = std::chrono::system_clock::now();
// ... the work being measured ...
auto t2 = std::chrono::system_clock::now();
std::chrono::milliseconds f = std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1);
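Here is a minimal C++17 sketch of the vector-of-counter-values idea from the answer above (processRow is a hypothetical stand-in for the per-row work; each row writes only its own output slot, so no locking is needed):

#include <algorithm>
#include <execution>
#include <numeric>
#include <vector>

void processAllRows(int firstRow, int lastRow, std::vector<double>& rowResults)
{
    // Pre-build the counter values so the parallel loop has no shared counter
    std::vector<int> rows(lastRow - firstRow);
    std::iota(rows.begin(), rows.end(), firstRow);

    std::for_each(std::execution::par, rows.begin(), rows.end(),
                  [&](int y) {
                      rowResults[y - firstRow] = 0.0; // processRow(y) would go here
                  });
}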
So right now, I'm doing a homework assignment for a C++ class. The assignment says that I need to calculate the distance between two places using the haversine formula based on an input of degrees given by the user for longitude and latitude. Then the haversine formula calculates the distance between the two places.
The problem I'm having is that, when using the test values given by the instructor, I get a different answer from the one he gets.
I'm calculating the distance between 33.9425/N 118.4081/W (LA Airport) and 20.8987/N 156.4305/W (Kahului Airport), where the LA Airport is the starting location.
His answer for the distance is 2483.3 miles. My answer is 2052.1 miles.
Here is my code for the haversine formula:
double haversine(double lat1, double lat2, double lon1, double lon2) {
    // get differences first
    double dlon = difference(lon1, lon2); // difference for longitude
    double dlat = difference(lat1, lat2); // difference for latitude
    // part a of algorithm
    double a = pow(sin(dlat/2), 2) + cos(lat1) * cos(lat2) * pow(sin(dlon/2), 2);
    // part b of algorithm
    double b = 2 * atan2(sqrt(a), sqrt(1 - a));
    // our result, or the great-circle distance between two locations
    double result = EARTH_RADIUS * b;
    return result;
}
And difference just returns y - x in this case. What seems to be going wrong in my calculations? As far as I know, my parentheses while calculating everything seem to be OK, so I'm not really sure why I'm getting a different answer.
Update: Fixed the problem by converting longitude and latitude to radians. In C++ I did this by defining PI = 3.14159265 and, for each trig function I used, multiplying whatever was inside it by PI/180 (i.e., pow(sin((dlat/2) * (PI/180)), 2)).
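For reference, a minimal sketch of the corrected function (converting to radians once at the top, rather than inside every trig call; the earth radius in miles is an assumed constant):

#include <cmath>

static const double PI = 3.14159265358979323846;

double haversine(double lat1, double lat2, double lon1, double lon2) {
    const double EARTH_RADIUS = 3959.0; // miles (assumed value)
    // convert everything to radians once, up front
    lat1 *= PI / 180.0;
    lat2 *= PI / 180.0;
    lon1 *= PI / 180.0;
    lon2 *= PI / 180.0;
    double dlat = lat2 - lat1;
    double dlon = lon2 - lon1;
    double a = pow(sin(dlat / 2), 2) + cos(lat1) * cos(lat2) * pow(sin(dlon / 2), 2);
    double b = 2 * atan2(sqrt(a), sqrt(1 - a));
    return EARTH_RADIUS * b;
}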
You need to convert your lats/longs to radians, because the trig functions in Java accept radian arguments.
public static double haversine(double lat1, double lon1, double lat2, double lon2) {
    double dLat = Math.toRadians(lat2 - lat1);
    double dLon = Math.toRadians(lon2 - lon1);
    lat1 = Math.toRadians(lat1);
    lat2 = Math.toRadians(lat2);
    // ... the rest of the formula is the same as in the question
}
And this line:
// part b of algorithm
double b = 2 * atan2(sqrt(a), sqrt(1 - a));
is the same thing as
// part b of algorithm
double b = 2 * Math.asin(Math.sqrt(a));
Now your function should work. (:
Also, just for future reference:
// our result, or the great-circle distance between two locations
double result = EARTH_RADIUS * b;
return result;
Should be shortened to:
// our result, or the great-circle distance between two locations
return EARTH_RADIUS * b;
You should always be concise!
This is something I wrote up using radians, in case you are allowed to deviate from the exact formula your teacher gave you. I had a project like this a while back and pulled the formula from various sites.
#include <iostream>
#include <cmath>
#include <cstdlib>

using namespace std;

static const double DEG_TO_RAD = 0.017453292519943295769236907684886;
static const double EARTH_RADIUS_IN_METERS = 6372797.560856;
static const double EARTH_RADIUS_IN_MILES = 3959;

struct Position {
    Position(double lat, double lon) : _lat(lat), _lon(lon) {}
    void lat(double lat) { _lat = lat; }
    double lat() const { return _lat; }
    void lon(double lon) { _lon = lon; }
    double lon() const { return _lon; }
private:
    double _lat;
    double _lon;
};

double haversine(const Position& from, const Position& to) {
    double lat_arc = (from.lat() - to.lat()) * DEG_TO_RAD;
    double lon_arc = (from.lon() - to.lon()) * DEG_TO_RAD;
    double lat_h = sin(lat_arc * 0.5);
    lat_h *= lat_h;
    double lon_h = sin(lon_arc * 0.5);
    lon_h *= lon_h;
    double tmp = cos(from.lat() * DEG_TO_RAD) * cos(to.lat() * DEG_TO_RAD);
    return 2.0 * asin(sqrt(lat_h + tmp * lon_h));
}

double distance_in_meters(const Position& from, const Position& to) {
    return EARTH_RADIUS_IN_METERS * haversine(from, to);
}

double distance_in_miles(const Position& from, const Position& to) {
    return EARTH_RADIUS_IN_MILES * haversine(from, to);
}

int main() {
    double meters = distance_in_meters(Position(33.9425, 118.4081), Position(20.8987, 156.4305));
    double miles = distance_in_miles(Position(33.9425, 118.4081), Position(20.8987, 156.4305));
    cout << "\nDistance in meters is: " << meters;
    cout << "\nDistance in miles is: " << miles;
    cout << endl;
    system("PAUSE");
    return 0;
}
How do I get a new coordinate in geodetic form (lat/lon) from a reference point (also in geodetic form) after some translation (in meters) on the earth's surface? I also need to do the calculation using a true earth ellipsoid model such as WGS84.
For example:
Suppose I have a reference point of 10.32E, -4.31N.
Then I do a translation of (3000, -2000) meters (which moves the point 3000 meters to the east and 2000 meters to the south on the earth's surface).
Then I need the coordinate of the new point in geodetic form.
Thank you.
Have a look at the open-source library PROJ.4, which you can use to accurately translate geographic coordinates (lat/long) to projected coordinates (metres), and back again. In your case you can project into WGS 84 / World Mercator (EPSG:3395), perform the translation in metres, then un-project back to geographic.
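A minimal sketch of that workflow with the modern PROJ (6+) C API (the CRS codes follow the suggestion above; the example point is the one from the question):

#include <proj.h>
#include <cstdio>

int main()
{
    PJ_CONTEXT *ctx = proj_context_create();
    // Geographic WGS 84 <-> World Mercator (EPSG:3395)
    PJ *p = proj_create_crs_to_crs(ctx, "EPSG:4326", "EPSG:3395", nullptr);
    // Force lon/lat and easting/northing axis order regardless of CRS conventions
    PJ *norm = proj_normalize_for_visualization(ctx, p);

    PJ_COORD geo = proj_coord(10.32, -4.31, 0, 0); // lon, lat of the example point
    PJ_COORD xy = proj_trans(norm, PJ_FWD, geo);   // project to metres
    xy.xy.x += 3000.0;                             // 3000 m east
    xy.xy.y -= 2000.0;                             // 2000 m south
    PJ_COORD back = proj_trans(norm, PJ_INV, xy);  // un-project
    std::printf("new position: lon %f, lat %f\n", back.xy.x, back.xy.y);

    proj_destroy(norm);
    proj_destroy(p);
    proj_context_destroy(ctx);
}

Note that Mercator metres are only true to scale near the equator, so for large offsets or high latitudes a local projection is a better choice.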
Found the answer:
http://www.movable-type.co.uk/scripts/latlong-vincenty-direct.html
from:
Vincenty direct formula - T. Vincenty, "Direct and Inverse Solutions of Geodesics on the Ellipsoid with Application of Nested Equations", Survey Review, vol. XXIII, no. 176, 1975.
http://www.ngs.noaa.gov/PUBS_LIB/inverse.pdf
This code calculates the distance (N and E) between two points given lat/lon coordinates. You can easily reverse it for your purposes.
Take a look at the function u8 GPS_CalculateDeviation() in http://svn.mikrokopter.de/filedetails.php?repname=NaviCtrl&path=/tags/V0.15c/GPS.c
You either find some geo-library or do the trigonometry yourself.
In any case, you should formulate your question more exactly. In particular, you say:
"then I do a translation of (3000, -2000) meters (which moves the point 3000 meters to the east and 2000 meters to the south on the earth's surface)"
You should note that moving 3 km to the east and then 2 km to the south differs from moving 2 km to the south and then 3 km to the east. These actions are not commutative, so describing this as offsetting by (3000, -2000) is not well defined; a toy sketch illustrating this follows.
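A minimal spherical-earth sketch of the non-commutativity (my own illustration; the exact numbers on the WGS84 ellipsoid differ slightly):

#include <cmath>
#include <cstdio>

int main()
{
    const double R = 6371000.0;      // mean earth radius, metres
    const double deg = 180.0 / M_PI; // radians -> degrees
    double lat = -4.31, lon = 10.32; // the example point, degrees

    // east 3000 m first, then south 2000 m
    double lonA = lon + 3000.0 / (R * cos(lat / deg)) * deg;
    double latA = lat - 2000.0 / R * deg;

    // south 2000 m first, then east 3000 m
    double latB = lat - 2000.0 / R * deg;
    double lonB = lon + 3000.0 / (R * cos(latB / deg)) * deg;

    // metres-per-degree of longitude shrink with latitude,
    // so the resulting longitudes differ
    std::printf("east-then-south: %.8f, %.8f\n", latA, lonA);
    std::printf("south-then-east: %.8f, %.8f\n", latB, lonB);
}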
Below is C++ code slightly modified from the original version from ETH Zurich. The file's only dependency is the Eigen library (which could be eliminated with some trivial work, by writing the matrix multiplication yourself). You can use the geodetic2Ned() function to convert latitude, longitude, altitude to the NED frame.
//GeodeticConverter.hpp
#ifndef air_GeodeticConverter_hpp
#define air_GeodeticConverter_hpp
#include <cmath>
#include <eigen3/Eigen/Dense>
class GeodeticConverter
{
public:
GeodeticConverter(double home_latitude = 0, double home_longitude = 0, double home_altitude = 0)
: home_latitude_(home_latitude), home_longitude_(home_longitude), home_altitude_(home_altitude)
{
// Save NED origin
home_latitude_rad_ = deg2Rad(home_latitude);
home_longitude_rad_ = deg2Rad(home_longitude);
// Compute ECEF of NED origin
geodetic2Ecef(home_latitude, home_longitude, home_altitude, &home_ecef_x_, &home_ecef_y_, &home_ecef_z_);
// Compute ECEF to NED and NED to ECEF matrices
double phiP = atan2(home_ecef_z_, sqrt(pow(home_ecef_x_, 2) + pow(home_ecef_y_, 2)));
ecef_to_ned_matrix_ = nRe(phiP, home_longitude_rad_);
ned_to_ecef_matrix_ = nRe(home_latitude_rad_, home_longitude_rad_).transpose();
}
void getHome(double* latitude, double* longitude, double* altitude)
{
*latitude = home_latitude_;
*longitude = home_longitude_;
*altitude = home_altitude_;
}
void geodetic2Ecef(const double latitude, const double longitude, const double altitude, double* x,
double* y, double* z)
{
// Convert geodetic coordinates to ECEF.
// http://code.google.com/p/pysatel/source/browse/trunk/coord.py?r=22
double lat_rad = deg2Rad(latitude);
double lon_rad = deg2Rad(longitude);
double xi = sqrt(1 - kFirstEccentricitySquared * sin(lat_rad) * sin(lat_rad));
*x = (kSemimajorAxis / xi + altitude) * cos(lat_rad) * cos(lon_rad);
*y = (kSemimajorAxis / xi + altitude) * cos(lat_rad) * sin(lon_rad);
*z = (kSemimajorAxis / xi * (1 - kFirstEccentricitySquared) + altitude) * sin(lat_rad);
}
void ecef2Geodetic(const double x, const double y, const double z, double* latitude,
double* longitude, double* altitude)
{
// Convert ECEF coordinates to geodetic coordinates.
// J. Zhu, "Conversion of Earth-centered Earth-fixed coordinates
// to geodetic coordinates," IEEE Transactions on Aerospace and
// Electronic Systems, vol. 30, pp. 957-961, 1994.
double r = sqrt(x * x + y * y);
double Esq = kSemimajorAxis * kSemimajorAxis - kSemiminorAxis * kSemiminorAxis;
double F = 54 * kSemiminorAxis * kSemiminorAxis * z * z;
double G = r * r + (1 - kFirstEccentricitySquared) * z * z - kFirstEccentricitySquared * Esq;
double C = (kFirstEccentricitySquared * kFirstEccentricitySquared * F * r * r) / pow(G, 3);
double S = cbrt(1 + C + sqrt(C * C + 2 * C));
double P = F / (3 * pow((S + 1 / S + 1), 2) * G * G);
double Q = sqrt(1 + 2 * kFirstEccentricitySquared * kFirstEccentricitySquared * P);
double r_0 = -(P * kFirstEccentricitySquared * r) / (1 + Q)
+ sqrt(
0.5 * kSemimajorAxis * kSemimajorAxis * (1 + 1.0 / Q)
- P * (1 - kFirstEccentricitySquared) * z * z / (Q * (1 + Q)) - 0.5 * P * r * r);
double U = sqrt(pow((r - kFirstEccentricitySquared * r_0), 2) + z * z);
double V = sqrt(
pow((r - kFirstEccentricitySquared * r_0), 2) + (1 - kFirstEccentricitySquared) * z * z);
double Z_0 = kSemiminorAxis * kSemiminorAxis * z / (kSemimajorAxis * V);
*altitude = U * (1 - kSemiminorAxis * kSemiminorAxis / (kSemimajorAxis * V));
*latitude = rad2Deg(atan((z + kSecondEccentricitySquared * Z_0) / r));
*longitude = rad2Deg(atan2(y, x));
}
void ecef2Ned(const double x, const double y, const double z, double* north, double* east,
double* down)
{
// Converts ECEF coordinate position into local-tangent-plane NED.
// Coordinates relative to given ECEF coordinate frame.
Vector3d vect, ret;
vect(0) = x - home_ecef_x_;
vect(1) = y - home_ecef_y_;
vect(2) = z - home_ecef_z_;
ret = ecef_to_ned_matrix_ * vect;
*north = ret(0);
*east = ret(1);
*down = -ret(2);
}
void ned2Ecef(const double north, const double east, const double down, double* x, double* y,
double* z)
{
// NED (north/east/down) to ECEF coordinates
Vector3d ned, ret;
ned(0) = north;
ned(1) = east;
ned(2) = -down;
ret = ned_to_ecef_matrix_ * ned;
*x = ret(0) + home_ecef_x_;
*y = ret(1) + home_ecef_y_;
*z = ret(2) + home_ecef_z_;
}
void geodetic2Ned(const double latitude, const double longitude, const double altitude,
double* north, double* east, double* down)
{
// Geodetic position to local NED frame
double x, y, z;
geodetic2Ecef(latitude, longitude, altitude, &x, &y, &z);
ecef2Ned(x, y, z, north, east, down);
}
void ned2Geodetic(const double north, const double east, const double down, double* latitude,
double* longitude, double* altitude)
{
// Local NED position to geodetic coordinates
double x, y, z;
ned2Ecef(north, east, down, &x, &y, &z);
ecef2Geodetic(x, y, z, latitude, longitude, altitude);
}
void geodetic2Enu(const double latitude, const double longitude, const double altitude,
double* east, double* north, double* up)
{
// Geodetic position to local ENU frame
double x, y, z;
geodetic2Ecef(latitude, longitude, altitude, &x, &y, &z);
double aux_north, aux_east, aux_down;
ecef2Ned(x, y, z, &aux_north, &aux_east, &aux_down);
*east = aux_east;
*north = aux_north;
*up = -aux_down;
}
void enu2Geodetic(const double east, const double north, const double up, double* latitude,
double* longitude, double* altitude)
{
// Local ENU position to geodetic coordinates
const double aux_north = north;
const double aux_east = east;
const double aux_down = -up;
double x, y, z;
ned2Ecef(aux_north, aux_east, aux_down, &x, &y, &z);
ecef2Geodetic(x, y, z, latitude, longitude, altitude);
}
private:
// Geodetic system parameters
static constexpr double kSemimajorAxis = 6378137;
static constexpr double kSemiminorAxis = 6356752.3142;
static constexpr double kFirstEccentricitySquared = 6.69437999014 * 0.001;
static constexpr double kSecondEccentricitySquared = 6.73949674228 * 0.001;
static constexpr double kFlattening = 1 / 298.257223563;
typedef Eigen::Vector3d Vector3d;
typedef Eigen::Matrix<double, 3, 3> Matrix3x3d;
inline Matrix3x3d nRe(const double lat_radians, const double lon_radians)
{
const double sLat = sin(lat_radians);
const double sLon = sin(lon_radians);
const double cLat = cos(lat_radians);
const double cLon = cos(lon_radians);
Matrix3x3d ret;
ret(0, 0) = -sLat * cLon;
ret(0, 1) = -sLat * sLon;
ret(0, 2) = cLat;
ret(1, 0) = -sLon;
ret(1, 1) = cLon;
ret(1, 2) = 0.0;
ret(2, 0) = cLat * cLon;
ret(2, 1) = cLat * sLon;
ret(2, 2) = sLat;
return ret;
}
inline double rad2Deg(const double radians)
{
return (radians / M_PI) * 180.0;
}
inline double deg2Rad(const double degrees)
{
return (degrees / 180.0) * M_PI;
}
double home_latitude_rad_, home_latitude_;
double home_longitude_rad_, home_longitude_;
double home_altitude_;
double home_ecef_x_;
double home_ecef_y_;
double home_ecef_z_;
Matrix3x3d ecef_to_ned_matrix_;
Matrix3x3d ned_to_ecef_matrix_;
}; // class GeodeticConverter
#endif
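A minimal usage sketch (the home coordinates and the query point are made-up examples):

#include <cstdio>
#include "GeodeticConverter.hpp" // the header above

int main()
{
    GeodeticConverter converter(47.3977, 8.5456, 488); // assumed home point, degrees/metres
    double n, e, d;
    converter.geodetic2Ned(47.4000, 8.5500, 500, &n, &e, &d);
    std::printf("NED offset: north %f m, east %f m, down %f m\n", n, e, d);

    // and back again
    double lat, lon, alt;
    converter.ned2Geodetic(n, e, d, &lat, &lon, &alt);
    std::printf("round trip: %f, %f, %f\n", lat, lon, alt);
}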
I'm doing some specific signal analysis, and I am in need of a method that would smooth out a given bell-shaped distribution curve. A running average approach isn't producing the results I desire. I want to keep the min/max, and general shape of my fitted curve intact, but resolve the inconsistencies in sampling.
In short: if given a set of data that models a simple quadratic curve, what statistical smoothing method would you recommend?
If possible, please reference an implementation, library, or framework.
Thanks SO!
Edit: Some helpful data
(A possible signal graph)
The dark-colored quadratic is my "fitted" curve through the light-colored connected data points.
The sample at about -44 is a problem in my graph (i.e. a potential sampling inconsistency). I need this curve to "fit" the distribution better and overcome the values that do not trend accordingly. Hope this helps!
A "quadratic" curve is one thing; "bell-shaped" usually means a Gaussian normal distribution. Getting a best-estimate Gaussian couldn't be easier: you compute the sample mean and variance and your smooth approximation is
y = exp(-squared(x-mean)/variance)
If, on the other hand, you want to approximate a smooth curve with a quadratic, I'd recommend computing a quadratic polynomial with minimum squared error. I can never remember the formulas for this, but if you've had differential calculus, write the formula for the total squared error (pointwise) and differentiate with respect to the coefficients of your quadratic. Set the first derivatives to zero and solve for the best approximation. Or you could look it up.
Finally, if you just want a smooth-looking curve to approximate a set of points, cubic splines are your best bet. The curves won't necessarily mean anything, but you'll get a nice smooth approximation.
#include <iostream>
#include <cmath>
struct WeightedData
{
double x;
double y;
double weight;
};
// Weighted least-squares fit of y = a*x^2 + b*x + c: accumulate the weighted
// moments of x and y, then solve the 3x3 normal equations in closed form.
void findQuadraticFactors(WeightedData *data, double &a, double &b, double &c, unsigned int const datasize)
{
double w1 = 0.0;
double wx = 0.0, wx2 = 0.0, wx3 = 0.0, wx4 = 0.0;
double wy = 0.0, wyx = 0.0, wyx2 = 0.0;
double tmpx, tmpy;
double den;
for (unsigned int i = 0; i < datasize; ++i)
{
double x = data[i].x;
double y = data[i].y;
double w = data[i].weight;
w1 += w;
tmpx = w * x;
wx += tmpx;
tmpx *= x;
wx2 += tmpx;
tmpx *= x;
wx3 += tmpx;
tmpx *= x;
wx4 += tmpx;
tmpy = w * y;
wy += tmpy;
tmpy *= x;
wyx += tmpy;
tmpy *= x;
wyx2 += tmpy;
}
den = wx2 * wx2 * wx2 - 2.0 * wx3 * wx2 * wx + wx4 * wx * wx + wx3 * wx3 * w1 - wx4 * wx2 * w1;
if (den == 0.0)
{
a = 0.0;
b = 0.0;
c = 0.0;
}
else
{
a = (wx * wx * wyx2 - wx2 * w1 * wyx2 - wx2 * wx * wyx + wx3 * w1 * wyx + wx2 * wx2 * wy - wx3 * wx * wy) / den;
b = (-wx2 * wx * wyx2 + wx3 * w1 * wyx2 + wx2 * wx2 * wyx - wx4 * w1 * wyx - wx3 * wx2 * wy + wx4 * wx * wy) / den;
c = (wx2 * wx2 * wyx2 - wx3 * wx * wyx2 - wx3 * wx2 * wyx + wx4 * wx * wyx + wx3 * wx3 * wy - wx4 * wx2 * wy) / den;
}
}
double findY(double const a, double const b, double const c, double const x)
{
return a * x * x + b * x + c;
}
int main(int argc, char* argv[])
{
WeightedData data[9];
data[0].weight=1; data[0].x=1; data[0].y=-52.0;
data[1].weight=1; data[1].x=2; data[1].y=-48.0;
data[2].weight=1; data[2].x=3; data[2].y=-43.0;
data[3].weight=1; data[3].x=4; data[3].y=-44.0;
data[4].weight=1; data[4].x=5; data[4].y=-35.0;
data[5].weight=1; data[5].x=6; data[5].y=-31.0;
data[6].weight=1; data[6].x=7; data[6].y=-32.0;
data[7].weight=1; data[7].x=8; data[7].y=-43.0;
data[8].weight=1; data[8].x=9; data[8].y=-52.0;
double a=0.0, b=0.0, c=0.0;
findQuadraticFactors(data, a, b, c, 9);
std::cout << " x \t y" << std::endl;
for (int i=0; i<9; ++i)
{
std::cout << " " << data[i].x << ", " << findY(a,b,c,data[i].x) << std::endl;
}
}
How about a simple digital low-pass filter?
y[0] = x[0];
for (i = 1; i < len; ++i)
    y[i] = a * x[i] + (1.0 - a) * y[i - 1];
In this case, x[] is your input data and y[] is the filtered output. The a coefficient is a value between 0 and 1 that you should tweak. An a value of 1 reproduces the input and the cut-off frequency decreases as a approaches 0.
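Here is a self-contained sketch of that filter applied to noisy samples of a bell-shaped curve (the data points are the ones from the quadratic-fit example above; the a value is an assumption to tweak):

#include <cstdio>
#include <vector>

int main()
{
    // Noisy samples of a bell-shaped curve (from the quadratic-fit example)
    std::vector<double> x = {-52, -48, -43, -44, -35, -31, -32, -43, -52};
    std::vector<double> y(x.size());

    const double a = 0.5; // smoothing coefficient, 0 < a <= 1 (assumed value)
    y[0] = x[0];
    for (std::size_t i = 1; i < x.size(); ++i)
        y[i] = a * x[i] + (1.0 - a) * y[i - 1];

    for (std::size_t i = 0; i < y.size(); ++i)
        std::printf("%zu\t%.2f\t%.2f\n", i, x[i], y[i]);
}

Note that this causal filter lags the input; running it forward and then backward over the data removes the phase lag.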
Perhaps the parameters for your running average are set wrong (sample window too small or large)?
Is it just noise superimposed on your bell curve? How close is the noise frequency to that of the signal you're trying to retrieve? A picture of what you're trying to extract might help us identify a solution.
You could try some sort of fitting algorithm using a least squares fit if you can make a reasonable guess of the function parameters. Those sorts of techniques often have some immunity to noise.