Memory leakage in C++ threading - c++

I have a problem, probably with a memory leak, in C++ threads. I receive a runtime error with code 11. I am writing an optimization algorithm which aims to optimize parameters of 2D reactors. It spawns instances of a reforming function, which creates Reformer objects. The reformers have 2 different parameters, which can differ locally within a single reformer and are passed to the reforming function from the main function. To be specific, each reformer is divided into a specified number of zones (same dimensions and locations in each reformer), and each zone can have different parameters. Therefore, the size of each of the 2 vectors is equal to [NUMBER OF REFORMERS] * [NUMBER OF ZONES]. Then, the reforming function creates Segment objects, whose number is equal to the number of zones.
I assume that the issue here is that the threads try to access the same vectors simultaneously, and I would really appreciate a solution for that.
Remarks:
If I change main.cpp to replace the threads with an ordinary loop, no error is returned.
If I comment out the setProp method in the set_segments function, no error is returned (with threads).
Threads are strongly preferred here, due to the long computation time of a single Reformer, and I have access to multi-core computing units.
To clarify, I will explain everything with a minimal reproducible example:
input.h
#include <iostream>
#include <fstream>
#include <vector>
#include <thread>
int reactor_no = 2; // number of reformers
int zones_X = 5; // number of zones in a single reformer, X direction
int zones_Y = 2; // number of zones in a single reformer, Y direction
double dim_X = 0.5; // reactor's length
double dim_Y = 0.2; // reactor's height
double wall_t = 0.1; // thickness of the reactor wall
size_t zones = zones_X * zones_Y;
Reformer.h:
#include "input.h"
class Reformer {
public:
Reformer() {}
Reformer(const double& L, const double& Y, const double& wall_t,
const int& zones_X = 1, const int& zones_Y = 1) {
length_ = L;
height_ = Y;
zonesX_ = zones_X;
zonesY_ = zones_Y;
wall_thickness_ = wall_t;
dx_ = length_ / static_cast<double> (zonesX_);
dr_ = height_ / static_cast<double> (zonesY_);
}
private:
double wall_thickness_; // wall thickness (m)
double length_; // recactor length (m)
double height_; // reactor height (m) (excluding wall thickness)
int zonesX_; // number of segments in the X direction
int zonesY_; // number of segments in the Y direction
double dx_; // segment width (m)
double dr_; // segment height (m)
};
Segment.h:
#include "input.h"
class Segment{
public:
Segment() : Segment(0, 0) {}
Segment(int i, int j) {
i_ = i;
j_ = j;
}
void setXR(const double& dx, const double& dr, const int& SL, const int& SR) {
x0_ = i_ * dx;
x1_ = x0_ + dx;
r0_ = j_ * dr;
r1_ = r0_ + dr;
if (i_ == SL - 1) {
x1_ = dim_X;
}
if (j_ == SR - 1) {
r1_ = dim_Y;
}
}
void setWall() {
x0_ = 0;
x1_ = dim_X;
r0_ = dim_Y;
r1_ = dim_Y + wall_t;
}
void setProp(const double& por, const bool& cat) {
porosity_ = por;
catalyst_ = cat;
}
private:
size_t i_; //segment column no.
size_t j_; //segment row no.
double x0_; //beginning of segment - x coordinate (m)
double x1_; //ending of segment - x coordinate (m)
double r0_; //beginning of segment - r coordinate (m)
double r1_; //ending of segment - r coordinate (m)
int catalyst_; //1 - catalytic, 0 - non-catalytic
double porosity_; //porosity (-)
};
main.cpp:
#include "input.h"
int main() {
int zones = zones_X * zones_Y;
size_t pop_size = reactor_no * zones;
std::vector<int> cat;
cat.reserve(pop_size);
std::vector<double> porosity;
porosity.reserve(pop_size); // the values in the vectors are not important, therefore I will just fill them with 1s
for (int i = 0; i < pop_size; i++) {
cat[i] = 1;
porosity[i] = 1.0;
}
std::vector<std::thread> Ref;
Ref.reserve(reactor_no);
for (int k = 0; k < reactor_no; k++) {
Ref.emplace_back(reforming, k, cat, porosity);
}
for (auto &X : Ref) { X.join(); }
}
reforming.cpp:
#include "input.h"
void reforming(const int m, const std::vector<int>& cat_check, const std::vector<double>& por) {
Reformer reactor(dim_X, dim_Y, wall_t, zones_X, zones_Y);
std::vector<Segment> seg; // vector holding segment objects
seg.reserve(zones);
set_segments(seg, reactor, m, por, cat_check);
}
set_segments function:
#include "input.h"
void set_segments(std::vector<Segment> &seg, Reformer &reac, const int m,
const std::vector<double> &por, const std::vector<int> &check) {
int i, j, k, n;
double dx = dim_X / static_cast<double> (zones_X);
double dy = dim_Y / static_cast<double> (zones_Y);
std::vector<Segment*> ptr_seg;
ptr_seg.reserve(zones);
k = 0;
for (i = 0; i < zones_X; i++) {
for (j = 0; j < zones_Y; j++) {
n = m * zones + (i * zones_Y + j);
seg.emplace_back(Segment(i, j));
seg[k].setProp(por[n], check[n]);
seg[k].setXR(dx, dy, zones_X, zones_Y);
k++;
}
}
}

Adding std::ref() to the reforming function call parameters solved the problem.
for (int k = 0; k < reactor_no; k++) {
Ref.emplace_back(reforming, k, std::ref(cat), std::ref(porosity));
}
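For context, here is a minimal, self-contained sketch of what std::ref changes (the reforming body below is only a stand-in for the real one): by default std::thread copies each argument into the new thread, so every thread worked on its own copy of cat and porosity; wrapping them in std::ref (or std::cref for read-only access) passes references to the vectors owned by main instead.
#include <functional> // std::ref / std::cref
#include <iostream>
#include <thread>
#include <vector>
// Stand-in for the real reforming function; only the argument passing matters here.
void reforming(const int m, const std::vector<int>& cat_check, const std::vector<double>& por) {
    std::cout << "reformer " << m << " sees " << cat_check.size() << " cat entries and "
              << por.size() << " porosity entries\n";
}
int main() {
    const int reactor_no = 2;
    std::vector<int> cat(10, 1);           // sized (not just reserved), so every index is valid
    std::vector<double> porosity(10, 1.0);
    std::vector<std::thread> Ref;
    Ref.reserve(reactor_no);
    for (int k = 0; k < reactor_no; k++) {
        // Without std::ref, std::thread would copy cat and porosity for each thread.
        Ref.emplace_back(reforming, k, std::ref(cat), std::ref(porosity));
    }
    for (auto& X : Ref) { X.join(); }
}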

Related

Improving speed of affine transformation of an array using intrinsics

In performance-sensitive code, I have to perform an affine transformation of a vector:
Y = a*X + b
where Y and X are vectors and a and b are scalars.
As a quick-and-dirty way to improve the speed of the computation, I delegated vectorization to OpenMP's
#pragma omp simd directive. Having some spare time, I lately tried to implement it directly using intrinsics, getting more or less the same performance as the OMP solution.
Is there a way to beat the OMP vectorization? I can use up to AVX2 instructions.
The code below is tested under windows 10, compiled with VS 2019.
#include <iostream>
#include <armadillo>
#include <chrono>
#include <immintrin.h>
///Computes y=alpha*x+beta
inline void SumAndSetOmp(
arma::Col<double>& y /**< Result*/,
const arma::Col<double>& x /**< Input*/,
const double& alpha /**< Coefficient*/,
const double& beta /**< Offset*/)
{
auto* __restrict lhs = y.memptr();
const auto* __restrict add_rhs = x.memptr();
const auto& n = x.n_elem;
#pragma omp simd
for (arma::uword i = 0; i < n; ++i)
{
lhs[i] = add_rhs[i] * alpha + beta;
}
}
inline void SumAndSetSerial(
arma::Col<double>& y /**< Result*/,
const arma::Col<double>& x /**< Input*/,
const double& alpha /**< Coefficient*/,
const double& beta /**< Offset*/)
{
auto* lhs = y.memptr();
const auto* add_rhs = x.memptr();
const auto& n = x.n_elem;
for (arma::uword i = 0; i < n; ++i)
{
lhs[i] = add_rhs[i] * alpha + beta;
}
}
inline void SumAndSetAVX(arma::Col<double>& y /**< Result*/,
const arma::Col<double>& x /**< Input*/,
const double& alpha /**< Coefficient*/,
const double& beta /**< Offset*/)
{
//Allocate coefficients
const auto alphas = _mm256_set1_pd(alpha);
const auto betas = _mm256_set1_pd(beta);
//Extracting memory addresses
auto* __restrict pos_lhs = y.memptr();
const auto* __restrict pos_rhs = x.memptr();
//Computing sizes
const unsigned int length_array = 4;
const unsigned long long n_aligned = x.n_elem / length_array;
const unsigned int remainder = x.n_elem % length_array;
//Performing AVX instruction
for (unsigned long long i = 0; i < n_aligned; i++) {
const __m256d x_avx = _mm256_loadu_pd(pos_rhs);
const __m256d y_avx = _mm256_fmadd_pd(x_avx, alphas, betas);
_mm256_storeu_pd(pos_lhs, y_avx);
pos_rhs += length_array;
pos_lhs += length_array;
}
//Process the rest serially
for (unsigned int i = 0; i < remainder; i++) {
pos_lhs[i] = alpha * pos_rhs[i] + beta;
}
}
enum method
{
serial,
omp,
avx
};
arma::vec perform_test(const arma::vec& x, const method mtd, int trials = 100, const double alpha = 3.0, const double beta = 5.0)
{
arma::Col<double> res(x.n_elem);
const auto beg = std::chrono::steady_clock::now();
switch (mtd) {
case serial:
for (int i = 0; i < trials; i++)
SumAndSetSerial(res, x, alpha, beta);
break;
case omp:
for (int i = 0; i < trials; i++)
SumAndSetOmp(res, x, alpha, beta);
break;
case avx:
for (int i = 0; i < trials; i++)
SumAndSetAVX(res, x, alpha, beta);
break;
}
std::cout << "time:" << std::chrono::duration<double>(std::chrono::steady_clock::now() - beg).count() << "s\n";
return res;
}
//Benchmarking
double test_fun(long long int n,int trials=100, const double alpha = 3.0, const double beta = 5.0)
{
const arma::Col<double> x(n, arma::fill::randn);
const arma::Col<double> reference = alpha*x + beta;
std::cout << "Serial: ";
const auto res_serial = perform_test(x, method::serial, trials, alpha, beta);
std::cout << "OMP: ";
const auto res_omp = perform_test(x, method::omp, trials, alpha, beta);
std::cout << "AVX: ";
const auto res_avx = perform_test(x, method::avx, trials, alpha, beta);
// errors wrt the reference
const double err_serial = arma::max(arma::abs(reference - res_serial));
const double err_avx = arma::max(arma::abs(reference - res_avx));
const double err_omp = arma::max(arma::abs(reference - res_omp));
//Largest error
const double error = std::max(std::max(err_serial, err_avx), err_omp);
if (error> 1e-6)
{
throw std::runtime_error("Something is wrong!");
}
return error;
}
int main()
{
test_fun(10000000);
}
I had a go at improving on your solutions, but I think they are optimal, so you're best sticking with omp.
The following is speculative and goes beyond your question:
If it's an option, you could try OMP's multithreading. I'm always surprised at how low the number of elements at which you get a reasonable boost is, though 1000 elements with a simple affine transform would, I expect, be too low. If other parts of your algorithm are parallelisable, then this is more likely to be helpful; a sketch of that variant follows below.
If you can afford to change your problem and don't need double precision, you could work with floats.
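For illustration, here is a minimal sketch of that multithreaded variant (same interface as SumAndSetOmp above; the signed loop index is there because MSVC's OpenMP implementation requires one, and the simd clause needs /openmp:experimental on VS 2019). Whether it pays off depends on vector length and core count.
#include <armadillo>
// Thread-level parallelism on top of SIMD; only worthwhile once the vectors are
// long enough to amortise the cost of spinning up the thread team.
inline void SumAndSetOmpParallel(
    arma::Col<double>& y /**< Result*/,
    const arma::Col<double>& x /**< Input*/,
    const double& alpha /**< Coefficient*/,
    const double& beta /**< Offset*/)
{
    auto* __restrict lhs = y.memptr();
    const auto* __restrict add_rhs = x.memptr();
    const long long n = static_cast<long long>(x.n_elem);
#pragma omp parallel for simd
    for (long long i = 0; i < n; ++i)
    {
        lhs[i] = add_rhs[i] * alpha + beta;
    }
}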

Ising model simulation offset critical temperature

I'm writing a simulation of the Ising model in 2D. The model behaves as predicted except for one thing: the critical temperature is roughly 3.5, while it should be near 2/ln(1 + sqrt(2)) ≈ 2.27.
The project is a C++ program that generates the data, and a shell script that exercises the program. The full code can be found here. Also here's lattice.cpp
#include <iostream>
#include "include/lattice.h"
using namespace std;
/*
Copy assignment operator, too long to include in the header.
*/
lattice &lattice::operator=(const lattice &other) {
size_ = other.size_;
J_ = other.J_;
H_ = other.H_;
delete spins_;
spins_ = other.spins_ ? new std::vector<short>(*other.spins_) : NULL;
return *this;
}
void lattice::print() {
unsigned int area = size_ * size_;
for (unsigned int i = 0; i < area; i++) {
cout << to_symbol(spins_->at(i));
if (i % size_ == size_ - 1)
cout << endl;
}
cout << endl;
}
/*
Computes the energy associated with a spin at the given point.
It is explicitly float as that would allow the compiler to make use of multiple
registers instead of keeping track of unneeded precision. (typically J, H ~ 1).
*/
float lattice::compute_point_energy(int row, int col) {
int accumulator = get(row + 1, col) + get(row - 1, col) + get(row, col - 1) +
get(row, col + 1);
return -get(row, col) * (accumulator * J_ + H_);
}
/*
Computes total magnetisation in O(n^2). Thread safe
*/
int lattice::total_magnetisation() {
int sum = 0;
#pragma omp parallel for reduction(+ : sum)
for (unsigned int i = 0; i < size_ * size_; i++) {
sum += spins_->at(i);
}
return sum;
}
with lattice.h
#ifndef lattice_h
#define lattice_h
#include <cmath>
#include <vector>
/* Converts spin up/down to easily printable symbols. */
char inline to_symbol(int in) { return in == -1 ? '-' : '+'; }
/* Converts given pair of indices to those with periodic boundary conditions. */
int inline to_periodic(int row, int col, int size) {
if (row < 0 || row >= size)
row = abs(size - abs(row));
if (col < 0 || col >= size)
col = abs(size - abs(col));
return row * size + col;
}
class lattice {
private:
unsigned int size_;
// vector<bool> would be more space efficient, but it would not allow
// multithreading
std::vector<short> *spins_;
float J_;
float H_;
public:
lattice() noexcept : size_(0), spins_(NULL), J_(1.0), H_(0.0) {}
lattice(int new_size, double new_J, double new_H) noexcept
: size_(new_size), spins_(new std::vector<short>(size_ * size_, 1)),
J_(new_J), H_(new_H) {}
lattice(const lattice &other) noexcept
: lattice(other.size_, other.J_, other.H_) {
#pragma omp parallel for
for (unsigned int i = 0; i < size_ * size_; i++)
spins_->at(i) = other.spins_->at(i);
}
lattice &operator=(const lattice &);
~lattice() { delete spins_; }
void print();
short get(int row, int col) {
return spins_->at(to_periodic(row, col, size_));
}
unsigned int get_size() { return size_; }
void flip(int row, int col) { spins_->at(to_periodic(row, col, size_)) *= -1; }
int total_magnetisation();
float compute_point_energy(int row, int col);
};
#endif
and simulation.cpp
#include <iostream>
#include <math.h>
#include "include/simulation.h"
using namespace std;
/*
Advances the simulation a given number of steps, and updates/prints the statistics
into the given file pointer.
Defaults to stdout.
The number of time_steps is explcitly unsigned, so that linters/IDEs remind
the end user of the file that extra care needs to be taken, as well as to allow
advancing the simulation a larger number of times.
*/
void simulation::advance(unsigned int time_steps, FILE *output) {
unsigned int area = spin_lattice_.get_size() * spin_lattice_.get_size();
for (unsigned int i = 0; i < time_steps; i++) {
// If we don't update mean_energy_ every time, we might get incorrect
// thermodynamic behaviour.
total_energy_ = compute_energy(spin_lattice_);
double temperature_delta = total_energy_/area - mean_energy_;
if (abs(temperature_delta) < 1.0 / area){
cerr<<temperature_delta<<"! Reached equilibrium "<<endl;
}
temperature_ += temperature_delta;
mean_energy_ = total_energy_ / area;
if (time_ % print_interval_ == 0) {
total_magnetisation_ = spin_lattice_.total_magnetisation();
mean_magnetisation_ = total_magnetisation_ / area;
print_status(output);
}
advance();
}
}
/*
Advances the simulation a single step.
DOES NOT KEEP TRACK OF STATISTICS. Hence private.
*/
void simulation::advance() {
#pragma omp parallel for collapse(2)
for (unsigned int row = 0; row < spin_lattice_.get_size(); row++) {
for (unsigned int col = 0; col < spin_lattice_.get_size(); col++) {
double dE = compute_dE(row, col);
double p = r_.random_uniform();
float rnd = rand() / (RAND_MAX + 1.);
if (exp(-dE / temperature_) > rnd) {
spin_lattice_.flip(row, col);
}
}
}
time_++;
}
/*
Computes change in energy due to flipping one single spin.
The function returns a single-precision floating-point number, as data cannot under
most circumstances make use of greater precision than that (save J is set to a
non-machine-representable value).
The code modifies the spin lattice, as an alternative (copying the neighborhood
of a given point), would make the code run slower by a factor of 2.25
*/
float simulation::compute_dE(int row, int col) {
float e_0 = spin_lattice_.compute_point_energy(row, col);
return -4*e_0;
}
/*
Computes the total energy associated with spins in the spin_lattice_.
I originally used this function to test the code that tracked energy as the lattice
itself was modified, but that code turned out to be only marginally faster, and
not thread-safe. This is due to a race condition: when one thread uses a neighborhood
of a point, while another thread was computing the energy of one such point in
the neighborhood of (row, col).
*/
double simulation::compute_energy(lattice &other) {
double energy_sum = 0;
unsigned int max = other.get_size();
#pragma omp parallel for reduction(+ : energy_sum)
for (unsigned int i = 0; i < max; i++) {
for (unsigned int j = 0; j < max; j++) {
energy_sum += other.compute_point_energy(i, j);
}
}
return energy_sum;
}
void simulation::set_to_chequerboard(int step){
if (time_ !=0){
return;
}else{
for (unsigned int i=0; i< spin_lattice_.get_size(); ++i){
for (unsigned int j=0; j<spin_lattice_.get_size(); ++j){
if ((i/step)%2-(j/step)%2==0){
spin_lattice_.flip(i, j);
}
}
}
}
}
with simulation.h
#ifndef simulation_h
#define simulation_h
#include "lattice.h"
#include "rng.h"
#include <gsl/gsl_rng.h>
/*
The logic of the entire simulation of the Ising model of magnetism.
This simulation will run and print statistics at a given time interval.
A simulation can be advanced a single time step, or many at a time,
*/
class simulation {
private:
unsigned int time_ = 0; // Current time of the simulation.
rng r_ = rng();
lattice spin_lattice_;
double temperature_;
double mean_magnetisation_ = 1;
double mean_energy_;
double total_magnetisation_;
double total_energy_;
unsigned int print_interval_ = 1;
void advance();
public:
void set_print_interval(unsigned int new_print_interval) { print_interval_ = new_print_interval; }
simulation(int new_size, double new_temp, double new_J, double new_H)
: time_(0), spin_lattice_(lattice(new_size, new_J, new_H)), temperature_(new_temp),
mean_energy_(new_J * (-4)), total_magnetisation_(new_size * new_size),
total_energy_(compute_energy(spin_lattice_)) {}
void print_status(FILE *f) {
f = f==NULL? stdout : f;
fprintf(f, "%4d\t%e \t%e\t%e\n", time_, mean_magnetisation_,
mean_energy_, temperature_);
}
void advance(unsigned int time_steps, FILE *output);
double compute_energy(lattice &other);
float compute_dE(int row, int col);
void set_to_chequerboard(int step);
void print_lattice(){
spin_lattice_.print();
};
// void load_custom(const lattice& custom);
};
#endif
The output right now looks something like this (plot omitted): a transition at roughly 3.5, while it should be a step down near 2.26.
I have found a few issues in your code:
The compute_dE method returns the wrong energy, as the factor of 2 shouldn't be there. The Hamiltonian of the Ising system is H = -J * sum_<i,j> s_i * s_j, while you are effectively using H = -2*J * sum_<i,j> s_i * s_j.
The compute_energy method returns the wrong energy. The method should iterate over each spin pair only once. Something like this should do the trick:
for (unsigned int i = 0; i < max; i++) {
for (unsigned int j = i + 1; j < max; j++) {
energy_sum += other.compute_point_energy(i, j);
}
}
You use a temperature that is updated on the fly instead of using the target temperature. I do not really understand the purpose of that.
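To make the first point concrete, here is a sketch of compute_dE without the extra factor of 2 (it reuses compute_point_energy exactly as defined in lattice.cpp above):
/*
Energy change from flipping the spin at (row, col).
Flipping the spin negates its point energy, so dE = E_after - E_before = -2 * E_before.
*/
float simulation::compute_dE(int row, int col) {
    float e_0 = spin_lattice_.compute_point_energy(row, col);
    return -2 * e_0;
}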

2D Poisson-disk sampling in a specific square (not a unit square) with specific minimum distance

Is there any way I can modify the Poisson-disk point generator found here? I need to generate new Poisson points using the coordinates of points in the textfile.txt to improve the distribution. Below is the C++ code of Poisson-disk sampling in a unit square.
poissonGenerator.h:
#include <vector>
#include <random>
#include <stdint.h>
#include <time.h>
namespace PoissonGenerator
{
class DefaultPRNG
{
public:
DefaultPRNG()
: m_Gen(std::random_device()())
, m_Dis(0.0f, 1.f)
{
// prepare PRNG
m_Gen.seed(time(nullptr));
}
explicit DefaultPRNG(unsigned short seed)
: m_Gen(seed)
, m_Dis(0.0f, 1.f)
{
}
double RandomDouble()
{
return static_cast <double>(m_Dis(m_Gen));
}
int RandomInt(int Max)
{
std::uniform_int_distribution<> DisInt(0, Max);
return DisInt(m_Gen);
}
private:
std::mt19937 m_Gen;
std::uniform_real_distribution<double> m_Dis;
};
struct sPoint
{
sPoint()
: x(0)
, y(0)
, m_valid(false){}
sPoint(double X, double Y)
: x(X)
, y(Y)
, m_valid(true){}
double x;
double y;
bool m_valid;
//
bool IsInRectangle() const
{
return x >= 0 && y >= 0 && x <= 1 && y <= 1;
}
//
bool IsInCircle() const
{
double fx = x - 0.5f;
double fy = y - 0.5f;
return (fx*fx + fy*fy) <= 0.25f;
}
};
struct sGridPoint
{
sGridPoint(int X, int Y)
: x(X)
, y(Y)
{}
int x;
int y;
};
double GetDistance(const sPoint& P1, const sPoint& P2)
{
return sqrt((P1.x - P2.x)*(P1.x - P2.x) + (P1.y - P2.y)*(P1.y - P2.y));
}
sGridPoint ImageToGrid(const sPoint& P, double CellSize)
{
return sGridPoint((int)(P.x / CellSize), (int)(P.y / CellSize));
}
struct sGrid
{
sGrid(int W, int H, double CellSize)
: m_W(W)
, m_H(H)
, m_CellSize(CellSize)
{
m_Grid.resize((m_H));
for (auto i = m_Grid.begin(); i != m_Grid.end(); i++){ i->resize(m_W); }
}
void Insert(const sPoint& P)
{
sGridPoint G = ImageToGrid(P, m_CellSize);
m_Grid[G.x][G.y] = P;
}
bool IsInNeighbourhood(sPoint Point, double MinDist, double CellSize)
{
sGridPoint G = ImageToGrid(Point, CellSize);
//number of adjacent cell to look for neighbour points
const int D = 5;
// Scan the neighbourhood of the Point in the grid
for (int i = G.x - D; i < G.x + D; i++)
{
for (int j = G.y - D; j < G.y + D; j++)
{
if (i >= 0 && i < m_W && j >= 0 && j < m_H)
{
sPoint P = m_Grid[i][j];
if (P.m_valid && GetDistance(P, Point) < MinDist){ return true; }
}
}
}
return false;
}
private:
int m_H;
int m_W;
double m_CellSize;
std::vector< std::vector< sPoint> > m_Grid;
};
template <typename PRNG>
sPoint PopRandom(std::vector<sPoint>& Points, PRNG& Generator)
{
const int Idx = Generator.RandomInt(Points.size() - 1);
const sPoint P = Points[Idx];
Points.erase(Points.begin() + Idx);
return P;
}
template <typename PRNG>
sPoint GenerateRandomPointAround(const sPoint& P, double MinDist, PRNG& Generator)
{
// Start with non-uniform distribution
double R1 = Generator.RandomDouble();
double R2 = Generator.RandomDouble();
// radius should be between MinDist and 2 * MinDist
double Radius = MinDist * (R1 + 1.0f);
//random angle
double Angle = 2 * 3.141592653589f * R2;
// the new point is generated around the point (x, y)
double X = P.x + Radius * cos(Angle);
double Y = P.y + Radius * sin(Angle);
return sPoint(X, Y);
}
// Return a vector of generated points
// NewPointsCount - refer to bridson-siggraph07-poissondisk.pdf
// for details (the value 'k')
// Circle - 'true' to fill a circle, 'false' to fill a rectangle
// MinDist - minimal distance estimator, use negative value for default
template <typename PRNG = DefaultPRNG>
std::vector<sPoint> GeneratePoissonPoints(rsize_t NumPoints, PRNG& Generator, int NewPointsCount = 30,
bool Circle = true, double MinDist = -1.0f)
{
if (MinDist < 0.0f)
{
MinDist = sqrt(double(NumPoints)) / double(NumPoints);
}
std::vector <sPoint> SamplePoints;
std::vector <sPoint> ProcessList;
// create the grid
double CellSize = MinDist / sqrt(2.0f);
int GridW = (int)(ceil)(1.0f / CellSize);
int GridH = (int)(ceil)(1.0f / CellSize);
sGrid Grid(GridW, GridH, CellSize);
sPoint FirstPoint;
do
{
FirstPoint = sPoint(Generator.RandomDouble(), Generator.RandomDouble());
} while (!(Circle ? FirstPoint.IsInCircle() : FirstPoint.IsInRectangle()));
//Update containers
ProcessList.push_back(FirstPoint);
SamplePoints.push_back(FirstPoint);
Grid.Insert(FirstPoint);
// generate new points for each point in the queue
while (!ProcessList.empty() && SamplePoints.size() < NumPoints)
{
#if POISSON_PROGRESS_INDICATOR
// a progress indicator, kind of
if (SamplePoints.size() % 100 == 0) std::cout << ".";
#endif // POISSON_PROGRESS_INDICATOR
sPoint Point = PopRandom<PRNG>(ProcessList, Generator);
for (int i = 0; i < NewPointsCount; i++)
{
sPoint NewPoint = GenerateRandomPointAround(Point, MinDist, Generator);
bool Fits = Circle ? NewPoint.IsInCircle() : NewPoint.IsInRectangle();
if (Fits && !Grid.IsInNeighbourhood(NewPoint, MinDist, CellSize))
{
ProcessList.push_back(NewPoint);
SamplePoints.push_back(NewPoint);
Grid.Insert(NewPoint);
continue;
}
}
}
#if POISSON_PROGRESS_INDICATOR
std::cout << std::endl << std::endl;
#endif // POISSON_PROGRESS_INDICATOR
return SamplePoints;
}
}
and the main program is:
poisson.cpp
#include "stdafx.h"
#include <vector>
#include <iostream>
#include <fstream>
#include <memory.h>
#define POISSON_PROGRESS_INDICATOR 1
#include "PoissonGenerator.h"
const int NumPoints = 20000; // minimal number of points to generate
int main()
{
PoissonGenerator::DefaultPRNG PRNG;
const auto Points =
PoissonGenerator::GeneratePoissonPoints(NumPoints,PRNG);
std::ofstream File("Poisson.txt", std::ios::out);
File << "NumPoints = " << Points.size() << std::endl;
for (const auto& p : Points)
{
File << " " << p.x << " " << p.y << std::endl;
}
system("PAUSE");
return 0;
}
Suppose you have a point in the space [0,1] x [0,1], in the form of a std::pair<double, double>, but desire points in the space [x,y] x [w,z].
The function object
struct ProjectTo {
double x, y, w, z;
std::pair<double, double> operator()(std::pair<double, double> in) const
{
return std::make_pair(in.first * (y - x) + x, in.second * (z - w) + w);
}
};
will transform such an input point into the desired output point.
Suppose further you have a std::vector<std::pair<double, double>> points, all drawn from the input distribution.
std::transform(points.begin(), points.end(), points.begin(), ProjectTo{ x, y, w, z });
Now you have a vector of points in the output space.
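Putting it together, a small self-contained sketch (the target rectangle [2,5] x [10,20] and the sample points are placeholders; in practice the points would come from the unit-square generator above):
#include <algorithm>
#include <iostream>
#include <utility>
#include <vector>
struct ProjectTo {
    double x, y, w, z; // target space [x, y] x [w, z]
    std::pair<double, double> operator()(std::pair<double, double> in) const
    {
        return std::make_pair(in.first * (y - x) + x, in.second * (z - w) + w);
    }
};
int main()
{
    std::vector<std::pair<double, double>> points = { { 0.0, 0.0 }, { 0.5, 0.25 }, { 1.0, 1.0 } };
    std::transform(points.begin(), points.end(), points.begin(), ProjectTo{ 2.0, 5.0, 10.0, 20.0 });
    for (const auto& p : points)
        std::cout << p.first << " " << p.second << "\n";
}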

Is it possible to use CUDA parallelizing this nested for loop?

I want to speed up this nested for loop. I have just started learning CUDA; how could I use CUDA to parallelize this C++ code?
#define PI 3.14159265
using namespace std;
int main()
{
int nbint = 2;
int hits = 20;
int nbinp = 2;
float _theta, _phi, _l, _m, _n, _k = 0, delta = 5;
float x[20],y[20],z[20],a[20],t[20];
for (int i = 0; i < hits; ++i)
{
x[i] = rand() / (float)(RAND_MAX / 100);
}
for (int i = 0; i < hits; ++i)
{
y[i] = rand() / (float)(RAND_MAX / 100);
}
for (int i = 0; i < hits; ++i)
{
z[i] = rand() / (float)(RAND_MAX / 100);
}
for (int i = 0; i < hits; ++i)
{
a[i] = rand() / (float)(RAND_MAX / 100);
}
float maxforall = 1e-6;
float theta0;
float phi0;
for (int i = 0; i < nbint; i++)
{
_theta = (0.5 + i)*delta;
for (int j = 0; j < nbinp; j++)
{
_phi = (0.5 + j)*delta / _theta;
_l = sin(_theta* PI / 180.0)*cos(_phi* PI / 180.0);
_m = sin(_theta* PI / 180.0)*sin(_phi* PI / 180.0);
_n = cos(_theta* PI / 180.0);
for (int k = 0; k < hits; k++)
{
_k = -(_l*x[k] + _m*y[k] + _n*z[k]);
t[k] = a[k] - _k;
}
qsort(t, 0, hits - 1);
float max = t[0];
for (int k = 0; k < hits; k++)
{
if (max < t[k])
max = t[k];
}
if (max > maxforall)
{
maxforall = max;
}
}
}
return 0;
}
I want to put the innermost for loop and the sort part (maybe the whole nested loop) into parallel. After sorting those arrays I find the maximum of all arrays. I use the maximum to simplify the code. The reason I need the sort is that the maximum here represents continuous time information (all arrays contain time information). The sort part orders those times from lowest to highest. Then I compare a specific time interval (not a single value). The comparison process is almost like choosing the maximum, but with a continuous interval rather than a single value.
Your 3 nested loops calculate nbint*nbinp*hits values. Since each of those values is independent of the others, all of them can be calculated in parallel.
You stated in your comments that you have a commutative and associative "filter condition" which reduces the output to a single scalar value. This can be exploited to avoid sorting and storing the temporary values. Instead, we can calculate the values on the fly and then apply a parallel reduction to determine the end result.
This could be done in "raw" CUDA; below I implemented the idea using Thrust. The main idea is to run grid_op nbint*nbinp*hits times in parallel. In order to recover the three original "loop indices" from the single scalar index which is passed to grid_op, the algorithm from this SO question is used.
thrust::transform_reduce performs the on-the-fly transformation and the subsequent parallel reduction (here thrust::maximum is used as a substitute for the actual filter condition).
#include <cmath>
#include <thrust/device_vector.h>
#include <thrust/host_vector.h>
#include <thrust/functional.h>
#include <thrust/transform_reduce.h>
#include <thrust/iterator/counting_iterator.h>
#include <thrust/iterator/zip_iterator.h>
#include <thrust/tuple.h>
// ### BEGIN utility for demo ####
#include <iostream>
#include <thrust/random.h>
thrust::host_vector<float> random_vector(const size_t N)
{
thrust::default_random_engine rng;
thrust::uniform_real_distribution<float> u01(0.0f, 1.0f);
thrust::host_vector<float> temp(N);
for(size_t i = 0; i < N; i++) {
temp[i] = u01(rng);
}
return temp;
}
// ### END utility for demo ####
template <typename... Iterators>
thrust::zip_iterator<thrust::tuple<Iterators...>> zip(Iterators... its)
{
return thrust::make_zip_iterator(thrust::make_tuple(its...));
}
template <typename ZipIterator>
class grid_op
{
public:
grid_op(ZipIterator zipIt, std::size_t dim1, std::size_t dim2) : zipIt(zipIt), dim1(dim1), dim2(dim2){}
__host__ __device__
float operator()(std::size_t index) const
{
const auto coords = unflatten_3d_index(index, dim1, dim2);
const auto values = zipIt[thrust::get<2>(coords)];
const float delta = 5;
const float _theta = (0.5f + thrust::get<0>(coords))*delta;
const float _phi = (0.5f + thrust::get<1>(coords))*delta / _theta;
const float _l = sin(_theta* M_PI / 180.0)*cos(_phi* M_PI / 180.0);
const float _m = sin(_theta* M_PI / 180.0)*sin(_phi* M_PI / 180.0);
const float _n = cos(_theta* M_PI / 180.0);
const float _k = -(_l*thrust::get<0>(values) + _m*thrust::get<1>(values) + _n*thrust::get<2>(values));
return (thrust::get<3>(values) - _k);
}
private:
__host__ __device__
thrust::tuple<std::size_t, std::size_t, std::size_t>
unflatten_3d_index(std::size_t index, std::size_t dim1, std::size_t dim2) const
{
// taken from https://stackoverflow.com/questions/29142417/4d-position-from-1d-index
std::size_t x = index % dim1;
std::size_t y = ( ( index - x ) / dim1 ) % dim2;
std::size_t z = ( ( index - y * dim1 - x ) / (dim1 * dim2) );
return thrust::make_tuple(x,y,z);
}
ZipIterator zipIt;
std::size_t dim1;
std::size_t dim2;
};
template <typename ZipIterator>
grid_op<ZipIterator> make_grid_op(ZipIterator zipIt, std::size_t dim1, std::size_t dim2)
{
return grid_op<ZipIterator>(zipIt, dim1, dim2);
}
int main()
{
const int nbint = 3;
const int nbinp = 4;
const int hits = 20;
const std::size_t N = nbint * nbinp * hits;
thrust::device_vector<float> d_x = random_vector(hits);
thrust::device_vector<float> d_y = random_vector(hits);
thrust::device_vector<float> d_z = random_vector(hits);
thrust::device_vector<float> d_a = random_vector(hits);
auto zipIt = zip(d_x.begin(), d_y.begin(), d_z.begin(), d_a.begin());
auto countingIt = thrust::counting_iterator<std::size_t>(0);
auto unary_op = make_grid_op(zipIt, nbint, nbinp);
auto binary_op = thrust::maximum<float>();
const float init = 0;
float max = thrust::transform_reduce(
countingIt, countingIt+N,
unary_op,
init,
binary_op
);
std::cout << "max = " << max << std::endl;
}

Trying to create a neural network in c++

I'm trying to implement a neural network in C++, but all I have to show for it are lots of unknown errors. I've already searched and found other posts such as (C++ class has no member named); however, they were no help to me. Can you please help me figure out how to resolve all the errors I've been getting?
Here's the code
#include <iostream>
#include <vector>
#include <cstdlib>
#include <assert.h>
#include <math.h>
using namespace std;
struct Connection
{
double weight;
double deltaWeight;
};
class Neuron {};
typedef vector<Neuron> Layer;
// ************************* class Neuron *************************
class Neuron
{
public:
Neuron(unsigned numOutputs, unsigned myIndex);
void setOutputVal(double val)
{
m_outputVal = val;
};
double getOutputVal(void) const
{
return m_outputVal;
};
void feedForward(const Layer &prevLayer);
void calcOutputGradients(double targetVal);
void calcHiddenGradients(const Layer &nextLayer);
void updateInputWeights(Layer &prevLayer);
private:
static double eta; // [0.0..1.0] overall net training rate
static double alpha; // [0.0..n] multiplier of last weight change (momentum)
static double transferFunction(double x);
static double transferFunctionDerivative(double x);
static double randomWeight(void)
{
return rand() / double(RAND_MAX);
};
double sumDOW(const Layer &nextLayer) const;
double m_outputVal;
vector<Connection> m_outputWeights;
unsigned m_myIndex;
double m_gradient;
};
double Neuron::eta = 0.15; // overall net learning rate, [0.0..1.0]
double Neuron::alpha = 0.5; // momentum, multiplier of last deltaWeight [0.0..n]
void Neuron::updateInputWeights(Layer &prevLayer)
{
// The weight are updated in the Connection container
// in the neurons in the preceding layer
for (unsigned n = 0; n < prevLayer.size(); ++n)
{
Neuron &neuron = prevLayer[n];
double oldDeltaWeight = neuron.m_outputWeights[m_myIndex].deltaWeight;
double newDeltaWeight =
eta
* neuron_getOutputVal()
* m_gradient
+ alpha
* oldDeltaWeight;
neuron.m_outputWeights[m_myIndex].deltaWeight = newDeltaWeight;
neuron.m_outputWeights[m_myIndex].weight += newDeltaWeight;
}
}
double Neuron::sumDOW(const Layer &nextLayer) const
{
double sum = 0.0;
// Sum our contributions of the errors at the nodes we feed
for (unsigned n = 0; nextLayer.size() - 1; ++n)
{
sum += m_outputWeights[n].weight * nextLayer[n].m_gradient;
}
return sum;
}
void Neuron::calcHiddenGradients(const Layer &nextLayer)
{
double dow = sumDOW(nextLayer);
m_gradient = dow * Neuron::transferFunctionDerivative(m_outputVal);
}
void Neuron::calcOutputGradients(double targetVal)
{
double delta = targetVal - m_outputVal;
m_gradient = delta * Neuron::transferFunctionDerivative(m_outputVal);
}
double Neuron::transferFunction(double x)
{
// tanh - output range [-1.0..1.0]
return tanh(x);
}
double Neuron::transferFunctionDerivative(double x)
{
// tanh derivative
return 1.0 - x * x;
}
void Neuron::feedForward(const Layer &prevLayer)
{
double sum = 0.0;
// Sum the previous layer's outputs (which are our inputs)
// Include the bias node from the previous layer
for (unsigned n = 0; n < prevLayer.size(); ++n)
{
sum += prevLayer[n].getOutputVal() *
prevLayer[n].m_outputWeights[m_myIndex].weight;
}
m_outputVal = Neuron::transferFunction(sum);
}
Neuron::Neuron(unsigned numOutputs, unsigned myIndex)
{
for (unsigned c = 0; c < numOutputs; ++c)
{
m_outputWeights.push_back(Connection());
m_outputWeights.back().weight = randomWeight();
}
m_myIndex = myIndex;
}
// ************************* class Net *************************
class Net
{
public:
Net(const vector<unsigned> &topology);
void feedForward(const vector<double> &inputVals);
void backProp(const vector<double> &targetVals);
void getResults(vector<double> &resultVals) const;
private:
vector<Layer> m_layers; // m_layers{layerNum][neuronNum]
double m_error;
double m_recentAverageError;
double m_recentAverageSmoothingFactor;
};
void Net::getResults(vector<double> &resultVals) const
{
resultVals.clear();
for (unsigned n = 0; n < m_layers.back().size() - 1; ++n)
{
resultVals.push_back(m_layers.back()[n].getOutputVals());
}
}
void Net::backProp(const vector<double> &targetVals)
{
// Calculate overall net error (RMS of output errors)
Layer &outputLayer = m_layers.back();
m_error = 0.0;
for (unsigned n = 0; n < outputLayer.size() - 1; ++n)
{
double delta = targetVals[n] - outputLayer[n].getOutputVal();
m_error += delta * delta;
}
m_error /= outputLayer.size() - 1; // get average error squared
m_error = sqrt(m_error); // RMS
// Implement a recent average measurement:
m_recentAverageError =
(m_recentAverageError * m_recentAverageSmoothingFactor + m_error)
/ (m_recentAverageSmoothingFactor + 1.0);
// Calculate output layer gradients
for (unsigned n = 0; n < outputLayer.size() - 1; ++n)
{
outputLayer[n].calcOutputGradients(targetVals[n]);
}
// Calculate gradients on hidden layers
for (unsigned layerNum = m_layers.size() - 2; layerNum > 0; --layerNum)
{
Layer &hiddenLayer = m_layers[layerNum];
Layer &nextLayer = m_layers[layerNum + 1];
for (unsigned n = 0; n < hiddenLayer.size(); ++n)
{
hiddenLayer[n].calcHiddenGradients(nextLayer);
}
}
// For all layers from output to first hidden layer.
// update connection weights
for (unsigned layerNum = m_layers.size() - 1; layerNum > 0; --layerNum)
{
Layer &layer = m_layers[layerNum];
Layer &prevLayer = m_layers[layerNum - 1];
for (unsigned n = 0; n < layer.size() - 1; ++n)
{
layer[n].updateInputWeights(prevLayer);
}
}
}
void Net::feedForward(const vector<double> &inputVals)
{
assert(inputVals.size() == m_layers[0].size() - 1);
// Assign (latch) the values into the input neurons
for (unsigned i = 0; i < inputVals.size(); ++i)
{
m_layers[0][i].setOutputVal(inputVals[i]);
}
// Forward propagate
for (unsigned layerNum = 1; layerNum = m_layers.size(); ++layerNum)
{
Layer &prevLayer = m_layers[layerNum - 1];
for (unsigned n = 0; n < m_layers[layerNum].size() - 1; ++n)
{
m_layers[layerNum][n].feedForward(prevLayer);
}
}
}
Net::Net(const vector<unsigned> &topology)
{
unsigned numLayers = topology.size();
for (unsigned layerNum = 0; layerNum < numLayers; ++layerNum)
{
m_layers.push_back(Layer());
unsigned numOutputs = layerNum == topology.size() - 1 ? 0 : topology[layerNum + 1];
// We have made a new layer, now fill it with neurons, and
// add a bias neuron to the layer:
for (unsigned neuronNum = 0; neuronNum <= topology[layerNum]; ++neuronNum)
{
m_layers.back().push_back(Neuron(numOutputs, neuronNum));
cout << "Made a Neuron!" << endl;
}
}
}
int main()
{
// e.g.. { 3, 2, 1 }
// THIS IS FOR THE NUMBER OF NEURONS THAT YOU WANT!!
vector<unsigned> topology;
topology.push_back(3);
topology.push_back(2);
topology.push_back(1);
Net myNet(topology);
vector<double> inputVals;
myNet.feedForward(inputVals);
vector<double> targetVals;
myNet.backProp(targetVals);
vector<double> resultVals;
myNet.getResults(resultVals);
system("pause");
}
I've been getting errors such as:
Error: class "Neuron" has no member "feedForward"
Error: class "Neuron" has no member "setOutputVal"
'neuron_OutputVal': identifier not found
class Neuron {};
Here your file defined a class called Neuron. It's a class with no members, and no methods. A completely empty class.
A few lines later:
class Neuron
{
public:
// ...
Why, here's another class called Neuron. However, in C++, all classes must have unique names. So, your C++ compiler will completely reject this class declaration, and refuse to process it. Or, take some other, unspecified action.
There are three issues in your code.
First, as others mentioned, fix your forward declaration:
class Neuron;
Note this doesn't have {} as in your code. You don't need to move the typedef down, since your Neuron class uses the typedef 'Layer'.
Second, on line 70, use neuron.getOutputVal() instead of neuron_getOutputVal().
Third, on line 167, just drop the s from getOutputVals.
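To make those three fixes concrete, here is a minimal, self-contained sketch of the declaration pattern (a cut-down Neuron, not your full class):
#include <vector>
using namespace std;
class Neuron;                  // proper forward declaration: no braces, no body
typedef vector<Neuron> Layer;  // fine here; nothing needs the complete type yet
class Neuron
{
public:
    explicit Neuron(double v) : m_outputVal(v) {}
    double getOutputVal(void) const { return m_outputVal; }
    // Layer is usable in member signatures because the typedef is already visible.
    double sumLayer(const Layer &prevLayer) const;
private:
    double m_outputVal;
};
double Neuron::sumLayer(const Layer &prevLayer) const
{
    double sum = 0.0;
    for (unsigned n = 0; n < prevLayer.size(); ++n)
    {
        sum += prevLayer[n].getOutputVal(); // member access: neuron.getOutputVal(), no trailing s
    }
    return sum;
}
int main()
{
    Layer layer;
    layer.push_back(Neuron(0.25));
    layer.push_back(Neuron(0.75));
    return Neuron(0.0).sumLayer(layer) == 1.0 ? 0 : 1;
}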
class Neuron {}; is not a valid forward declaration. You can't use a forward declaration anyway, because declaration of Layer requires full knowledge of Neuron.
You'll have to remove the forward declaration class Neuron {}; entirely and move your declaration of typedef vector<Neuron> Layer; down. Put it right above your declaration of class Net.