Multi-threaded Simulated Annealing - C++

I wrote a multithreaded simulated annealing program, but it's not running. I'm not sure whether the code is correct. It compiles, but when I run it, it crashes with a runtime error.
#include <stdio.h>
#include <time.h>
#include <iostream>
#include <stdlib.h>
#include <math.h>
#include <string>
#include <vector>
#include <algorithm>
#include <fstream>
#include <ctime>
#include <windows.h>
#include <process.h>
using namespace std;
typedef vector<double> Layer; //defines a vector type
typedef struct {
    Layer Solution1;
    double temp1;
    double coolingrate1;
    int MCL1;
    int prob1;
} t;
//void SA(Layer Solution, double temp, double coolingrate, int MCL, int prob){
double Rand_NormalDistri(double mean, double stddev) {
    // Random number from a normal distribution
    static double n2 = 0.0;
    static int n2_cached = 0;
    if (!n2_cached) {
        // choose a point x,y in the unit circle uniformly at random
        double x, y, r;
        do {
            // scale two random integers to doubles between -1 and 1
            x = 2.0*rand()/RAND_MAX - 1;
            y = 2.0*rand()/RAND_MAX - 1;
            r = x*x + y*y;
        } while (r == 0.0 || r > 1.0);
        {
            // Apply Box-Muller transform on x, y
            double d = sqrt(-2.0*log(r)/r);
            double n1 = x*d;
            n2 = y*d;
            // scale and translate to get desired mean and standard deviation
            double result = n1*stddev + mean;
            n2_cached = 1;
            return result;
        }
    } else {
        n2_cached = 0;
        return n2*stddev + mean;
    }
}
double FitnessFunc(Layer x, int ProbNum)
{
    int i, j, k;
    double z;
    double fit = 0;
    double sumSCH;
    if (ProbNum == 1) {
        // Ellipsoidal function
        for (j = 0; j < x.size(); j++)
            fit += ((j+1)*(x[j]*x[j]));
    }
    else if (ProbNum == 2) {
        // Schwefel's function
        for (j = 0; j < x.size(); j++)
        {
            sumSCH = 0;
            for (i = 0; i < j; i++)
                sumSCH += x[i];
            fit += sumSCH * sumSCH;
        }
    }
    else if (ProbNum == 3) {
        // Rosenbrock's function
        for (j = 0; j < x.size()-1; j++)
            fit += 100.0*(x[j]*x[j] - x[j+1])*(x[j]*x[j] - x[j+1]) + (x[j]-1.0)*(x[j]-1.0);
    }
    return fit;
}
double probl(double energychange, double temp) {
    double a;
    a = (-energychange)/temp;
    return double(min(1.0, exp(a)));
}
int random(int min, int max) {
    int n = max - min + 1;
    int remainder = RAND_MAX % n;
    int x;
    do {
        x = rand();
    } while (x >= RAND_MAX - remainder);
    return min + x % n;
}
//void SA(Layer Solution, double temp, double coolingrate, int MCL, int prob){
void SA(void *param){
    t *args = (t*) param;
    Layer Solution = args->Solution1;
    double temp = args->temp1;
    double coolingrate = args->coolingrate1;
    int MCL = args->MCL1;
    int prob = args->prob1;
    double Energy;
    double EnergyNew;
    double EnergyChange;
    Layer SolutionNew(50);
    Energy = FitnessFunc(Solution, prob);
    while (temp > 0.01){
        for (int i = 0; i < MCL; i++){
            for (int j = 0; j < SolutionNew.size(); j++){
                SolutionNew[j] = Rand_NormalDistri(5, 1);
            }
            EnergyNew = FitnessFunc(SolutionNew, prob);
            EnergyChange = EnergyNew - Energy;
            if(EnergyChange <= 0){
                Solution = SolutionNew;
                Energy = EnergyNew;
            }
            if(probl(EnergyChange, temp) > random(0,1)){
                //cout<<SolutionNew[i]<<endl;
                Solution = SolutionNew;
                Energy = EnergyNew;
                cout << temp << "=" << Energy << endl;
            }
        }
        temp = temp * coolingrate;
    }
}
int main ()
{
    srand( time(NULL) ); // seed so we get different numbers each time the program is run
    Layer SearchSpace(50); // declare a vector of 50 dimensions
    //for(int a = 0;a < 10; a++){
    for (int i = 0; i < SearchSpace.size(); i++){
        SearchSpace[i] = Rand_NormalDistri(5, 1);
    }
    t *arg1;
    arg1 = (t *)malloc(sizeof(t));
    arg1->Solution1 = SearchSpace;
    arg1->temp1 = 1000;
    arg1->coolingrate1 = 0.01;
    arg1->MCL1 = 100;
    arg1->prob1 = 3;
    //cout << "Test " << ""<<endl;
    _beginthread( SA, 0, (void*) arg1);
    Sleep( 100 );
    //SA(SearchSpace, 1000, 0.01, 100, 3);
    //}
    return 0;
}
Please help.
Thanks
Avinesh

As leftaroundabout pointed out, you're using malloc in C++ code. This is the source of your crash.
malloc will allocate a block of memory, but since it was really designed for C, it doesn't call any C++ constructors. In this case, the vector<double> member is never properly constructed. When
arg1->Solution1 = SearchSpace;
is called, the member variable Solution1 is in an undefined state and the assignment operator crashes.
Instead of malloc, try
arg1 = new t;
This accomplishes roughly the same thing, but the new keyword also calls any necessary constructors to ensure the vector<double> is properly initialized.
This also brings up another minor issue: the memory you've newed needs to be deleted somewhere. In this case, since arg1 is passed to another thread, it should probably be cleaned up with
delete args;
by your SA function after it's done with the args variable.

While I don't know the actual cause for your crashes I'm not really surprised that you end up in trouble. For instance, those "cached" static variables in Rand_NormalDistri are obviously vulnerable to data races. Why don't you use std::normal_distribution? It's almost always a good idea to use standard library routines when they're available, and even more so when you need to consider multithreading trickiness.
Even worse, you're heavily mixing C and C++. malloc is something you should virtually never use in C++ code – it doesn't know about RAII, which is one of the few intrinsically safe things you can cling onto in C++.
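For instance, a race-free replacement for Rand_NormalDistri could look like this (a sketch; the thread_local engine gives each thread its own state, and seeding via std::random_device is an illustrative choice):

#include <random>

double rand_normal(double mean, double stddev) {
    // one engine per thread, seeded once; no shared mutable state
    thread_local std::mt19937 engine{std::random_device{}()};
    std::normal_distribution<double> dist(mean, stddev);
    return dist(engine);
}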

Related

return 2D array in C++

I am fairly new to C++. I was doing a physics simulation in Python which was taking forever to finish, so I decided to switch to C++, and I don't understand how to write a function that returns a 2D array (or a 3D array).
#include <iostream>
#include <cmath>
// #include <complex> //
using namespace std;
double** psiinit(int L, int n, double alpha){
    double yj[400][400] = {};
    for (int j = 0; j < n; j++)
    {
        double xi[400] = {};
        for (int i = 0; i < n; i++)
        {
            xi[i] = exp(-(pow((i-(L/4)), 2) + (pow((j-(L/4)), 2)))/alpha) / (sqrt(2)*3.14159*alpha);
        };
        yj[j] = xi;
    };
    return yj;
}
int main(){
    int L = 10;
    int n = 400;
    int nt = 200*n;
    double alpha = 1;
    double m = 1;
    double hbar = 1;
    double x[n] = {};
    double y[n] = {};
    double t[nt] = {};
    double psi[nt][n][n] = {};
    psi[0] = psiinit(L, n, alpha);
    cout << psi <<endl;
    return 0;
}
I have looked for answers, but they don't seem to address my kind of problem.
Thanks
If you're new to C++ you should read about the concepts of heap and stack, and about stack frames. There are a ton of good resources for that.
In short, when you declare a C-style array (such as yj), it is created in the stack frame of the function, and therefore there are no guarantees about it once you exit the frame, and your program invokes undefined behavior when it references that returned array.
There are 3 options:
Pass the array to the function as an output parameter (very C-style and not recommended).
Wrap the array in a class (like std::array already does for you), in which case it remains on the stack and is copied to the calling frame when returned, but then its size has to be known at compile time.
Allocate the array on the heap and return it, which seems to me to best suit your case. std::vector does that for you:
std::vector<std::vector<double>> psiinit(int L, int n, double alpha){
    std::vector<std::vector<double>> yj;
    for (int j = 0; j < n; j++)
    {
        std::vector<double> xi;
        for (int i = 0; i < n; i++)
        {
            // note: value must be a double, not an int, or the result is truncated to 0
            const double value = exp(-(pow((i-(L/4)), 2) + (pow((j-(L/4)), 2)))/alpha) / (sqrt(2)*3.14159*alpha);
            xi.push_back(value);
        }
        yj.push_back(xi);
    }
    return yj;
}
If you're concerned with performance and all of your inner vectors are of a fixed size N, it might be better to use std::vector<std::array<double, N>>.
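For example (a sketch, assuming the inner size N is known at compile time):

#include <array>
#include <vector>

constexpr std::size_t N = 400;

std::vector<std::array<double, N>> make_grid(std::size_t rows) {
    // each row is one fixed-size block; the vector owns the heap storage
    return std::vector<std::array<double, N>>(rows);  // elements value-initialized to 0.0
}

// usage: auto grid = make_grid(400); grid[100][100] = 3.14;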
Either make a wrapper as said above, or use a vector of vectors.
#include <vector>
#include <iostream>
auto get_2d_array()
{
    // use std::vector since it will allocate (the large amount of) data on the heap
    // construct a vector of 400 vectors with 400 doubles each
    std::vector<std::vector<double>> arr(400, std::vector<double>(400));
    arr[100][100] = 3.14;
    return arr;
}
int main()
{
    auto arr = get_2d_array();
    std::cout << arr[100][100];
}
Your understanding of arrays, pointers and return values is incomplete. I cannot write you a whole tutorial on the topic, but I recommend you read up on it.
In the meantime, I recommend you use std::vector instead of C-style arrays and treat your multidimensional arrays as 1D vectors with proper indexing, e.g. cell = vector[row * cols + col]
Something like this:
#include <cmath>
// using std::exp, M_PI, M_SQRT2
// (note: M_PI and M_SQRT2 are POSIX extensions, not guaranteed by standard C++)
#include <vector>
std::vector<double> psiinit(int L, int n, double alpha) {
    std::vector<double> yj(n * n);
    double div = M_SQRT2 * M_PI * alpha;
    for (int j = 0; j < n; j++)
    {
        double jval = j - L/4;
        jval = jval * jval;
        for (int i = 0; i < n; i++)
        {
            double ival = i - L/4;
            ival = ival * ival;
            yj[j * n + i] = std::exp(-(ival + jval) / alpha) / div;
        }
    }
    return yj;
}
Addendum: there are also specialized libraries that support matrices better and faster, for example Eigen:
https://eigen.tuxfamily.org/dox/GettingStarted.html
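A rough sketch of psiinit on top of Eigen (assuming Eigen is available; MatrixXd is its dynamically sized double matrix):

#include <Eigen/Dense>
#include <cmath>

Eigen::MatrixXd psiinit(int L, int n, double alpha) {
    const double pi = 3.14159265358979323846;
    const double div = std::sqrt(2.0) * pi * alpha;
    Eigen::MatrixXd yj(n, n);
    for (int j = 0; j < n; ++j) {
        const double jval = (j - L/4) * (j - L/4);
        for (int i = 0; i < n; ++i) {
            const double ival = (i - L/4) * (i - L/4);
            yj(j, i) = std::exp(-(ival + jval) / alpha) / div;
        }
    }
    return yj;  // returned by value; Eigen owns the heap storage
}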
Heap-allocating and returning that pointer will also work...
instead of
double yj[400][400] = {};
do
double** yj = new double*[400];
for (int i = 0; i < 400; i++)
    yj[i] = new double[400];
then just
return yj;
(remember the caller then owns the memory and must delete[] each row and the outer array).

GSL thread safety problems

Does GSL have thread-safety issues when it comes to using function pointers? The attached OpenMP code integrates f(x) = -(c+x)^{-1} over the range 1 <= x <= 2 for various values of c using GSL's gsl_integration_qng function. The parallel version runs much slower than the serial version. I suspect this has to do with the function pointer &fx. Does anyone have prior experience with this problem? Thanks in advance!
#include<cstdlib>
#include <gsl/gsl_integration.h>
#include<cstdio>
#include<omp.h>
double fx(double x,void *p);
double evalintegral(double c);
using namespace std;
int main(int argc, char *argv[])
{
    // numerically integrate the function f(x) = -(c+x)^{-1} between 1 and 2
    int Ncs = atoi(argv[1]);
    int Nreps = atoi(argv[2]);
    printf("Ncs=%d, Nreps=%d.\n", Ncs, Nreps);
    int i, j;
    double tempF;
    double *cs = new double[Ncs];
    double dc = 1 / (double)(Ncs-1);
    for (i = 0; i < Ncs; i++)
    {
        cs[i] = dc*(double)i;
    }
    printf("Began integrations.\n");
    #pragma omp parallel for default(none)\
        shared(Nreps,Ncs,cs)\
        private(i,j,tempF)
    for (i = 0; i < Nreps; i++)
    {
        for (j = 0; j < Ncs; j++)
        {
            tempF = evalintegral(cs[j]);
        }
    }
    delete[] cs;
    printf("Finished integrations.\n");
    return 0;
}
double fx(double x, void *p)
{
    double *c = (double*) p;
    return -1 / (*c + x);
}
double evalintegral(double c)
{
    double *ptr_c = new double[1];
    ptr_c[0] = c;
    gsl_function Fquad;
    Fquad.params = ptr_c;
    Fquad.function = &fx;
    size_t quadneval;
    double quadres, quaderr;
    gsl_integration_qng(&Fquad, 1, 2, 1e-10, 1e-6, &quadres, &quaderr, &quadneval);
    delete[] ptr_c;
    return quadres;
}
I faced the same issue with a GSL integration routine. As @mrx_hk mentioned, the problem was reusing the workspace object.
In my case I stored
gsl_integration_workspace * w_ = gsl_integration_workspace_alloc(1200);
as a class member variable and reused the storage on every integration call, which caused a race condition.
The solution was to create a workspace object for every thread, in my case for every integration call.
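Applied to an OpenMP loop, that pattern might look like the sketch below. This is a hypothetical adaptation: it uses gsl_integration_qag, the workspace-based routine, whereas the question's gsl_integration_qng takes no workspace at all.

#pragma omp parallel
{
    // each thread allocates and frees its own workspace
    gsl_integration_workspace *w = gsl_integration_workspace_alloc(1200);
    #pragma omp for
    for (int j = 0; j < Ncs; j++)
    {
        double c = cs[j];
        gsl_function F;
        F.function = &fx;
        F.params = &c;
        double result, error;
        gsl_integration_qag(&F, 1, 2, 1e-10, 1e-6, 1200,
                            GSL_INTEG_GAUSS31, w, &result, &error);
    }
    gsl_integration_workspace_free(w);
}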

c++ "no matching function error" returning 2D array

I'm new to C++, and fairly sure the error is in the variables I'm passing into the function. Essentially, I have two functions defined in a double_arrays.cpp file. The first determines the Euclidian distance between two vectors (passed in as arrays of 10 values; this function works great). The other (closestPair) finds which two vectors (again, each defined as an array of 10 values) are closest in distance. When I call this function in my main.cpp file, I get a "no matching function called closestPair" error.
I am fairly sure that the error is either in the values I am passing into the function, or in the way I am trying to return it (by printing the values to the console).
DISCLAIMER - THIS IS FOR AN ASSIGNMENT :), so hints towards a solution will be more than welcome!
Here are my files:
main.cpp :
#include <iostream>
#include "double_arrays.h"
int main(int argc, const char * argv[])
{
    // Define test values to test vectDistance
    double first[10] = {
        0.595500, 0.652927, 0.606763, 0.162761, 0.980752, 0.964772, 0.319322, 0.611325, 0.012422, 0.393489
    };
    double second[10] = {
        0.416132, 0.778858, 0.909609, 0.094812, 0.380586, 0.512309, 0.638184, 0.753504, 0.465674, 0.674607
    };
    // call vectDistance with test values, should equal 1.056238
    std::cout << "Euclidian distance is " << vectDistance(first, second) << std::endl;
    std::cout << "Should equal ~1.056238" << std::endl;
    // Define test values for closestPair
    double a[10] = {
        0.183963, 0.933146, 0.476773, 0.086125, 0.566566, 0.728107, 0.837345, 0.885175, 0.600559, 0.142238
    };
    double b[10] = {
        0.086523, 0.025236, 0.252289, 0.089437, 0.382081, 0.420934, 0.038498, 0.626125, 0.468158, 0.247754
    };
    double c[10] = {
        0.969345, 0.127753, 0.736213, 0.264992, 0.518971, 0.216767, 0.390992, 0.242241, 0.516135, 0.990155
    };
    // create 2D array to send to closestPair
    double** test = new double*[3];
    test[0] = a;
    test[1] = b;
    test[2] = c;
    // output the values of the two vectors which are closest, in Euclidian distance
    std::cout << closestPair(test) << std::endl;
    return 0;
}
double_arrays.cpp :
#include "double_arrays.h"
#include <iostream>
#include <vector>
#include <math.h>
double vectDistance(double first[], double second[]) {
    int i = 0;
    double distance = 0.0;
    for (int j = 0; j < 10; ++j) {
        distance += pow((first[j] - second[i]), 2);
        ++i;
    }
    return distance = pow(distance, 0.5);
}
double** closestPair(double arrays[][10]) {
    double** closest = new double*[2];
    closest[0] = new double[10];
    closest[0] = arrays[0];
    closest[1] = new double[10];
    closest[1] = arrays[1];
    double minDistance = vectDistance(arrays[0], arrays[1]);
    for (int i = 0; i < 9; ++i){
        for (int j = i + 1; j < 10; ++j) {
            if (vectDistance(arrays[i], arrays[j]) < minDistance) {
                minDistance = vectDistance(arrays[i], arrays[j]);
                closest[0] = arrays[i];
                closest[1] = arrays[j];
            }
        }
    }
    return closest;
}
And, finally, the header file, double_arrays.h:
#ifndef double_arrays_double_arrays_h
#define double_arrays_double_arrays_h
double vectDistance( double first[], double second[]);
double** closestPair(double arrays[][10]);
#endif
The type double** is not compatible with the argument type of closestPair.
Valid arguments to closestPair:
const int N = 20;
double arrays[N][10]; // Can use array to call closestPair
or
const int N = 20;
double (*arrays)[10] = new double[N][10]; // Can use array to call closestPair
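Applied to the main in the question, the call could be set up like this (a sketch; note that closestPair as written iterates over ten rows, so its internal loop bounds would also have to match the actual row count):

double (*test)[10] = new double[3][10];  // 3 rows of 10 doubles, contiguous
for (int k = 0; k < 10; ++k) {
    test[0][k] = a[k];
    test[1][k] = b[k];
    test[2][k] = c[k];
}
double** closest = closestPair(test);  // now matches double arrays[][10]
// ... use closest ...
delete[] test;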
Your header guard does not match the file name; it should be:
#ifndef _DOUBLE_ARRAYS_H_
#define _DOUBLE_ARRAYS_H_
Then in your double_arrays.cpp file include:
#include "double_arrays.h"
To define the functions declared in the header file in your .cpp file, try:
double** Double_Arrays::closestPair(double arrays[][10]) {
    /* ... your code ... */
    return closest;
}

Checking which of the modules is the closest

Welcome. My problem is that I am given an array of numbers from which I need to calculate the average (that part I did), but then I have to find the array element that is closest to the average. Below I paste the code (the form of main() is imposed).
#include <iostream>
using namespace std;
double* aver(double* arr, size_t size, double& average){
    double count;
    for(int p = 0; p < size; p++)
        count += arr[p];
    count /= size;
    double * pointer;
    pointer = &count;
    average = *pointer;
}
int main() {
    double arr[] = {1,2,3,4,5,7};
    size_t size = sizeof(arr)/sizeof(arr[0]);
    double average = 0;
    double* p = aver(arr,size,average);
    cout << p << " " << average << endl;
}
The program should give a result
4 3.66667
I have no idea how to check which element is nearest to the average and substitute it into *p.
I will be very grateful for any help.
Okay, this is not the answer to your problem, since you already got a couple of those.
How about trying something new?
Use std::accumulate, std::sort and std::partition to achieve the same goal.
#include <algorithm>
#include <numeric>   // for std::accumulate
//...
struct comp
{
    double avg;
    comp(double x) : avg(x) {}
    bool operator()(const double &x) const
    {
        return x < avg;
    }
};
std::sort(arr, arr+size);
average = std::accumulate(arr, arr+size, 0.0) / size;
double *p = std::partition(arr, arr+size, comp(average));
std::cout << "Average :" << average << " Closest : " << *p << std::endl;
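Dropped into the question's main, the fragment might read as below (a sketch; note that std::partition here returns the first element not less than the average, which happens to be the closest for this data but is not guaranteed in general, since the left neighbor can be nearer):

#include <algorithm>
#include <numeric>
#include <iostream>

int main() {
    double arr[] = {1, 2, 3, 4, 5, 7};
    size_t size = sizeof(arr) / sizeof(arr[0]);
    std::sort(arr, arr + size);
    double average = std::accumulate(arr, arr + size, 0.0) / size;
    double *p = std::partition(arr, arr + size, comp(average));
    std::cout << "Average :" << average << " Closest : " << *p << std::endl;  // 3.66667, 4
}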
This algorithm is based on the fact that std::map keeps its elements sorted (using operator<):
#include <map>
#include <iostream>
#include <math.h>
using namespace std;
double closest_to_avg(double* arr, size_t size, double avg) {
    std::map<double,double> disturbances;
    for(int p = 0; p < size; p++) {
        // key: distance from avg; if two elements are equally distant
        // from avg, the later one overwrites the earlier
        disturbances[fabs(avg-arr[p])] = arr[p];
    }
    return disturbances.begin()->second; // smallest distance comes first
}
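A quick driver for it (hypothetical, reusing the question's data):

int main() {
    double arr[] = {1, 2, 3, 4, 5, 7};
    size_t size = sizeof(arr) / sizeof(arr[0]);
    double avg = 0;
    for (size_t p = 0; p < size; p++)
        avg += arr[p];
    avg /= size;
    cout << closest_to_avg(arr, size, avg) << " " << avg << endl;  // prints 4 3.66667
}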
Since everybody is doing the kid's homework...
#include <iostream>
using namespace std;
double min(double first, double second){
    return first < second ? first : second;
}
double abs(double first){
    return 0 < first ? first : -first;
}
double* aver(double* arr, size_t size, double& average){
    double count = 0; // must be initialized before accumulating
    for(int p = 0; p < size; p++)
        count += arr[p];
    average = count/size;
    int closest_index = 0;
    for(int p = 0; p < size; p++)
        if( abs(arr[p] - average) <
            abs(arr[closest_index] - average) )
            closest_index = p;
    return &arr[closest_index];
}
int main() {
    double arr[] = {1,2,3,4,5,7};
    size_t size = sizeof(arr)/sizeof(arr[0]);
    double average = 0;
    double* p = aver(arr,size,average);
    cout << *p << " " << average << endl;
    // Above ^^ gives the expected behavior;
    // without the * you'll get nothing but a memory address
}
I insist that you need the * before the p: it yields the value the pointer is pointing to. Without the *, you print the address of the memory location, which is indeterminate in this case. Ask your professor/teacher whether the specification is correct, because it isn't.
Try to understand the style and functions involved - it isn't complicated, and writing like this can go a long way toward making your grader's job easier.
Also, that interface is a very leaky one; in real work, consider some of the standard library algorithms and containers instead.
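For example, the whole closest-element search collapses into one std::min_element call (a sketch in standard C++11):

#include <algorithm>
#include <cmath>
#include <iostream>
#include <numeric>

int main() {
    double arr[] = {1, 2, 3, 4, 5, 7};
    size_t size = sizeof(arr) / sizeof(arr[0]);
    double average = std::accumulate(arr, arr + size, 0.0) / size;
    // compare elements by their absolute distance from the average
    double *p = std::min_element(arr, arr + size,
        [average](double x, double y) {
            return std::fabs(x - average) < std::fabs(y - average);
        });
    std::cout << *p << " " << average << std::endl;  // 4 3.66667
}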

Rounding errors giving incorrect results in DFT?

I have been beating my head against the wall on this DFT. It should print out 8,0,0,0,0,0,0,0, but instead I get 8 followed by very, very tiny numbers. Are these rounding errors? Is there anything I can do? My radix-2 FFT gives correct results; it seems silly that a DFT could not also work.
I started with complex numbers, so I know there is a good bit missing; I tried to strip the code down to illustrate the problem.
#include <cstdlib>
#include <math.h>
#include <iostream>
#include <complex>
#include <cassert>
#define SIZE 8
#define M_PI 3.14159265358979323846
void fft(const double src[], double dst[], const unsigned int n)
{
    for(int i=0; i < SIZE; i++)
    {
        const double ph = -(2*M_PI) / n;
        const int gid = i;
        double res = 0.0f;
        for (int k = 0; k < n; k++) {
            double t = src[k];
            const double val = ph * k * gid;
            double cs = cos(val);
            double sn = sin(val);
            res += ((t * cs) - (t * sn));
            int a = 1;
        }
        dst[i] = res;
        std::cout << dst[i] << std::endl;
    }
}
int main(void)
{
    double array1[SIZE];
    double array2[SIZE];
    for(int i=0; i < SIZE; i++){
        array1[i] = 1;
        array2[i] = 0;
    }
    fft(array1, array2, SIZE);
    return 666;
}
An FFT can actually produce more accurate results than a straight DFT calculation, as the fewer arithmetic ops usually allow fewer opportunities for arithmetic quantization errors to accumulate. There's a paper by one of the FFTW authors on this topic.
Since the DFT/FFT deal with a transcendental basis function, the results will never (except perhaps in a few special cases, or by lucky accident) be exactly correct in any non-symbolic, finite computer number format. So values very close to zero (within a few LSBs) should simply be ignored as noise, or treated as zero.
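If the cosmetic output matters, one option is to flush near-zero bins after the transform (a sketch; the tolerance here is an assumption and should be scaled to the transform length and input magnitude):

#include <cmath>

// zero out bins whose magnitude is below a small tolerance
void snap_to_zero(double dst[], unsigned int n, double eps = 1e-9) {
    for (unsigned int i = 0; i < n; ++i)
        if (std::fabs(dst[i]) < eps)
            dst[i] = 0.0;
}

// usage after the question's call: fft(array1, array2, SIZE); snap_to_zero(array2, SIZE);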