R and C++ iteration

I'm trying to write a function that runs a loop in C++ from R using Rcpp.
I have a matrix Z which is one row shorter than the matrix OUT that the function is supposed to return, because every entry in the first row of OUT is set to the scalar sigma_0.
The function is supposed to implement a differential equation: each iteration depends on a value from the matrix Z as well as a previously generated value of the matrix OUT.
What I've got is this:
cppFunction('
NumericMatrix sim(NumericMatrix Z, long double sigma_0, long double delta, long double omega, long double gamma) {
    int nrow = Z.nrow() + 1, ncol = Z.ncol();
    NumericMatrix out(nrow, ncol);
    for (int q = 0; q < ncol; q++) {
        out(0, q) = sigma_0;
    }
    for (int i = 0; i < ncol; i++) {
        for (int j = 1; j < nrow; j++) {
            long double z = Z(j - 1, i);
            long double sigma = out(j - 1, i);
            out(j, i) = pow(abs(z * sigma) - gamma * z * sigma, delta);
        }
    }
    return out;
}
')
Unfortunately I'm fairly certain it doesn't work. The function runs, but the values calculated are incorrect - I've checked with simple examples in Excel and plain R code. I've stripped the main differential equation apart, trying to build it up step by step to see where the Excel and R implementations start to differ from the C++ one, which seems to be when I start using the abs() and pow() functions, but I simply can't narrow the problem down. Any help would be greatly appreciated - I might also mention this is the first time I'm using C++, and C++ together with R.

I think you want fabs rather than abs. The abs you are calling operates on ints, while fabs operates on doubles/floats, so with plain abs the absolute value can be truncated to an integer before pow is applied.
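For reference, a minimal sketch of the fixed inner-loop body, assuming everything else in the function stays the same (std::fabs and std::pow are the standard <cmath> functions):
long double z = Z(j - 1, i);
long double sigma = out(j - 1, i);
// std::fabs keeps the absolute value in floating point instead of truncating through the integer abs
out(j, i) = std::pow(std::fabs(z * sigma) - gamma * z * sigma, delta);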


What is wrong with my 2D Array Gaussian Blur function in C++?

I am making a simple Gaussian blur function for a 2D array that is supposed to represent an image. The function just prints out the array values at the end (no actual image processing going on here). I was pretty sure that I had implemented everything correctly, but the values I am getting for (N=3, sigma=1.5) are much lower than expected based on this calculator: http://dev.theomader.com/gaussian-kernel-calculator/
I am following the standard 2D Gaussian equation, G(x, y) = exp(-(x^2 + y^2) / (2*sigma^2)) / (2*pi*sigma^2):
void gaussian_filter(int N, double sigma) {
    double k[N][N];
    for (int i = 0; i < N; i++) {      // Initialize kernel to 0
        for (int j = 0; j < N; j++) {
            k[i][j] = 0;
        }
    }
    double sum = 0.0;                  // There is an issue somewhere in this block of code
    int change = (N / 2);
    double r, s = change * sigma * sigma;
    for (int x = -change; x <= change; x++) {
        for (int y = -change; y <= change; y++) {
            r = sqrt(x*x + y*y);
            k[x + change][y + change] = (exp(-(r*r)/s)) / (M_PI * s);
            sum += k[x + change][y + change];
        }
    }
    for (int i = 0; i < N; ++i) {      // Normalize
        for (int j = 0; j < N; ++j) {
            k[i][j] /= sum;
        }
    }
    for (int i = 0; i < N; ++i) {      // Print out array
        for (int j = 0; j < N; ++j)
            cout << k[i][j] << "\t";
        cout << endl;
    }
}
Here is the expected output for N=3 and sigma=1.5, and here is the current broken output (screenshots not included).
Why does s depend on change? I think you should do:
double r, s = 2 * sigma * sigma;
// instead of
// double r, s = change * sigma * sigma;
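With that change, the kernel-filling block from the question would look like this (a sketch; everything else in the function stays the same):
double sum = 0.0;
int change = N / 2;
double r, s = 2 * sigma * sigma;               // 2*sigma^2, independent of the kernel size
for (int x = -change; x <= change; x++) {
    for (int y = -change; y <= change; y++) {
        r = sqrt(x*x + y*y);
        k[x + change][y + change] = exp(-(r*r)/s) / (M_PI * s);
        sum += k[x + change][y + change];
    }
}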
That website computes Gaussian kernels in an unorthodox manner:
The weights are calculated by numerical integration of the continuous gaussian distribution over each discrete kernel tap.
That is, it samples a continuous Gaussian kernel that has been convolved with a uniform (“box”) filter of 1 pixel wide. The resulting Gaussian is wider than advertised. I advise against this method.
The proper way to create a Gaussian kernel is to just sample the Gaussian function at given integer locations, for example x = [-3, -2, -1, 0, 1, 2, 3].
Do note that a 3-pixel kernel is not wide enough to represent a Gaussian. It is important to sample the tail of the curve; without it, the kernel doesn’t have the good properties of the Gaussian kernel. I recommend sampling up to 3 sigma to each side, leading to 2*ceil(3*sigma)+1 pixels. 2 sigma is the bare minimum, useful only when speed is more important than good results.
Do also note that the Gaussian is separable: you can apply two 1D kernels in succession rather than a single 2D kernel. For the 11x11 kernel you get for sigma=1.5 (using 3 sigma), this translates to 11+11=22 multiplications and additions per pixel, compared to 11x11=121 for the 2D kernel. This is a significant saving!
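To illustrate that approach, here is a minimal sketch (my own, not from the answer above) of sampling a normalized 1D kernel out to 3 sigma; it can then be applied along the rows and then along the columns:
#include <cmath>
#include <vector>

std::vector<double> gaussian_kernel_1d(double sigma) {
    const int radius = static_cast<int>(std::ceil(3.0 * sigma));   // 2*ceil(3*sigma)+1 taps
    std::vector<double> k(2 * radius + 1);
    double sum = 0.0;
    for (int x = -radius; x <= radius; ++x) {
        k[x + radius] = std::exp(-(x * x) / (2.0 * sigma * sigma));
        sum += k[x + radius];
    }
    for (double& v : k) v /= sum;   // normalizing makes the 1/(sqrt(2*pi)*sigma) factor irrelevant
    return k;
}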

How do I compare two IloNumArrays in Cplex C++ API?

I want to add constraints to my CPLEX model that ensure that a bunch of arrays are pairwise different. That is, at least one entry should differ between any two of them.
(To clarify: The IloNumVarArray h represents an n x m matrix and the constraints should ensure that no two rows are identical)
My code below has two errors (at least) that I can't seem to solve:
- First, there is 'no suitable conversion function from IloNumVar to IloNum',
- Second, it is not allowed to use the != operator to compare IloNumArrays.
IloNumVarArray h(env, n*m);
IloNumArray temp1(env, m);
IloNumArray temp2(env, m);
for (int i = 0; i < n - 1; i++) {
    temp1.clear();
    temp2.clear();
    for (int k = 0; k < n - i; k++)
        for (int j = 0; j < m; j++) {
            temp1[j] = h[j + i * m];
            temp2[j] = h[j + (i + k) * m];
        }
    model.add(temp1 != temp2);
}
So how can I change temp1 and temp2 so that it is possible to copy from h and compare the two?
(Or should I do it completely differently?)
I am quite new to CPLEX and I would appreciate any help/suggestions.
You could use logical constraints.
Let me give you an example in OPL CPLEX that you could adapt to C++:
int n=3;
int m=2;
range N=1..n;
range M=1..m;
float epsilon=0.0001;
dvar float temp1[N][M] in 0..10;
dvar float temp2[N][M] in 0..10;
minimize sum(i in N,j in M) (temp1[i][j]+temp2[i][j]);
subject to
{
// at least for one (i,j) the 2 arrays are different
1<=sum(i in N,j in M) (abs(temp1[i][j]-temp2[i][j])>=epsilon);
}
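Adapting that to the Concert C++ API might look roughly like the sketch below (untested; it assumes a CPLEX version that accepts IloAbs and IloOr as logical constraints, and the function and variable names are only illustrative):
#include <ilcplex/ilocplex.h>

// For every pair of rows i < k of the n x m variable matrix h (stored row-major),
// require that at least one column differs by at least epsilon.
void add_pairwise_different(IloModel& model, IloEnv& env, const IloNumVarArray& h,
                            int n, int m, double epsilon) {
    for (int i = 0; i < n - 1; i++) {
        for (int k = i + 1; k < n; k++) {
            IloOr rowsDiffer(env);                 // disjunction over the columns
            for (int j = 0; j < m; j++) {
                rowsDiffer.add(IloAbs(h[i * m + j] - h[k * m + j]) >= epsilon);
            }
            model.add(rowsDiffer);
        }
    }
}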

My code runs very slowly when I set a large number n - I do not know how to set up the loops

The C++ code runs very slowly for a large n. Here is the loop over the 2D vector:
std::vector< vector<double> > V(n, vector<double> (n));
double sum2 = 0;
for (int i = 0; i < n; i++)
{
    double xai = xa1 + i*dxa;
    double dxr = (double)(xr2 - xr1)/n;
    double sum1 = 0;
    for (int j = 0; j < n; j++) {
        double xri = xr1 + dxr*j;
        V[i][j] = fun(xri, xai);
        double rect1 = V[i][j]*dxr;
        sum1 += rect1;
    }
    double rect2 = sum1*dxa;
    sum2 += rect2;
}
return sum2;
This code integrates the 2-dimensional function (1/(2*pi)) * exp(-xr^2/2) * exp(-xa^2/2).
The integral of this function over infinite limits equals 1, so in C++ we have to increase the limits and n to get a result equal to 1, as the theory says.
If we apply Newton–Cotes quadrature to the infinite integral of (1/(2*pi)) * exp(-xr^2/2) * exp(-xa^2/2) over xr and xa from -infinity to +infinity, we need to cut off the lower and upper boundaries of this integral.
The integrand must be negligibly small at the cut-off points.
Which values did you select?
The integrand of your problem is Gaussian and decreases rapidly, like this:
exp(-10*10/2) ~ 1.93 * 10^(-22)
which is negligible in the present integration.
Thus, if we cut off the lower and upper boundaries at -10 and +10, respectively, and set enough points in this range, we should get a precise result.
I actually got a quite precise result with 100x100 points using the following trapezoidal quadrature, which is the simplest one.
My test code is here.
1 dimensional integration:
template<typename F>
double integrate_trapezoidal(F func, std::size_t n, double lowerBnd, double upperBnd)
{
    if (lowerBnd == upperBnd) {
        return 0.0;
    }
    auto integral = 0.0;
    auto x = lowerBnd;
    const auto dx = (upperBnd - lowerBnd)/n;
    auto left = func(x);
    for (std::size_t i = 0; i < n; ++i)
    {
        x += dx;
        const auto right = func(x);
        integral += (left + right);
        left = right;
    }
    integral *= (0.5*dx);
    return integral;
}
2 dimensional integration:
template<typename F>
double integrate_trapezoidal_2dim(
    F func_2dim,
    std::size_t n,
    double x_lowerBnd, double x_upperBnd,
    double y_lowerBnd, double y_upperBnd)
{
    auto func = [&](double x)
    {
        return integrate_trapezoidal(
            std::bind(func_2dim, x, std::placeholders::_1),
            n, y_lowerBnd, y_upperBnd);
    };
    return integrate_trapezoidal(func, n, x_lowerBnd, x_upperBnd);
}
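As a usage sketch (my own, assuming the two templates above are in scope and that <functional> and <cstddef> are included for std::bind and std::size_t), plugging in the Gaussian integrand from the question with cut-offs at ±10 and n = 100 should give a value very close to 1:
#include <cmath>
#include <cstdio>

int main() {
    const double pi = std::acos(-1.0);
    auto gauss2d = [pi](double xr, double xa) {
        return std::exp(-0.5 * xr * xr) * std::exp(-0.5 * xa * xa) / (2.0 * pi);
    };
    const double result = integrate_trapezoidal_2dim(gauss2d, 100, -10.0, 10.0, -10.0, 10.0);
    std::printf("%.10f\n", result);   // prints something very close to 1
}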
I am worried that you set finite but very large upper and lower boundaries. In that case, you need many points overall just to get enough points in the range -10 < x < +10.
Finally, there are various quadrature rules for numerical integration.
If you insert some other function into this Gaussian integrand, then Gauss–Hermite quadrature or the fast Gauss transform (FGT) is recommended.

Right way to compute cosine similarity between two arrays?

I am working on a project that detects some features of two input images (handwritten signatures) and compares those features using cosine similarity. By two input images I mean one original image and one duplicate image.
Say I am extracting 15 such features of one image (the original) and storing them in one array (say, Array_ORG), and the features of the other image are stored in Array_DUP similarly.
Now I am trying to calculate the cosine similarity between these two arrays. These arrays are of double datatype.
I am listing down two methods that I followed:
1) Manual calculation of cosine similarity:
int main() {
    double sum_org = 0, sum_dup = 0;
    for (int i = 0; i < 15; i++)
        sum_org += (Array_org[i] * Array_org[i]);
    for (int i = 0; i < 15; i++)
        sum_dup += (Array_dup[i] * Array_dup[i]);
    double magnitude = sqrt(sum_org + sum_dup);
    double cosine_similarity = dot_product(Array_org, Array_dup, sizeof(Array_org)/sizeof(Array_org[0])) / magnitude;
}
double dot_product(double *a, double *b, size_t n) {
    double sum = 0;
    size_t i;
    for (i = 0; i < n; i++) {
        sum += a[i] * b[i];
    }
    return sum;
}
2) Storing the values into a Mat and calling dot function:
Mat A = Mat(1, 15, CV_32FC1, &Array_org);
Mat B = Mat(1, 15, CV_32FC1, &Array_dup);
double similarity = cal_theta(A, B);
double cal_theta(Mat A, Mat B) {
    double ab = A.dot(B);
    double aa = A.dot(A);
    double bb = B.dot(B);
    return -ab / sqrt(aa*bb);
}
I have read that the cosine similarity value ranges from -1 to 1, with -1 meaning the two are exactly opposite and 1 meaning they are equal. But the first function gives me values in the 1000s and the second function gives me values greater than 1.
Please guide me on which process is right, and why.
Also, how do I interpret the similarity if the cosine similarity values are more than 1?
The correct definition of cosine similarity is cos(theta) = (A . B) / (||A|| * ||B||).
Your code does not compute the denominator, hence the values are wrong.
double cosine_similarity(double *A, double *B, unsigned int Vector_Length)
{
    double dot = 0.0, denom_a = 0.0, denom_b = 0.0;
    for (unsigned int i = 0u; i < Vector_Length; ++i) {
        dot += A[i] * B[i];
        denom_a += A[i] * A[i];
        denom_b += B[i] * B[i];
    }
    return dot / (sqrt(denom_a) * sqrt(denom_b));
}
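A small usage sketch (the values are made up for illustration; it assumes the cosine_similarity function above is defined first and that <cmath> is included for sqrt):
#include <cmath>
#include <cstdio>

int main() {
    double a[] = {1.0, 2.0, 3.0};
    double b[] = {2.0, 4.0, 6.0};
    // b is a scaled copy of a, so the similarity is 1
    std::printf("%f\n", cosine_similarity(a, b, 3));
}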
Just adding a method with OpenCV (C++) to calculate the cosine similarity of two feature vectors:
float cosSim = f1.dot(f2) / (cv::norm(f1) * cv::norm(f2));
where f1 and f2 are both 1-dimensional cv::Mat with size (1, xx).
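For example (illustrative values, building the vectors with the comma initializer for small matrices):
cv::Mat f1 = (cv::Mat_<float>(1, 3) << 1.f, 2.f, 3.f);
cv::Mat f2 = (cv::Mat_<float>(1, 3) << 2.f, 4.f, 6.f);
float cosSim = f1.dot(f2) / (cv::norm(f1) * cv::norm(f2));   // parallel vectors -> 1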

How to multiply a number by a function passed by reference

I have the following code above my main method and all my other functions:
typedef double (*FUNC)(double);
double integrate(FUNC f, double a, double b) {
    double sum = 0;
    for (int i = a; i <= b; i++) {
        sum = sum + (f * .0001);   // error occurs here, red squiggly line under "f"
    }
    return sum;
}
In the Microsoft Visual Studio C++ compiler, I get an error: "Expression must have arithmetic or enum type". I pointed out above in a comment where the error comes from. Can someone explain why I get this error and how I can resolve it?
I take it you are trying to integrate f(x) for values of x from a to b?
In which case your code is quite incorrect.
Your 0.0001 seems to indicate that you are actually trying to use 10000 steps, in which case you would use something along the lines of:
const int steps = 10000;
double x = a;
double delta = (b - a) / steps;
for(int i = 0; i < steps; i++, x += delta)
You would then use a call to f(x) to invoke the function pointer, and sum the results up.
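Putting those pieces together, a corrected version might look like this sketch (using the FUNC typedef from the question; it keeps the 10000-step assumption and a simple left-point rectangle rule):
double integrate(FUNC f, double a, double b) {
    const int steps = 10000;
    const double delta = (b - a) / steps;      // width of each step
    double sum = 0;
    double x = a;
    for (int i = 0; i < steps; i++, x += delta) {
        sum += f(x) * delta;                   // call the function pointer, weight by the step width
    }
    return sum;
}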
Try using this instead
sum = sum + f(.0001);
Multiplying a function pointer by a fraction would not go so well.