I want to add constraints to my CPLEX model that ensure that a set of arrays are pairwise different, i.e. any two of them must differ in at least one entry.
(To clarify: the IloNumVarArray h represents an n x m matrix, and the constraints should ensure that no two rows are identical.)
My code below has (at least) two errors that I can't seem to solve:
- First, there is 'no suitable conversion function from IloNumVar to IloNum'.
- Second, the != operator cannot be used to compare IloNumArrays.
IloNumVarArray h(env, n * m);
IloNumArray temp1(env, m);
IloNumArray temp2(env, m);
for (int i = 0; i < n - 1; i++) {
    temp1.clear();
    temp2.clear();
    for (int k = 0; k < n - i; k++)
        for (int j = 0; j < m; j++) {
            temp1[j] = h[j + i * m];
            temp2[j] = h[j + (i + k) * m];
        }
    model.add(temp1 != temp2);
}
So how can I change temp1 and temp2 so that it is possible to copy from h and compare the two?
(Or should I do this completely differently?)
I am quite new to CPLEX and would appreciate any help or suggestions.
You could use logical constraints.
Let me give you an example in OPL CPLEX that you could adapt to C++:
int n=3;
int m=2;
range N=1..n;
range M=1..m;
float epsilon=0.0001;

dvar float temp1[N][M] in 0..10;
dvar float temp2[N][M] in 0..10;

minimize sum(i in N, j in M) (temp1[i][j] + temp2[i][j]);

subject to
{
  // at least for one (i,j) the 2 arrays are different
  1 <= sum(i in N, j in M) (abs(temp1[i][j] - temp2[i][j]) >= epsilon);
}
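If you want to stay in C++ with Concert Technology, here is a rough, untested sketch of the same idea adapted to your variables (it assumes env, model, h, n and m exactly as in your question). Concert accepts constraints inside expressions (logical constraints), so each comparison below acts as a 0/1 term that can be summed:

    const IloNum epsilon = 0.0001;
    for (int i = 0; i < n - 1; i++) {
        for (int i2 = i + 1; i2 < n; i2++) {
            // rows i and i2 must differ in at least one of the m columns
            IloExpr differs(env);
            for (int j = 0; j < m; j++)
                differs += (IloAbs(h[j + i * m] - h[j + i2 * m]) >= epsilon);
            model.add(differs >= 1);
            differs.end();
        }
    }

This also avoids copying into IloNumArrays entirely, which is where both of your compile errors came from.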
This is an optimized implementation of matrix multiplication; the routine performs the operation
C := C + A * B (where A, B, and C are n-by-n matrices stored in column-major format).
On exit, A and B retain their input values.
void matmul_optimized(int n, int *A, int *B, int *C)
{
    // For the effective bitwise calculation the matrices are saved as a
    // different type (int): each product is an AND, each addition an XOR.
    int i, j, k;
    int cij;
    for (i = 0; i < n; ++i) {
        for (j = 0; j < n; ++j) {
            cij = C[i + j * n]; // read C into a separate accumulator so the
                                // product/sum operations update a register
            for (k = 0; k < n; ++k) {
                cij ^= A[i + k * n] & B[k + j * n]; // term product via &, addition via ^
            }
            C[i + j * n] = cij; // store the final result back into C
        }
    }
}
How can I speed up this matrix multiplication further, based on the function above?
The function has been tested with matrices of up to 2048 by 2048.
matmul_optimized is benchmarked against matmul_reference in the driver below.
#include <stdio.h>
#include <stdlib.h>
#include "cpucycles.c"
#include "helper_functions.c"
#include "matmul_reference.c"
#include "matmul_optimized.c"

int main()
{
    int i, j;
    int n = 1024;  // Number of rows or columns in the square matrices
    int *A, *B;    // Input matrices
    int *C1, *C2;  // Output matrices from the reference and optimized implementations

    // Performance and correctness measurement declarations
    long int CLOCK_start, CLOCK_end, CLOCK_total, CLOCK_ref, CLOCK_opt;
    long int COUNTER, REPEAT = 5;
    int difference;
    float speedup;

    // Allocate memory for the matrices
    A = malloc(n * n * sizeof(int));
    B = malloc(n * n * sizeof(int));
    C1 = malloc(n * n * sizeof(int));
    C2 = malloc(n * n * sizeof(int));

    // Fill bits in A, B, C1
    fill(A, n * n);
    fill(B, n * n);
    fill(C1, n * n);

    // Initialize C2 = C1
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            C2[i * n + j] = C1[i * n + j];

    // Measure performance of the reference implementation
    CLOCK_total = 0;
    for (COUNTER = 0; COUNTER < REPEAT; COUNTER++)
    {
        CLOCK_start = cpucycles();
        matmul_reference(n, A, B, C1);
        CLOCK_end = cpucycles();
        CLOCK_total = CLOCK_total + CLOCK_end - CLOCK_start;
    }
    CLOCK_ref = CLOCK_total / REPEAT;
    printf("n=%d Avg cycle count for reference implementation = %ld\n", n, CLOCK_ref);

    // Measure performance of the optimized implementation
    CLOCK_total = 0;
    for (COUNTER = 0; COUNTER < REPEAT; COUNTER++)
    {
        CLOCK_start = cpucycles();
        matmul_optimized(n, A, B, C2);
        CLOCK_end = cpucycles();
        CLOCK_total = CLOCK_total + CLOCK_end - CLOCK_start;
    }
    CLOCK_opt = CLOCK_total / REPEAT;
    printf("n=%d Avg cycle count for optimized implementation = %ld\n", n, CLOCK_opt);
    speedup = (float)CLOCK_ref / (float)CLOCK_opt;

    // Check correctness by comparing C1 and C2
    difference = 0;
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            difference = difference + C1[i * n + j] - C2[i * n + j];
    if (difference == 0)
        printf("Speedup factor = %.2f\n", speedup);
    if (difference != 0)
        printf("Reference and optimized implementations do not match\n");
    //print(C2, n);

    free(A);
    free(B);
    free(C1);
    free(C2);
    return 0;
}
You can try an asymptotically faster algorithm such as Strassen's or Coppersmith-Winograd; there are good worked examples of both.
Or try parallel computing, for example with std::async/std::future or std::thread.
Optimizing matrix-matrix multiplication requires careful attention to be paid to a number of issues:
First, you need to be able to use vector instructions. Only vector instructions can access parallelism inherent in the architecture. So, either your compiler needs to be able to automatically map to vector instructions, or you have to do so by hand, for example by calling the vector intrinsic library for AVX-2 instructions (for x86 architectures).
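For instance, the bitwise routine from the question vectorizes cleanly if you broadcast one element of B and stream down a contiguous column of A. This is only a sketch on my part (untested, name is mine; it assumes AVX2 and n divisible by 8):

    #include <immintrin.h>

    /* Sketch of an AVX2 variant of matmul_optimized: the i-loop is innermost
       so that loads of A and C are contiguous, and & and ^ are applied to
       eight 32-bit lanes at a time. Assumes n % 8 == 0. */
    void matmul_avx2(int n, int *A, int *B, int *C)
    {
        for (int j = 0; j < n; ++j)
            for (int k = 0; k < n; ++k) {
                __m256i b = _mm256_set1_epi32(B[k + j * n]); /* broadcast B[k][j] */
                for (int i = 0; i < n; i += 8) {
                    __m256i a = _mm256_loadu_si256((__m256i *)&A[i + k * n]);
                    __m256i c = _mm256_loadu_si256((__m256i *)&C[i + j * n]);
                    c = _mm256_xor_si256(c, _mm256_and_si256(a, b));
                    _mm256_storeu_si256((__m256i *)&C[i + j * n], c);
                }
            }
    }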
Next, you need to pay careful attention to the memory hierarchy. Your performance can easily drop to less than 5% of peak if you don't do this.
Once you do this right, you will hopefully have broken the computation up into small enough computational chunks that you can also parallelize via OpenMP or pthreads.
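As a sketch of that last step (again untested, name is mine): with the j-loop outermost, each thread owns disjoint output columns of C and needs no synchronization, so a single OpenMP pragma already distributes the work. Because XOR-addition is order-independent, reordering the loops does not change the result:

    #include <omp.h>

    void matmul_parallel(int n, int *A, int *B, int *C)
    {
        #pragma omp parallel for
        for (int j = 0; j < n; ++j)
            for (int i = 0; i < n; ++i) {
                int cij = C[i + j * n];
                for (int k = 0; k < n; ++k)
                    cij ^= A[i + k * n] & B[k + j * n];
                C[i + j * n] = cij;
            }
    }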
A document that carefully steps through what is required can be found at http://www.cs.utexas.edu/users/flame/laff/pfhp/LAFF-On-PfHP.html. (This is very much a work in progress.) At the end of it all, you will have an implementation that gets close to the performance attained by high-performance libraries like Intel's Math Kernel Library (MKL) or the BLAS-like Library Instantiation Software (BLIS).
(And, actually, you CAN then also effectively incorporate Strassen's algorithm. But that is another story, told in Unit 3.5.3 of these notes.)
You may find the following thread relevant: How does BLAS get such extreme performance?
I'm writing a library where I want some basic NxN matrix functionality that doesn't have any dependencies; it is a bit of a learning project. I'm comparing my performance against Eigen. I've been able to stay pretty much equal, and even beat its performance on a couple of fronts with SSE2, and with AVX2 I beat it on quite a few fronts (it only uses SSE2, so that is not super surprising).
My issue is that I'm using Gaussian elimination to create an upper-triangular matrix and then multiplying the diagonal to get the determinant. I beat Eigen for N < 300, but after that Eigen blows me away, and it just gets worse as the matrices get bigger. Given that all the memory is accessed sequentially and the compiler disassembly doesn't look terrible, I don't think it is an optimization issue.
There is more optimization that could be done, but the timings look much more like an algorithmic time-complexity issue, or there is a major SSE advantage I'm not seeing. Simply unrolling the loops a bit hasn't done much for me when trying that.
Is there a better algorithm for calculating determinants?
Scalar code
/*
    Warning: Creates Temporaries!
*/
template<typename T, int ROW, int COLUMN> MML_INLINE T matrix<T, ROW, COLUMN>::determinant(void) const
{
    /* This method assumes square matrix */
    assert(row() == col());
    /* We need to create a temporary */
    matrix<T, ROW, COLUMN> temp(*this);
    /* We convert the temporary to upper triangular form */
    uint N = row();
    T det = T(1);
    for (uint c = 0; c < N; ++c)
    {
        det = det * temp(c, c);
        for (uint r = c + 1; r < N; ++r)
        {
            T ratio = temp(r, c) / temp(c, c);
            for (uint k = c; k < N; k++)
            {
                temp(r, k) = temp(r, k) - ratio * temp(c, k);
            }
        }
    }
    return det;
}
AVX2
template<> float matrix<float>::determinant(void) const
{
    /* This method assumes square matrix */
    assert(row() == col());
    /* We need to create a temporary */
    matrix<float> temp(*this);
    /* We convert the temporary to upper triangular form */
    float det = 1.0f;
    const uint N = row();
    const uint Nm8 = N - 8;
    const uint Nm4 = N - 4;
    uint c = 0;
    for (; c < Nm8; ++c)
    {
        det *= temp(c, c);
        float8 Diagonal = _mm256_set1_ps(temp(c, c));
        for (uint r = c + 1; r < N; ++r)
        {
            float8 ratio1 = _mm256_div_ps(_mm256_set1_ps(temp(r, c)), Diagonal);
            uint k = c + 1;
            for (; k < Nm8; k += 8)
            {
                float8 ref = _mm256_loadu_ps(temp._v + c * N + k);
                float8 r0 = _mm256_loadu_ps(temp._v + r * N + k);
                _mm256_storeu_ps(temp._v + r * N + k, _mm256_fmsub_ps(ratio1, ref, r0));
            }
            /* We go scalar for the last few elements to handle non-multiples of 8 */
            for (; k < N; ++k)
            {
                _mm_store_ss(temp._v + index(r, k),
                             _mm_sub_ss(_mm_set_ss(temp(r, k)),
                                        _mm_mul_ss(_mm256_castps256_ps128(ratio1), _mm_set_ss(temp(c, k)))));
            }
        }
    }
    for (; c < Nm4; ++c)
    {
        det *= temp(c, c);
        float4 Diagonal = _mm_set1_ps(temp(c, c));
        for (uint r = c + 1; r < N; ++r)
        {
            float4 ratio = _mm_div_ps(_mm_set1_ps(temp[r * N + c]), Diagonal);
            uint k = c + 1;
            for (; k < Nm4; k += 4)
            {
                float4 ref = _mm_loadu_ps(temp._v + c * N + k);
                float4 r0 = _mm_loadu_ps(temp._v + r * N + k);
                _mm_storeu_ps(temp._v + r * N + k, _mm_sub_ps(r0, _mm_mul_ps(ref, ratio)));
            }
            float fratio = _mm_cvtss_f32(ratio);
            for (; k < N; ++k)
            {
                temp(r, k) = temp(r, k) - fratio * temp(c, k);
            }
        }
    }
    for (; c < N; ++c)
    {
        det *= temp(c, c);
        float Diagonal = temp(c, c);
        for (uint r = c + 1; r < N; ++r)
        {
            float ratio = temp[r * N + c] / Diagonal;
            for (uint k = c + 1; k < N; ++k)
            {
                temp(r, k) = temp(r, k) - ratio * temp(c, k);
            }
        }
    }
    return det;
}
Algorithms to reduce an n by n matrix to upper (or lower) triangular form by Gaussian elimination generally have complexity of O(n^3) (where ^ represents "to power of").
There are alternative approaches for computing the determinant, such as evaluating the set of eigenvalues (the determinant of a square matrix is equal to the product of its eigenvalues). For general matrices, computation of the complete set of eigenvalues is also - practically - O(n^3).
In theory, however, calculation of the set of eigenvalues has complexity of n^w where w is between 2 and 2.376 - which means for (much) larger matrices it will be faster than using Gaussian elimination. Have a look at an article "Fast linear algebra is stable" by James Demmel, Ioana Dumitriu, and Olga Holtz in Numerische Mathematik, Volume 108, Issue 1, pp. 59-91, November 2007. If Eigen uses an approach with complexity less than O(n^3) for larger matrices (I don't know, never having had reason to investigate such things) that would explain your observations.
The answer: most places seem to use block LU factorization to create a lower-triangular and an upper-triangular matrix in the same memory space. It is ~O(n^2.5), depending on the block size you use.
Here is a power point from Rice University that explains the algorithm.
www.caam.rice.edu/~timwar/MA471F03/Lecture24.ppt
Division by a matrix means multiplication by its inverse.
The idea seems to be to increase the number of O(n^2) operations significantly, but to reduce the number of O(m^3) ones, which in effect lowers the cost of the algorithm since the block size m is fixed and small.
It is going to take a little while to write this up efficiently, since doing it efficiently requires 'in place' algorithms I don't have written yet.
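In the meantime, here is a minimal dependency-free sketch of the in-place idea (my own illustration, not the blocked version): an LU factorization with partial pivoting whose diagonal product gives the determinant. The pivoting also removes the division-by-zero risk that plain Gaussian elimination has:

    #include <cassert>
    #include <cmath>
    #include <cstddef>
    #include <utility>
    #include <vector>

    // Determinant via in-place LU with partial pivoting, O(n^3).
    // a is a dense row-major n x n matrix, consumed by the routine.
    double determinant_lu(std::vector<double> a, std::size_t n)
    {
        assert(a.size() == n * n);
        double det = 1.0;
        for (std::size_t c = 0; c < n; ++c)
        {
            // Pick the row with the largest entry in column c as the pivot.
            std::size_t pivot = c;
            for (std::size_t r = c + 1; r < n; ++r)
                if (std::fabs(a[r * n + c]) > std::fabs(a[pivot * n + c]))
                    pivot = r;
            if (a[pivot * n + c] == 0.0)
                return 0.0; // singular matrix
            if (pivot != c)
            {
                for (std::size_t k = 0; k < n; ++k)
                    std::swap(a[c * n + k], a[pivot * n + k]);
                det = -det; // each row swap flips the sign
            }
            det *= a[c * n + c];
            for (std::size_t r = c + 1; r < n; ++r)
            {
                double ratio = a[r * n + c] / a[c * n + c];
                for (std::size_t k = c; k < n; ++k)
                    a[r * n + k] -= ratio * a[c * n + k];
            }
        }
        return det;
    }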
I was trying to prove a point about OpenMP compared to MPICH, and I cooked up the following example to demonstrate how easy it is to get high performance with OpenMP.
The Gauss-Seidel iteration is split into two separate sweeps, such that within each sweep every operation can be performed in any order, and there should be no dependency between the tasks. So in theory each processor should never have to wait for another process to perform any kind of synchronization.
The problem I am encountering is that, independent of problem size, I find only a weak speed-up with 2 processors, and with more than 2 processors it might even be slower.
For many other linear parallelized routines I can obtain very good scaling, but this one is tricky.
My fear is that I am unable to "explain" to the compiler that the operations I perform on the array are thread-safe, so that it cannot be really effective.
See the example below.
Does anyone have any clue on how to make this more effective with OpenMP?
void redBlackSmooth(std::vector<double> const & b,
                    std::vector<double> & x,
                    double h)
{
    // Setup relevant constants.
    double const invh2 = 1.0 / (h * h);
    double const h2 = (h * h);
    int const N = static_cast<int>(x.size());
    double sigma = 0;

    // Setup some boundary conditions.
    x[0] = 0.0;
    x[N - 1] = 0.0;

    // Red sweep.
    #pragma omp parallel for shared(b, x) private(sigma)
    for (int i = 1; i < N - 1; i += 2)
    {
        sigma = -invh2 * (x[i - 1] + x[i + 1]);
        x[i] = (h2 / 2.0) * (b[i] - sigma);
    }

    // Black sweep.
    #pragma omp parallel for shared(b, x) private(sigma)
    for (int i = 2; i < N - 1; i += 2)
    {
        sigma = -invh2 * (x[i - 1] + x[i + 1]);
        x[i] = (h2 / 2.0) * (b[i] - sigma);
    }
}
Addition:
I have now also tried a raw-pointer implementation, and it shows the same behavior as the STL container, so it can be ruled out that this is some pseudo-critical behavior coming from the STL.
First of all, make sure that the x vector is aligned to cache boundaries. I did some tests, and I get something like a 100% improvement with your code on my machine (Core Duo) if I force the alignment of memory:
double * x;
const size_t CACHE_LINE_SIZE = 256;
posix_memalign( reinterpret_cast<void**>(&x), CACHE_LINE_SIZE, sizeof(double) * N);
Second, you can try to assign more computation to each thread (in this way you can keep cache-lines separated), but I suspect that openmp already does something like this under the hood, so it may be worthless with large N.
In my case this implementation is much faster when x is not cache-aligned.
const int workGroupSize = CACHE_LINE_SIZE / sizeof(double);
assert(N % workGroupSize == 0); // Need to tweak the code a bit to let it work with any N
const int workgroups = N / workGroupSize;
int j, base, k, i;

#pragma omp parallel for shared(b, x) private(sigma, j, base, k, i)
for (j = 0; j < workgroups; j++) {
    base = j * workGroupSize;
    for (int k = 0; k < workGroupSize; k += 2)
    {
        i = base + k + (redSweep ? 1 : 0);
        if (i == 0 || i + 1 == N) continue;
        sigma = -invh2 * (x[i - 1] + x[i + 1]);
        x[i] = (h2 / 2.0) * (b[i] - sigma);
    }
}
In conclusion, you definitely have a problem of cache fighting (threads contending for the same cache lines), but given the way OpenMP works (sadly I am not familiar with it), it should be enough to work with properly allocated buffers.
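For what it's worth, an alternative to hand-rolled work groups (a sketch, untested, reusing the names from the snippets above): OpenMP's schedule clause can hand each thread contiguous chunks directly, so different threads rarely share a cache line of x:

    // workGroupSize iterations per chunk; each iteration advances i by 2,
    // so one chunk covers 2 * workGroupSize consecutive elements of x.
    #pragma omp parallel for schedule(static, workGroupSize) private(sigma)
    for (int i = 1; i < N - 1; i += 2)
    {
        sigma = -invh2 * (x[i - 1] + x[i + 1]);
        x[i] = (h2 / 2.0) * (b[i] - sigma);
    }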
I think the main problem is the type of array structure you are using. Let's compare results with vectors and arrays (arrays = C arrays allocated with the new operator).
Vector and array sizes are N = 10000000. I force the smoothing function to repeat in order to keep the runtime > 0.1 secs.
Vector Time: 0.121007 Repeat: 1 MLUPS: 82.6399
Array Time: 0.164009 Repeat: 2 MLUPS: 121.945
MLUPS = ((N-2)*repeat/runtime)/1000000 (Million Lattice Points Update per second)
MFLOPS are misleading when it comes to grid calculations: a few changes in the basic equation can suggest high performance for the same runtime.
The modified code:
double my_redBlackSmooth(double *b, double *x, double h, int N)
{
    // Setup relevant constants.
    double const invh2 = 1.0 / (h * h);
    double const h2 = (h * h);
    double sigma = 0;

    // Setup some boundary conditions.
    x[0] = 0.0;
    x[N - 1] = 0.0;

    double runtime(0.0), wcs, wce;
    int repeat = 1;
    timing(&wcs);
    for (; runtime < 0.1; repeat *= 2)
    {
        for (int r = 0; r < repeat; ++r)
        {
            // Red sweep.
            #pragma omp parallel for shared(b, x) private(sigma)
            for (int i = 1; i < N - 1; i += 2)
            {
                sigma = -invh2 * (x[i - 1] + x[i + 1]);
                x[i] = (h2 * 0.5) * (b[i] - sigma);
            }
            // Black sweep.
            #pragma omp parallel for shared(b, x) private(sigma)
            for (int i = 2; i < N - 1; i += 2)
            {
                sigma = -invh2 * (x[i - 1] + x[i + 1]);
                x[i] = (h2 * 0.5) * (b[i] - sigma);
            }
            // cout << "In Array: " << r << endl;
        }
        if (x[0] != 0) dummy(x[0]);
        timing(&wce);
        runtime = (wce - wcs);
    }
    // cout << "Before division: " << repeat << endl;
    repeat /= 2;
    cout << "Array Time:\t" << runtime << "\t" << "Repeat:\t" << repeat
         << "\tMLUPS:\t" << ((N - 2) * repeat / runtime) / 1000000.0 << endl;
    return runtime;
}
I didn't change anything in the code except the array type. For better cache access and blocking you should look into data alignment (_mm_malloc).
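For example (a sketch; the 64-byte alignment is my assumption for a typical cache-line size):

    #include <xmmintrin.h> // _mm_malloc / _mm_free

    // Allocate x on a 64-byte boundary so each chunk starts at a cache-line edge.
    double *x = static_cast<double *>(_mm_malloc(N * sizeof(double), 64));
    // ... run the smoother on x ...
    _mm_free(x);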
I'm trying to write a function that runs a loop in C++ from R using Rcpp.
I have a matrix Z which is one row shorter than the matrix OUT that the function is supposed to return, because each position of the first row of OUT is given by the scalar sigma_0.
The function is supposed to implement a differential equation. Each iteration depends on a value from the matrix Z as well as a previously generated value of the matrix OUT.
What I've got is this:
cppFunction('
NumericMatrix sim(NumericMatrix Z, long double sigma_0, long double delta, long double omega, long double gamma) {
    int nrow = Z.nrow() + 1, ncol = Z.ncol();
    NumericMatrix out(nrow, ncol);
    for (int q = 0; q < ncol; q++) {
        out(0, q) = sigma_0;
    }
    for (int i = 0; i < ncol; i++) {
        for (int j = 1; j < nrow; j++) {
            long double z = Z(j - 1, i);
            long double sigma = out(j - 1, i);
            out(j, i) = pow(abs(z * sigma) - gamma * z * sigma, delta);
        }
    }
    return out;
}
')
Unfortunately I'm fairly certain it doesn't work. The function runs, but the values calculated are incorrect; I've checked with simple examples in Excel and plain R code. I've stripped the main differential equation apart, trying to build it up step by step to see where the implementations in Excel and in R-with-C++ start to differ, which seems to be when I start using the abs() and pow() functions, but I simply can't narrow the problem down. Any help would be greatly appreciated. I might also mention this is the first time I'm using C++, and C++ together with R.
I think you want fabs rather than abs: abs operates on ints, while fabs operates on doubles/floats.
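So the line in your function would become something like this (with <cmath> you could equivalently use std::fabs):

    out(j, i) = pow(fabs(z * sigma) - gamma * z * sigma, delta);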
I'm probably going to ask this incorrectly and make myself look very stupid, but here goes:
I'm trying to do some audio manipulation and processing on a .wav file. I am able to read all of the data (including the header), but I need the data in the frequency domain, and to get there I need an FFT.
I searched the internet high and low and found one; the example was taken from the "Numerical Recipes in C" book, but I amended it to use vectors instead of arrays. OK, so here's the problem:
I have been given (as an example to use) a series of numbers and a sampling rate:
X = {50, 206, -100, -65, -50, -6, 100, -135}
Sampling Rate : 8000
Number of Samples: 8
And should therefore answer this:
0Hz A=0 D=1.57079633
1000Hz A=50 D=1.57079633
2000HZ A=100 D=0
3000HZ A=100 D=0
4000HZ A=0 D=3.14159265
The code that I re-wrote compiles; however, when inputting these numbers into the function I get a segmentation fault. Is there something wrong with my code, or is the sampling rate too high? (The algorithm doesn't segfault when using a much, much smaller sampling rate.) Here is the code:
#include <iostream>
#include <math.h>
#include <vector>
using namespace std;
#define SWAP(a,b) tempr=(a);(a)=(b);(b)=tempr;
#define pi 3.14159
void ComplexFFT(vector<float> &realData, vector<float> &actualData,
                unsigned long sample_num, unsigned int sample_rate, int sign)
{
    unsigned long n, mmax, m, j, istep, i;
    double wtemp, wr, wpr, wpi, wi, theta, tempr, tempi;

    // CHECK TO SEE IF VECTOR IS EMPTY;
    actualData.resize(2 * sample_rate, 0);
    for (n = 0; n < sample_rate; n++)
    {
        if (n < sample_num)
        {
            actualData[2 * n] = realData[n];
        }
        else
        {
            actualData[2 * n] = 0;
            actualData[2 * n + 1] = 0;
        }
    }

    // Binary Inversion
    n = sample_rate << 1;
    j = 0;
    for (i = 0; i < n / 2; i += 2)
    {
        if (j > i)
        {
            SWAP(actualData[j], actualData[i]);
            SWAP(actualData[j + 1], actualData[i + 1]);
            if ((j / 2) < (n / 4))
            {
                SWAP(actualData[(n - (i + 2))], actualData[(n - (j + 2))]);
                SWAP(actualData[(n - (i + 2)) + 1], actualData[(n - (j + 2)) + 1]);
            }
        }
        m = n >> 1;
        while (m >= 2 && j >= m)
        {
            j -= m;
            m >>= 1;
        }
        j += m;
    }

    mmax = 2;
    while (n > mmax)
    {
        istep = mmax << 1;
        theta = sign * (2 * pi / mmax);
        wtemp = sin(0.5 * theta);
        wpr = -2.0 * wtemp * wtemp;
        wpi = sin(theta);
        wr = 1.0;
        wi = 0.0;
        for (m = 1; m < mmax; m += 2)
        {
            for (i = m; i <= n; i += istep)
            {
                j = i * mmax;
                tempr = wr * actualData[j - 1] - wi * actualData[j];
                tempi = wr * actualData[j] + wi * actualData[j - 1];
                actualData[j - 1] = actualData[i - 1] - tempr;
                actualData[j] = actualData[i] - tempi;
                actualData[i - 1] += tempr;
                actualData[i] += tempi;
            }
            wr = (wtemp = wr) * wpr - wi * wpi + wr;
            wi = wi * wpr + wtemp * wpi + wi;
        }
        mmax = istep;
    }

    // determine the fundamental frequency
    int fundemental_frequency = 0;
    for (i = 2; i <= sample_rate; i += 2)
    {
        if ((pow(actualData[i], 2) + pow(actualData[i + 1], 2)) >
            pow(actualData[fundemental_frequency], 2) + pow(actualData[fundemental_frequency + 1], 2))
        {
            fundemental_frequency = i;
        }
    }
}

int main(int argc, char *argv[])
{
    vector<float> numbers;
    vector<float> realNumbers;
    numbers.push_back(50);
    numbers.push_back(206);
    numbers.push_back(-100);
    numbers.push_back(-65);
    numbers.push_back(-50);
    numbers.push_back(-6);
    numbers.push_back(100);
    numbers.push_back(-135);
    ComplexFFT(numbers, realNumbers, 8, 8000, 0);
    for (int i = 0; i < (int)realNumbers.size(); i++)
    {
        cout << realNumbers[i] << "\n";
    }
}
The other thing (I know this sounds stupid) is that I don't really know what is expected of the int sign that is passed to the ComplexFFT function; this is where I could be going wrong.
Does anyone have any suggestions or solutions to this problem?
Thank you :)
I think the problem lies in errors in how you translated the algorithm.
Did you mean to initialize j to 1 rather than 0?
for(i = 0; (i < n/2); i += 2) should probably be for (i = 1; i < n; i += 2).
Your SWAPs should probably be
SWAP(actualData[j - 1], actualData[i - 1]);
SWAP(actualData[j], actualData[i]);
What are the following SWAPs for? I don't think they're needed.
if((j/2)<(n/4))
{
SWAP(actualData[(n-(i+2))], actualData[(n-(j+2))]);
SWAP(actualData[(n-(i+2))+1], actualData[(n-(j+2))+1]);
}
The j >= m in while (m >= 2 && j >= m) should probably be j > m if you intended to do bit reversal.
In the code implementing the Danielson-Lanczos section, are you sure j = i*mmax; was not supposed to be an addition, i.e. j = i + mmax;?
Apart from that, there are a lot of things you can do to simplify your code.
Using your SWAP macro should be discouraged when you can just use std::swap... I was going to suggest std::swap_ranges, but then I realized you only need to swap the real parts, since your data is all reals (your time-series imaginary parts are all 0):
std::swap(actualData[j - 1], actualData[i - 1]);
You can simplify the entire thing using std::complex, too.
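For illustration, a minimal recursive Cooley-Tukey sketch with std::complex (my own example rather than a drop-in replacement for the Numerical Recipes routine; it assumes a power-of-two length):

    #include <cmath>
    #include <complex>
    #include <vector>

    // Radix-2 Cooley-Tukey FFT, in place; the length must be a power of two.
    // sign = -1 gives the forward transform, sign = +1 the (unscaled) inverse.
    void fft(std::vector<std::complex<double> > &a, int sign)
    {
        const std::size_t n = a.size();
        if (n <= 1) return;
        std::vector<std::complex<double> > even(n / 2), odd(n / 2);
        for (std::size_t i = 0; i < n / 2; ++i) {
            even[i] = a[2 * i];     // even-indexed samples
            odd[i] = a[2 * i + 1];  // odd-indexed samples
        }
        fft(even, sign);
        fft(odd, sign);
        const double pi = std::acos(-1.0);
        for (std::size_t k = 0; k < n / 2; ++k) {
            // twiddle factor e^(sign * 2*pi*i*k/n) applied to the odd half
            std::complex<double> w = std::polar(1.0, sign * 2.0 * pi * k / n) * odd[k];
            a[k] = even[k] + w;
            a[k + n / 2] = even[k] - w;
        }
    }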
I reckon it's down to the re-sizing of your vector.
One possibility: maybe re-sizing creates temporary objects on the stack before moving them back to the heap, I think.
The FFT in Numerical Recipes in C uses the Cooley-Tukey Algorithm, so in answer to your question at the end, the int sign being passed allows the same routine to be used to compute both the forward (sign=-1) and inverse (sign=1) FFT. This seems to be consistent with the way you are using sign when you define theta = sign * (2*pi/mmax).
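So the call in your main would presumably be the following (sign = -1 for the forward, time-to-frequency transform; the 0 you currently pass makes theta zero and is not a meaningful value):

    ComplexFFT(numbers, realNumbers, 8, 8000, -1); // forward FFT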