I was trying to develop a program in R to estimate a Spearman correlation with Rcpp. I got it working, but it only works for matrices with fewer than somewhere between 45,000 and 50,000 vectors. I don't know why it only works up to that dimension. I suppose there is a limit for that type of data; maybe it would work if I handled it as a data.frame? I would really appreciate it if someone could give me some insight.
Here I post my code. I've been trying to limit the variable I call "denominador", which exceeds the maximum integer value. Maybe you could help me.
cppFunction('double spearman(NumericMatrix x){
int nrow = x.nrow(), ncol = x.ncol();
int nrow1 = nrow - 1;
double out = 0;
double cont = 0;
double cont1 = 0;
double r = 0;
int denominador = ncol*(pow(ncol,2.0)-1);
for(int i = 0; i < nrow1; i++){
// Here I use every combination of vectors, starting with the first one, and so on
for(int j = i +1; j < nrow; j++){
cont1 = 0;
for(int t = 0; t < ncol; t++){
cont = pow(x(i,t)-x(j,t), 2.0);
cont1 += cont;
}
// Here I accumulate the correlations, in order to compute the mean of all the possible correlations at the end
r = 2*(1-6*(cont1/denominador))/(nrow*nrow1);
out += r;
}
}
return out;
}')
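As a side note (my own observation, not part of the answer below): with ncol around 50,000, ncol*(ncol^2 - 1) is roughly 1.25e14, which no longer fits in a 32-bit int, so one option is simply to keep that quantity in a double, for example:
// Sketch only: compute the Spearman denominator in double precision so it
// cannot overflow a 32-bit int for large ncol.
double denominador = (double)ncol * (pow((double)ncol, 2.0) - 1.0);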
To repeat more succinctly:
You can have more than 2^31-1 elements in a vector.
Matrices are vectors with dim attributes.
You can have more than 2^31-1 elements in a matrix (i.e. n times k).
Your row and column indices are still limited to 2^31 - 1.
Example of a big vector:
R> n <- .Machine$integer.max + 100
R> tmpVec <- 1:n
R> length(tmpVec)
[1] 2147483747
R> newVec <- sqrt(tmpVec)
R>
A couple caveats
Before we get started, I'm assuming:
R >= 3.0.0
Long vectors with up to 2^52 elements are then supported.
Rcpp >= 0.12.0
Patch where thirdwing replaced instances of int and size_t with R_xlen_t and R_xlength; see the release post for more details.
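For illustration, here is a minimal Rcpp sketch of what that support enables: indexing with R_xlen_t instead of int, so the loop counter can go past 2^31 - 1 (the function name is made up).
#include <Rcpp.h>
using namespace Rcpp;

// Sketch: sum a (possibly long) numeric vector. R_xlen_t can index past
// 2^31 - 1 elements, unlike a plain int.
// [[Rcpp::export]]
double long_sum(NumericVector x) {
    double total = 0.0;
    for (R_xlen_t i = 0; i < x.size(); ++i) {
        total += x[i];
    }
    return total;
}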
Constructing a large NumericMatrix
I think you may be running into a memory allocation issue, as the following works on my 32 GB machine:
Rcpp::cppFunction("NumericMatrix make_matrix(){
NumericMatrix m(50000, 50000);
return m;
}")
m = make_matrix()
object.size(m)
## 20000000200 bytes # about 20.0000002 gb
Running:
# Creates an 18.6gb matrix!!!
m = matrix(0, ncol = 50000, nrow = 50000)
Rcpp::cppFunction("void get_length(NumericMatrix m){
Rcout << m.nrow() << ' ' << m.ncol();
}")
get_length(m)
## 50000 50000
object.size(m)
## 20000000200 bytes # about 20.0000002 gb
Matrix Bounds
In theory, the total number of elements in the matrix is bounded by (2^31 - 1)^2 = 4,611,686,014,132,420,609, per:
Arrays (including matrices) can be based on long vectors provided each of their dimensions is at most 2^31 - 1: thus there are no 1-dimensional long arrays.
See Long Vector
Now, trying to exceed the per-dimension limit when constructing a matrix:
m = matrix(nrow=2^31, ncol=1)
Error in matrix(nrow = 2^31, ncol = 1) :
invalid 'nrow' value (too large or NA)
In addition: Warning message:
In matrix(nrow = 2^31, ncol = 1) :
NAs introduced by coercion to integer range
The limit both R and Rcpp adhere to for the number of rows or columns is:
.Machine$integer.max
## 2147483647
Note that the two differ by exactly 1:
2^31 = 2,147,483,648 > 2,147,483,647 = .Machine$integer.max
Maximum Number of Elements in a Vector
However, the limit associated with a pure atomic vector is given as 2^52 (even though it should be in the ballpark of 2^64 - 1). Thus, we have the following example, which illustrates the ability to reach 2^32 elements by concatenating two vectors of 2^31 elements each:
v = numeric(2^31)
length(v)
## [1] 2147483648
object.size(v)
## 17179869224 bytes # about 17.179869224 gb
v2 = c(v,v)
length(v2)
## 4294967296
object.size(v2)
## 34359738408 bytes # about 34.359738408 gb
Suggestions
Use bigmemory via Rcpp
Maintain your own stack of vectors.
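For the second suggestion, here is a minimal Rcpp sketch of passing the data as a list of column vectors rather than one huge matrix (the function and argument names are illustrative):
#include <Rcpp.h>
using namespace Rcpp;

// Sketch: instead of one NumericMatrix, accept a List of NumericVectors
// (one vector per variable) and work on them pairwise. Each vector stays
// well under the single-object limits even when there are many of them.
// [[Rcpp::export]]
double pair_ss(List columns, int i, int j) {
    NumericVector a = columns[i];
    NumericVector b = columns[j];
    double ss = 0.0;
    for (R_xlen_t t = 0; t < a.size(); ++t) {
        double d = a[t] - b[t];
        ss += d * d;   // sum of squared differences, as in the question's inner loop
    }
    return ss;
}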
If I am given a list of integers/floats, how would I find the two closest numbers using sorting?
Such a method will do what you want:
def minDistance(lst):
    lst = sorted(lst)
    index = -1
    distance = max(lst) - min(lst)
    # First pass: find the minimum distance between adjacent sorted values.
    for i in range(len(lst) - 1):
        if lst[i + 1] - lst[i] < distance:
            distance = lst[i + 1] - lst[i]
            index = i
    # Second pass: print every adjacent pair at that minimum distance.
    for i in range(len(lst) - 1):
        if lst[i + 1] - lst[i] == distance:
            print(lst[i], lst[i + 1])
In the first for loop we find out the minimum distance, and in the second loop, we print all the pairs with this distance. Works as below:
>>> lst = (1,2,3,6,12,9,1.4,145,12,83,53,12,3.4,2,7.5)
>>> minDistance(lst)
2 2
12 12
12 12
>>>
There could be more than one possibility. Consider this list:
[0,1, 20, 25, 30, 200, 201]
[0, 1] and [200, 201] are equally close.
Jose has a valid point. However, you could just consider these cases equal and not care about returning one or the other.
I don't think you need a sorting algorithm, per se, but maybe just a sort of 'champion' algorithm like this one:
import math
import sys

def smallestDistance(arr):
    championI = -1
    championJ = -1
    champDistance = sys.maxsize
    i = 0
    while i < len(arr):
        j = i + 1
        while j < len(arr):
            if math.fabs(arr[i] - arr[j]) < champDistance:
                championI = i
                championJ = j
                champDistance = math.fabs(arr[i] - arr[j])
            j += 1
        i += 1
    r = [arr[championI], arr[championJ]]
    return r
This function will return a sub-array with the two values that are closest together. Note that this will only work given an array at least two elements long; otherwise it will raise an error.
I think the popular sorting algorithm known as bubble sort would do this quite well, though it runs in O(n^2) time, if that kind of thing matters to you...
Here is a standard bubble sort that sorts an array of numbers in ascending order.
def bubblesort( A ):
for i in range( len( A ) ):
for k in range( len( A ) - 1, i, -1 ):
if ( A[k] < A[k - 1] ):
swap( A, k, k - 1 )
def swap( A, x, y ):
tmp = A[x]
A[x] = A[y]
A[y] = tmp
You can just modify the algorithm slightly to fit your purposes if you insist on doing this using a sorting algorithm. However, I think the initial function works as well...
Hope that helps.
I'm looking for an algorithm to find two integer values x,y such that their product is as close as possible to a given double k while their difference is low.
Example: the area of a rectangle is k = 21.5 and I want to find the edge lengths of that rectangle with the constraint that they must be integers. In this case some of the possible solutions are (excluding permutations) (x=4, y=5), (x=3, y=7) and the trivial solution (x=21, y=1).
In fact for the (3,7) couple we have the same difference as for the (21,1) couple
21.5-3*7=0.5 = 21.5-21*1
while for the (4,5) couple
21.5-4*5=1.5
but the couple (4,5) is preferable because their difference is 1, so the rectangle is "more square".
Is there a method to extract those x,y values for which the difference is minimal and the difference of their product to k is also minimal?
You have to look around the square root of the number in question. For 21.5, sqrt(21.5) ≈ 4.6368, and indeed the numbers you found are just around this value.
You want to minimize
the difference of the factors X and Y
the difference of the product X × Y and P.
You have provided an example where these objectives contradict each other. 3 × 7 is closer to 21.5 than 4 × 5, but the latter factors are more square. Thus, there cannot be any algorithm which minimizes both at the same time.
You can weight the two objectives and transform them into one, and then solve the problem via non-linear integer programming:
min c × |X × Y - P| + d × |X – Y|
subject to X, Y ∈ ℤ
X, Y ≥ 0
where c, d are non-negative numbers that define which objective you value how much.
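A minimal brute-force sketch of this weighted formulation (my own illustration; the function name, search limit and weights below are arbitrary, not from the answer):
#include <cmath>
#include <cstdio>

// Sketch: exhaustively search x, y in [0, limit] and minimize
// c*|x*y - P| + d*|x - y|. Fine for small P; for large P you would
// restrict the search to values around sqrt(P).
void weightedSearch(double P, double c, double d, int limit, int &bestX, int &bestY) {
    double bestCost = 1e300;
    for (int x = 0; x <= limit; ++x) {
        for (int y = x; y <= limit; ++y) {      // y >= x excludes permutations
            double cost = c * std::fabs(1.0 * x * y - P) + d * (y - x);
            if (cost < bestCost) {
                bestCost = cost;
                bestX = x;
                bestY = y;
            }
        }
    }
}

int main() {
    int x = 0, y = 0;
    weightedSearch(21.5, 1.0, 0.5, 25, x, y);   // weights chosen arbitrarily
    std::printf("%d x %d = %d\n", x, y, x * y);
    return 0;
}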
Take the square root, floor one integer, ceil the other.
#include <iostream>
#include <cmath>
int main(){
double real_value = 21.5;
int sign = real_value > 0 ? 1 : -1;
int x = std::floor(std::sqrt(std::abs(real_value)));
int y = std::ceil(std::sqrt(std::abs(real_value)));
x *= sign;
std::cout << x << "*" << y << "=" << (x*y) << " ~~ " << real_value << "\n";
return 0;
}
Note that this approach only gives you a good distance between x and y, for example if real_value = 10 then x=3 and y=4, but the product is 12. If you want to achieve a better distance between the product and the real value you have to adjust the integers and increase their difference.
// Needs <cfloat> for DBL_MAX and <cmath> for sqrt, round and fabs.
// Try every i up to sqrt(k), pair it with j = round(k/i), and keep the pair
// whose product is closest to k (k = 21.5 in the question's example).
double best = DBL_MAX;
int a = 1, b = 1;
for (int i = 1; i <= std::sqrt(k); i++)
{
    int j = (int)std::round(k / i);
    double d = std::fabs(k - i * j);
    if (d < best)
    {
        best = d;
        a = i;
        b = j;
    }
}
Let the given double be K.
Take the floor of K; let it be F.
Take 2 integer arrays of size F*F; let them be Ar1 and Ar2.
Run a loop like this:
int z = 0 ;
for ( int i = 1 ; i <= F ; ++i )
{
for ( int j = 1 ; j <= F ; ++j )
{
Ar1[z] = i * j ;
Ar2[z] = i - j ;
++ z ;
}
}
You now have the product/difference pairs for all the possible numbers. Next, assign some 'priority value' (weight) to how close the product is to K and another to how small the difference is. Then traverse these arrays from 0 to F*F - 1 and find the pair you require by checking your condition.
For example, let closeness to K have weight 1 and smallness of the difference have weight 0.5. Consider another array Ar3 of size F*F. Then,
for ( int i = 0 ; i < F*F ; ++i )
{
    Ar3[i] = fabs(Ar1[i] - K) * 1 + abs(Ar2[i]) * .5 ;
}
Traverse Ar3 to find the smallest value; its index gives the pair you are looking for.
The CUDA NPP library supports filtering of images using the nppiFilter_8u_C1R function, but I keep getting errors. I have no problem getting the boxFilterNPP sample code up and running.
eStatusNPP = nppiFilterBox_8u_C1R(oDeviceSrc.data(), oDeviceSrc.pitch(),
oDeviceDst.data(), oDeviceDst.pitch(),
oSizeROI, oMaskSize, oAnchor);
But if I change it to use nppiFilter_8u_C1R instead, eStatusNPP returns the error -24 (NPP_TEXTURE_BIND_ERROR). The code below shows the alterations I made to the original boxFilterNPP sample.
NppiSize oMaskSize = {5,5};
npp::ImageCPU_32s_C1 hostKernel(5,5);
for(int x = 0 ; x < 5; x++){
for(int y = 0 ; y < 5; y++){
hostKernel.pixels(x,y)[0].x = 1;
}
}
npp::ImageNPP_32s_C1 pKernel(hostKernel);
Npp32s nDivisor = 1;
eStatusNPP = nppiFilter_8u_C1R(oDeviceSrc.data(), oDeviceSrc.pitch(),
oDeviceDst.data(), oDeviceDst.pitch(),
oSizeROI,
pKernel.data(),
oMaskSize, oAnchor,
nDivisor);
This has been tried on CUDA 4.2 and 5.0, with the same result.
The code runs with the expected result when oMaskSize = {1,1}.
Filter applies the mask extending upward and to the left, following the mathematical convention that the convolution between two functions reverses the direction of the second function.
The box filter mask extends downwards and to the right, which is probably more intuitive.
In any case, the problem is caused by the fact that the input image in the changed code would have to be sampled at what would effectively be SOURCE[-4, -4] in order to compute DESTINATION[0, 0]. Since the input image is being accessed via a texture sampler, binding the source image pointer offset by (-4, -4) causes the texture-bind error you're seeing.
Workaround: The simplest workaround for this issue would be to set the anchor point to (4, 4), which would effectively move the mask down and to the right. You still need to be aware that you'd want to invert the weights in the kernel array (i.e. K[-4, -4] -> K[0, 0], K[0, 0] -> K[-4, -4], etc.).
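In code, the workaround amounts to something like this (a sketch using the variable names from the question):
// Anchor at the bottom-right corner of the mask, so the filter never has to
// sample above or to the left of the ROI origin.
NppiSize  oMaskSize = {5, 5};
NppiPoint oAnchor   = {oMaskSize.width - 1, oMaskSize.height - 1};   // i.e. {4, 4}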
I had the same problem when I stored my kernel as an ImageCPU/ImageNPP.
A good solution is to store the kernel as a traditional 1D array on the device. I tried this, and it gave me good results (and none of those unpredictable or garbage images).
Thanks to Frank Jargstorff in this StackOverflow post for the 1D idea.
NppiSize oMaskSize = {5,5};
Npp32s hostKernel[5*5];
for(int x = 0 ; x < 5; x++){
for(int y = 0 ; y < 5; y++){
hostKernel[x*5+y] = 1;
}
}
Npp32s* pKernel; //just a regular 1D array on the GPU
cudaMalloc((void**)&pKernel, 5 * 5 * sizeof(Npp32s));
cudaMemcpy(pKernel, hostKernel, 5 * 5 * sizeof(Npp32s), cudaMemcpyHostToDevice);
Using this original image, here's the blurred result that I get from your code with the 1D kernel array:
Other parameters that I used:
Npp32s nDivisor = 25;
NppiPoint oAnchor = {4, 4};
Thank you for the help.
I got past the error, but I'm seeing some odd behavior. The image changes depending on what program I run just before, and the image does not show what I am aiming for.
The example that I am trying to mimic is nppiFilterBox_8u_C1R, using nppiFilter_8u_C1R with the kernel set to ones and nDivisor set to the sum of the kernel.
This code is still an alteration of the boxFilterNPP sample code.
NppiSize oMaskSize = {5,5};
npp::ImageCPU_32s_C1 hostKernel(5,5);
for(int x = 0 ; x < 5; x++){
for(int y = 0 ; y < 5; y++){
hostKernel.pixels(x,y)[0].x = 1;
}
}
npp::ImageNPP_32s_C1 pKernel(hostKernel);
Npp32s nDivisor = 25;
NppiPoint oAnchor = {4, 4};
eStatusNPP = nppiFilter_8u_C1R(oDeviceSrc.data(),oDeviceSrc.pitch(),
oDeviceDst.data(), oDeviceDst.pitch(),
oSizeROI,
pKernel.data(),
oMaskSize, oAnchor,
nDivisor);
Since the kernel is all ones, the need to invert the weights should not be an issue.
The 5 different kinds of image this code returns are shown below; mostly the last one is returned.
http://1ordrup.dk/kasper/image/Lena_boxFilter1.jpg
http://1ordrup.dk/kasper/image/Lena_boxFilter2.jpg
http://1ordrup.dk/kasper/image/Lena_boxFilter3.jpg
http://1ordrup.dk/kasper/image/Lena_boxFilter4.jpg
http://1ordrup.dk/kasper/image/Lena_boxFilter5.jpg
I think the reason this happens is that the kernel is not initialised correctly or not used, so data with pseudo-random content ends up being used as the kernel.
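One quick way to check that suspicion (a sketch, assuming the 1D device-array kernel from the earlier answer, i.e. pKernel pointing at 25 Npp32s values on the GPU) is to copy the kernel back to the host and print it just before calling nppiFilter_8u_C1R:
// Sketch: read the kernel back from the device to verify the 25 coefficients
// really are all ones before the filter runs.
Npp32s check[5 * 5];
cudaMemcpy(check, pKernel, 5 * 5 * sizeof(Npp32s), cudaMemcpyDeviceToHost);
for (int i = 0; i < 5 * 5; ++i)
    std::cout << check[i] << ((i % 5 == 4) ? '\n' : ' ');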
I am trying to do a project in sound processing and need to transform the signal into the frequency domain. I tried to implement an FFT, and that didn't go well. I tried to understand the z-transform, and that didn't go too well either. I read up and found DFTs a lot simpler to understand, especially the algorithm. So I coded the algorithm from examples, but I don't know whether the output is right (I don't have Matlab here, and cannot find any resources to test against), and I wondered if you could tell me whether I'm going in the right direction. Here is my code so far:
#include <iostream>
#include <complex>
#include <vector>
using namespace std;
const double PI = 3.141592;
vector< complex<double> > DFT(vector< complex<double> >& theData)
{
// Define the Size of the read in vector
const int S = theData.size();
// Initialise new vector of size S
vector< complex<double> > out(S, 0);
for(unsigned i=0; (i < S); i++)
{
out[i] = complex<double>(0.0, 0.0);
for(unsigned j=0; (j < S); j++)
{
out[i] += theData[j] * polar<double>(1.0, - 2 * PI * i * j / S);
}
}
return out;
}
int main(int argc, char *argv[]) {
vector< complex<double> > numbers;
numbers.push_back(102023);
numbers.push_back(102023);
numbers.push_back(102023);
numbers.push_back(102023);
vector< complex<double> > testing = DFT(numbers);
for(unsigned i=0; (i < testing.size()); i++)
{
cout << testing[i] << endl;
}
}
The inputs are:
102023 102023
102023 102023
And the result:
(408092, 0)
(-0.0666812, -0.0666812)
(1.30764e-07, -0.133362)
(0.200044, -0.200043)
Any help or advice would be great, I'm not expecting a lot, but, anything would be great. Thank you :)
@Phorce is right here. I don't think there is any reason to reinvent the wheel. However, if you want to do this so that you understand the methodology, and for the joy of coding it yourself, I can provide a FORTRAN FFT code that I developed some years ago. Of course this is not C++ and will require a translation, but that should not be too difficult and should enable you to learn a lot in doing so...
Below is a Radix 4 based algorithm; this radix-4 FFT recursively partitions a DFT into four quarter-length DFTs of groups of every fourth time sample. The outputs of these shorter FFTs are reused to compute many outputs, thus greatly reducing the total computational cost. The radix-4 decimation-in-frequency FFT groups every fourth output sample into shorter-length DFTs to save computations. The radix-4 FFTs require only 75% as many complex multiplies as the radix-2 FFTs. See here for more information.
!+ FILE: RADIX4.FOR
! ===================================================================
! Description: Radix 4 is a discrete complex Fourier transform algorithm. It
! is to be supplied with two real arrays, one for the real parts of the
! function and one for the imaginary parts. It can also unscramble
! transformed arrays.
! Usage: calling FASTF(XREAL,XIMAG,ISIZE,ITYPE,IFAULT); we supply the
! following:
!
! XREAL - array containing real parts of the transform sequence
! XIMAG - array containing imaginary parts of the transform sequence
! ISIZE - size of transform (ISIZE = 4*2**M)
! ITYPE - +1 forward transform
! -1 reverse transform
! IFAULT - 1 if error
! - 0 otherwise
! ===================================================================
!
! Forward transform computes:
! X(k) = sum_{j=0}^{isize-1} x(j)*exp(-2*i*j*k*pi/isize)
! Backward computes:
! x(j) = (1/isize) sum_{k=0}^{isize-1} X(k)*exp(2*i*j*k*pi/isize)
!
! Forward followed by backward will recover the original sequence!
!
! ===================================================================
SUBROUTINE FASTF(XREAL,XIMAG,ISIZE,ITYPE,IFAULT)
REAL*8 XREAL(*),XIMAG(*)
INTEGER MAX2,II,IPOW
PARAMETER (MAX2 = 20)
! Check for valid transform size up to 2**(MAX2):
IFAULT = 1
IF(ISIZE.LT.4) THEN
print*,'FFT: Error: Data array < 4 - Too small!'
return
ENDIF
II = 4
IPOW = 2
! Prepare mod 2:
1 IF((II-ISIZE).NE.0) THEN
II = II*2
IPOW = IPOW + 1
IF(IPOW.GT.MAX2) THEN
print*,'FFT: Error: FFT1!'
return
ENDIF
GOTO 1
ENDIF
! Check for correct type:
IF(IABS(ITYPE).NE.1) THEN
print*,'FFT: Error: Wrong type of transformation!'
return
ENDIF
! No entry errors - continue:
IFAULT = 0
! Call FASTG to perform the transformation:
CALL FASTG(XREAL,XIMAG,ISIZE,ITYPE)
! Due to the radix 4 factorisation, the results are not in the same order
! after transformation as they were when the data was submitted.
! We now call SCRAM to unscramble the results:
CALL SCRAM(XREAL,XIMAG,ISIZE,IPOW)
return
END
!-END: RADIX4.FOR
! ===============================================================
! Description: This is the radix 4 complex discrete fast Fourier
! transform without unscrambling. Suitable for convolutions or other
! applications that do not require unscrambling. Designed for use
! with FASTF.FOR.
!
SUBROUTINE FASTG(XREAL,XIMAG,N,ITYPE)
INTEGER N,IFACA,IFCAB,LITLA
INTEGER I0,I1,I2,I3
REAL*8 XREAL(*),XIMAG(*),BCOS,BSIN,CW1,CW2,PI
REAL*8 SW1,SW2,SW3,TEMPR,X1,X2,X3,XS0,XS1,XS2,XS3
REAL*8 Y1,Y2,Y3,YS0,YS1,YS2,YS3,Z,ZATAN,ZFLOAT,ZSIN
ZATAN(Z) = ATAN(Z)
ZFLOAT(K) = FLOAT(K) ! Real equivalent of K.
ZSIN(Z) = SIN(Z)
PI = (4.0)*ZATAN(1.0)
IFACA = N/4
! Forward transform:
IF(ITYPE.GT.0) THEN
GOTO 5
ENDIF
! If this is for an inverse transform - conjugate the data:
DO 4, K = 1,N
XIMAG(K) = -XIMAG(K)
4 CONTINUE
5 IFCAB = IFACA*4
! Perform the appropriate transformations:
Z = PI/ZFLOAT(IFCAB)
BCOS = -2.0*ZSIN(Z)**2
BSIN = ZSIN(2.0*Z)
CW1 = 1.0
SW1 = 0.0
! This is the main body of radix 4 calculations:
DO 10, LITLA = 1,IFACA
DO 8, I0 = LITLA,N,IFCAB
I1 = I0 + IFACA
I2 = I1 + IFACA
I3 = I2 + IFACA
XS0 = XREAL(I0) + XREAL(I2)
XS1 = XREAL(I0) - XREAL(I2)
YS0 = XIMAG(I0) + XIMAG(I2)
YS1 = XIMAG(I0) - XIMAG(I2)
XS2 = XREAL(I1) + XREAL(I3)
XS3 = XREAL(I1) - XREAL(I3)
YS2 = XIMAG(I1) + XIMAG(I3)
YS3 = XIMAG(I1) - XIMAG(I3)
XREAL(I0) = XS0 + XS2
XIMAG(I0) = YS0 + YS2
X1 = XS1 + YS3
Y1 = YS1 - XS3
X2 = XS0 - XS2
Y2 = YS0 - YS2
X3 = XS1 - YS3
Y3 = YS1 + XS3
IF(LITLA.GT.1) THEN
GOTO 7
ENDIF
XREAL(I2) = X1
XIMAG(I2) = Y1
XREAL(I1) = X2
XIMAG(I1) = Y2
XREAL(I3) = X3
XIMAG(I3) = Y3
GOTO 8
! Now IF required - we multiply by twiddle factors:
7 XREAL(I2) = X1*CW1 + Y1*SW1
XIMAG(I2) = Y1*CW1 - X1*SW1
XREAL(I1) = X2*CW2 + Y2*SW2
XIMAG(I1) = Y2*CW2 - X2*SW2
XREAL(I3) = X3*CW3 + Y3*SW3
XIMAG(I3) = Y3*CW3 - X3*SW3
8 CONTINUE
IF(LITLA.EQ.IFACA) THEN
GOTO 10
ENDIF
! Calculate a new set of twiddle factors:
Z = CW1*BCOS - SW1*BSIN + CW1
SW1 = BCOS*SW1 + BSIN*CW1 + SW1
TEMPR = 1.5 - 0.5*(Z*Z + SW1*SW1)
CW1 = Z*TEMPR
SW1 = SW1*TEMPR
CW2 = CW1*CW1 - SW1*SW1
SW2 = 2.0*CW1*SW1
CW3 = CW1*CW2 - SW1*SW2
SW3 = CW1*SW2 + CW2*SW1
10 CONTINUE
IF(IFACA.LE.1) THEN
GOTO 14
ENDIF
! Set up transform split for the next stage:
IFACA = IFACA/4
IF(IFACA.GT.0) THEN
GOTO 5
ENDIF
! This is the calculation of a radix two-stage:
DO 13, K = 1,N,2
TEMPR = XREAL(K) + XREAL(K + 1)
XREAL(K + 1) = XREAL(K) - XREAL(K + 1)
XREAL(K) = TEMPR
TEMPR = XIMAG(K) + XIMAG(K + 1)
XIMAG(K + 1) = XIMAG(K) - XIMAG(K + 1)
XIMAG(K) = TEMPR
13 CONTINUE
14 IF(ITYPE.GT.0) THEN
GOTO 17
ENDIF
! For the inverse case, conjugate and scale the transform:
Z = 1.0/ZFLOAT(N)
DO 16, K = 1,N
XIMAG(K) = -XIMAG(K)*Z
XREAL(K) = XREAL(K)*Z
16 CONTINUE
17 return
END
! ----------------------------------------------------------
!-END of subroutine FASTG.FOR.
! ----------------------------------------------------------
!+ FILE: SCRAM.FOR
! ==========================================================
! Description: Subroutine for unscrambling FFT data:
! ==========================================================
SUBROUTINE SCRAM(XREAL,XIMAG,N,IPOW)
INTEGER L(19),II,J1,J2,J3,J4,J5,J6,J7,J8,J9,J10,J11,J12
INTEGER J13,J14,J15,J16,J17,J18,J19,J20,ITOP,I
REAL*8 XREAL(*),XIMAG(*),TEMPR
EQUIVALENCE (L1,L(1)),(L2,L(2)),(L3,L(3)),(L4,L(4))
EQUIVALENCE (L5,L(5)),(L6,L(6)),(L7,L(7)),(L8,L(8))
EQUIVALENCE (L9,L(9)),(L10,L(10)),(L11,L(11)),(L12,L(12))
EQUIVALENCE (L13,L(13)),(L14,L(14)),(L15,L(15)),(L16,L(16))
EQUIVALENCE (L17,L(17)),(L18,L(18)),(L19,L(19))
II = 1
ITOP = 2**(IPOW - 1)
I = 20 - IPOW
DO 5, K = 1,I
L(K) = II
5 CONTINUE
L0 = II
I = I + 1
DO 6, K = I,19
II = II*2
L(K) = II
6 CONTINUE
II = 0
DO 9, J1 = 1,L1,L0
DO 9, J2 = J1,L2,L1
DO 9, J3 = J2,L3,L2
DO 9, J4 = J3,L4,L3
DO 9, J5 = J4,L5,L4
DO 9, J6 = J5,L6,L5
DO 9, J7 = J6,L7,L6
DO 9, J8 = J7,L8,L7
DO 9, J9 = J8,L9,L8
DO 9, J10 = J9,L10,L9
DO 9, J11 = J10,L11,L10
DO 9, J12 = J11,L12,L11
DO 9, J13 = J12,L13,L12
DO 9, J14 = J13,L14,L13
DO 9, J15 = J14,L15,L14
DO 9, J16 = J15,L16,L15
DO 9, J17 = J16,L17,L16
DO 9, J18 = J17,L18,L17
DO 9, J19 = J18,L19,L18
J20 = J19
DO 9, I = 1,2
II = II +1
IF(II.GE.J20) THEN
GOTO 8
ENDIF
! J20 is the bit reverse of II!
! Pairwise exchange:
TEMPR = XREAL(II)
XREAL(II) = XREAL(J20)
XREAL(J20) = TEMPR
TEMPR = XIMAG(II)
XIMAG(II) = XIMAG(J20)
XIMAG(J20) = TEMPR
8 J20 = J20 + ITOP
9 CONTINUE
return
END
! -------------------------------------------------------------------
!-END:
! -------------------------------------------------------------------
Going through this and understanding it will take time! I wrote this using a CalTech paper I found years ago; I cannot recall the reference, I'm afraid. Good luck.
I hope this helps.
Your code works.
I would give more digits for PI ( 3.1415926535898 ).
Also, you have to divide the output of the DFT summation by S, the DFT size.
Since the input series in your test is constant, the DFT output should have only one non-zero coefficient.
And indeed all the output coefficients are very small relative to the first one.
But for a large input length, this is not an efficient way of implementing the DFT.
If timing is a concern, look into the Fast Fourier Transform for faster ways to calculate the DFT.
Your code looks right to me. I'm not sure what you were expecting for output but, given that your input is a constant value, the DFT of a constant is a DC term in bin 0 and zeroes in the remaining bins (or a close equivalent, which you have).
You might try testing your code with a longer sequence containing some type of waveform, like a sine wave or a square wave. In general, however, you should consider using something like FFTW in production code. It's been wrung out and highly optimized by many people for a long time. FFTs are optimized DFTs for special cases (e.g., lengths that are powers of 2).
Your code looks okay. out[0] should represent the "DC" component of your input waveform. In your case, it is 4 times bigger than the input value, because your normalization coefficient is 1.
The other coefficients represent the amplitude and phase of your input waveform. For a real input, the coefficients are mirrored as complex conjugates, i.e., out[i] == conj(out[N-i]). You can test this with the following code:
double frequency = 1; /* use other values like 2, 3, 4 etc. */
for (int i = 0; i < 16; i++)
numbers.push_back(sin((double)i / 16 * frequency * 2 * PI));
For frequency = 1, this gives:
(6.53592e-07,0)
(6.53592e-07,-8)
(6.53592e-07,1.75661e-07)
(6.53591e-07,2.70728e-07)
(6.5359e-07,3.75466e-07)
(6.5359e-07,4.95006e-07)
(6.53588e-07,6.36767e-07)
(6.53587e-07,8.12183e-07)
(6.53584e-07,1.04006e-06)
(6.53581e-07,1.35364e-06)
(6.53576e-07,1.81691e-06)
(6.53568e-07,2.56792e-06)
(6.53553e-07,3.95615e-06)
(6.53519e-07,7.1238e-06)
(6.53402e-07,1.82855e-05)
(-8.30058e-05,7.99999)
which seems correct to me: negligible DC, amplitude 8 for the 1st harmonic, negligible amplitudes for the other harmonics.
MoonKnight has already provided a radix-4 Decimation In Frequency Cooley-Tukey scheme in Fortran. Below, I am providing a radix-2 Decimation In Frequency Cooley-Tukey scheme in Matlab.
The code is iterative and follows the standard radix-2 decimation-in-frequency butterfly scheme.
A recursive approach is also possible.
As you will see, the implementation also calculates the number of multiplications and additions performed and compares it with the theoretical counts reported in How many FLOPS for FFT?.
The code is obviously much slower than the highly optimized FFTW exploited by Matlab.
Note also that the twiddle factors omegaa^((2^(p - 1) * n)) can be calculated off-line and then restored from a lookup table, but this point is skipped in the code below.
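For instance (a sketch in C++ rather than Matlab, since the question's own code is C++; the names are illustrative), the twiddle factors can be tabulated once and then looked up inside the butterflies:
#include <complex>
#include <vector>

// Sketch: precompute the N twiddle factors W_N^k = exp(-2*pi*i*k/N) once,
// then index this table instead of re-evaluating the exponential.
std::vector< std::complex<double> > twiddleTable(std::size_t N) {
    const double PI = 3.14159265358979323846;
    std::vector< std::complex<double> > w(N);
    for (std::size_t k = 0; k < N; ++k)
        w[k] = std::polar(1.0, -2.0 * PI * (double)k / (double)N);
    return w;
}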
For a Matlab implementation of an iterative radix-2 Decimation In Time Cooley-Tukey scheme, please see Implementing a Fast Fourier Transform for Option Pricing.
% --- Radix-2 Decimation In Frequency - Iterative approach
clear all
close all
clc
N = 32;
x = randn(1, N);
xoriginal = x;
xhat = zeros(1, N);
numStages = log2(N);
omegaa = exp(-1i * 2 * pi / N);
mulCount = 0;
sumCount = 0;
tic
M = N / 2;
for p = 1 : numStages;
for index = 0 : (N / (2^(p - 1))) : (N - 1);
for n = 0 : M - 1;
a = x(n + index + 1) + x(n + index + M + 1);
b = (x(n + index + 1) - x(n + index + M + 1)) .* omegaa^((2^(p - 1) * n));
x(n + 1 + index) = a;
x(n + M + 1 + index) = b;
mulCount = mulCount + 4;
sumCount = sumCount + 6;
end;
end;
M = M / 2;
end
xhat = bitrevorder(x);
timeCooleyTukey = toc;
tic
xhatcheck = fft(xoriginal);
timeFFTW = toc;
rms = 100 * sqrt(sum(sum(abs(xhat - xhatcheck).^2)) / sum(sum(abs(xhat).^2)));
fprintf('Time Cooley-Tukey = %f; \t Time FFTW = %f\n\n', timeCooleyTukey, timeFFTW);
fprintf('Theoretical multiplications count \t = %i; \t Actual multiplications count \t = %i\n', ...
2 * N * log2(N), mulCount);
fprintf('Theoretical additions count \t\t = %i; \t Actual additions count \t\t = %i\n\n', ...
3 * N * log2(N), sumCount);
fprintf('Root mean square with FFTW implementation = %.10e\n', rms);
Your code is correct for obtaining the DFT.
The function you are testing is sin((double)i / points * frequency * 2 * PI), which corresponds to a sinusoid of amplitude 1, frequency 1, and sampling frequency Fs = the number of points taken.
Operating on the obtained data, we can see that the DFT coefficients are symmetric with respect to coefficient N/2, so only the first N/2 coefficients provide information. The amplitude, obtained from the modulus of the real and imaginary parts, must be divided by N and multiplied by 2 to reconstruct the original amplitude. The frequency of coefficient k is k times Fs/N.
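In terms of the question's C++ code, that reconstruction looks roughly like this (a sketch: DFT() and numbers are taken from the question, and the sampling rate Fs is assumed equal to the number of samples):
// Sketch: convert DFT bins to amplitude and frequency. For a real input,
// only bins 1 .. N/2 - 1 carry independent information.
std::vector< std::complex<double> > out = DFT(numbers);
const int N = (int)out.size();
const double Fs = (double)N;                      // assumed sampling rate
for (int k = 1; k < N / 2; ++k)
{
    double amplitude = 2.0 * std::abs(out[k]) / N;
    double frequency = k * Fs / N;
    std::cout << "bin " << k << ": f = " << frequency
              << ", amplitude = " << amplitude << std::endl;
}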
If we introduce two sinusoids, one of frequency 2 and amplitude 1.3 and another of frequency 3 and amplitude 1.7:
double frequency1 = 2, frequency2 = 3;
for (int i = 0; i < 16; i++)
{
    numbers.push_back(1.3 * sin((double)i / 16 * frequency1 * 2 * PI) +
                      1.7 * sin((double)i / 16 * frequency2 * 2 * PI));
}
The obtained data are:
Good luck.