Efficiency of summing images using MATLAB and OpenCV - C++

I am totally surprised by all of your answers. Thank you very much!
The buggy code is shown below:
percentage = (double)kk * 100.0 / (double)totalnum;
After I modified it to:
percentage = (double)kk * 100.0 / totalnum;
The problem is SOLVED. This single division consumed about 90 s of the 150 s total. Perhaps dividing a double by an int is faster than dividing one double by another.
Again, thanks for all of your answers!
I'm trying to get the average image from a set of pictures extracted from a video. There are only 2 steps to this job:
Sum up all the images into a matrix.
Divide the matrix by the number of images.
I used the following code in OpenCV (C++):
Mat avIM = Mat::zeros(IMG_HEIGHT, IMG_WIDTH, CV_32FC3);
int kk = 0;          // image counter
char filename[256];  // buffer for the per-frame file name
for (int ii = startnum; ii <= endnum; ii += interval) {
    string fullname = argv[1];
    sprintf(filename, "\\%d.png", ii);
    fullname.append(filename);
    Mat tempIM = imread(fullname.c_str());
    if (tempIM.empty()) { cout << "Can't open image!\n"; return -1; }
    tempIM.convertTo(tempIM, CV_32FC3);
    avIM += tempIM; // Sum up every image
    ++kk;
}
avIM = avIM * (double)(1.0 / kk); // get average
And the following code in MATLAB (R2015a):
avIM = zeros(size(imread([im.dir,'\',num2str(startnum),'.png'])));
pointIdx = startnum:interval:endnum;
for j = pointIdx
    IM = imread([im.dir,'\',num2str(j),'.png']);
    avIM = avIM + double(IM); % Sum up every image
end
avIM = uint8(round(avIM./size(pointIdx,2))); % get average
But when I ran those two programs on 2,100 images, OpenCV took 150.3 s (Release) and MATLAB took 103.1 s. It really confused me that a C++ program ran slower than a MATLAB script.
So what's slowing down my OpenCV program? If it's caused by the way I access the matrices, what should I do to improve the efficiency?

Your code seems good enough, and in my tests it ran 10 times faster than the Matlab code.
However, here is a slightly optimized version that performs a little faster than yours.
Notes
Please note that I don't have a folder with images named like yours, so I used cv::glob in the C++ version and dir in the Matlab version to get the names of the images in the folder.
In my folder I have 82 small images, so the running time is obviously smaller than yours, but the relative performance should be reliable.
Execution time
                      Sum only        Get filenames + Sum
Matlab:               0.173543 s      (0.185308 s)
OpenCV #Seven Wang:   0.0145206 s     (0.0155748 s)
OpenCV #Miki:         0.0128943 s     (0.013333 s)
Considerations
Be sure that you're computing the running time consistently in OpenCV and Matlab.
Code
Matlab code:
tic
folder = 'D:\SO\temp\old_075_6\';
filenames = dir([folder '*.bmp']);

% Get rows and cols from 1st image
img = imread([folder filenames(1).name]);
S = zeros(size(img));

for ii = 1 : length(filenames)
    name = filenames(ii).name;
    currentImage = imread([folder name]);
    S = S + double(currentImage);
end
S = uint8(round(S / length(filenames)));
toc
C++ code:
#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>
int main()
{
    double ticLoad = double(cv::getTickCount());

    std::string folder = "D:\\SO\\temp\\old_075_6\\*.bmp";
    std::vector<cv::String> filenames;
    cv::glob(folder, filenames);

    int rows, cols;
    {
        // Just load the first image to get rows and cols
        cv::Mat3b img = cv::imread(filenames[0]);
        rows = img.rows;
        cols = img.cols;
    }

    /*{
        double tic = double(cv::getTickCount());

        cv::Mat3d S(rows, cols, 0.0);
        for (const auto& name : filenames)
        {
            cv::Mat currentImage = cv::imread(name);
            currentImage.convertTo(currentImage, CV_64F);
            S += currentImage;
        }
        S = S * double(1.0 / filenames.size());

        cv::Mat3b avg;
        S.convertTo(avg, CV_8U);

        double toc = double(cv::getTickCount());
        double timeLoad = (toc - ticLoad) / cv::getTickFrequency();
        double time = (toc - tic) / cv::getTickFrequency();

        std::cout << "#Seven Wang: " << time << " s (" << timeLoad << " s)" << std::endl;
    }*/

    {
        double tic = double(cv::getTickCount());

        cv::Mat3d S(rows, cols, 0.0);
        cv::Mat3b currentImage;
        for (const auto& name : filenames)
        {
            currentImage = cv::imread(name);
            cv::add(S, currentImage, S, cv::noArray(), CV_64F);
        }
        S /= filenames.size();

        cv::Mat3b avg;
        S.convertTo(avg, CV_8U);

        double toc = double(cv::getTickCount());
        double timeLoad = (toc - ticLoad) / cv::getTickFrequency();
        double time = (toc - tic) / cv::getTickFrequency();

        std::cout << "#Miki: " << time << " s (" << timeLoad << " s)" << std::endl;
    }

    getchar();
    return 0;
}
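As a side note (not part of the timings above), OpenCV also provides cv::accumulate, which adds an 8-bit image into a floating-point accumulator in a single call. A minimal sketch, reusing the filenames, rows and cols from the code above:

cv::Mat acc = cv::Mat::zeros(rows, cols, CV_64FC3);
for (const auto& name : filenames)
{
    cv::accumulate(cv::imread(name), acc);  // acc += image (8U converted to 64F on the fly)
}
cv::Mat3b avg;
acc.convertTo(avg, CV_8U, 1.0 / filenames.size()); // divide while converting back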

One point that drew my attention is the type CV_32FC3. Are you specifically choosing that 32-bit float matrix, and are you sure Matlab reads the pixel values the same way?
Because you have that extra step
tempIM.convertTo(tempIM, CV_32FC3);
in your C++ code, whereas Matlab operates on the image as soon as it retrieves it, without any conversion; that extra conversion might be slowing down your C++ code. Furthermore, if Matlab is not reading the image as float values, that could also contribute to the speed difference, since floating-point arithmetic is a harder task for the CPU than integer arithmetic.
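For what it's worth, one way to test that hypothesis is to accumulate into an integer matrix instead of a float one. A sketch, reusing the filenames, rows and cols from the answer above (with 2,100 8-bit images the per-channel sums stay far below the 32-bit limit):

cv::Mat3i S(rows, cols, cv::Vec3i(0, 0, 0)); // 32-bit signed integer accumulator
for (const auto& name : filenames)
{
    cv::Mat3b img = cv::imread(name);
    cv::add(S, img, S, cv::noArray(), CV_32S); // 8U + 32S -> 32S, no float math
}
cv::Mat3b avg;
S.convertTo(avg, CV_8U, 1.0 / filenames.size()); // average while converting back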

Related

Compute the similarity rate between two Images with opencv/c++

I'm using OpenCV/C++ to compute the similarity rate between two images. I want to tell the user, as a percentage, how much image A looks like image B.
Let's take a look at the code below:
double getSimilarityRate(const cv::Mat A, const cv::Mat B){
    double cpt = 0.0;
    cv::Mat imgGray1, imgGray2;

    cv::cvtColor(A, imgGray1, CV_BGR2GRAY);
    cv::cvtColor(B, imgGray2, CV_BGR2GRAY);

    imgGray1 = imgGray1 > 128;
    imgGray2 = imgGray2 > 128;

    double total = imgGray1.cols * imgGray1.rows;

    if(imgGray1.rows > 0 && imgGray1.rows == B.rows && imgGray1.cols > 0 && imgGray1.cols == B.cols){
        for(int rows = 0; rows < imgGray1.rows; rows++){
            for(int cols = 0; cols < imgGray1.cols; cols++){
                if(imgGray1.at<int>(rows, cols) == imgGray2.at<int>(rows, cols)) cpt++;
            }
        }
    }else{
        std::cout << "No similarity between the two images ... [EXIT]" << std::endl;
        exit(0);
    }

    double rate = cpt / total;
    return rate * 100.0;
}
int main(void)
{
    /* ------------------------------------------ # ALGO GETSIMILARITY BETWEEN 2 IMAGES # -------------------------------------- */
    double rate;

    string fileNameImage1("C:\\Users\\hugoo\\Documents\\Prog\\NexterMU\\Qt\\OpenCV\\DetectionShapeProgram\\mire.jpg");
    cv::Mat image1 = imread(fileNameImage1);

    string fileNameImage2("C:\\Users\\hugoo\\Documents\\Prog\\NexterMU\\Qt\\OpenCV\\DetectionShapeProgram\\mire.jpg");
    cv::Mat image2 = imread(fileNameImage2);

    if(image1.empty() || image2.empty()){
        std::cout << "Images couldn't be loaded" << std::endl;
        exit(-1);
    }

    rate = getSimilarityRate(image1, image2);
First I convert the matrices from BGR to grayscale, so only one channel remains (much easier to compare).
cv::Mat imgGray1, imgGray2;
cv::cvtColor(A, imgGray1, CV_BGR2GRAY);
cv::cvtColor(B, imgGray2, CV_BGR2GRAY);
Then I make them binary (255 or 0 --> pixel is white or black):
imgGray1 = imgGray1 > 128;
imgGray2 = imgGray2 > 128;
In my for loops I pass through each pixel and compare it to the corresponding pixel in the second image.
If they match, I increment a counter (cpt++).
I compute the rate and turn it into a percentage with:
double rate = cpt / total;
return rate * 100.0;
The thing is, it doesn't seem to compute correctly: the rate value never gets printed to the console...
I think the problem comes from the at() function; maybe I'm not using it properly.
I suspect imgGray1.at<int>(rows, cols) should be imgGray1.at<uchar>(rows, cols) instead.
Currently the .at() call has int as its template argument, but a cv::Mat typically consists of uchar elements. Are you sure that your image has int elements? If it consists of uchar elements, then using the int template argument will result in accessing memory beyond the image (all pointer offsets would be 4x as large as they should be).
More generally, if you use cv::Mat::at(), you need to use different template arguments depending on the output of cv::Mat::type(), as the sketch after this list illustrates:
8-bit 3-channel image (CV_8UC3) --> .at<cv::Vec3b>(row, column)
8-bit 1-channel image (CV_8UC1) --> .at<uchar>(row, column)
32-bit 3-channel image (CV_32FC3) --> .at<cv::Vec3f>(row, column)
32-bit 1-channel image (CV_32FC1) --> .at<float>(row, column)
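For illustration, a small sketch (the file name is hypothetical):

cv::Mat gray = cv::imread("example.png", cv::IMREAD_GRAYSCALE); // CV_8UC1
uchar g = gray.at<uchar>(0, 0);             // template argument matches CV_8UC1

cv::Mat color = cv::imread("example.png");  // CV_8UC3
cv::Vec3b bgr = color.at<cv::Vec3b>(0, 0);  // template argument matches CV_8UC3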
For this reason, if a function should support arbitrary cv::Mats, one either needs to write a bunch of if-else clauses, or avoid .at() altogether. In your situation, since imgGray1 and imgGray2 are "binarized", I wonder if rate can be calculated using cv::norm, possibly like so:
// For binarized (0/255) images, the L1 norm of the difference is
// 255 times the number of non-equal pixels.
double num_non_equal = cv::norm(imgGray1, imgGray2, cv::NORM_L1) / 255.0;
double rate = 1.0 - num_non_equal / static_cast<double>(total);
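Another option (my own sketch, not part of the original suggestion) avoids norms entirely and counts the differing pixels directly:

cv::Mat diff = (imgGray1 != imgGray2);   // CV_8UC1 mask, 255 where pixels differ
int num_non_equal = cv::countNonZero(diff);
double rate = 100.0 * (1.0 - num_non_equal / static_cast<double>(total));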

OpenCV - Basic Operations - Performance Issue [in Mode: Release]

I might have discovered a huge performance issue in OpenCV's own implementation of matrix multiplication/summation, and wanted to check with you guys whether I'm missing something:
In advance: All runs were done in (OpenCV's) Release Mode.
Setup:
(a) I'll do 10 million matrix-vector multiplications with a 3-by-3 matrix and a 3-by-1 vector. The implementation follows the code: res = mat * vec;
(b) I'll do the same with my own implementation, accessing the elements individually and doing the multiplication with pointer arithmetic [basically just multiplying out the process and writing down the equations for each row of the result vector].
I tested these variants with the compiler flags -O0, -O1, -O2, -O3, -Ofast and for OpenCV 3.1 & 3.2.
The timings are done using chrono (high_resolution_clock) on Ubuntu 16.04.
Findings:
In all cases the non-optimized method (b) outperforms the OpenCV method (a) by a factor of ~100 to ~1000.
Question:
How can that be the case? Shouldn't OpenCV be optimized for these kinds of procedures? Should I raise an issue on Github, or is there something I'm totally missing?
Code: [Ready to copy and test on your machine]
#include <chrono>
#include <iostream>
#include <vector>

#include "opencv2/core/cvstd.hpp"
#include "opencv2/core.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"

int main()
{
    // 1. Setup:
    std::vector<std::chrono::high_resolution_clock::time_point> timestamp_vec_start(2);
    std::vector<std::chrono::high_resolution_clock::time_point> timestamp_vec_end(2);
    std::vector<double> timestamp_vec_total(2);

    cv::Mat test_mat = (cv::Mat_<float>(3,3) <<  0.023, 232.33,   0.545,
                                                22.22,    0.1123, 4.444,
                                                 0.012,   3.4521, 0.202);
    cv::Mat test_vec = (cv::Mat_<float>(3,1) << 5.77,
                                                1.20,
                                                0.03);

    cv::Mat result_1 = cv::Mat(3, 1, CV_32FC1);
    cv::Mat result_2 = cv::Mat(3, 1, CV_32FC1);

    cv::Mat temp_test_mat_results = cv::Mat(3, 3, CV_32FC1);
    cv::Mat temp_test_vec_results = cv::Mat(3, 1, CV_32FC1);

    auto ptr_test_mat_res_0 = temp_test_mat_results.ptr<float>(0);
    auto ptr_test_mat_res_1 = temp_test_mat_results.ptr<float>(1);
    auto ptr_test_mat_res_2 = temp_test_mat_results.ptr<float>(2);

    auto ptr_test_vec_res_0 = temp_test_vec_results.ptr<float>(0);
    auto ptr_test_vec_res_1 = temp_test_vec_results.ptr<float>(1);
    auto ptr_test_vec_res_2 = temp_test_vec_results.ptr<float>(2);

    auto ptr_res_0 = result_2.ptr<float>(0);
    auto ptr_res_1 = result_2.ptr<float>(1);
    auto ptr_res_2 = result_2.ptr<float>(2);

    // 2. OpenCV Basic Matrix Operations:
    timestamp_vec_start[0] = std::chrono::high_resolution_clock::now();
    for(int i = 0; i < 10000000; ++i)
    {
        // factor of up to 5000 here:
        // result_1 = (test_mat + test_mat + test_mat) * (test_vec + test_vec);

        // factor of 30~100 here:
        result_1 = test_mat * test_vec;
    }
    timestamp_vec_end[0] = std::chrono::high_resolution_clock::now();
    timestamp_vec_total[0] = static_cast<double>(std::chrono::duration_cast<std::chrono::microseconds>(timestamp_vec_end[0] - timestamp_vec_start[0]).count());

    // 3. Pixel-Wise Operations:
    timestamp_vec_start[1] = std::chrono::high_resolution_clock::now();
    for(int i = 0; i < 10000000; ++i)
    {
        auto ptr_test_mat_0 = test_mat.ptr<float>(0);
        auto ptr_test_mat_1 = test_mat.ptr<float>(1);
        auto ptr_test_mat_2 = test_mat.ptr<float>(2);

        auto ptr_test_vec_0 = test_vec.ptr<float>(0);
        auto ptr_test_vec_1 = test_vec.ptr<float>(1);
        auto ptr_test_vec_2 = test_vec.ptr<float>(2);

        ptr_test_mat_res_0[0] = ptr_test_mat_0[0] + ptr_test_mat_0[0] + ptr_test_mat_0[0];
        ptr_test_mat_res_0[1] = ptr_test_mat_0[1] + ptr_test_mat_0[1] + ptr_test_mat_0[1];
        ptr_test_mat_res_0[2] = ptr_test_mat_0[2] + ptr_test_mat_0[2] + ptr_test_mat_0[2];
        ptr_test_mat_res_1[0] = ptr_test_mat_1[0] + ptr_test_mat_1[0] + ptr_test_mat_1[0];
        ptr_test_mat_res_1[1] = ptr_test_mat_1[1] + ptr_test_mat_1[1] + ptr_test_mat_1[1];
        ptr_test_mat_res_1[2] = ptr_test_mat_1[2] + ptr_test_mat_1[2] + ptr_test_mat_1[2];
        ptr_test_mat_res_2[0] = ptr_test_mat_2[0] + ptr_test_mat_2[0] + ptr_test_mat_2[0];
        ptr_test_mat_res_2[1] = ptr_test_mat_2[1] + ptr_test_mat_2[1] + ptr_test_mat_2[1];
        ptr_test_mat_res_2[2] = ptr_test_mat_2[2] + ptr_test_mat_2[2] + ptr_test_mat_2[2];

        ptr_test_vec_res_0[0] = ptr_test_vec_0[0] + ptr_test_vec_0[0];
        ptr_test_vec_res_1[0] = ptr_test_vec_1[0] + ptr_test_vec_1[0];
        ptr_test_vec_res_2[0] = ptr_test_vec_2[0] + ptr_test_vec_2[0];

        ptr_res_0[0] = ptr_test_mat_res_0[0]*ptr_test_vec_res_0[0] + ptr_test_mat_res_0[1]*ptr_test_vec_res_1[0] + ptr_test_mat_res_0[2]*ptr_test_vec_res_2[0];
        ptr_res_1[0] = ptr_test_mat_res_1[0]*ptr_test_vec_res_0[0] + ptr_test_mat_res_1[1]*ptr_test_vec_res_1[0] + ptr_test_mat_res_1[2]*ptr_test_vec_res_2[0];
        ptr_res_2[0] = ptr_test_mat_res_2[0]*ptr_test_vec_res_0[0] + ptr_test_mat_res_2[1]*ptr_test_vec_res_1[0] + ptr_test_mat_res_2[2]*ptr_test_vec_res_2[0];
    }
    timestamp_vec_end[1] = std::chrono::high_resolution_clock::now();
    timestamp_vec_total[1] = static_cast<double>(std::chrono::duration_cast<std::chrono::microseconds>(timestamp_vec_end[1] - timestamp_vec_start[1]).count());

    // 4. Printout Timing Results:
    std::cout << "\n\nTimings:\n\n";
    std::cout << "Time spent in OpenCV's implementation: " << timestamp_vec_total[0]/1000.0 << " ms.\n";
    std::cout << "Time spent in element-wise implementation: " << timestamp_vec_total[1]/1000.0 << " ms.\n\n";

    std::cin.get();
    return 0;
}
OpenCV is not optimized for small matrix operations.
You can reduce your overhead a little by using cv::gemm, which avoids allocating a new matrix for the result inside the loop.
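A sketch of that suggestion, using the names from the question (result_1 is preallocated, so nothing is allocated per iteration):

// gemm computes dst = alpha*src1*src2 + beta*src3 into an existing dst.
for (int i = 0; i < 10000000; ++i)
{
    cv::gemm(test_mat, test_vec, 1.0, cv::noArray(), 0.0, result_1);
}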
But if small matrix operations are a bottleneck for you, I recommend using Eigen.
Using a quick Eigen implementation like:
#include <Eigen/Dense>

Eigen::Matrix3d mat;
mat << 0.023, 232.33, 0.545,
       22.22, 0.1123, 4.444,
       0.012, 3.4521, 0.202;

Eigen::Vector3d vec3;
vec3 << 5.77,
        1.20,
        0.03;

Eigen::Vector3d result_e;
for (int i = 0; i < 10000000; ++i)
{
    result_e = (mat * 3) * (vec3 * 2);
}
gives me the following numbers with VS2015 (obviously the difference might be less dramatic in GCC or Clang):
Timings:
Time spent in OpenCV's implementation: 2384.45 ms.
Time spent in element-wise implementation: 78.653 ms.
Time spent in Eigen implementation: 36.088 ms.

Best way to indexing a matrix in opencv

Let's say A and B are matrices of the same size.
In Matlab, I could use simple logical indexing, as below.
idx = A>0;
B(idx) = 0
How can I do this in OpenCV? Should I just use
for (i = 0; ... rows)
    for (j = 0; ... cols)
        if (A.at<double>(i,j) > 0) B.at<double>(i,j) = 0;
something like this? Is there a better (faster and more efficient) way?
Moreover, in OpenCV, when I try
Mat idx = A>0;
the variable idx seems to be a CV_8U matrix (not boolean but integer).
You can easily convert this MATLAB code:
idx = A > 0;
B(idx) = 0;
// same as
B(A>0) = 0;
to OpenCV as:
Mat1d A(...)
Mat1d B(...)
Mat1b idx = A > 0;
B.setTo(0, idx);
// or
B.setTo(0, A > 0);
Regarding performance, in C++ it's usually faster (depending on the enabled optimizations) to work on raw pointers, although it is less readable:
for (int r = 0; r < B.rows; ++r)
{
    double* pA = A.ptr<double>(r);
    double* pB = B.ptr<double>(r);
    for (int c = 0; c < B.cols; ++c)
    {
        if (pA[c] > 0.0) pB[c] = 0.0;
    }
}
Also note that OpenCV has no boolean matrix type: a comparison yields a CV_8UC1 matrix (i.e. a single-channel matrix of unsigned char), where 0 means false and any value > 0 (typically 255) means true.
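A tiny sketch of that convention:

// Comparison operators return a CV_8UC1 mask with values 0 or 255.
cv::Mat1d A = (cv::Mat1d(1, 3) << -1.0, 0.0, 2.5);
cv::Mat1b idx = A > 0;        // idx = [0, 0, 255]
std::cout << idx << std::endl;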
Evaluation
Note that this may vary according to the optimizations enabled in your OpenCV build. You can test the code below on your PC to get accurate results.
Time in ms:
            my results             my results        #AdrienDescamps
            (OpenCV 3.0 No IPP)    (OpenCV 2.4.9)
Matlab  :   13.473
C++ Mask:   640.824                5.81815           ~5
C++ Loop:   5.24414                4.95127           ~4
Note: I'm not entirely sure about the cause of the performance drop with OpenCV 3.0, so again: test the code below on your PC to get accurate results.
As #AdrienDescamps stated in comments:
It seems that the performance drop with OpenCV 3.0 is related to the OpenCL option, that is now enabled in the comparison operator.
C++ Code
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main()
{
    // Random initialize A with values in [-100, 100]
    Mat1d A(1000, 1000);
    randu(A, Scalar(-100), Scalar(100));

    // B initialized with some constant (5) value
    Mat1d B(A.rows, A.cols, 5.0);

    // Operation: B(A>0) = 0;

    {
        // Using mask
        double tic = double(getTickCount());
        B.setTo(0, A > 0);
        double toc = (double(getTickCount()) - tic) * 1000 / getTickFrequency();
        cout << "Mask: " << toc << endl;
    }

    {
        // Using for loop
        double tic = double(getTickCount());
        for (int r = 0; r < B.rows; ++r)
        {
            double* pA = A.ptr<double>(r);
            double* pB = B.ptr<double>(r);
            for (int c = 0; c < B.cols; ++c)
            {
                if (pA[c] > 0.0) pB[c] = 0.0;
            }
        }
        double toc = (double(getTickCount()) - tic) * 1000 / getTickFrequency();
        cout << "Loop: " << toc << endl;
    }

    getchar();
    return 0;
}
Matlab Code
% Random initialize A with values in [-100, 100]
A = (rand(1000) * 200) - 100;
% B initialized with some constant (5) value
B = ones(1000) * 5;
tic
B(A>0) = 0;
toc
UPDATE
OpenCV 3.0 uses IPP optimization in the setTo function. If you have it enabled (you can check with cv::getBuildInformation()), you'll get a faster computation.
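A quick sketch to dump the build configuration and check for IPP (and OpenCL):

#include <iostream>
#include <opencv2/core.hpp>

int main()
{
    // Prints compiler flags, modules, and enabled optimizations (IPP, OpenCL, ...).
    std::cout << cv::getBuildInformation() << std::endl;
    return 0;
}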
Miki's answer is very good, but I just want to add some clarification about the performance problem, to avoid any confusion.
It is true that the best way to implement an image filter (or any algorithm) with OpenCV is to use the raw pointers, as shown in the second C++ example of Miki (C++ Loop).
Using the at function is also correct, but significantly slower.
However, most of the time you don't need to worry about that, and you can simply use the high-level functions of OpenCV (Miki's first example, C++ Mask). They are well optimized, and will usually be almost as fast as a low-level loop on pointers, or even faster.
Of course, there are exceptions (we just found one), and you should always test for your specific problem.
Now, regarding this specific problem:
The example here, where the high-level function was much slower (100x slower) than the low-level loop, is NOT a normal case, as demonstrated by the timings with other versions/configurations of OpenCV, which are much lower.
The problem seems to be that when OpenCV 3.0 is compiled with OpenCL, there is a huge overhead the first time a function that uses OpenCL is called. The simplest solution is to disable OpenCL at compile time, if you use OpenCV 3.0 (see also here for other possible solutions if you are interested).
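If recompiling is not an option, OpenCL can also be turned off at runtime via the T-API switch (a sketch; cv::ocl::setUseOpenCL is available in OpenCV 3.x):

#include <opencv2/core/ocl.hpp>

// Disable the OpenCL code paths for this process before any processing.
cv::ocl::setUseOpenCL(false);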

opencv C++ neural network predict() function throws "Bad argument" error

I have managed to train a neural network to recognize numbers in an image and have saved the network parameters to an .xml file.
However, when testing the network against a new image the code fails at the predict() stage with the error:
OpenCV Error: Bad argument (Both input and output must be floating-point matrices of the same type and have the same number of rows) in CvANN_MLP::predict, file ........\opencv\modules\ml\src\ann_mlp.cpp, line 279.
ann_mlp.cpp line 279 is:
if( !CV_IS_MAT(_inputs) || !CV_IS_MAT(_outputs) ||
!CV_ARE_TYPES_EQ(_inputs,_outputs) ||
(CV_MAT_TYPE(_inputs->type) != CV_32FC1 &&
CV_MAT_TYPE(_inputs->type) != CV_64FC1) ||
_inputs->rows != _outputs->rows )
CV_Error( CV_StsBadArg, "Both input and output must be floating-point matrices "
"of the same type and have the same number of rows" );
I have checked input rows by running this code:
cv::Size s = newVec.size();
int rows = s.height;
int cols = s.width;
cout << "newVec dimensions: " << rows << " x " << cols << endl;
...and it comes out with the expected 1 x 900 vector / matrix.
I have set the input and output matrices to be CV_32FC1, as per the error message, like this:
Input matrix
cv::Mat newVec(1, 900, CV_32FC1);
newVec = crop_img.reshape(0, 1); //reshape / unroll image to vector
CvMat n = newVec;
newVec = cv::Mat(&n);
Output matrix
cv::Mat classOut = cvCreateMatHeader(1, CLASSES, CV_32FC1);
And I try to run the prediction function like this:
CvANN_MLP* nnetwork = new CvANN_MLP;
nnetwork->load("nnetwork.xml", "nnetwork");
int maxIndex = 0;
cv::Mat classOut = cvCreateMatHeader(1, CLASSES, CV_32FC1);

// prediction
nnetwork->predict(newVec, classOut);

float value;
float maxValue = classOut.at<float>(0, 0);
for (int index = 1; index < CLASSES; index++)
{
    value = classOut.at<float>(0, index);
    if (value > maxValue)
    {
        maxValue = value;
        maxIndex = index;
    }
}
Any ideas? Much appreciated...
I suspect the problem is your input, not your output.
First, it's important to understand that OpenCV deserves a lot of the blame here, not you. Its C++ API is quite mediocre, and it has caused you major confusion.
See, normally in C++ when you define a 1x900 matrix of floats, it stays a matrix of floats. C++ has strong type safety.
OpenCV does not. If you assign a matrix of bytes to a matrix of floats, the latter will change its type (!).
Your code initializes newVec as such a matrix of floats, then assigns a second matrix to it, and then yet another. I suspect that crop_img is still an image, i.e. 8-bit. Reshaping it will make it 1x900, but not floating point. That's the job of .convertTo.
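A minimal sketch of that fix, assuming crop_img is the 8-bit image from the question:

// Reshape to a 1x900 row vector, then convert explicitly to float.
cv::Mat newVec;
crop_img.reshape(0, 1).convertTo(newVec, CV_32F); // newVec is now CV_32FC1, 1 x 900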

Implementing FFT low-pass filter in C with FFTW

I am trying to create a very simple C++ program that, given an argument in the range [0-100], applies a low-pass filter to a grayscale image that should "compress" it proportionally to the value of the given argument.
I am using the FFTW library.
I have some doubts about how I define the frequency threshold, cut. Is there a more effective way to define such a value?
//fftw_complex *fft
//double[] magnitude
// . . .

int percent = 100;
if (percent < 0 || percent > 100) {
    cerr << "Compression rate must be a value between 0 and 100." << endl;
    return -1;
}

double cut = (double)(w * h) * ((double)percent / (double)100);
for (i = 0; i < (w * h); i++) {
    magnitude[i] = sqrt(pow(fft[i][0], 2.0) + pow(fft[i][1], 2.0));
    if (magnitude[i] < cut) {
        fft[i][0] = 0.0;
        fft[i][1] = 0.0;
    }
}
Update1:
I've changed my code to the following, but again I'm not sure this is a proper way to filter frequencies. The image is certainly compressed, but non-square images get messed up, and setting compression to 100% isn't the real maximum available (I can go up to ~140%).
Here you can find an image of what I see now.
int cX = w/2;
int cY = h/2;
cout << "TEST " << ((double)percent/(double)100)*h << endl;
for (i = 0; i < (w*h); i++) {
    int row = i/s;
    int col = i%s;
    int distance = sqrt((col-cX)*(col-cX) + (row-cY)*(row-cY));
    if (distance < ((double)percent/(double)100)*min(cX,cY)) {
        fft[i][0] = 0.0;
        fft[i][1] = 0.0;
    }
}
This is not a low-pass filter at all. A low-pass filter passes low frequencies, i.e. it removes fine details (blurring). You obviously need a 2D FFT for that.
This code just removes random bits, essentially.
[edit]
The new code looks a lot more like a low-pass filter. The 141% setting is expected: the diagonal of a square is sqrt(2)=1.41 times its side. Converting an index into a row/column pair should use the image width, not some random unexplained s.
I don't know where your zero frequency is located. It should be easy to spot (the largest value), but it might be at (0,0) instead of (w/2, h/2).
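As a hedged sketch (assuming FFTW's default layout with the zero frequency at index 0, and the w, h, percent and fft variables from the question), the index-to-frequency mapping could look like this; note that a low-pass filter should zero the coefficients above the cutoff distance, not below it:

// Requires <cmath> and <algorithm>.
for (int i = 0; i < w * h; i++) {
    int row = i / w;                            // use the image width, not 's'
    int col = i % w;
    int fRow = (row <= h / 2) ? row : row - h;  // wrap to signed frequency
    int fCol = (col <= w / 2) ? col : col - w;
    double distance = std::sqrt(double(fRow) * fRow + double(fCol) * fCol);
    double cutoff = (percent / 100.0) * std::min(w, h) / 2.0;
    if (distance > cutoff) {                    // low-pass: remove HIGH frequencies
        fft[i][0] = 0.0;
        fft[i][1] = 0.0;
    }
}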