I am using OpenCV with C++ for image processing. I want to do fast element-wise processing on Mat and GpuMat.
For example, I have to apply a complicated function to every element of a Mat or GpuMat. Currently, I access each element of a Mat by looping as below:
// C++ Example 1: s, a, and b are 512x512 CV_64F Mat
for (int i = 0; i < 512; i++) {
    for (int j = 0; j < 512; j++) {
        double sPixel = s.at<double>(512 * i + j);
        if (sPixel >= 0 && sPixel <= 1) {
            a.at<double>(512 * i + j) = 1.0;
        } else if (sPixel > 1) {
            b.at<double>(512 * i + j) = 1.0;
        }
    }
}
// C++ Example 2: f and x are 512x512 CV_64F Mat
for (int i = 0; i < 512; i++) {
    for (int j = 0; j < 512; j++) {
        f.at<double>(512 * i + j) = (1 / (2 * sigma)) * (1 + cos(pi * x.at<double>(512 * i + j) / sigma));
    }
}
However, I think this method is slow because the elements of the Mat have no actual relation to one another; if the per-element calculation were done in parallel, it would be faster.
On the other hand, I cannot access elements of a GpuMat directly, and if I downloaded and uploaded data between Mat and GpuMat frequently, it would be extremely slow and the advantage of using the GPU would be lost.
So my questions are:
1. What are some improved ways to do per-element processing on Mat and GpuMat, especially ways provided by OpenCV itself?
2. How can I do per-element processing on GpuMat?
Just use the built-in OpenCV functions that perform per-element operations. For example, there are overloaded matrix operators for addition and subtraction of matrices (or of matrices and scalars), plus functions for element-wise multiplication, division, absolute difference, trigonometric functions, powers, roots, etc. They usually have the same names as the standard library math functions; just search the docs. For comparing matrix elements as in your first example, use matrix expressions.
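For instance, here is a minimal sketch using a few of those per-element functions (the matrix names and sizes are just placeholders):

#include <opencv2/core.hpp>

int main() {
    cv::Mat a = cv::Mat::ones(512, 512, CV_64F) * 0.5;
    cv::Mat b = cv::Mat::ones(512, 512, CV_64F) * 2.0;
    cv::Mat sum = a + b;          // per-element addition via overloaded operator
    cv::Mat prod, rootB;
    cv::multiply(a, b, prod);     // per-element multiplication
    cv::sqrt(b, rootB);           // per-element square root
    cv::Mat mask = (a > 0.25);    // matrix expression: CV_8U mask, 255 where true
    return 0;
}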
This is really the same as point 1: check which functions OpenCV provides and break your operation into steps that can be executed with those functions. For example, here are lists of such per-element GPU functions:
http://docs.opencv.org/2.4/modules/gpu/doc/per_element_operations.html
http://docs.opencv.org/trunk/d8/d34/group__cudaarithm__elem.html
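For instance, here is a minimal sketch (assuming an OpenCV 3.x build with CUDA and the cudaarithm module; names and sizes are placeholders) that keeps all per-element work on the device:

#include <opencv2/core.hpp>
#include <opencv2/cudaarithm.hpp>

int main() {
    cv::Mat hostA = cv::Mat::ones(512, 512, CV_32F);
    cv::Mat hostB = hostA * 2.0;
    cv::cuda::GpuMat ga, gb, gsum, gmask;
    ga.upload(hostA);                             // one upload at the start
    gb.upload(hostB);
    cv::cuda::add(ga, gb, gsum);                  // per-element addition on the GPU
    cv::cuda::compare(ga, gb, gmask, cv::CMP_LT); // per-element comparison on the GPU
    cv::Mat result;
    gsum.download(result);                        // one download at the end
    return 0;
}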
If the above functions are not enough for you, avoid accessing pixels with the at() method, as this is extremely inefficient and not recommended when iterating through all the pixels. Use the ptr() method instead to access whole rows.
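For example, your second loop could be rewritten with row pointers roughly as follows (a sketch, assuming the 512x512 CV_64F matrices from the question):

for (int i = 0; i < 512; i++) {
    const double* xRow = x.ptr<double>(i); // pointer to row i of x
    double* fRow = f.ptr<double>(i);       // pointer to row i of f
    for (int j = 0; j < 512; j++) {
        fRow[j] = (1 / (2 * sigma)) * (1 + cos(pi * xRow[j] / sigma));
    }
}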
Here is an example of how you can transform your calculations using the above techniques:
// first example: comparisons produce CV_8U masks holding 255 where true
// (scale/convert them if you need 0/1 doubles as in the original loops)
b = (s > 1);
a = (s >= 0).mul(s <= 1);
// second example, where cos_mat holds the per-element cos(pi * x / sigma)
f = (1 / (2 * sigma)) * (1 + cos_mat);
There is no per-element cos() function for Mat in OpenCV, but if you want performance, you can approximate the cosine with a truncated Taylor series, which amounts to a few per-element multiplications and additions/subtractions, and obtain the cos_mat matrix that way. You can find an example here:
http://answers.opencv.org/question/55602/sine-or-cosine-of-every-element-in-mat-c/
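A minimal sketch of that idea (my own illustration, not the linked answer), assuming x and sigma are as in the question and the arguments pi * x / sigma stay roughly within [-pi, pi]:

cv::Mat t = pi * x / sigma;        // per-element argument of the cosine
cv::Mat t2 = t.mul(t);             // t^2, per element
cv::Mat t4 = t2.mul(t2);           // t^4
cv::Mat t6 = t4.mul(t2);           // t^6
cv::Mat cos_mat = 1 - t2 / 2 + t4 / 24 - t6 / 720; // Taylor terms; add more for accuracy
cv::Mat f = (1 / (2 * sigma)) * (1 + cos_mat);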
Related
I wrote a program that loads, saves, and performs the FFT and IFFT on black-and-white PNG images. After much debugging headache, I finally got some coherent output, only to find that it distorted the original image.
(The input, FFT, and IFFT images are not reproduced here.)
As far as I have tested, the pixel data in each array is stored and converted correctly. Pixels are stored in two arrays: 'data', which contains the b/w value of each pixel, and 'complex_data', which is twice as long as 'data' and stores the real (b/w) and imaginary parts of each pixel in alternating indices. My FFT algorithm operates on an array structured like 'complex_data'. After the code that reads commands from the user, here's the code in question:
if (cmd == "fft")
{
if (height > width) size = height;
else size = width;
N = (int)pow(2.0, ceil(log((double)size)/log(2.0)));
temp_data = (double*) malloc(sizeof(double) * width * 2); //array to hold each row of the image for processing in FFT()
for (i = 0; i < (int) height; i++)
{
for (j = 0; j < (int) width; j++)
{
temp_data[j*2] = complex_data[(i*width*2)+(j*2)];
temp_data[j*2+1] = complex_data[(i*width*2)+(j*2)+1];
}
FFT(temp_data, N, 1);
for (j = 0; j < (int) width; j++)
{
complex_data[(i*width*2)+(j*2)] = temp_data[j*2];
complex_data[(i*width*2)+(j*2)+1] = temp_data[j*2+1];
}
}
transpose(complex_data, width, height); //tested
free(temp_data);
temp_data = (double*) malloc(sizeof(double) * height * 2);
for (i = 0; i < (int) width; i++)
{
for (j = 0; j < (int) height; j++)
{
temp_data[j*2] = complex_data[(i*height*2)+(j*2)];
temp_data[j*2+1] = complex_data[(i*height*2)+(j*2)+1];
}
FFT(temp_data, N, 1);
for (j = 0; j < (int) height; j++)
{
complex_data[(i*height*2)+(j*2)] = temp_data[j*2];
complex_data[(i*height*2)+(j*2)+1] = temp_data[j*2+1];
}
}
transpose(complex_data, height, width);
free(temp_data);
free(data);
data = complex_to_real(complex_data, image.size()/4); //tested
image = bw_data_to_vector(data, image.size()/4); //tested
cout << "*** fft success ***" << endl << endl;
void FFT(double* data, unsigned long nn, int f_or_b) // f_or_b is 1 for fft, -1 for ifft
{
    unsigned long n, mmax, m, j, istep, i;
    double wtemp, w_real, wp_real, wp_imaginary, w_imaginary, theta;
    double temp_real, temp_imaginary;

    // reverse-binary reindexing to separate even and odd indices
    // and to allow us to compute the FFT in place
    n = nn<<1;
    j = 1;
    for (i = 1; i < n; i += 2) {
        if (j > i) {
            swap(data[j-1], data[i-1]);
            swap(data[j], data[i]);
        }
        m = nn;
        while (m >= 2 && j > m) {
            j -= m;
            m >>= 1;
        }
        j += m;
    }

    // here begins the Danielson-Lanczos section
    mmax = 2;
    while (n > mmax) {
        istep = mmax<<1;
        theta = f_or_b * (2 * M_PI / mmax);
        wtemp = sin(0.5 * theta);
        wp_real = -2.0 * wtemp * wtemp;
        wp_imaginary = sin(theta);
        w_real = 1.0;
        w_imaginary = 0.0;
        for (m = 1; m < mmax; m += 2) {
            for (i = m; i <= n; i += istep) {
                j = i + mmax;
                temp_real = w_real * data[j-1] - w_imaginary * data[j];
                temp_imaginary = w_real * data[j] + w_imaginary * data[j-1];
                data[j-1] = data[i-1] - temp_real;
                data[j] = data[i] - temp_imaginary;
                data[i-1] += temp_real;
                data[i] += temp_imaginary;
            }
            wtemp = w_real;
            w_real += w_real * wp_real - w_imaginary * wp_imaginary;
            w_imaginary += w_imaginary * wp_real + wtemp * wp_imaginary;
        }
        mmax = istep;
    }
}
My IFFT is the same, only with f_or_b set to -1 instead of 1. My program calls FFT() on each row, transposes the image, calls FFT() on each row again, then transposes back. Is there maybe an error with my indexing?
Not an actual answer, as this question is debug-only, so some hints instead:
- Your results are really bad. They should look like this (reference images not reproduced here):
  - The first line is the actual DFFT result. Re, Im, and Power are amplified by a constant, otherwise you would see a black image; the last image is the IDFFT of the original, non-amplified Re, Im result.
  - The second line is the same, but the DFFT result is wrapped by half the image size in both x and y to match the common presentation in most DIP/CV texts. As you can see, if you IDFFT the wrapped result back, the result is not correct (checkerboard mask).
- You have just a single image as the DFFT result. Is it a power spectrum? Or did you forget to include the imaginary part, in the view only, or perhaps somewhere in the computation as well?
- Is your 1D DFFT working? For real data the result should be symmetric. Check the links from my comment and compare the results for some sample 1D array. Debug/repair your 1D FFT first, and only then move to the next level. Do not forget to test both real and complex data...
- Your IDFFT looks BW-saturated (no gray). Did you amplify the DFFT results to see the image, and then use that for the IDFFT instead of the original DFFT result? Also check that you do not round to integers somewhere along the computation.
- Beware of (I)DFFT overflows/underflows. If your image pixel intensities are big and the image resolution is large too, the computation can lose precision. I have never seen this with ordinary images, but if your image is HDR then it is possible. This is a common problem with convolution computed by DFFT for big polynomials.
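For that 1D sanity check, something like this little harness could serve (a sketch using the FFT() signature from the question; the expected output is the 4-point DFT of a constant signal):

#include <iostream>

void FFT(double* data, unsigned long nn, int f_or_b); // definition from the question

int main() {
    // four complex samples 1+0i, stored as interleaved re,im pairs
    double data[8] = { 1,0, 1,0, 1,0, 1,0 };
    FFT(data, 4, 1);
    // expected: bin 0 = 4 + 0i, all other bins = 0 + 0i
    for (int k = 0; k < 4; k++)
        std::cout << "bin " << k << ": " << data[2*k] << " + " << data[2*k+1] << "i\n";
    return 0;
}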
Thank you everyone for your opinions. All that stuff about memory corruption, while it makes a point, is not the root of the problem. The sizes of the data I'm mallocing are not overly large, and I am freeing them in the right places. I had a lot of practice with this while learning C. The problem was not the FFT algorithm either, nor even my 2D implementation of it.
All I missed was the scaling by 1/(M*N) at the very end of my IFFT code. Because the image is 512x512, I needed to scale my IFFT output by 1/(512*512). Also, my FFT looks like white noise because the pixel data was not rescaled to fit between 0 and 255.
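In code, the missing step amounts to something like this (a sketch assuming the interleaved complex_data layout above):

// after both IFFT passes (f_or_b = -1), scale every re/im entry by 1/(M*N)
double scale = 1.0 / (512.0 * 512.0); // M = N = 512 here
for (i = 0; i < 512 * 512 * 2; i++)
    complex_data[i] *= scale;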
I suggest you look at this article: http://www.yolinux.com/TUTORIALS/C++MemoryCorruptionAndMemoryLeaks.html
Christophe has a good point, but he is wrong about it not being related to the problem: malloc, unlike new/delete, does not initialise memory or select the best data type, which can result in any of the problems listed below.
Possible causes are:
- The sign of a number changing somewhere. I have seen similar issues when platform invoke is used on a DLL and a value is passed by value instead of by reference. It is caused by the memory not necessarily being empty, so when your image data enters it, boolean maths is effectively performed on its values. I would suggest that you make sure the memory is empty before you put your image data there.
- Memory rotating right (ROR in assembly language) or left (ROL). This will occur if data types are used that do not match, e.g. a signed value entering an unsigned data type, or if the number of bits differs from one variable to another.
- Data being lost due to an unsigned value entering a signed variable. The outcomes are one bit being lost, because it will be used to determine negative or positive, or, at the extremes, if two's complement takes place, the number will become inverted in meaning; look up two's complement on Wikipedia.
Also see how memory should be cleared/assigned before use: http://www.cprogramming.com/tutorial/memory_debugging_parallel_inspector.html
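To illustrate the "clear memory before use" point, one sketch using standard C library calls (usable from C++; the buffer size is a placeholder):

#include <cstdlib>
#include <cstring>

int main() {
    const size_t count = 512 * 512 * 2;
    // calloc returns zero-initialized memory, unlike malloc
    double* complex_data = (double*) calloc(count, sizeof(double));
    // or, after a plain malloc, zero the buffer explicitly:
    double* temp_data = (double*) malloc(count * sizeof(double));
    memset(temp_data, 0, count * sizeof(double));
    free(temp_data);
    free(complex_data);
    return 0;
}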
I have a kernel filter that I generated and I want to apply it to my image, but I could not get a correct result by doing this:
Actually, I could use a different method as well, since I am not too familiar with OpenCV. I need help, thanks.
channels[c] is the image that was read;
int size = 5; // Gaussian filter box side size
double gauss[5][5];
int sidestp = (size - 1) / 2;

// I have a function to generate the Gaussian kernel filter
float sum = 0;
for (int x = 1; x < channels[c].cols - 1; x++){
    for (int y = 1; y < channels[c].rows - 1; y++){
        for (int i = -size; i <= size; i++){
            for (int j = -sidestp; j <= sidestp; j++){
                sum = sum + gauss[i + sidestp][j + sidestp] * channels[c].at<uchar>(x - i, y - j);
            }
        }
        result.at<uchar>(y, x) = sum;
    }
}
OpenCV has a built-in function, filter2D, that does this convolution for you.
You need to provide the source and destination images, along with the custom kernel (as a Mat) and a few more arguments. See the filter2D documentation if it still bothers you.
Just to add to the previous answer: since you are performing a Gaussian blur, you can also use OpenCV's GaussianBlur. Unlike filter2D, it takes the standard deviations as input parameters.
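A minimal sketch of both approaches (the kernel size and sigma here are illustrative):

#include <opencv2/imgproc.hpp>

void smooth(const cv::Mat& src) {
    // custom kernel with filter2D: outer product of two 1D Gaussians gives a 5x5 kernel
    cv::Mat kernel = cv::getGaussianKernel(5, 1.0) * cv::getGaussianKernel(5, 1.0).t();
    cv::Mat viaFilter2D;
    cv::filter2D(src, viaFilter2D, -1, kernel); // -1: same depth as src

    // or let OpenCV build the Gaussian kernel internally
    cv::Mat viaGaussianBlur;
    cv::GaussianBlur(src, viaGaussianBlur, cv::Size(5, 5), 1.0); // sigmaX = 1.0
}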
I have a for loop that takes an OpenCV Mat object of n x n dimensions and returns a Mat object of n^2 x 1 dimensions. It works, but when I time the method it takes between 1 and 2 milliseconds. Since I am calling this method 3 or 4 million times, my program takes about an hour to run. A research paper I'm referencing suggests the author was able to produce a program with the same function that ran in only a few minutes, without running any threads in parallel. After timing each section of code, the only portion taking >1 ms is the following method.
static Mat mat2vec(Mat mat)
{
    Mat toReturn = Mat(mat.rows * mat.cols, 1, mat.type());
    float* matPt;
    float* retPt;
    for (int i = 0; i < mat.rows; i++) // rows
    {
        matPt = mat.ptr<float>(i);
        for (int j = 0; j < mat.cols; j++) // cols
        {
            retPt = toReturn.ptr<float>(i * mat.cols + j);
            retPt[0] = matPt[j];
        }
    }
    return toReturn;
}
Is there any way that I can increase the speed at which this method converts an n x n matrix into an n^2 x 1 matrix (or cv::Mat representing a vector)?
That solved most of the problem, @berak; it's running a lot faster now. However, in some cases like the one below, the Mat is not continuous. Any idea how I can get an ROI in a continuous Mat?
My method now looks like this:
static Mat mat2vec(Mat mat)
{
    if ( !mat.isContinuous() )
    {
        mat = mat.clone(); // clone() yields a continuous copy
    }
    return mat.reshape(1, 2500);
}
Problems occur at:
Mat patch = Mat(inputSource, Rect((inputPoint.x - (patchSize / 2)), (inputPoint.y - (patchSize / 2)), patchSize, patchSize));
Mat puVec = mat2vec(patch);
Assuming that the data in your Mat is continuous: Mat::reshape() for the win.
And it's almost free: only rows/cols get adjusted, no memory is moved. E.g., mat = mat.reshape(1, 1) would make a 1D float array of it.
Seeing this in OpenCV 3.2, but the function is now mat.reshape(1).
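Putting the pieces together, a sketch of the fast n x n to n^2 x 1 conversion (the clone() fallback handles the non-continuous ROI case from the follow-up):

static cv::Mat mat2vec(cv::Mat mat)
{
    if (!mat.isContinuous())
        mat = mat.clone();                    // ROIs are not continuous; clone() compacts the data
    return mat.reshape(1, (int)mat.total()); // reinterpret as an (n*n) x 1 column, no copy
}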
How would I do a matrix multiplication in C++ that would afterwards be compiled into a MEX file?
My normal matrix multiplication in a Matlab script is as follow:
cMatrix = (1 / r) * pfMatrix * wcMatrix; %here pfMatrix is 2x3 and wcMatrix is 3x8
% Hence cMatrix is 2x8
% r is a scalar
pfMatrix, wcMatrix, and r are declared correctly in the cpp file and have the same values as in the script. However, cMatrix doesn't give me the same results. Here is the implementation of the matrix multiplication in the cpp file:
int i, n, j;
for (i = 0; i < 1; i++)
{
    for (n = 0; n < 7; n++)
    {
        for (j = 0; j < 2; j++)
        {
            d->cMatrix[i][n] += (d->pfMatrix[i][j]) * (d->wcMatrix[j][n]);
        }
        d->cMatrix[i][n] = (1 / d->r) * d->cMatrix[i][n];
    }
}
Edit:
I modified the loop following Ben Voigt's answer. The results in cMatrix are still not identical to those calculated by the Matlab script.
For example :
pfMatrix = [7937.91049469652,0,512;0,7933.81033431703,384];
wcMatrix = [-0.880633810389421,-1.04063381038942,-1.04063381038942,-0.880633810389421,-0.815633810389421,-1.10563381038942,-1.10563381038942,-0.815633810389421;-0.125,-0.125,0.125,0.125,-0.29,-0.29,0.29,0.29;100,100,100,100,100,100,100,100];
r = 100;
In this case, cMatrix(1,1) is :
(pfMatrix(1,1)*wcMatrix(1,1) + pfMatrix(1,2)*wcMatrix(2,1) + pfMatrix(1,3)*wcMatrix(3,1)) / r = 442.09
However, with the mex file the equivalent result is 959.
Edit #2:
I found the error: an element of pfMatrix was not declared correctly (it was missing a division by 2). So Ben Voigt's answer works correctly. However, there is still a slight difference between the two results (the Matlab script gives 442 and the MEX gives 447; could it be a result of different data types?).
Edit #3:
Found the error and it was not related with the matrix multiplication loop.
Using your result matrix as scratch space is not a great idea. The compiler has to worry about aliasing, which means it can't optimize.
Try an explicit working variable, which also provides a convenient place to zero it:
for (int i = 0; i < 2; ++i) {
    for (int n = 0; n < 8; ++n) {
        double accum = 0.0;
        for (int j = 0; j < 3; ++j) {
            accum += (d->pfMatrix[i][j]) * (d->wcMatrix[j][n]);
        }
        d->cMatrix[i][n] = accum / d->r;
    }
}
Your ranges were also wrong, which I've fixed.
(Also note that good performance on large matrices requires banding to get good cache behavior, however that shouldn't be an issue on a product of this size.)
A multiplication between matrices must work like this: A[m][n] * B[n][p] = R[m][p].
The conditions you wrote in the for loops are not correct and do not respect the matrix dimensions.
Look also at the Eigen library, which is open-source and provides a simple way to do matrix multiplications.
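For illustration, a minimal Eigen sketch of the same product (the matrix sizes match the question; the values are placeholders):

#include <Eigen/Dense>

int main() {
    Eigen::Matrix<double, 2, 3> pf; // 2x3, like pfMatrix
    Eigen::Matrix<double, 3, 8> wc; // 3x8, like wcMatrix
    pf.setOnes();
    wc.setOnes();
    double r = 100.0;
    Eigen::Matrix<double, 2, 8> c = (1.0 / r) * pf * wc; // 2x8, as in the Matlab script
    return 0;
}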
I want to smooth a histogram.
Therefore I tried to smooth the internal matrix of the CvHistogram:
typedef struct CvHistogram
{
    int type;
    CvArr* bins;
    float thresh[CV_MAX_DIM][2]; /* for uniform histograms */
    float** thresh2;             /* for non-uniform histograms */
    CvMatND mat;                 /* embedded matrix header for array histograms */
} CvHistogram;
I tried to smooth the matrix like this:
cvCalcHist( planes, hist, 0, 0 ); // Compute histogram
(...)
// smooth histogram with Gaussian Filter
cvSmooth( hist->mat, hist_img, CV_GAUSSIAN, 3, 3, 0, 0 );
Unfortunately, this does not work because cvSmooth needs a CvMat as input instead of a CvMatND. I couldn't convert the CvMatND into a CvMat (the CvMatND is 2-dimensional in my case).
Is there anybody who can help me? Thanks.
You can use the same basic algorithm used for the mean filter, just calculating the average of the neighbouring bins:
for (int i = 1; i < NBins - 1; ++i)
{
    hist[i] = (hist[i - 1] + hist[i] + hist[i + 1]) / 3;
}
Optionally, you can use a slightly more flexible algorithm that lets you easily change the window size:
int winSize = 5;
int winMidSize = winSize / 2;
for (int i = winMidSize; i < NBins - winMidSize; ++i)
{
    float mean = 0;
    for (int j = i - winMidSize; j <= (i + winMidSize); ++j)
    {
        mean += hist[j];
    }
    hist[i] = mean / winSize;
}
But bear in mind that this is just one simple technique.
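One caveat with both loops above is that they smooth in place, so already-smoothed bins feed into later averages. A variant that reads from an unmodified copy avoids that (a sketch, assuming hist is a plain float array of NBins entries):

#include <vector>

void smoothHist(float* hist, int NBins, int winSize /* odd */)
{
    int winMidSize = winSize / 2;
    std::vector<float> src(hist, hist + NBins); // read from an unmodified copy
    for (int i = winMidSize; i < NBins - winMidSize; ++i)
    {
        float mean = 0;
        for (int j = i - winMidSize; j <= i + winMidSize; ++j)
            mean += src[j];
        hist[i] = mean / winSize;
    }
}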
If you really want to do it using OpenCV tools, I recommend the OpenCV forum: http://tech.groups.yahoo.com/group/OpenCV/join
You can dramatically change the "smoothness" of a histogram by changing the number of bins you use. A good rule of thumb is to have sqrt(n) bins if you have n data points. You might try applying this heuristic to your histogram and see if you get a better result.