Increment for loop by variable - c++

I am taking in sound as a float called scaledVol. I wish to change the spacing of the letters being drawn out by scaledVol.
This is the code snippet:
for (int i = 0; i < camWidth; i+=7){
    for (int j = 0; j < camHeight; j+=9){
        // get the pixel and its lightness (lightness is the average of its RGB values)
        float lightness = pixelsRef.getColor(i,j).getLightness();
        // calculate the index of the character from our asciiCharacters array
        int character = powf( ofMap(lightness, 0, 255, 0, 1), 2.5) * asciiCharacters.size();
        // draw the character at the correct location
        ofSetColor(0, 255, 0);
        font.drawString(ofToString(asciiCharacters[character]), i, j);
    }
}
where the i step sets the horizontal spacing between characters and the j step sets the vertical spacing.
Instead of incrementing by 7 or 9, I would like to increment by a float called scaledVol.

To increment by the float scaledVol instead of 7 or 9, you could write:
for (int i = 0; i < camWidth; i+=(int)scaledVol){
You may want to take the floor of the increment and make sure the conversion is done only once; perhaps write instead:
int incr = (int) floor(scaledVol);
assert (incr > 0);
for (int i = 0; i < camWidth; i+=incr) {
Read more about floor(3), ceil(3), round(3), and IEEE floating point and rounding errors
Please use your debugger (e.g. gdb) to understand more.
You could use more C++ friendly casts e.g.
int incr = int(floor(scaledVol));
or static_cast
int incr = static_cast<int>(floor(scaledVol));
or perhaps even reinterpret_cast
int incr = reinterpret_cast<int>(floor(scaledVol));
which will not do what you want - in fact it will not even compile, since reinterpret_cast cannot convert a floating-point value to an integer.

You need something like
for (float i = 0.0f; i < camWidth; i += scaledVol){
assuming that camWidth is a float; if not, cast it to float.
This also avoids the rounding problems that come with converting scaledVol to an int.

You can use float as the type of the two loop variables, and then cast them to int:
for (float x = 0; (int)x < camWidth; x += scaledVol) {
    int i = (int)x;
    for (float y = 0; (int)y < camHeight; y += scaledVol) {
        int j = (int)y;
        // the rest of the code using i and j
    }
}
Be careful: scaledVol had best be greater than 1, otherwise you will get consecutive values of i and j that are equal, and the code in `// the rest of the code` may not like that.
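Putting the pieces together, here is a minimal sketch of the float-counter approach applied to the snippet from the question (camWidth, camHeight, scaledVol, pixelsRef, asciiCharacters and font are the names from the question; clamping the step to at least 1 is an added assumption so the loops always advance):
// Sketch only (inside the draw function): float loop counters stepped by scaledVol,
// cast to int for pixel access.
float step = std::max(scaledVol, 1.0f);   // assumed guard: keep the step >= 1 (needs <algorithm>)
for (float x = 0; (int)x < camWidth; x += step) {
    for (float y = 0; (int)y < camHeight; y += step) {
        int i = (int)x;
        int j = (int)y;
        float lightness = pixelsRef.getColor(i, j).getLightness();
        int character = powf(ofMap(lightness, 0, 255, 0, 1), 2.5) * asciiCharacters.size();
        ofSetColor(0, 255, 0);
        font.drawString(ofToString(asciiCharacters[character]), i, j);
    }
}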

Related

Function started with std::async crashes after quite a few iterations

I am trying to develop a simple evolution algorithm in C++. To make my calculations faster I decided to use async functions to run multiple calculations at once:
std::vector<std::future<int> > compute(8);
unsigned nptr = 0;
int syncp = 0;
while(nptr != network::networks.size()){
    compute.at(syncp) = std::async(&network::analyse, &network::networks.at(nptr), data, width, height, sw, dFnum.at(idx));
    syncp++;
    if(syncp == 8){
        syncp = 0;
        for(unsigned i = 0; i < 8; i++){
            compute.at(i).get();
        }
    }
    nptr++;
}
This is how I start my calculating function. The function is called analyse, and for each "network" it assigns a score depending on how well it identifies the image.
This is part of the analyse function:
for(unsigned i = 0; i < entry.size(); i++){
    double sum = 0;
    data * d = &entry.at(i);
    pattern * p = &pattern::patterns.at(d->patNo);
    int sx = iWidth;
    int sy = iHeight;
    if(d->xPercentage*iWidth + d->xSpan*iWidth < sx) sx = d->xPercentage*iWidth + d->xSpan*iWidth;
    if(d->yPercentage*iHeight + d->xSpan*iWidth < sy) sy = d->yPercentage*iHeight + d->xSpan*iWidth;
    int xdisp = sx-d->xPercentage*iWidth;
    int ydisp = sy-d->yPercentage*iHeight;
    for(int x = d->xPercentage*iWidth; x < sx; x++){
        for(int y = d->yPercentage*iHeight; y < sy; y++){
            double xpl = x-d->xPercentage*iWidth;
            double ypl = y-d->yPercentage*iHeight;
            xpl /= xdisp;
            ypl /= ydisp;
            unsigned idx = (unsigned)(xpl*(p->width) + ypl*(p->height)*(p->width));
            if(idx >= p->lweight.size()) idx = p->lweight.size()-1;
            double weight = p->lweight.at(idx) - 5;
            if(imageData[y*iWidth+x])
                sum += weight;
            else
                sum -= 2*weight;
        }
    }
    digitWeight[d->digit-1] += sum;
}
Now, there is no need to analyse the function itself - I'm sure it works; I have tested it on a single thread and it runs just fine. The only problem is that after some time of execution I get errors like segmentation faults or vector range-check errors.
They mostly happen at this line:
digitWeight[d->digit-1] += sum;
You can be sure that d->digit-1 is a valid index for this array.
The problem is that the value of the d pointer is different from what it was here:
data * d = &entry.at(i);
It magically changes during the execution of the function and starts pointing to different data, leading to errors. I have tried saving the value of d->digit to a variable and using that variable later, and it worked fine for just a while longer before crashing on another shared resource, imageData this time.
I'm thinking this might be something related to data sharing - all the async functions share the same array of data - it's a static vector. But this data is only read, never written, so why would it stop working? I know of something called mutex locking, but it would make no sense to lock these async functions, as the program would then run as slowly as a single-threaded one.
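(For reference, mutex locking around a shared resource would look roughly like this minimal sketch; the names here are hypothetical and not taken from the code above:)
#include <mutex>
#include <vector>

std::vector<int> shared_data;   // hypothetical shared resource
std::mutex data_mutex;          // protects shared_data

void worker(int value)
{
    std::lock_guard<std::mutex> lock(data_mutex);   // locked here, released when lock goes out of scope
    shared_data.push_back(value);                   // only one thread at a time executes this
}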
I have also tried running the functions like this:
std::vector<std::thread*> threads(8);
unsigned nptr = 0;
int threadp = 0;
while(nptr != network::networks.size()){
    threads.at(threadp) = new std::thread(&network::analyse, &network::networks.at(nptr), data, width, height, sw, dFnum.at(idx));
    threadp++;
    if(threadp == 8){
        threadp = 0;
        for(unsigned i = 0; i < 8; i++){
            if(threads.at(i)->joinable()) threads.at(i)->join();
            delete threads.at(i);
        }
    }
    nptr++;
}
and it did work for a second, but after some time a very similar error appeared.
Data is a structure containing 7 integers, one of which is the ID of a pattern; pattern is a class that contains two integers (width and height) and a vector of chars.
Why does it happen on read-only data, and how can I prevent it?

Weird but close fft and ifft of image in c++

I wrote a program that loads, saves, and performs the fft and ifft on black and white png images. After much debugging headache, I finally got some coherent output only to find that it distorted the original image.
(input, fft, and ifft images omitted)
As far as I have tested, the pixel data in each array is stored and converted correctly. Pixels are stored in two arrays: 'data', which contains the b/w value of each pixel, and 'complex_data', which is twice as long as 'data' and stores the real b/w value and the imaginary part of each pixel in alternating indices. My fft algorithm operates on an array structured like 'complex_data'. After code to read commands from the user, here's the code in question:
if (cmd == "fft")
{
    if (height > width) size = height;
    else size = width;
    N = (int)pow(2.0, ceil(log((double)size)/log(2.0)));
    temp_data = (double*) malloc(sizeof(double) * width * 2); // array to hold each row of the image for processing in FFT()
    for (i = 0; i < (int) height; i++)
    {
        for (j = 0; j < (int) width; j++)
        {
            temp_data[j*2] = complex_data[(i*width*2)+(j*2)];
            temp_data[j*2+1] = complex_data[(i*width*2)+(j*2)+1];
        }
        FFT(temp_data, N, 1);
        for (j = 0; j < (int) width; j++)
        {
            complex_data[(i*width*2)+(j*2)] = temp_data[j*2];
            complex_data[(i*width*2)+(j*2)+1] = temp_data[j*2+1];
        }
    }
    transpose(complex_data, width, height); // tested
    free(temp_data);
    temp_data = (double*) malloc(sizeof(double) * height * 2);
    for (i = 0; i < (int) width; i++)
    {
        for (j = 0; j < (int) height; j++)
        {
            temp_data[j*2] = complex_data[(i*height*2)+(j*2)];
            temp_data[j*2+1] = complex_data[(i*height*2)+(j*2)+1];
        }
        FFT(temp_data, N, 1);
        for (j = 0; j < (int) height; j++)
        {
            complex_data[(i*height*2)+(j*2)] = temp_data[j*2];
            complex_data[(i*height*2)+(j*2)+1] = temp_data[j*2+1];
        }
    }
    transpose(complex_data, height, width);
    free(temp_data);
    free(data);
    data = complex_to_real(complex_data, image.size()/4); // tested
    image = bw_data_to_vector(data, image.size()/4); // tested
    cout << "*** fft success ***" << endl << endl;
}
void FFT(double* data, unsigned long nn, int f_or_b){ // f_or_b is 1 for fft, -1 for ifft
    unsigned long n, mmax, m, j, istep, i;
    double wtemp, w_real, wp_real, wp_imaginary, w_imaginary, theta;
    double temp_real, temp_imaginary;
    // reverse-binary reindexing to separate even and odd indices
    // and to allow us to compute the FFT in place
    n = nn<<1;
    j = 1;
    for (i = 1; i < n; i += 2) {
        if (j > i) {
            swap(data[j-1], data[i-1]);
            swap(data[j], data[i]);
        }
        m = nn;
        while (m >= 2 && j > m) {
            j -= m;
            m >>= 1;
        }
        j += m;
    }
    // here begins the Danielson-Lanczos section
    mmax = 2;
    while (n > mmax) {
        istep = mmax<<1;
        theta = f_or_b * (2 * M_PI/mmax);
        wtemp = sin(0.5 * theta);
        wp_real = -2.0 * wtemp * wtemp;
        wp_imaginary = sin(theta);
        w_real = 1.0;
        w_imaginary = 0.0;
        for (m = 1; m < mmax; m += 2) {
            for (i = m; i <= n; i += istep) {
                j = i + mmax;
                temp_real = w_real * data[j-1] - w_imaginary * data[j];
                temp_imaginary = w_real * data[j] + w_imaginary * data[j-1];
                data[j-1] = data[i-1] - temp_real;
                data[j] = data[i] - temp_imaginary;
                data[i-1] += temp_real;
                data[i] += temp_imaginary;
            }
            wtemp = w_real;
            w_real += w_real * wp_real - w_imaginary * wp_imaginary;
            w_imaginary += w_imaginary * wp_real + wtemp * wp_imaginary;
        }
        mmax = istep;
    }
}
My ifft is the same only with the f_or_b set to -1 instead of 1. My program calls FFT() on each row, transposes the image, calls FFT() on each row again, then transposes back. Is there maybe an error with my indexing?
Not an actual answer, as this question is debug-only, so some hints instead:
your results are really bad
it should look like this (reference images omitted):
the first line is the actual DFFT result
Re, Im, and Power are amplified by a constant, otherwise you would see a black image
the last image is the IDFFT of the original, non-amplified Re, Im result
the second line is the same, but the DFFT result is wrapped by half the image size in both x and y to match the common presentation in most DIP/CV texts
As you can see, if you IDFFT the wrapped results back, the result is not correct (checkerboard mask)
You have just a single image as the DFFT result
is it the power spectrum?
or did you forget to include the imaginary part - for viewing only, or perhaps also in the computation somewhere?
is your 1D DFFT working?
for real data the result should be symmetric
check the links from my comment and compare the results for some sample 1D array (see the sketch after these hints)
debug/repair your 1D FFT first and only then move to the next level
do not forget to test real and complex data ...
your IDFFT looks BW (no gray) saturated
so did you amplify the DFFT results to see the image and then use that for the IDFFT instead of the original DFFT result?
also check that you do not round to integers somewhere along the computation
beware of (I)DFFT overflows/underflows
If your image pixel intensities are big and the image resolution is high, then your computation could lose precision. I have never seen this with images, but if your image is HDR then it is possible. This is a common problem with convolution computed by DFFT for big polynomials.
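As a concrete starting point for the 1D test mentioned above, a minimal sketch might look like this (it calls the FFT() routine exactly as posted in the question; the input values are arbitrary):
#include <cstdio>

void FFT(double* data, unsigned long nn, int f_or_b);   // the routine posted in the question

// Sanity check: 8 real samples, stored as interleaved (Re, Im) pairs with Im = 0.
// For purely real input, the forward transform should satisfy X[k] == conj(X[N-k]).
int main()
{
    const unsigned long N = 8;
    double data[2 * N] = {0};            // imaginary parts start at zero
    for (unsigned long k = 0; k < N; k++)
        data[2 * k] = (double)k;         // arbitrary real ramp 0..7

    FFT(data, N, 1);                     // forward transform (f_or_b = 1)

    for (unsigned long k = 0; k < N; k++)
        printf("X[%lu] = %+.4f %+.4fi\n", k, data[2 * k], data[2 * k + 1]);
    // Expect X[k] and X[N-k] (k = 1..7) to be complex conjugates of each other.
    return 0;
}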
Thank you everyone for your opinions. All the stuff about memory corruption, while it makes a point, is not the root of the problem. The sizes of the buffers I'm mallocing are not overly large, and I am freeing them in the right places; I had a lot of practice with this while learning C. The problem was not the FFT algorithm either, nor even my 2D implementation of it.
All I missed was the scaling by 1/(M*N) at the very end of my ifft code. Because the image is 512x512, I needed to scale my ifft output by 1/(512*512). Also, my fft looks like white noise because the pixel data was not rescaled to fit between 0 and 255.
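For completeness, the missing step amounts to something like this sketch (the names complex_data, width and height are taken from the question; the exact buffer length depends on how complex_data is sized):
// Normalise the inverse transform by 1/(M*N); applied once, at the end of the ifft branch.
void scale_ifft_output(double* complex_data, int width, int height)
{
    const double scale = 1.0 / (double)(width * height);   // 1/(512*512) for a 512x512 image
    for (int k = 0; k < width * height * 2; k++)
        complex_data[k] *= scale;                           // scales both the Re and Im parts
}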
Suggest you look at the article http://www.yolinux.com/TUTORIALS/C++MemoryCorruptionAndMemoryLeaks.html
Christophe has a good point, but he is wrong about it not being related to the problem: using malloc instead of new/delete does not initialise memory or pick a suitable data type, which can lead to the problems listed below:
Possible causes are:
The sign of a number changing somewhere. I have seen similar issues when a platform invoke has been used on a DLL and a value is passed by value instead of by reference. It is caused by memory not necessarily being empty, so when your image data enters it, boolean maths is effectively performed on its values. I would suggest that you make sure the memory is empty before you put your image data there.
Memory rotating right (ROR in assembly language) or left (ROL). This will occur if mismatched data types are used, e.g. a signed value entering an unsigned data type, or if the number of bits differs from one variable to another.
Data being lost due to an unsigned value entering a signed variable. The outcome is one bit being lost, because it will be used to determine negative or positive; at the extremes, if two's complement takes place, the meaning of the number will be inverted - look up two's complement on Wikipedia.
Also see how memory should be cleared/assigned before use. http://www.cprogramming.com/tutorial/memory_debugging_parallel_inspector.html
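As a small illustration of the "clear memory before use" advice, here are a few ways to get a zeroed buffer (the width parameter mirrors the name used in the question; which variant fits depends on the surrounding code):
#include <cstdlib>
#include <cstring>
#include <vector>

void allocate_zeroed(int width)
{
    // calloc zero-fills the allocation, unlike malloc:
    double* temp_data = (double*) calloc(width * 2, sizeof(double));

    // or zero an existing, possibly uninitialised, buffer explicitly:
    memset(temp_data, 0, sizeof(double) * width * 2);

    // or, more idiomatically in C++, let a std::vector value-initialise its elements to 0.0:
    std::vector<double> temp(width * 2, 0.0);

    free(temp_data);
}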

C++ interpolate array with cosine

I got this function from [this website](http://paulbourke.net/miscellaneous/interpolation/):
double CosineInterpolate(
    double y1, double y2,
    double mu)
{
    double mu2;
    mu2 = (1-cos(mu*PI))/2;
    return(y1*(1-mu2)+y2*mu2);
}
How do I use this to interpolate an array? Here's how I'd be calling the function.
Interpolate(point_a, point_b, number_of_positions_between_the_points, position)
e.g.
for (int i = 0; i < ArrayOfPoints.size()-1; ++i) {
    double point_a = ArrayOfPoints[i];
    double point_b = ArrayOfPoints[i+1];
    for (int j = 0; j < 2048; ++j){
        array[j] = Interpolate(point_a, point_b, 2048, j);
    }
}
You have the number of positions between the points, and you have the current position. Think of mu as the fraction of the linear distance between the first point and the second, determined by the current position and the total number of positions. That is:
mu = (double)current_position / number_of_positions_between_the_points;
That will give you values between 0 and 1, in fixed increments, determined by how many positions you want to have between the points.
Hint: In your loop, j is the current position.
The other thing you have to deal with is that you are calling a function named Interpolate(point_a, point_b, 2048, j), but you haven't shown its implementation. Instead, you have the CosineInterpolate function. Presumably you wanted to abstract the interpolation method by invoking CosineInterpolate from Interpolate. The first part of the answer tells you how to do that. I hope this helps!
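Putting that together, the missing Interpolate wrapper might look something like this sketch (the name and signature match the call in your loop):
// Maps the current position to mu in [0, 1) and delegates to CosineInterpolate.
double Interpolate(double point_a, double point_b,
                   int number_of_positions, int current_position)
{
    double mu = (double)current_position / number_of_positions;
    return CosineInterpolate(point_a, point_b, mu);
}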

Why am I getting "nan" values in C++?

I am making a Correlogram for an image. For each pixel, a correlogram finds the pixels of same color within a certain range of distance, d. Correlogram is a 2D matrix i.e. correlogram[color][distance]. The calculation of a Correlogram is somewhat similar to that of a Histogram.
My code: I am posting the major part of the code, in which all the calculations are done. The rest of the code (which I didn't post) is used to fulfill other conditions and is therefore not necessary.
Problem: In my final correlogram[][], some values are "nan". I have checked the code, but I am not able to find the problem in my calculation/syntax.
int ColorBins = 180;
int DistanceRange = 5;

double calcCorrelogram(Mat hsvImage)
{
    double correlogram[ColorBins][DistanceRange];
    int pixelNum[ColorBins]; // used to count the number of pixels of same color
    Mat hsvPlanes[3];
    split(hsvImage, hsvPlanes);
    for(int pi=0; pi<hsvImage.rows; pi++)
    {
        for(int pj=0; pj<hsvImage.cols; pj++)
        {
            int pixelColor = (int)hsvPlanes[0].at<uchar>(pi,pj);
            pixelNum[pixelColor]++;
            for(int d=1; d<=DistanceRange; d++)
            {
                int sameColorNum=0;     /* number of pixels with same color in the d-distance boundary */
                int totalBoundaryNum=0; /* total number of pixels in the d-distance boundary */
                for(int i= pi-d, j= pj-d; j<=pj+d; j++)
                {
                    if(i<0)
                        break;
                    if(j<0 || j>=hsvImage.cols)
                        continue;
                    int neighbourColor = (int)hsvPlanes[0].at<uchar>(i,j);
                    if(pixelColor == neighbourColor)
                    {
                        sameColorNum++;
                    }
                    totalBoundaryNum++;
                    correlogram[pixelColor][d-1] = correlogram[pixelColor][d-1] + (double)sameColorNum / (double)totalBoundaryNum;
                }
            }
        }
    }
    for(int c=0; c<ColorBins; c++)
    {
        for(int d=0; d<DistanceRange; d++)
        {
            if(pixelNum[c] != 0)
                correlogram[c][d] = correlogram[c][d] / (double)pixelNum[c];
        }
    }
}
NaNs are generally created when you divide zero by zero or multiply zero by infinity. One easy way to check for abnormal numbers like NaN and infinity is to multiply by zero and check if the result is zero:
bool is_valid_double(double x)
{
    return x*0.0 == 0.0;
}
This will return false if x is either NaN or infinity.
Then you can sprinkle your code with assertions to help find where things are going wrong:
assert(is_valid_double(correlogram[c][d]));
Once you get a crash due to an assertion failure, you can use the debugger to look at the state of the program to help determine what is going on.
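For instance, one place such an assertion could go in the code from the question is right after the accumulation (a sketch, using the names from the question):
#include <cassert>

// Inside the d-loop, immediately after updating the correlogram entry:
correlogram[pixelColor][d-1] += (double)sameColorNum / (double)totalBoundaryNum;
assert(is_valid_double(correlogram[pixelColor][d-1]));   // fails fast at the first NaN or infinity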

accessing image pixels as float array

I want to access image pixels as a float array in OpenCV. I've done the following:
Mat input = imread("Lena.jpg",CV_LOAD_IMAGE_GRAYSCALE);
int height = input.rows;
int width = input.cols;
Mat out;
input.convertTo(input, CV_32FC1);
copyMakeBorder(input, input, 3, 3, 3, 3, 0);
out = Mat(height, width, input.type());
float *outdata = (float*)out.data;
float *indata = (float*)input.data;
for(int j = 0; j < height; j++){
    for(int i = 0; i < width; i++){
        outdata[j*width + i] = indata[(j*width + i)];
    }
}
normalize(out, out, 0, 255, NORM_MINMAX, CV_8UC1);
imshow("output", out);
waitKey();
This should return the original image in "out"; however, I'm getting a weird image. Can anyone explain what's wrong with the code? I think I need to use some step size (widthStep). Thanks.
the line
copyMakeBorder(input, input, 3, 3, 3, 3, 0);
changes the dimensions of input: it adds 6 rows and 6 columns to the image. That means your height and width variables hold the wrong values when you define out and then loop over the values of input.
If you change the order to
copyMakeBorder(input, input, 3, 3, 3, 3, 0);
int height = input.rows;
int width = input.cols;
it should work fine.
Some ideas:
Something like outdata[j*width + i] is a more standard pattern for this sort of thing.
According to the OpenCV documentation, there is a templated Mat::at(int y, int x) method that allows you to access individual elements of a matrix:
float f = input.at<float>(0, 0);
Note that this requires that your underlying matrix type is float -- it won't do a conversion for you.
Alternatively, you could access the data row-by-row, as in this example that sums up the positive elements of a matrix M of type double:
double sum = 0;
for(int i = 0; i < M.rows; i++)
{
    const double* Mi = M.ptr<double>(i);
    for(int j = 0; j < M.cols; j++)
        sum += std::max(Mi[j], 0.);
}
If none of these work, I'd suggest creating a small matrix with known values (e.g. a 2x2 matrix with 1 black pixel and 3 white pixels) and use that to help debug your code.
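For instance, a tiny test matrix with known values could be built like this (a sketch; it assumes the OpenCV headers and using-directives from your code, with 0 as black and 255 as white):
// 2x2 float matrix: one black pixel, three white pixels.
Mat test = (Mat_<float>(2, 2) << 0.0f, 255.0f,
                                 255.0f, 255.0f);
cout << test << endl;   // print the matrix so you can verify your indexing by hand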
To really make it apparent what the problem is, imagine a 16-by-16 image. Now think of pixel number 17 in the linear representation.
17 is a prime number. There is no j*i that will index your source image at pixel 17 if the row or column width is 16. Thus elements like 17, 19, 23 and so on will be uninitialized or at best 0, resulting in a "weird" output.
How about pixel 8 in the linear representation? That one, in contrast, will get hit by your loop four times, i.e. by 1x8, 2x4, 4x2, and 8x1!
The indexing @NateKohl presents in his answer will fix it, since he multiplies the row position by the length of a row and then simply walks along the columns.
You can try this loop...
for(int row = 0; row < height; row++)
{
    for(int col = 0; col < width; col++)
    {
        float float_data = input.at<float>(row,col);
        // do some processing with value of float_data
        out.at<float>(row,col) = float_data;
    }
}
Is there a need to cast the uchar pointers of input and out Mats to float pointers?