Bilinear Interpolation from C to Neon - c++

I'm trying to downsample an image using NEON. To get familiar with NEON I first wrote a function that subtracts two images using NEON intrinsics, and that worked.
Now I've come back to writing the bilinear interpolation using NEON intrinsics.
Right now I have two problems: loading 4 pixels from one row and the row below, and computing the interpolated (gray) value from those 4 pixels, or if possible from 8 pixels across the two rows. I tried to think it through, but I suspect the algorithm needs to be rewritten entirely?
void resizeBilinearNeon( uint8_t *src, uint8_t *dest, float srcWidth, float srcHeight, float destWidth, float destHeight)
{
    int x, y, index, gray;
    float x_ratio = ((float)(srcWidth-1)) / destWidth;
    float y_ratio = ((float)(srcHeight-1)) / destHeight;
    float x_diff, y_diff;
    for (int i = 0; i < destHeight; i++) {
        for (int j = 0; j < destWidth; j++) {
            x = (int)(x_ratio * j);
            y = (int)(y_ratio * i);
            x_diff = (x_ratio * j) - x;
            y_diff = (y_ratio * i) - y;
            index = y * (int)srcWidth + x;
            uint8x8_t pixels_r = vld1_u8(&src[index]);                    // row y
            uint8x8_t pixels_c = vld1_u8(&src[index + (int)srcWidth]);    // row y+1
            // Y = A(1-w)(1-h) + B(w)(1-h) + C(h)(1-w) + Dwh
            gray = (int)(
                pixels_r[0]*(1-x_diff)*(1-y_diff) + pixels_r[1]*(x_diff)*(1-y_diff) +
                pixels_c[0]*(y_diff)*(1-x_diff)   + pixels_c[1]*(x_diff*y_diff)
            );
            dest[i * (int)destWidth + j] = gray;
        }
    }
}

NEON will definitely help with downsampling at an arbitrary ratio using bilinear filtering. The key is clever use of the vtbl.8 instruction, which can perform a parallel table lookup for 8 consecutive destination pixels from a pre-loaded array:
d0 = a [b] c [d] e [f] g h, d1 = i j k l m n o p
d2 = q r s t u v [w] x, d3 = [y] z [A] B [C][D] E F ...
d4 = G H I J K L M N, d5 = O P Q R S T U V ...
One can easily calculate the fractional positions for the pixels in brackets:
[b] [d] [f] [w] [y] [A] [C] [D], accessed with vtbl.8 d6, {d0,d1,d2,d3}
The row below would be accessed with vtbl.8 d7, {d2,d3,d4,d5}
Incrementing vadd.8 d6, d30 ; with d30 = [1 1 1 1 1 ... 1] gives lookup indices for the pixels right of the origin etc.
There's no particular reason for fetching the pixels from two source rows here other than to illustrate that it's possible, and that the same method can also be used to implement slight distortions if needed.
In real-time applications, using e.g. Lanczos can be a bit of overkill, but it is still feasible with NEON. Downsampling by larger factors of course requires (heavy) filtering, but that can easily be achieved by iteratively averaging and decimating 2:1 and only applying fractional sampling at the end.
For any 8 consecutive pixels to write, one can calculate the vector
x_positions = (X + [0 1 2 3 4 5 6 7]) * source_width / target_width;
y_positions = (Y + [0 0 0 0 0 0 0 0]) * source_height / target_height;
ptr = to_int(x_positions) + y_positions * stride;
x_position += (ptr & 7); // this pointer arithmetic goes only for 8-bit planar
ptr &= ~7; // this is to adjust read pointer to qword alignment
vld1.8 {d0,d1}, [r0]
vld1.8 {d2,d3}, [r0], r2 // wasn't this possible? (use r2==stride)
d4 = int_part_of (x_positions);
d5 = d4 + 1;
d6 = fract_part_of (x_positions);
d7 = fract_part_of (y_positions);
vtbl.8 d8,d4,{d0,d1} // read top row
vtbl.8 d9,d5,{d0,d1} // read top row +1
MIX(d8,d9,d6) // horizontal mix of ptr[] & ptr[1]
vtbl.8 d10,d4,{d2,d3} // read bottom row
vtbl.8 d11,d5,{d2,d3} // read bottom row +1
MIX(d10,d11,d6) // horizontal mix of ptr[1024] & ptr[1025]
MIX(d8,d10,d7)
// MIX (dst, src, fract) is a macro that somehow does linear blending
// should be doable with ~3-4 instructions
To calculate the integer parts, it's enough to use 8.8 bit resolution (one really doesn't have to calculate 666+[0 1 2 3 .. 7]) and keep all intermediate results in simd register.
Disclaimer: this is conceptual pseudo-C / vector code. In SIMD there are two parallel tasks to optimize: minimizing the number of arithmetic operations, and minimizing unnecessary shuffling/copying of data. In that respect NEON, with its three-register instruction format, is much better suited to serious DSP than SSE; its second advantage is the number of multiply instructions available, and a third is its interleaving load/store instructions.
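Since MIX is only described abstractly above, here is one hedged sketch of how such a linear blend could look with NEON intrinsics rather than asm, assuming the fraction is carried as an 8-bit value in 0..255 (the function name and the 255-vs-256 rounding shortcut are mine, not from the answer):
#include <arm_neon.h>
// One possible MIX(a, b, f): per-lane blend a*(255-f) + b*f, scaled back with a
// rounded >>8 (off by at most about 1 LSB; good enough for a resize sketch).
static inline uint8x8_t mix_u8(uint8x8_t a, uint8x8_t b, uint8x8_t f)
{
    uint8x8_t  finv = vmvn_u8(f);            // 255 - f
    uint16x8_t acc  = vmull_u8(a, finv);     // a * (255 - f), widened to 16 bits
    acc = vmlal_u8(acc, b, f);               // + b * f   (max 255*255, fits in u16)
    return vrshrn_n_u16(acc, 8);             // rounded narrowing shift: roughly /256
}
That is four instructions per 8 pixels, in line with the "~3-4 instructions" estimate above.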

@MarkRansom is not correct about nearest neighbor versus 2x2 bilinear interpolation; bilinear using 4 pixels will produce better output than nearest neighbor. He is correct that averaging the appropriate number of pixels (more than 4 if scaling by more than 2:1) will produce better output still. However, NEON will not help with image downsampling unless the scaling is done by an integer ratio.
The maximum benefit of NEON and other SIMD instruction sets is to be able to process 8 or 16 pixels at once using the same operations. By accessing individual elements the way you are, you lose all the SIMD benefit. Another problem is that moving data from NEON to ARM registers is a slow operation. Downsampling images is best done by a GPU or optimized ARM instructions.

Related

Why is my data in the frequency domain "mirrored" when performing (2d) IDFT into DFT using FFTW?

I am manually initializing a state in the 2d frequency domain by setting the real components of certain modes in a 16x16 data set. I then perform a 2d IDFT to acquire the real domain data. This all works as expected.
I then perform a DFT on the real domain data to get back (what should be) identical frequency modes to those that I manually initialized. However, they come back with their amplitudes halved, and their vertical frequencies "mirrored". To illustrate:
Input modes:
k[1, 0]: 32 + 0i
k[2, 0]: 16 + 0i
k[3, 0]: 8 + 0i
k[4, 0]: 4 + 0i
Output modes after IDFT -> DFT:
k[ 1, 0]: 16 + 0i
k[ 2, 0]: 8 + 0i
k[ 3, 0]: 4 + 0i
k[ 4, 0]: 2 + 0i
k[12, 0]: 2 + 0i
k[13, 0]: 4 + 0i
k[14, 0]: 8 + 0i
k[15, 0]: 16 + 0i
My question is, why are the modes in the output of the DFT not the same as the initial input to the IDFT?
For some extra context, the problem I am having with this is that I am using this data to "solve" the heat equation, and higher frequency signals get scaled down very quickly. So the k[12, 0] to k[15, 0] modes don't actually contribute much after a few time steps.
Code to reproduce problem:
#include <fftw3.h>

#define REAL 0   // fftw_complex is double[2]: [0] = real part
#define IMAG 1   //                            [1] = imaginary part

int N = 16;                          // Dimensions of the data
int logical_width = (N / 2) + 1;     // Logical width of the frequency domain (r2c layout)

double* real = new double[N * N];
fftw_complex* complex = (fftw_complex*)fftw_malloc(sizeof(fftw_complex) * N * logical_width);

fftw_plan plan  = fftw_plan_dft_r2c_2d(N, N, real, complex, FFTW_ESTIMATE);
fftw_plan iplan = fftw_plan_dft_c2r_2d(N, N, complex, real, FFTW_ESTIMATE);

// Initialize all real data to 0
for (int i = 0; i < N * N; i++) {
    real[i] = 0.0;
}

// Initialize all complex data to 0
for (int i = 0; i < N * logical_width; i++) {
    complex[i][REAL] = 0.0;
    complex[i][IMAG] = 0.0;
}

// Set first 4 vertical modes
complex[1 * logical_width][REAL] = 32;
complex[2 * logical_width][REAL] = 16;
complex[3 * logical_width][REAL] = 8;
complex[4 * logical_width][REAL] = 4;

// Print before IDFT -> DFT
printComplex(complex, N);

// IDFT
fftw_execute(iplan);

// DFT back
fftw_execute(plan);

// Print after IDFT -> DFT
printComplex(complex, N, true); // Pass true to divide amplitudes by N*N

// Clean up
fftw_destroy_plan(plan);
fftw_destroy_plan(iplan);
delete[] real;
fftw_free(complex);
The output of the two printComplex(...) calls can be seen in the question above.
You need to read up on the Discrete Fourier Transform.
For a real-valued time domain signal, the DFT has a conjugate symmetry:
F(k) = conj(F(N-k)),
with N the number of samples. By inverse transforming a non-symmetric frequency-domain signal, you obtain a complex-valued time-domain signal, but because you use a complex-to-real transform, only the real part of this result is actually computed. You’re throwing away half the data here. The forward transform then returns the DFT of this transformed signal. Because your time-domain signal is now real-valued, your frequency-domain result has conjugate symmetry.
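If the goal is for the IDFT -> DFT round trip to reproduce the input, the stored half-spectrum itself has to respect that symmetry along the non-halved dimension. A sketch of a symmetric initialization, reusing the question's complex, N, logical_width and REAL (the split amplitudes are illustrative; they should reproduce essentially the same real field the original code was already getting, but now the round trip is exact):
// Give each manually set mode k[k, 0] its conjugate partner k[N-k, 0].
// With purely real coefficients the "conjugate" is just the same value, so the
// logical spectrum is Hermitian, the c2r transform discards nothing, and the
// round trip (after dividing by N*N) returns these modes unchanged.
for (int k = 1; k <= 4; k++) {
    double amp = 32.0 / (1 << (k - 1));                 // 32, 16, 8, 4 as in the question
    complex[k * logical_width][REAL]       = amp / 2;   // half here...
    complex[(N - k) * logical_width][REAL] = amp / 2;   // ...half in the mirrored mode
}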

How to convert scalar code of the double version of VDT's Pade Exp fast_ex() approx into SSE2?

Here's the code I'm trying to convert: the double version of VDT's Pade Exp fast_ex() approx (here's the old repo resource):
inline double fast_exp(double initial_x){
double x = initial_x;
double px=details::fpfloor(details::LOG2E * x +0.5);
const int32_t n = int32_t(px);
x -= px * 6.93145751953125E-1;
x -= px * 1.42860682030941723212E-6;
const double xx = x * x;
// px = x * P(x**2).
px = details::PX1exp;
px *= xx;
px += details::PX2exp;
px *= xx;
px += details::PX3exp;
px *= x;
// Evaluate Q(x**2).
double qx = details::QX1exp;
qx *= xx;
qx += details::QX2exp;
qx *= xx;
qx += details::QX3exp;
qx *= xx;
qx += details::QX4exp;
// e**x = 1 + 2x P(x**2)/( Q(x**2) - P(x**2) )
x = px / (qx - px);
x = 1.0 + 2.0 * x;
// Build 2^n in double.
x *= details::uint642dp(( ((uint64_t)n) +1023)<<52);
if (initial_x > details::EXP_LIMIT)
x = std::numeric_limits<double>::infinity();
if (initial_x < -details::EXP_LIMIT)
x = 0.;
return x;
}
I got this:
__m128d PExpSSE_dbl(__m128d x) {
__m128d initial_x = x;
__m128d half = _mm_set1_pd(0.5);
__m128d one = _mm_set1_pd(1.0);
__m128d log2e = _mm_set1_pd(1.4426950408889634073599);
__m128d p1 = _mm_set1_pd(1.26177193074810590878E-4);
__m128d p2 = _mm_set1_pd(3.02994407707441961300E-2);
__m128d p3 = _mm_set1_pd(9.99999999999999999910E-1);
__m128d q1 = _mm_set1_pd(3.00198505138664455042E-6);
__m128d q2 = _mm_set1_pd(2.52448340349684104192E-3);
__m128d q3 = _mm_set1_pd(2.27265548208155028766E-1);
__m128d q4 = _mm_set1_pd(2.00000000000000000009E0);
__m128d px = _mm_add_pd(_mm_mul_pd(log2e, x), half);
__m128d t = _mm_cvtepi64_pd(_mm_cvttpd_epi64(px));
px = _mm_sub_pd(t, _mm_and_pd(_mm_cmplt_pd(px, t), one));
__m128i n = _mm_cvtpd_epi64(px);
x = _mm_sub_pd(x, _mm_mul_pd(px, _mm_set1_pd(6.93145751953125E-1)));
x = _mm_sub_pd(x, _mm_mul_pd(px, _mm_set1_pd(1.42860682030941723212E-6)));
__m128d xx = _mm_mul_pd(x, x);
px = _mm_mul_pd(xx, p1);
px = _mm_add_pd(px, p2);
px = _mm_mul_pd(px, xx);
px = _mm_add_pd(px, p3);
px = _mm_mul_pd(px, x);
__m128d qx = _mm_mul_pd(xx, q1);
qx = _mm_add_pd(qx, q2);
qx = _mm_mul_pd(xx, qx);
qx = _mm_add_pd(qx, q3);
qx = _mm_mul_pd(xx, qx);
qx = _mm_add_pd(qx, q4);
x = _mm_div_pd(px, _mm_sub_pd(qx, px));
x = _mm_add_pd(one, _mm_mul_pd(_mm_set1_pd(2.0), x));
n = _mm_add_epi64(n, _mm_set1_epi64x(1023));
n = _mm_slli_epi64(n, 52);
// return?
}
But I'm not able to finish the last lines - i.e. this code:
if (initial_x > details::EXP_LIMIT)
x = std::numeric_limits<double>::infinity();
if (initial_x < -details::EXP_LIMIT)
x = 0.;
return x;
How would you convert in SSE2?
Then of course I need to check the whole thing, since I'm not quite sure I've converted it correctly.
EDIT: I found the SSE conversion of float exp - i.e. from this:
/* multiply by power of 2 */
z *= details::uint322sp((n + 0x7f) << 23);
if (initial_x > details::MAXLOGF) z = std::numeric_limits<float>::infinity();
if (initial_x < details::MINLOGF) z = 0.f;
return z;
to this:
n = _mm_add_epi32(n, _mm_set1_epi32(0x7f));
n = _mm_slli_epi32(n, 23);
return _mm_mul_ps(z, _mm_castsi128_ps(n));
Yup, dividing two polynomials can often give you a better tradeoff between speed and precision than one huge polynomial. As long as there's enough work to hide the divpd throughput. (The latest x86 CPUs have pretty decent FP divide throughput. Still bad vs. multiply, but it's only 1 uop so it doesn't stall the pipeline if you use it rarely enough, i.e. mixed with lots of multiplies. Including in the surrounding code that uses exp)
However, _mm_cvtepi64_pd(_mm_cvttpd_epi64(px)); won't work with SSE2. Packed-conversion intrinsics to/from 64-bit integers require AVX512DQ.
To do packed rounding to the nearest integer, ideally you'd use SSE4.1 _mm_round_pd(x, _MM_FROUND_TO_NEAREST_INT |_MM_FROUND_NO_EXC), (or truncation towards zero, or floor or ceil towards -+Inf).
But we don't actually need that.
The scalar code ends up with int n and double px both representing the same numeric value. It uses the bad/buggy floor(val+0.5) idiom instead of rint(val) or nearbyint(val) to round to nearest, and then converts that already-integer double to an int (with C++'s truncation semantics, but that doesn't matter because the double value's already an exact integer.)
With SIMD intrinsics, it appears to be easiest to just convert to 32-bit integer and back.
__m128i n = _mm_cvtpd_epi32( _mm_mul_pd(log2e, x) ); // round to nearest
__m128d px = _mm_cvtepi32_pd( n );
Rounding to int with the desired mode, then converting back to double, is equivalent to double->double rounding and then grabbing an int version of that like the scalar version does. (Because you don't care what happens for doubles too large to fit in an int.)
The cvtpd2dq and cvtdq2pd instructions are 2 uops each, and they produce/consume the 32-bit integers packed in the low 64 bits of a vector. So to set up for 64-bit integer shifts to stuff the bits into a double again, you'll need to shuffle. The top 64 bits of n will be zeros, so we can use that to create 64-bit integer n lined up with the doubles:
n = _mm_shuffle_epi32(n, _MM_SHUFFLE(3,1,2,0)); // 64-bit integers
But with just SSE2, there are workarounds. Converting to 32-bit integer and back is one option: you don't care about inputs too small or too large. But packed-conversion between double and int costs at least 2 uops on Intel CPUs each way, so a total of 4. But only 2 of those uops need the FMA units, and your code probably doesn't bottleneck on port 5 with all those multiplies and adds.
Or add a very large magic number and subtract it again: large enough that the representable doubles in that range are exactly 1 apart, so normal FP rounding does what you want. (This even works for inputs that won't fit in 32 bits, as long as the double isn't > 2^52, so either way would work here.) Also see How to efficiently perform double/int64 conversions with SSE/AVX?, which uses that trick. I couldn't find an example on SO, though.
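A hedged sketch of that magic-number trick (assumes the default round-to-nearest FP mode and magnitudes well below 2^31; v stands for the log2e * x product):
const __m128d round_magic = _mm_set1_pd(6755399441055744.0);    // 1.5 * 2^52
__m128d biased  = _mm_add_pd(v, round_magic);      // forces rounding to a whole number
__m128d rounded = _mm_sub_pd(biased, round_magic); // v rounded to nearest integer, still a double
__m128i bits    = _mm_castpd_si128(biased);        // low 32 bits of each 64-bit lane hold that integer as int32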
Related:
Fastest Implementation of Exponential Function Using AVX and Fastest Implementation of Exponential Function Using SSE have versions with other speed / precision tradeoffs, for _ps (packed single-precision float).
Fast SSE low precision exponential using double precision operations is at the other end of the spectrum, but still for double.
How many clock cycles does cost AVX/SSE exponentiation on modern x86_64 CPU? discusses some existing libraries like SVML, and Agner Fog's VCL (GPL licensed). And glibc's libmvec.
Then of course I need to check the whole thing, since I'm not quite sure I've converted it correctly.
Iterating over all 2^64 double bit-patterns is impractical, unlike for float where there are only 4 billion, but maybe iterating over all doubles that have the low 32 bits of their mantissa all zero would be a good start, i.e. check in a loop with
bitpatterns = _mm_add_epi64(bitpatterns, _mm_set1_epi64x( 1ULL << 32 ));
doubles = _mm_castsi128_pd(bitpatterns);
https://randomascii.wordpress.com/2014/01/27/theres-only-four-billion-floatsso-test-them-all/
For those last few lines, correcting the input for out-of-range inputs:
The float version you quote just leaves out the range-check entirely. This is obviously the fastest way, if your inputs will always be in range or if you don't care about what happens for out-of-range inputs.
Alternate cheaper range-checking (maybe only for debugging) would be to turn out-of-range values into NaN by ORing the packed-compare result into the result. (An all-ones bit-pattern represents a NaN.)
__m128d out_of_bounds = _mm_cmplt_pd( limit, abs(initial_x) ); // abs = mask off the sign bit
result = _mm_or_pd(result, out_of_bounds);
In general, you can vectorize simple condition setting of a value using branchless compare + blend. Instead of if(x) y=0;, you have the SIMD equivalent of y = (condition) ? 0 : y;, on a per-element basis. SIMD compares produce a mask of all-zero / all-one elements so you can use it to blend.
e.g. in this case cmppd the input and blendvpd the output if you have SSE4.1. Or with just SSE2, and/andnot/or to blend. See SSE intrinsics for comparison (_mm_cmpeq_ps) and assignment operation for a _ps version of both, _pd is identical.
In asm it will look like this:
; result in xmm0 (in need of fixups for out of range inputs)
; initial_x in xmm2
; constants:
; xmm5 = limit
; xmm6 = +Inf
cmpltpd xmm2, xmm5 ; xmm2 = input_x < limit ? 0xffff... : 0
andpd xmm0, xmm2 ; result = result or 0
andnpd xmm2, xmm6 ; xmm2 = 0 or +Inf (In that order because we used ANDN)
orpd xmm0, xmm2 ; result |= 0 or +Inf
; xmm0 = (input < limit) ? result : +Inf
(In an earlier version of the answer, I thought I was maybe saving a movaps to copy a register, but this is just a bog-standard blend. It destroys initial_x, so the compiler needs to copy that register at some point while calculating result, though.)
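The same blend written with intrinsics might look like this (a sketch; limit and inf stand for the xmm5 / xmm6 constants above):
__m128d in_range = _mm_cmplt_pd(initial_x, limit);          // all-ones where initial_x < limit
result = _mm_and_pd(result, in_range);                      // keep result, or zero it out
result = _mm_or_pd(result, _mm_andnot_pd(in_range, inf));   // merge in +Inf where out of range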
Optimizations for this special condition
Or in this case, 0.0 is represented by an all-zero bit-pattern, so do a compare that will produce true if in-range, and AND the output with that. (To leave it unchanged or force it to +0.0). This is better than _mm_blendv_pd, which costs 2 uops on most Intel CPUs (and the AVX 128-bit version always costs 2 uops on Intel). And it's not worse on AMD or Skylake.
+-Inf is represented by a bit-pattern with significand = 0 and exponent = all-ones (any non-zero significand with that exponent is a NaN). Since the result computed for too-large inputs will presumably still have non-zero significand bits, we can't just AND the compare result and OR that into the final result. I think we need to do a regular blend, or something as expensive (3 uops and a vector constant).
It adds 2 cycles of latency to the final result; both the ANDPD and ORPD are on the critical path. The CMPPD and ANDNPD aren't; they can run in parallel with whatever you do to compute the result.
Hopefully your compiler will actually use ANDPS and so on, not PD, for everything except the CMP, because it's 1 byte shorter and otherwise identical, since they're both just bitwise ops. I wrote ANDPD just so I didn't have to explain this in comments.
You might be able to shorten the critical path latency by combining both fixups before applying to the result, so you only have one blend. But then I think you also need to combine the compare results.
Or since your upper and lower bounds are the same magnitude, maybe you can compare the absolute value? (mask off the sign bit of initial_x and do _mm_cmplt_pd(abs_initial_x, _mm_set1_pd(details::EXP_LIMIT))). But then you have to sort out whether to zero or set to +Inf.
If you had SSE4.1 for _mm_blendv_pd, you could use initial_x itself as the blend control for the fixup that might need applying, because blendv only cares about the sign bit of the blend control (unlike with the AND/ANDN/OR version where all bits need to match.)
__m128d fixup = _mm_blendv_pd( _mm_set1_pd(INFINITY), _mm_setzero_pd(), initial_x ); // fixup = (initial_x signbit set) ? 0. : +Inf
// see below for generating fixup with an SSE2 integer shift instead
const __m128d signbit_mask = _mm_castsi128_pd(_mm_set1_epi64x(0x7fffffffffffffffLL)); // == ~set1(-0.0)
__m128d abs_init_x = _mm_and_pd( initial_x, signbit_mask );
__m128d out_of_range = _mm_cmpgt_pd(abs_init_x, _mm_set1_pd(details::EXP_LIMIT));
// Conditionally apply the fixup to result
result = _mm_blendv_pd(result, fixup, out_of_range);
Possibly use cmplt instead of cmpgt and rearrange if you care what happens for initial_x being a NaN. Choosing the compare so false applies the fixup instead of true will mean that an unordered comparison results in either 0 or +Inf for an input of -NaN or +NaN. This still doesn't do NaN propagation. You could _mm_cmpunord_pd(initial_x, initial_x) and OR that into fixup, if you want to make that happen.
Especially on Skylake and AMD Bulldozer/Ryzen, where the non-VEX encoding of blendvpd is only 1 uop, this should be pretty nice. (The VEX encoding, vblendvpd, is 2 uops, having 3 inputs and a separate output.)
You might still be able to use some of this idea with only SSE2, maybe creating fixup by doing a compare against zero and then _mm_and_pd or _mm_andnot_pd with the compare result and +Infinity.
Using an integer arithmetic shift to broadcast the sign bit to every position in the double isn't efficient: psraq doesn't exist, only psraw/d. Only logical shifts come in 64-bit element size.
But you could create fixup with just one integer shift, a mask, and an XOR that flips the exponent bits:
__m128i ix = _mm_castpd_si128(initial_x);
__m128i ifixup = _mm_srai_epi32(ix, 11);   // the whole 11-bit exponent field (high dword) becomes copies of the sign bit
ifixup = _mm_and_si128(ifixup, _mm_set1_epi64x(0x7FF0000000000000ULL));   // clear the other bits
ifixup = _mm_xor_si128(ifixup, _mm_set1_epi64x(0x7FF0000000000000ULL));   // flip: bit pattern for 0 (negative x) or +Inf (non-negative x)
__m128d fixup = _mm_castsi128_pd(ifixup);
Then blend fixup into result for out-of-range inputs as normal.
Cheaply checking abs(initial_x) > details::EXP_LIMIT
If the exp algorithm was already squaring initial_x, you could compare against EXP_LIMIT squared. But it's not, xx = x*x only happens after some calculation to create x.
If you have AVX512F/VL, VFIXUPIMMPD might be handy here. It's designed for functions where the special case outputs are from "special" inputs like NaN and +-Inf, negative, positive, or zero, saving a compare for those cases. (e.g. for after a Newton-Raphson reciprocal(x) for x=0.)
But both of your special cases need compares. Or do they?
If you square your input and subtract, it only costs one FMA to do initial_x * initial_x - details::EXP_LIMIT * details::EXP_LIMIT to create a result that's negative for abs(initial_x) < details::EXP_LIMIT, and non-negative otherwise.
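With SSE2 there's no FMA, so that squared check is one multiply and one subtract; a hedged sketch:
__m128d limit_sq     = _mm_set1_pd(details::EXP_LIMIT * details::EXP_LIMIT);
__m128d diff         = _mm_sub_pd(_mm_mul_pd(initial_x, initial_x), limit_sq);
__m128d out_of_range = _mm_cmpge_pd(diff, _mm_setzero_pd());   // true where |initial_x| >= EXP_LIMIT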
Agner Fog reports that vfixupimmpd is only 1 uop on Skylake-X.

Using OpenMP to for optimizing BiLinear Interpolation

I'm working on ARM and trying to optimize downsampling an image. I have used OpenCV cv::resize, and it's slow (~3 ms for 1280x960 to 400x300), so I'm trying to use OpenMP to accelerate it. However, after adding the parallel for directive the image comes out distorted. I know this is related to private variables and data shared between threads, but I can't find the problem.
void resizeBilinearGray(uint8_t *pixels, uint8_t *temp, int w, int h, int w2, int h2) {
    int A, B, C, D, x, y, index, gray;
    float x_ratio = ((float)(w-1))/w2;
    float y_ratio = ((float)(h-1))/h2;
    float x_diff, y_diff;
    int offset = 0;
    #pragma omp parallel for
    for (int i=0;i<h2;i++) {
        for (int j=0;j<w2;j++) {
            x = (int)(x_ratio * j);
            y = (int)(y_ratio * i);
            x_diff = (x_ratio * j) - x;
            y_diff = (y_ratio * i) - y;
            index = y*w+x;
            // range is 0 to 255 thus bitwise AND with 0xff
            A = pixels[index] & 0xff;
            B = pixels[index+1] & 0xff;
            C = pixels[index+w] & 0xff;
            D = pixels[index+w+1] & 0xff;
            // Y = A(1-w)(1-h) + B(w)(1-h) + C(h)(1-w) + Dwh
            gray = (int)(
                A*(1-x_diff)*(1-y_diff) + B*(x_diff)*(1-y_diff) +
                C*(y_diff)*(1-x_diff) + D*(x_diff*y_diff)
            );
            temp[offset++] = gray;
        }
    }
}
Why don't you try replacing temp[offset++] with temp[i*w2 + j]?
Your offset has multiple problems. For one, it has a race condition. But worse, OpenMP is assigning very different i and j values to each thread, so they are accessing non-adjacent parts of memory. That's why your image is distorted.
Besides OpenMP, there are several other ways to speed up your code that you could try. I don't know ARM, but on Intel you can get a big speedup with SSE. Additionally, you could try fixed-point arithmetic. I have found speedups with both in bilinear interpolation.
fastcpp.blogspot.no/2011/06/bilinear-pixel-interpolation-using-sse.html
I think your problem is the offset variable. Since many threads can work at the same time, you never know which thread will update offset first. This is why the resulting image is distorted.
A better strategy is to iterate over the resulting image pixels. For each resulting pixel, you find the coordinates of the source image pixels, perform the interpolation, and write the result. This way, you are sure each thread works on different pixels, and on the right pixel.
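Putting both answers together, a sketch (under the same assumptions as the original code) with the destination index computed from i and j, and all per-pixel temporaries declared inside the loop body so each thread gets its own private copies:
#include <cstdint>

void resizeBilinearGray(uint8_t *pixels, uint8_t *temp, int w, int h, int w2, int h2) {
    float x_ratio = ((float)(w - 1)) / w2;
    float y_ratio = ((float)(h - 1)) / h2;
    #pragma omp parallel for
    for (int i = 0; i < h2; i++) {
        for (int j = 0; j < w2; j++) {
            // declared inside the loop => private to each thread
            int x = (int)(x_ratio * j);
            int y = (int)(y_ratio * i);
            float x_diff = (x_ratio * j) - x;
            float y_diff = (y_ratio * i) - y;
            int index = y * w + x;
            int A = pixels[index] & 0xff;
            int B = pixels[index + 1] & 0xff;
            int C = pixels[index + w] & 0xff;
            int D = pixels[index + w + 1] & 0xff;
            int gray = (int)(A * (1 - x_diff) * (1 - y_diff) + B * x_diff * (1 - y_diff) +
                             C * y_diff * (1 - x_diff) + D * x_diff * y_diff);
            temp[i * w2 + j] = gray;   // index derived from i and j: no shared counter
        }
    }
}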

Optimized float Blur variations

I am looking for optimized functions in C++ for calculating areal averages of floats. The function is passed a source float array, a destination float array (the same size as the source array), the array width and height, and the "blurring" area width and height.
The function should "wrap-around" edges for the blurring/averages calculations.
Here is example code that blurs with a rectangular shape:
/*****************************************
* Find averages extended variations
*****************************************/
void findaverages_ext(float *floatdata, float *dest_data, int fwidth, int fheight, int scale, int aw, int ah, int weight, int xoff, int yoff)
{
printf("findaverages_ext scale: %d, width: %d, height: %d, weight: %d \n", scale, aw, ah, weight);
float total = 0.0;
int spos = scale * fwidth * fheight;
int apos;
int w = aw;
int h = ah;
float* f_temp = new float[fwidth * fheight];
// Horizontal
for(int y=0;y<fheight ;y++)
{
Sleep(10); // Do not burn your processor
total = 0.0;
// Process entire window for first pixel (including wrap-around edge)
for (int kx = 0; kx <= w; ++kx)
if (kx >= 0 && kx < fwidth)
total += floatdata[y*fwidth + kx];
// Wrap
for (int kx = (fwidth-w); kx < fwidth; ++kx)
if (kx >= 0 && kx < fwidth)
total += floatdata[y*fwidth + kx];
// Store first window
f_temp[y*fwidth] = (total / (w*2+1));
for(int x=1;x<fwidth ;x++) // x width changes with y
{
// Subtract pixel leaving window
if (x-w-1 >= 0)
total -= floatdata[y*fwidth + x-w-1];
// Add pixel entering window
if (x+w < fwidth)
total += floatdata[y*fwidth + x+w];
else
total += floatdata[y*fwidth + x+w-fwidth];
// Store average
apos = y * fwidth + x;
f_temp[apos] = (total / (w*2+1));
}
}
// Vertical
for(int x=0;x<fwidth ;x++)
{
Sleep(10); // Do not burn your processor
total = 0.0;
// Process entire window for first pixel
for (int ky = 0; ky <= h; ++ky)
if (ky >= 0 && ky < fheight)
total += f_temp[ky*fwidth + x];
// Wrap
for (int ky = fheight-h; ky < fheight; ++ky)
if (ky >= 0 && ky < fheight)
total += f_temp[ky*fwidth + x];
// Store first if not out of bounds
dest_data[spos + x] = (total / (h*2+1));
for(int y=1;y< fheight ;y++) // y width changes with x
{
// Subtract pixel leaving window
if (y-h-1 >= 0)
total -= f_temp[(y-h-1)*fwidth + x];
// Add pixel entering window
if (y+h < fheight)
total += f_temp[(y+h)*fwidth + x];
else
total += f_temp[(y+h-fheight)*fwidth + x];
// Store average
apos = y * fwidth + x;
dest_data[spos+apos] = (total / (h*2+1));
}
}
delete[] f_temp;
}
What I need are similar functions that, for each pixel, find the average (blur) of pixels over shapes other than rectangular.
The specific shapes are: "S" (sharp edges), "O" (rectangular but hollow), "+" and "X", where the average float is stored at the center pixel of the destination data array. The size of the blur shape should be variable, in width and height.
The functions do not need to be pixel-perfect, only optimized for performance. There could be separate functions for each shape.
I would also be happy if anyone could give me tips on how to optimize the example function above for rectangular blurring.
What you are trying to implement are various sorts of digital filters for image processing. This is equivalent to convolving two signals, where the second one is the filter's impulse response. So far, you recognized that a "rectangular average" is separable. By separable I mean you can split the filter into two parts: one that operates along the X axis and one that operates along the Y axis, in each case a 1D filter. This is nice and can save you lots of cycles. But not every filter is separable. Averaging along other shapes (S, O, +, X) is not separable; you need to actually compute a 2D convolution for these.
As for performance, you can speed up your 1D averages by properly implementing a "moving average". A proper "moving average" implementation only requires a fixed amount of little work per pixel regardless of the averaging "window". This can be done by recognizing that neighbouring pixels of the target image are computed by an average of almost the same pixels. You can reuse these sums for the neighbouring target pixel by adding one new pixel intensity and subtracting an older one (for the 1D case).
In case of arbitrary non-separable filters your best bet performance-wise is "fast convolution", which is FFT-based. Check out www.dspguide.com. If I recall correctly, there is even a chapter on how to properly do "fast convolution" using the FFT algorithm. Although they explain it for 1-dimensional signals, it also applies to 2-dimensional signals; for images you have to perform 2D FFT/IFFT transforms.
To add to sellibitze's answer, you can use a summed area table for your O, S and + kernels (not for the X one though). That way you can convolve a pixel in constant time, and it's probably the fastest method to do it for kernel shapes that allow it.
Basically, a SAT is a data structure that lets you calculate the sum of any axis-aligned rectangle. For the O kernel, after you've built a SAT, you'd take the sum of the outer rect's pixels and subtract the sum of the inner rect's pixels. The S and + kernels can be implemented similarly.
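To illustrate the idea, a hedged sketch of a summed-area table and a rectangle query (names are illustrative; it ignores the wrap-around requirement from the question, which you'd handle by padding the source or splitting wrapped rectangles):
#include <vector>

// Build a (fwidth+1) x (fheight+1) SAT so any axis-aligned rectangle sum is 4 lookups.
std::vector<double> buildSAT(const float* src, int fwidth, int fheight)
{
    std::vector<double> sat((fwidth + 1) * (fheight + 1), 0.0);
    for (int y = 0; y < fheight; ++y)
        for (int x = 0; x < fwidth; ++x)
            sat[(y + 1) * (fwidth + 1) + (x + 1)] =
                src[y * fwidth + x]
                + sat[y * (fwidth + 1) + (x + 1)]   // above
                + sat[(y + 1) * (fwidth + 1) + x]   // left
                - sat[y * (fwidth + 1) + x];        // above-left (counted twice)
    return sat;
}

// Sum of the half-open rectangle [x0, x1) x [y0, y1)
double rectSum(const std::vector<double>& sat, int fwidth, int x0, int y0, int x1, int y1)
{
    return sat[y1 * (fwidth + 1) + x1] - sat[y0 * (fwidth + 1) + x1]
         - sat[y1 * (fwidth + 1) + x0] + sat[y0 * (fwidth + 1) + x0];
}
// O kernel at one pixel: rectSum(outer rect) - rectSum(inner rect), divided by the pixel count.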
For the X kernel you can use a different approach: a skewed box filter is separable.
You can convolve with two long, thin skewed box filters, then add the two resulting images together. The center of the X will be counted twice, so you will need to convolve with another skewed box filter and subtract that.
Apart from that, you can optimize your box blur in many ways.
Remove the two ifs from the inner loop by splitting that loop into three loops - two short loops that do checks, and one long loop that doesn't. Or you could pad your array with extra elements from all directions - that way you can simplify your code.
Calculate values like h * 2 + 1 outside the loops.
An expression like f_temp[ky*fwidth + x] does two adds and one multiplication. You can initialize a pointer to &f_temp[ky*fwidth] outside the loop, and just increment that pointer in the loop (see the sketch after this list).
Don't do the division by h * 2 + 1 in the horizontal step. Instead, divide by the square of that in the vertical step.
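As a concrete illustration of the last two points, a hedged sketch of the horizontal pass with the row pointers hoisted out of the inner loop and the division deferred entirely to the vertical pass (which then divides by (w*2+1)*(h*2+1)). It assumes w < fwidth; the modulo is just to keep the sketch short, the loop-splitting or padding from the first point would be faster:
for (int y = 0; y < fheight; ++y)
{
    const float* row = &floatdata[y * fwidth];   // one multiply per row, not per pixel
    float*       out = &f_temp[y * fwidth];

    float total = 0.0f;                          // initial window, including wrap-around
    for (int kx = -w; kx <= w; ++kx)
        total += row[(kx + fwidth) % fwidth];
    out[0] = total;

    for (int x = 1; x < fwidth; ++x)             // slide the window
    {
        total -= row[(x - w - 1 + fwidth) % fwidth];
        total += row[(x + w) % fwidth];
        out[x] = total;                          // no divide here
    }
}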

algorithms for modular inverses

I have read a section about the Extended Euclidean Algorithm & modular inverses, which states that it not only computes gcd(n,m) but also a and b such that a*n + b*m = gcd(n,m).
The algorithm is described this way:
1. Write down n, m, and the two vectors (1,0) and (0,1)
2. Divide the larger of the two numbers by the smaller - call this quotient q
3. Subtract q times the smaller from the larger (i.e. reduce the larger modulo the smaller)
(I have a question here: if we denote q = n/m, isn't n - q*m then equal to 0, because q = n/m (assuming n > m)? So why is this operation necessary?)
Then step 4:
4. Subtract q times the vector corresponding to the smaller from the vector corresponding to the larger
5. Repeat steps 2 through 4 until the result is zero
6. Publish the preceding result as gcd(n,m)
So my question is also: how can I implement these steps in code? Please help me, I don't know where to start or from which point I could approach this problem. To clarify the expected result, it should look like this:
An example of this algorithm is the following computation of 30^(-1) (mod 53):
53 30 (1,0) (0,1)
53-1*30=23 30 (1,0)-1*(0,1)=(1,-1) (0,1)
23 30-1*23=7 (1,-1) (0,1)-1*(1,-1)=(-1,2)
23-3*7=2 7 (1,-1)-3*(-1,2)=(4,-7) (-1,2)
2 7-3*2=1 (4,-7) (-1,2)-3*(4,-7)=(-13,23)
2-2*1=0 1 (4,-7)-2*(-13,23)=(30,-53) (-13,23)
From this we see that gcd(30,53)=1 and, rearranging terms, we see that 1=-13*53+23*30,
so we conclude that 30^(-1) = 23 (mod 53).
The division is supposed to be integer division with truncation. The standard EA for gcd(a, b) with a <= b goes like this:
b = a * q0 + r0
a = r0 * q1 + r1
r0 = r1 * q2 + r2
...
r[N+1] = 0
Now r[N] is the desired GCD. Then you back-substitute:
r[N-1] = r[N] * q[N+1]
r[N-2] = r[N-1] * q[N] + r[N]
= (r[N] * q[N+1]) * q[N] + r[N]
= r[N] * (q[N+1] * q[N] + 1)
r[N-3] = r[N-2] * q[N-1] + r[N-1]
= ... <substitute> ...
Until you finally reach r[N] = m * a + n * b. The algorithm you describe keeps track of the backtracking data right away, so it's a bit more efficient.
If r[N] == gcd(a, b) == 1, then you have indeed found the multiplicative inverse of a modulo b, namely m: (a * m) % b == 1.
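To answer the "how can I implement these steps in code" part, here is a sketch in C++ of the same tabular procedure (iterative extended Euclid); the variable names are mine:
#include <cstdint>
#include <utility>

// Returns gcd(n, m) and fills a, b with a*n + b*m == gcd(n, m).
int64_t extended_gcd(int64_t n, int64_t m, int64_t& a, int64_t& b)
{
    // (a0, b0) are the coefficients of the larger/current remainder, (a1, b1)
    // of the other one -- exactly the two vectors (1,0) and (0,1) from the description.
    int64_t r0 = n, r1 = m;
    int64_t a0 = 1, b0 = 0;
    int64_t a1 = 0, b1 = 1;
    while (r1 != 0) {
        int64_t q = r0 / r1;                  // integer division with truncation
        r0 -= q * r1;  std::swap(r0, r1);     // reduce the larger modulo the smaller
        a0 -= q * a1;  std::swap(a0, a1);     // same update on the coefficient vectors
        b0 -= q * b1;  std::swap(b0, b1);
    }
    a = a0;  b = b0;
    return r0;                                // the preceding (last non-zero) remainder
}

// Modular inverse of x mod p, valid when gcd(x, p) == 1:
int64_t mod_inverse(int64_t x, int64_t p)
{
    int64_t a, b;
    extended_gcd(x, p, a, b);                 // a*x + b*p == 1
    return ((a % p) + p) % p;                 // e.g. mod_inverse(30, 53) == 23, matching the worked example
}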