I'm working on ARM and trying to optimize downsampling an image. I have used OpenCV's cv::resize, but it is slow (~3 ms for 1280*960 to 400*300). I'm trying to use OpenMP to accelerate it; however, after adding the parallel for statement, the image comes out distorted. I know this is related to private variables and data shared between the threads, but I can't find the problem.
void resizeBilinearGray(uint8_t *pixels, uint8_t *temp, int w, int h, int w2, int h2) {
    int A, B, C, D, x, y, index, gray;
    float x_ratio = ((float)(w - 1)) / w2;
    float y_ratio = ((float)(h - 1)) / h2;
    float x_diff, y_diff;
    int offset = 0;
    #pragma omp parallel for
    for (int i = 0; i < h2; i++) {
        for (int j = 0; j < w2; j++) {
            x = (int)(x_ratio * j);
            y = (int)(y_ratio * i);
            x_diff = (x_ratio * j) - x;
            y_diff = (y_ratio * i) - y;
            index = y * w + x;
            // range is 0 to 255, thus bitwise AND with 0xff
            A = pixels[index] & 0xff;
            B = pixels[index + 1] & 0xff;
            C = pixels[index + w] & 0xff;
            D = pixels[index + w + 1] & 0xff;
            // Y = A(1-w)(1-h) + B(w)(1-h) + C(h)(1-w) + Dwh
            gray = (int)(
                A * (1 - x_diff) * (1 - y_diff) + B * (x_diff) * (1 - y_diff) +
                C * (y_diff) * (1 - x_diff) + D * (x_diff * y_diff)
            );
            temp[offset++] = gray;
        }
    }
}
Why don't you try replacing temp[offset++] with temp[i*w2 + j]?
Your offset has multiple problems. For one, it has a race condition. Worse, OpenMP assigns very different i and j values to each thread, so the threads read and write non-adjacent parts of memory. That's why your image is distorted.
Besides OpenMP, there are several other ways to speed up your code. I don't know ARM, but on Intel you can get a big speedup with SSE. You could also try fixed-point arithmetic. I have seen speedups from both in bilinear interpolation.
fastcpp.blogspot.no/2011/06/bilinear-pixel-interpolation-using-sse.html
I think your problem is the offset variable. Since many threads run at the same time, you never know which thread will update offset first. This is why the resulting image is distorted.
A better strategy is to iterate over the pixels of the resulting image. For each resulting pixel, you find the coordinates of the source image pixels, perform the interpolation, and write the result. This way, you are sure each thread works on a different pixel, and on the right pixel.
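For example, here is a minimal sketch of the same loop with every per-pixel variable declared inside the parallel region (so each thread gets its own copy) and the racy offset++ replaced by a deterministic index, reusing the question's names:

void resizeBilinearGray(uint8_t *pixels, uint8_t *temp, int w, int h, int w2, int h2) {
    float x_ratio = ((float)(w - 1)) / w2;
    float y_ratio = ((float)(h - 1)) / h2;
    #pragma omp parallel for
    for (int i = 0; i < h2; i++) {
        for (int j = 0; j < w2; j++) {
            // All per-pixel state is local, so the threads share nothing
            // but the read-only input and the disjoint output regions.
            int x = (int)(x_ratio * j);
            int y = (int)(y_ratio * i);
            float x_diff = (x_ratio * j) - x;
            float y_diff = (y_ratio * i) - y;
            int index = y * w + x;
            int A = pixels[index] & 0xff;
            int B = pixels[index + 1] & 0xff;
            int C = pixels[index + w] & 0xff;
            int D = pixels[index + w + 1] & 0xff;
            int gray = (int)(
                A * (1 - x_diff) * (1 - y_diff) + B * x_diff * (1 - y_diff) +
                C * y_diff * (1 - x_diff) + D * (x_diff * y_diff));
            temp[i * w2 + j] = gray;  // deterministic index replaces offset++
        }
    }
}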
I am writing a program in C++ to reconstruct a 3D object from a set of projected 2D images; the most computation-intensive part involves magnifying and shifting each image via bilinear interpolation. I currently have a pair of functions for this task: "blnSetup" defines a handful of parameters outside the loop, then "bilinear" applies the interpolation point by point within the loop:
(NOTE: 'I' is a 1D array containing ordered rows of image data)
// Pre-definition structure (in header)
struct blnData {
    float* X;
    float* Y;
    int* I;
    float X0;
    float Y0;
    float delX;
    float delY;
};
// Pre-definition function (outside the FOR loop)
extern inline blnData blnSetup(float* X, float* Y, int* I)
{
    blnData bln;
    // Create pointers to X, Y, and I vectors
    bln.X = X;
    bln.Y = Y;
    bln.I = I;
    // Store offset and step values for X and Y
    bln.X0 = X[0];
    bln.delX = X[1] - X[0];
    bln.Y0 = Y[0];
    bln.delY = Y[1] - Y[0];
    return bln;
}
// Main interpolation function (inside the FOR loop)
extern inline float bilinear(float x, float y, blnData bln)
{
    float Ixy;
    // Return 0 if the target point is outside the image matrix
    if (x < bln.X[0] || x > bln.X[-1] || y < bln.Y[0] || y > bln.Y[-1])
        Ixy = 0;
    // Otherwise, apply bilinear interpolation
    else
    {
        // Define known image width
        int W = 200;
        // Find nearest indices for interpolation
        int i = floor((x - bln.X0) / bln.delX);
        int j = floor((y - bln.Y0) / bln.delY);
        // Interpolate I at (xi, yj)
        Ixy = 1 / ((bln.X[i + 1] - bln.X[i]) * (bln.Y[j + 1] - bln.Y[j])) *
            (
                bln.I[W * j + i] * (bln.X[i + 1] - x) * (bln.Y[j + 1] - y) +
                bln.I[W * j + i + 1] * (x - bln.X[i]) * (bln.Y[j + 1] - y) +
                bln.I[W * (j + 1) + i] * (bln.X[i + 1] - x) * (y - bln.Y[j]) +
                bln.I[W * (j + 1) + i + 1] * (x - bln.X[i]) * (y - bln.Y[j])
            );
    }
    return Ixy;
}
EDIT: The function calls are below. 'flat.imgdata' is a std::vector containing the input image data and 'proj.imgdata' is a std::vector containing the transformed image.
int Xs = flat.dim[0];
int Ys = flat.dim[1];
int* Iarr = flat.imgdata.data();
float II, x, y;
bln = blnSetup(X, Y, Iarr);
for (int j = 0; j < flat.imgdata.size(); j++)
{
    x = 1.2 * X[j % Xs];
    y = 1.2 * Y[j / Xs];
    II = bilinear(x, y, bln);
    proj.imgdata[j] = (int)II;
}
Since I started optimizing, I have been able to reduce computation time by ~50x (!) by switching from std::vector to C arrays within the interpolation function, and by another 2x or so by cleaning up redundant computations/typecasting/etc., but assuming O(n) with n being the total number of processed pixels, the full reconstruction (~7e10 pixels) should still take 40 minutes or so--about an order of magnitude longer than my goal of <5 minutes.
According to Visual Studio's performance profiler, the interpolation function call ("II = bilinear(x, y, bln);") is unsurprisingly still the majority of my computation load. I haven't been able to find any linear algebraic methods for fast multiple interpolation, so my question is: is this basically as fast as my code will get, short of applying more or faster CPUs to the task? Or is there a different approach that might speed things up?
P.S. I've also only been coding in C++ for about a month now, so feel free to point out any beginner mistakes I might be making.
I wrote up a long answer suggesting looking at OpenCV (opencv.org), or using Halide (http://halide-lang.org/), and getting into how image warping is optimized, but I think a shorter answer might serve better. If you are really just scaling and translating entire images, OpenCV has code to do that and we have an example for resizing in Halide as well (https://github.com/halide/Halide/blob/master/apps/resize/resize.cpp).
If you really have an algorithm that needs to index an image using floating-point coordinates which result from a computation that cannot be turned into a moderately simple function on integer coordinates, then you really want to be using filtered texture sampling on a GPU. Most techniques for optimizing on the CPU rely on exploiting some regular pattern of access in the algorithm and removing float to integer conversion from the addressing. (For resizing, one uses two integer variables, one which indexes the pixel coordinate of the image and the other which is the fractional part of the coordinate and it indexes a kernel of weights.) If this is not possible, the speedups are somewhat limited on CPUs. OpenCV does provide fairly general remapping support, but it likely isn't all that fast.
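As an illustration of that integer/fraction split, here is a minimal sketch of fixed-point addressing for one row of a horizontal resize; the 16.16 split, the names, and the two-tap linear weights are my assumptions, not OpenCV's or Halide's implementation:

// Hypothetical fixed-point (16.16) horizontal resample of one row.
// 'src' and 'dst' are single rows of int pixels; no float->int
// conversion happens inside the per-pixel loop.
void resampleRow(const int* src, int srcW, int* dst, int dstW)
{
    const int64_t step = ((int64_t)srcW << 16) / dstW; // source advance per dest pixel
    int64_t pos = 0;
    for (int x = 0; x < dstW; ++x, pos += step)
    {
        int xi   = (int)(pos >> 16);              // integer source coordinate
        int frac = (int)(pos & 0xFFFF);           // fractional part in [0, 65536)
        int x1   = (xi + 1 < srcW) ? xi + 1 : xi; // clamp at the right edge
        // Two-tap linear blend in pure integer arithmetic.
        dst[x] = (int)(((int64_t)src[xi] * (65536 - frac)
                      + (int64_t)src[x1] * frac) >> 16);
    }
}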
Two optimizations that may be applicable here are moving the boundary condition out of the loop and using a two-pass approach in which the horizontal and vertical dimensions are processed separately. The latter may or may not win, and will require tiling the data to fit in cache if the images are very large. Tiling in general is pretty important for large images, but it isn't clear it is the first-order performance problem here, and depending on the values in the inputs, the cache behavior may not be regular enough anyway.
"vector 50x slower than array". That's a dead giveaway you're in debug mode, where vector::operator[] is not inlined. You will probably get the necessary speedup, and a lot more, simply by switching to release mode.
As a bonus, vector has a .back() method, so you have a proper replacement for that [-1]. A pointer to the beginning of an array doesn't carry the array's size, so you can't find the back of the array that way.
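For instance, if blnData held std::vector<float> members instead of raw pointers, the bounds test could become (a sketch, assuming that change):

// Sketch: bounds test with vectors, assuming bln.X and bln.Y are
// std::vector<float> rather than float*.
if (x < bln.X.front() || x > bln.X.back() ||
    y < bln.Y.front() || y > bln.Y.back())
    return 0.0f;  // outside the image grid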
I am calculating the mean value of multiple square regions inside an image in C++. I shift a square region over the image and compute the mean for each position. I started with the OpenCV "mean" function but replaced it with a plain mean calculation (see below), which was unexpectedly faster. Nevertheless, it takes ~8 ms on an Android device, since the mean calculation is called about 400 times (each call takes ~0.025 ms).
uchar rectSize = 10;
Rect roi(0, 0, rectSize, rectSize);
int pxNumber = rectSize * rectSize;
uchar value;
uchar* p; // declared here so the snippet compiles
//Shifting the region to the bottom
for (uchar y = 0; y < NumberOfRectangles_Y; y++)
{
    p = outBitMat.ptr<uchar>(y);
    roi.x = rectSize;
    //Shifting the region to the right
    for (uchar x = 0; x < NumberOfRectangles_X; x++, ++p)
    {
        meanCalc(normalized(roi), rectSize, pxNumber, value);
        roi.x += rectSize;
    }
    roi.y += rectSize;
}
void meanCalc(const cv::Mat& normalized, uchar& rectSize, int& pxNumber, uchar& value)
{
    int sum = 0;      // declared here so the snippet compiles
    const uchar* p;
    for (uchar y = 0; y < rectSize; y++)
    {
        p = normalized.ptr<uchar>(y);
        for (uchar x = 0; x < rectSize; x++, ++p)
        {
            sum += *p;
        }
    }
    value = sum / (float)pxNumber;
}
Is there a way to speed up this mean calculation of each rectangular window inside the image? Can I do some kind of per-pixel pre-ordering so the mean is calculated only once, and faster?
Thanks in advance
Update
Based on the answer of user 6502 and the use of a sum table, I arrived at the following:
Mat tab;
integral(image, tab);
int* input = (int*)(tab.data);
value = (input[yStart * tabWidth + xStart] + input[(yStart + rectSize) * tabWidth + xStart + rectSize]
       - input[yStart * tabWidth + xStart + rectSize] - input[(yStart + rectSize) * tabWidth + xStart]) / (double)pxNumber;
However, this function needs almost the same time. Could it be that the sum table is only useful when calculating a lot of overlapping regions? In my case, each pixel is touched for a calculation only once.
You can compute the mean over any rectangle in constant time (independently of the rectangle size) by pre-computing a "summed area table".
You need to compute a table where the element (i, j) is the sum of all original data in the rectangle from (0, 0) to (i, j) and this can be done in a single pass over the data.
Once you have the table the sum of values between (x0, y0) and (x1, y1) can be computed in constant time with:
tab(x0, y0) + tab(x1, y1) - tab(x0, y1) - tab(x1, y0)
To understand how the algorithm works it's easier to think first to the one-dimensional case: to compute in constant time the sum of values from v[x0] to v[x1] you can pre-compute a table with
st[0] = v[0];
for (int i=1; i<n; i++) st[i] = st[i-1] + v[i];
and then you can take the difference st[x1] - st[x0] to know the sum of any interval of the original data.
The algorithm can indeed be easily extended to n dimensions.
Moreover, though it may not be evident at first sight, the precomputation of the summed area table can be implemented for a multi-core architecture to take advantage of parallel execution.
For 2D, a simple decomposition is to observe that computing the 2D sum table is the same as computing a 1D sum table on each row and then computing a 1D sum table for each column of the result.
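As a sketch of that decomposition (the types, the inclusive-sum convention, and the row-major layout are my assumptions, not from the question):

#include <vector>
#include <cstdint>

// Build a 2D summed area table in two 1D passes: prefix sums along each
// row, then prefix sums down each column of the intermediate result.
// Rows are independent in the first pass and columns in the second, which
// is what makes the precomputation parallelizable.
std::vector<uint32_t> makeSAT(const uint8_t* img, int w, int h)
{
    std::vector<uint32_t> tab(w * h);
    for (int y = 0; y < h; ++y) {          // 1D sum table on each row
        uint32_t run = 0;
        for (int x = 0; x < w; ++x) {
            run += img[y * w + x];
            tab[y * w + x] = run;
        }
    }
    for (int x = 0; x < w; ++x)            // 1D sum table down each column
        for (int y = 1; y < h; ++y)
            tab[y * w + x] += tab[(y - 1) * w + x];
    return tab;
}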
You can use an optimized version of your function meanCalc from the Simd Library:
SIMD_API void SimdValueSum(const uint8_t * src, size_t stride,
    size_t width, size_t height, uint64_t * sum);
It is much faster because it uses SIMD extensions such as SSE and AVX.
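A hypothetical call for one rectSize x rectSize window might look like this; only the SimdValueSum signature above is from the library, while the ROI pointer arithmetic is my sketch:

// Sketch: sum one window of the cv::Mat 'normalized' at the ROI position,
// then divide by the pixel count as in the question's meanCalc.
uint64_t sum = 0;
SimdValueSum(normalized.data + roi.y * normalized.step + roi.x, // window origin
             normalized.step,                                   // row stride in bytes
             roi.width, roi.height, &sum);
value = (uchar)(sum / pxNumber);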
We have some old devices that don't support non-POT textures, and we have a function that converts ARGB textures to the next power-of-2 texture size. The problem is that it's quite slow, and we're wondering if there is a better approach to convert these textures.
void PotTexture()
{
    size_t u2 = 1; while (u2 < imageData.width) u2 *= 2;
    size_t v2 = 1; while (v2 < imageData.height) v2 *= 2;
    std::vector<unsigned char> pottedImageData;
    pottedImageData.resize(u2 * v2 * 4);
    size_t y, x, c;
    for (y = 0; y < imageData.height; y++)
    {
        for (x = 0; x < imageData.width; x++)
        {
            for (c = 0; c < 4; c++)
            {
                pottedImageData[4 * u2 * y + 4 * x + c] =
                    imageData.convertedData[4 * imageData.width * y + 4 * x + c];
            }
        }
    }
    imageData.width = u2;
    imageData.height = v2;
    std::swap(imageData.convertedData, pottedImageData);
}
On some devices this can easily use 100% of the CPU so any optimizations would be amazing. Are there any existing functions that I could look at that perform this conversion?
Edit:
I've optimized the above loop slightly to:
for (y = 0; y < imageData.height; y++)
{
    memcpy(
        &(pottedImageData[y * u2 * 4]),
        &(imageData.convertedData[y * imageData.width * 4]),
        imageData.width * 4);
}
Even devices that don't support NPOT texture should support NPOT load.
Create the texture as an exact power of 2 and NO CONTENT using glTexImage2D, passing a null pointer for data.
data may be a null pointer. In this case, texture memory is allocated to accommodate a texture of width width and height height. You can then download subtextures to initialize this texture memory. The image is undefined if the user tries to apply an uninitialized portion of the texture image to a primitive.
Then use glTexSubImage2D to upload a NPOT image, which occupies only a portion of the total texture. This can be done without any CPU-side image rearrangement.
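Roughly, using the two standard calls named above (a sketch; texture binding and parameter setup are omitted, and u2/v2 are the next powers of two from the question's code):

// Allocate a POT texture with no content (data == NULL), then upload the
// NPOT image into its top-left corner without any CPU-side rearrangement.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, u2, v2, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);          // allocate only
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,                 // offset (0,0)
                imageData.width, imageData.height,      // NPOT dimensions
                GL_RGBA, GL_UNSIGNED_BYTE,
                imageData.convertedData.data());        // NPOT payload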
Having had a similar problem in a program I wrote, I took a very different approach. Rather than stretch the source texture, I just copied it into the top left corner of an otherwise empty power-of-two texture.
Then in the pixel shader you use a pair of floats to adjust s,t values so you fetch from just the top left corner.
float sAdjust = static_cast<float>(textureWidth) / static_cast<float>(containerWidth);
float tAdjust = static_cast<float>(textureHeight) / static_cast<float>(containerHeight);
This is how you compute them. To use them, take the Vec2 holding the s,t coordinates and multiply s by sAdjust and t by tAdjust before using them to fetch. If you're using Direct3D, it'd be something akin to this:
D3DXVECTOR4 stAdjust;
stAdjust.x = sAdjust;
stAdjust.y = tAdjust;
// Transfer stAdjust into a float4 inside your pixel shader, call it stAdjust in there
Now in the pixel shader assume you have:
float2 texcoord;
float4 stAdjust;
you just say:
texcoord.x = texcoord.x * stAdjust.x;
texcoord.y = texcoord.y * stAdjust.y;
before using texcoord. Sorry I can't tell you how to do this in GLSL, but you get the general idea.
Okay, the very first optimization can be done here:
size_t u2 = 1; while (u2 < imageData.width) u2 *= 2;
size_t v2 = 1; while (v2 < imageData.height) v2 *= 2;
What you want to do is (for each dimension) find the floor of the base-2 logarithm, n = floor(log2(dim)), and compute 2**(n+1). The standard math library has a log2 function, but it operates on floating point; still, we can use it. 2**n can be written as 1 << n. So this gives
size_t const dim_p2_… = 1 << (int)floor(log2(dim_…)+1);
Better, but not yet ideal, because of that float conversion. The Bit Twiddling Hacks document has a few functions for integer log2: https://graphics.stanford.edu/~seander/bithacks.html#IntegerLog
But we're still not optimal. Let me introduce you to compiler intrinsics, which translate into machine instructions if the machine in question can do it on the metal.
GNU GCC: int __builtin_clz (unsigned int x), which returns the number of leading 0-bits in x, starting at the most significant bit position (the result is undefined for x == 0). One plus the index of the most significant 1-bit is therefore 32 - __builtin_clz(x). (Note that __builtin_ffs is the wrong tool here: it finds the least significant 1-bit, not the most significant one.)
MSVC++: _BitScanReverse, which stores the index of the most significant 1-bit into its output parameter, i.e. it corresponds to 31 - __builtin_clz(x).
So we can do
#define ilog2_p1(x) (32 - __builtin_clz(x))
or, wrapping the MSVC output parameter,
static inline int ilog2_p1(unsigned long x)
{ unsigned long i; _BitScanReverse(&i, x); return (int)i + 1; }
and use that.
size_t const dim_p2_… = 1 << ilog2_p1(dim_…);
While we're at bit twiddling: we can skip that whole ordeal if a texture is already in power-of-two format. A few years ago I (independently) rediscovered the wonderfully portable bit-twiddling trick that exploits the properties of two's-complement integers. You can also find it in the bit-twiddling document. But the type-neutral, concise macro form is rarely seen. So here it is:
#define ISPOW2(x) ( (x) && !( (x) & ((x) - 1) ) )
You're using C++ so templates are in order:
template<typename T> bool ispow2(T const x) { return x && !( x & (x - 1) ); }
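Putting the pieces together, a sketch of a next-power-of-two helper built from the GCC builtin discussed above (the guard for 0 and 1 is my addition, since __builtin_clz(0) is undefined):

// Return the smallest power of two >= x.
inline uint32_t next_pow2(uint32_t x)
{
    if (x <= 1)
        return 1;                           // avoid __builtin_clz(0), undefined
    if (!(x & (x - 1)))
        return x;                           // ISPOW2: already a power of two
    return 1u << (32 - __builtin_clz(x));   // 1 << (ilog2(x) + 1)
}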
Then Ben Voigt already told you how to use glTexSubImage2D to load that into the texture. Also have a look at the GL_ARB_texture_rectangle extension, which allows loading NPOT textures, but without mipmapping and advanced filtering. It might still be a viable choice for you.
If you ever feel the need to scale the texture, it's always worth looking into dual spaces, in this case the spatial-frequency domain. Upscaling a signal amounts to applying a reconstruction impulse response, so it can be described as a convolution. Convolutions are usually O(n²) in complexity. But due to the Fourier convolution theorem, the equivalent in Fourier space is a simple multiplication, so it becomes O(n). An FFT can be done in O(n log n), so the total complexity is about O(n + 2n log n), which is much better.
I'm trying to downsample an image using NEON. To get familiar with NEON, I wrote a function that subtracts two images using NEON intrinsics, and that worked.
Now I've come back to write the bilinear interpolation using NEON intrinsics.
Right now I have two problems: getting 4 pixels from one row and one column, and computing the interpolated value (gray) from 4 pixels, or, if possible, from 8 pixels from one row and one column. I tried to think about it, but it seems the algorithm should be rewritten entirely?
void resizeBilinearNeon(uint8_t *src, uint8_t *dest, float srcWidth, float srcHeight, float destWidth, float destHeight)
{
    int A, B, C, D, x, y, index, gray;
    float x_ratio = ((float)(srcWidth - 1)) / destWidth;
    float y_ratio = ((float)(srcHeight - 1)) / destHeight;
    float x_diff, y_diff;
    for (int i = 0; i < destHeight; i++) {
        for (int j = 0; j < destWidth; j++) {
            x = (int)(x_ratio * j);
            y = (int)(y_ratio * i);
            x_diff = (x_ratio * j) - x;
            y_diff = (y_ratio * i) - y;
            index = y * (int)srcWidth + x;
            uint8x8_t pixels_r = vld1_u8(&src[index]);                 // top row
            uint8x8_t pixels_c = vld1_u8(&src[index + (int)srcWidth]); // row below
            // Y = A(1-w)(1-h) + B(w)(1-h) + C(h)(1-w) + Dwh
            gray = (int)(
                pixels_r[0] * (1 - x_diff) * (1 - y_diff) + pixels_r[1] * (x_diff) * (1 - y_diff) +
                pixels_c[0] * (y_diff) * (1 - x_diff) + pixels_c[1] * (x_diff * y_diff)
            );
            dest[i * (int)destWidth + j] = gray;
        }
    }
}
NEON will definitely help with downsampling at an arbitrary ratio using bilinear filtering. The key is clever use of the vtbl.8 instruction, which can perform a parallel look-up-table fetch for 8 consecutive destination pixels from a pre-loaded array:
d0 = a [b] c [d] e [f] g h, d1 = i j k l m n o p
d2 = q r s t u v [w] x, d3 = [y] z [A] B [C][D] E F ...
d4 = G H I J K L M N, d5 = O P Q R S T U V ...
One can easily calculate the fractional positions for the pixels in brackets:
[b] [d] [f] [w] [y] [A] [C] [D], accessed with vtbl.8 d6, {d0,d1,d2,d3}
The row below would be accessed with vtbl.8 d7, {d2,d3,d4,d5}
Incrementing vadd.8 d6, d30 ; with d30 = [1 1 1 1 1 ... 1] gives lookup indices for the pixels right of the origin etc.
There's no reason to fetch the pixels from two rows other than to illustrate that it's possible, and that the method can also be used to implement slight distortions if needed.
In real-time applications, using e.g. Lanczos can be a bit overkill, but it is still feasible with NEON. Downsampling by larger factors of course requires (heavy) filtering, but is easily achieved by iteratively averaging and decimating 2:1, using fractional sampling only at the end.
For any 8 consecutive pixels to write, one can calculate the vector
x_positions = (X + [0 1 2 3 4 5 6 7]) * source_width / target_width;
y_positions = (Y + [0 0 0 0 0 0 0 0]) * source_height / target_height;
ptr = to_int(x_positions) + y_positions * stride;
x_position += (ptr & 7); // this pointer arithmetic goes only for 8-bit planar
ptr &= ~7; // this is to adjust read pointer to qword alignment
vld1.8 {d0,d1}, [r0]
vld1.8 {d2,d3}, [r0], r2 // wasn't this possible? (use r2==stride)
d4 = int_part_of (x_positions);
d5 = d4 + 1;
d6 = fract_part_of (x_positions);
d7 = fract_part_of (y_positions);
vtbl.8 d8,d4,{d0,d1} // read top row
vtbl.8 d9,d5,{d0,d1} // read top row +1
MIX(d8,d9,d6) // horizontal mix of ptr[] & ptr[1]
vtbl.8 d10,d4,{d2,d3} // read bottom row
vtbl.8 d11,d5,{d2,d3} // read bottom row
MIX(d10,d11,d6) // horizontal mix of ptr[1024] & ptr[1025]
MIX(d8,d10,d7)
// MIX (dst, src, fract) is a macro that somehow does linear blending
// should be doable with ~3-4 instructions
To calculate the integer parts, it's enough to use 8.8-bit fixed-point resolution (one really doesn't have to calculate 666 + [0 1 2 3 .. 7] at full precision) and keep all intermediate results in SIMD registers.
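As a scalar illustration of that 8.8 split (the variable names are mine; in the NEON version these values live in the uint16 lanes of a q register):

// One x position in 8.8 fixed point: high byte is the pixel index that
// feeds vtbl, low byte is the fractional weight that feeds the MIX blend.
uint16_t x_fix  = (uint16_t)((x * src_width * 256) / target_width); // 8.8
uint8_t  x_int  = x_fix >> 8;    // integer pixel index
uint8_t  x_frac = x_fix & 0xFF;  // fractional part, 0..255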
Disclaimer -- this is conceptual pseudo-C / vector code. In SIMD there are two parallel tasks to optimize: the minimum amount of arithmetic operations needed, and how to minimize unnecessary shuffling/copying of data. In this respect, NEON's three-register instruction format makes it much better suited to serious DSP than SSE; further advantages are its multiply instructions and its interleaving load/store instructions.
@MarkRansom is not correct about nearest neighbor versus 2x2 bilinear interpolation; bilinear using 4 pixels will produce better output than nearest neighbor. He is correct that averaging the appropriate number of pixels (more than 4 if scaling by more than 2:1) will produce better output still. However, NEON will not help with image downsampling unless the scaling is done at an integer ratio.
The maximum benefit of NEON and other SIMD instruction sets is to be able to process 8 or 16 pixels at once using the same operations. By accessing individual elements the way you are, you lose all the SIMD benefit. Another problem is that moving data from NEON to ARM registers is a slow operation. Downsampling images is best done by a GPU or optimized ARM instructions.
I am looking for optimized functions in C++ for calculating areal averages of floats. The function is passed a source float array, a destination float array (the same size as the source), the array width and height, and the "blurring" area width and height.
The function should "wrap around" the edges for the blurring/average calculations.
Here is example code that blurs with a rectangular shape:
/*****************************************
 * Find averages extended variations
 *****************************************/
void findaverages_ext(float *floatdata, float *dest_data, int fwidth, int fheight, int scale, int aw, int ah, int weight, int xoff, int yoff)
{
    printf("findaverages_ext scale: %d, width: %d, height: %d, weight: %d \n", scale, aw, ah, weight);
    float total = 0.0;
    int spos = scale * fwidth * fheight;
    int apos;
    int w = aw;
    int h = ah;
    float* f_temp = new float[fwidth * fheight];
    // Horizontal
    for (int y = 0; y < fheight; y++)
    {
        Sleep(10); // Do not burn your processor
        total = 0.0;
        // Process entire window for first pixel (including wrap-around edge)
        for (int kx = 0; kx <= w; ++kx)
            if (kx >= 0 && kx < fwidth)
                total += floatdata[y * fwidth + kx];
        // Wrap
        for (int kx = (fwidth - w); kx < fwidth; ++kx)
            if (kx >= 0 && kx < fwidth)
                total += floatdata[y * fwidth + kx];
        // Store first window
        f_temp[y * fwidth] = (total / (w * 2 + 1));
        for (int x = 1; x < fwidth; x++) // x width changes with y
        {
            // Subtract pixel leaving window
            if (x - w - 1 >= 0)
                total -= floatdata[y * fwidth + x - w - 1];
            // Add pixel entering window
            if (x + w < fwidth)
                total += floatdata[y * fwidth + x + w];
            else
                total += floatdata[y * fwidth + x + w - fwidth];
            // Store average
            apos = y * fwidth + x;
            f_temp[apos] = (total / (w * 2 + 1));
        }
    }
    // Vertical
    for (int x = 0; x < fwidth; x++)
    {
        Sleep(10); // Do not burn your processor
        total = 0.0;
        // Process entire window for first pixel
        for (int ky = 0; ky <= h; ++ky)
            if (ky >= 0 && ky < fheight)
                total += f_temp[ky * fwidth + x];
        // Wrap
        for (int ky = fheight - h; ky < fheight; ++ky)
            if (ky >= 0 && ky < fheight)
                total += f_temp[ky * fwidth + x];
        // Store first if not out of bounds
        dest_data[spos + x] = (total / (h * 2 + 1));
        for (int y = 1; y < fheight; y++) // y width changes with x
        {
            // Subtract pixel leaving window
            if (y - h - 1 >= 0)
                total -= f_temp[(y - h - 1) * fwidth + x];
            // Add pixel entering window
            if (y + h < fheight)
                total += f_temp[(y + h) * fwidth + x];
            else
                total += f_temp[(y + h - fheight) * fwidth + x];
            // Store average
            apos = y * fwidth + x;
            dest_data[spos + apos] = (total / (h * 2 + 1));
        }
    }
    delete[] f_temp; // array delete, not scalar delete
}
What I need are similar functions that, for each pixel, find the average (blur) of pixels from shapes other than rectangular.
The specific shapes are: "S" (sharp edges), "O" (rectangular but hollow), "+" and "X", where the average float is stored at the center pixel of the destination data array. The size of the blur shape should be variable in width and height.
The functions do not need to be pixel-perfect, only optimized for performance. There could be separate functions for each shape.
I am also happy if anyone can give me tips on how to optimize the example function above for rectangular blurring.
What you are trying to implement are various sorts of digital filters for image processing. This is equivalent to convolving two signals, where the second one is the filter's impulse response. So far, you have recognized that a "rectangular average" is separable. By separable I mean you can split the filter into two parts: one that operates along the X axis and one that operates along the Y axis, in each case a 1D filter. This is nice and can save you lots of cycles. But not every filter is separable. Averaging along other shapes (S, O, +, X) is not separable. You need to actually compute a 2D convolution for these.
As for performance, you can speed up your 1D averages by properly implementing a "moving average". A proper "moving average" implementation only requires a fixed amount of work per pixel, regardless of the averaging window. This works by recognizing that neighbouring pixels of the target image are computed as an average of almost the same source pixels: you can reuse the sum for the neighbouring target pixel by adding one new pixel intensity and subtracting an old one (for the 1D case), as in the sketch below.
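A minimal 1D sketch with the question's wrap-around edges (assuming the window radius w is smaller than the row length n):

// Moving average over one row: each output costs one add, one subtract
// and one multiply, independent of the window radius w.
void movingAverageRow(const float* in, float* out, int n, int w)
{
    const float inv = 1.0f / (2 * w + 1);
    float total = 0.0f;
    for (int k = -w; k <= w; ++k)              // prime the first window
        total += in[(k + n) % n];
    out[0] = total * inv;
    for (int x = 1; x < n; ++x)                // slide: one in, one out
    {
        total += in[(x + w) % n];              // pixel entering the window
        total -= in[(x - w - 1 + n) % n];      // pixel leaving the window
        out[x] = total * inv;
    }
}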
In the case of arbitrary non-separable filters, your best bet performance-wise is "fast convolution", which is FFT-based. Check out www.dspguide.com. If I recall correctly, there is even a chapter on how to properly do fast convolution using the FFT algorithm. Although it is explained for one-dimensional signals, it also applies to two-dimensional signals; for images you have to perform 2D FFT/iFFT transforms.
To add to sellibitze's answer, you can use a summed area table for your O, S and + kernels (not for the X one, though). That way you can convolve a pixel in constant time, and it's probably the fastest method for kernel shapes that allow it.
Basically, a SAT is a data structure that lets you calculate the sum of any axis-aligned rectangle. For the O kernel, after you've built a SAT, you'd take the sum of the outer rect's pixels and subtract the sum of the inner rect's pixels. The S and + kernels can be implemented similarly; a sketch for the O case follows.
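For instance, the hollow "O" case might look like this, assuming a helper rectSum(x0, y0, x1, y1) that returns an inclusive rectangle sum from the SAT (the names and the one-pixel conventions are illustrative):

// Mean of a hollow 'O' ring centered at (cx, cy): outer square sum minus
// inner square sum, divided by the number of pixels in the ring only.
float ringMean(int cx, int cy, int outerR, int innerR)
{
    float outer = rectSum(cx - outerR, cy - outerR, cx + outerR, cy + outerR);
    float inner = rectSum(cx - innerR, cy - innerR, cx + innerR, cy + innerR);
    int outerN = (2 * outerR + 1) * (2 * outerR + 1);
    int innerN = (2 * innerR + 1) * (2 * innerR + 1);
    return (outer - inner) / (float)(outerN - innerN);
}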
For the X kernel you can use a different approach: a skewed box filter is separable.
You can convolve with two long, thin skewed box filters, then add the two resulting images together. The center of the X will be counted twice, so you will need to convolve with another skewed box filter and subtract that.
Apart from that, you can optimize your box blur in many ways (a combined sketch follows the list).
Remove the two ifs from the inner loop by splitting that loop into three loops - two short loops that do checks, and one long loop that doesn't. Or you could pad your array with extra elements from all directions - that way you can simplify your code.
Calculate values like h * 2 + 1 outside the loops.
An expression like f_temp[ky*fwidth + x] does two adds and one multiplication. You can initialize a pointer to &f_temp[ky*fwidth] outside the loop, and just increment that pointer in the loop.
Don't do the division by h * 2 + 1 in the horizontal step. Instead, divide by the square of that in the vertical step.
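Combined, the horizontal inner loop might end up looking roughly like this sketch (the reciprocal hoisted out of the loop, row base pointers computed once per row; the wrap handling still matches the original code, and the names follow the question):

// Sketch: optimized inner horizontal pass for one row y.
const float inv = 1.0f / (w * 2 + 1);
float* in_row  = &floatdata[y * fwidth];
float* out_row = &f_temp[y * fwidth];
for (int x = 1; x < fwidth; x++)
{
    if (x - w - 1 >= 0)
        total -= in_row[x - w - 1];          // pixel leaving the window
    total += (x + w < fwidth)                // pixel entering the window,
           ? in_row[x + w]                   // with wrap-around at the edge
           : in_row[x + w - fwidth];
    out_row[x] = total * inv;                // multiply instead of divide
}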