Filtering 1bpp images - C++

I'm looking to filter a 1 bit per pixel image using a 3x3 filter: for each input pixel, the corresponding output pixel is set to 1 if the weighted sum of the pixels surrounding it (with weights determined by the filter) exceeds some threshold.
I was hoping that this would be more efficient than converting to 8 bpp and then filtering that, but I can't think of a good way to do it. A naive method is to keep track of nine pointers to bytes (three consecutive rows and also pointers to either side of the current byte in each row, for calculating the output for the first and last bits in these bytes) and for each input pixel compute
sum = filter[0] * ((*lastRowPtr & aMask) != 0) + filter[1] * ((*lastRowPtr & bMask) != 0) + ... + filter[8] * ((*nextRowPtr & hMask) != 0),
with extra faff for bits at the edge of a byte. However, this is slow and seems really ugly. You're not gaining any parallelism from the fact that you've got eight pixels in each byte and instead are having to do tonnes of extra work masking things.
Are there any good sources for how to best do this sort of thing? A solution to this particular problem would be amazing, but I'd be happy being pointed to any examples of efficient image processing on 1bpp images in C/C++. I'd like to replace some more 8 bpp stuff with 1 bpp algorithms in future to avoid image conversions and copying, so any general resources on this would be appreciated.

I found a number of years ago that unpacking the bits to bytes, doing the filter, then packing the bytes back to bits was faster than working with the bits directly. It seems counter-intuitive because it's 3 loops instead of 1, but the simplicity of each loop more than made up for it.
I can't guarantee that it's still the fastest; compilers and especially processors are prone to change. However simplifying each loop not only makes it easier to optimize, it makes it easier to read. That's got to be worth something.
A further advantage to unpacking to a separate buffer is that it gives you flexibility in what you do at the edges. By making the buffer 2 bytes larger than the input row, you unpack starting at byte 1, then set byte 0 and the final byte to whatever you like, and the filtering loop doesn't have to worry about boundary conditions at all.
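To make the unpack/filter/pack structure concrete, here is a minimal sketch (assuming MSB-first bit packing and a zero border; all names are illustrative, not from any library):
#include <cstdint>

// Unpack one 1bpp row (MSB-first) into one byte per pixel, leaving a
// one-byte border on each side so the filter loop needs no edge checks.
void unpackRow(const uint8_t* src, uint8_t* dst, int width) {
    for (int x = 0; x < width; ++x)
        dst[x + 1] = (src[x >> 3] >> (7 - (x & 7))) & 1;
    dst[0] = dst[width + 1] = 0;  // edge policy: pixels outside the image are 0
}

// Weighted 3x3 sum over three unpacked rows; writes 0/1 bytes.
void filterRow(const uint8_t* above, const uint8_t* cur, const uint8_t* below,
               uint8_t* out, int width, const int filter[9], int threshold) {
    for (int x = 1; x <= width; ++x) {
        int sum = filter[0] * above[x - 1] + filter[1] * above[x] + filter[2] * above[x + 1]
                + filter[3] * cur[x - 1]   + filter[4] * cur[x]   + filter[5] * cur[x + 1]
                + filter[6] * below[x - 1] + filter[7] * below[x] + filter[8] * below[x + 1];
        out[x - 1] = sum > threshold ? 1 : 0;
    }
}

// Pack 0/1 bytes back into bits (dst assumed zeroed beforehand).
void packRow(const uint8_t* src, uint8_t* dst, int width) {
    for (int x = 0; x < width; ++x)
        if (src[x]) dst[x >> 3] |= 0x80 >> (x & 7);
}
Each loop is branch-light and trivially vectorizable, which is exactly why the three-loop version can beat direct bit manipulation.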

Look into separable filters. Among other things, they allow massive parallelism in the cases where they work.
For example, in your 3x3 sample-weight-and-filter case:
Sample 1x3 (horizontal) pixels into a buffer. This can be done in isolation for each pixel, so a 1024x1024 image can run 1024^2 simultaneous tasks, all of which perform 3 samples.
Sample 3x1 (vertical) pixels from the buffer. Again, this can be done on every pixel simultaneously.
Use the contents of the buffer to cull pixels from the original texture.
The advantage to this approach, mathematically, is that it cuts the number of sample operations from n^2 to 2n per pixel (for an n x n kernel), although it requires a buffer of equal size to the source (if you're already performing a copy, that can be used as the buffer; you just can't modify the original source for step 2). In order to keep memory use at 2n, you can perform steps 2 and 3 together (this is a bit tricky and not entirely pleasant); if memory isn't an issue, you can spend 3n on two buffers (source, hblur, vblur).
Because each operation is working in complete isolation from an immutable source, you can perform the filter on every pixel simultaneously if you have enough cores. Or, in a more realistic scenario, you can take advantage of paging and caching to load and process a single column or row. This is convenient when working with odd strides, padding at the end of a row, etc. The second round of samples (vertical) may screw with your cache, but at the very worst, one round will be cache-friendly and you've cut the per-pixel work from quadratic in the kernel width to linear.
Now, I've yet to touch on the case of storing data in bits specifically. That does make things slightly more complicated, but not terribly much so. Assuming you can use a rolling window, something like:
d = s[x-1] + s[x] + s[x+1]
works. Interestingly, if you were to rotate the image 90 degrees during the output of step 1 (trivial, sample from (y,x) when reading), you can get away with loading at most two horizontally adjacent bytes for any sample, and only a single byte something like 75% of the time. This plays a little less friendly with cache during the read, but greatly simplifies the algorithm (enough that it may regain the loss).
Pseudo-code:
buffer source, dest, vbuf, hbuf;
for_each (y, x) // Loop over each row, then each column. Generally works better wrt paging
{
    hbuf(x, y) = (source(y, x-1) + source(y, x) + source(y, x+1)) / 3 // swap x and y to spin 90 degrees
}
for_each (y, x)
{
    vbuf(x, y) = (hbuf(y, x-1) + hbuf(y, x) + hbuf(y, x+1)) / 3 // swap back to undo the 90 degree spin; note this pass also reads horizontally adjacent samples
}
for_each (y, x)
{
    dest(y, x) = threshold(vbuf(y, x))
}
Accessing bits within the bytes (source(x, y) indicates access/sample) is relatively simple to do, but kind of a pain to write out here, so is left to the reader. The principle, particularly implemented in this fashion (with the 90 degree rotation), only requires 2 passes of n samples each, and always samples from immediately adjacent bits/bytes (never requiring you to calculate the position of the bit in the next row). All in all, it's massively faster and simpler than any alternative.
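For the bit access left to the reader, a minimal accessor might look like this (a sketch assuming MSB-first packing and a byte stride per row; the name is made up):
#include <cstdint>

// Sample one bit from a packed 1bpp image: row-major, MSB-first bytes,
// 'strideBytes' bytes per row.
inline int sampleBit(const uint8_t* image, int strideBytes, int x, int y) {
    return (image[y * strideBytes + (x >> 3)] >> (7 - (x & 7))) & 1;
}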

Rather than expanding the entire image to 1 bit/byte (or 8bpp, essentially, as you noted), you can simply expand the current window - read the first byte of the first row, shift and mask, then read out the three bits you need; do the same for the other two rows. Then, for the next window, you simply discard the left column and fetch one more bit from each row. The logic and code to do this right isn't as easy as simply expanding the entire image, but it'll take a lot less memory.
As a middle ground, you could just expand the three rows you're currently working on. Probably easier to code that way.
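To sketch the per-row part of that rolling window in C++ (unweighted sums for brevity; MSB-first packing assumed, names illustrative):
#include <cstdint>

// Rolling 3-bit window over one packed row (MSB-first): each step shifts in
// a single new bit, so no pixel's byte is re-read or re-masked per tap.
void rollRow(const uint8_t* row, int width, int* outSums) {
    unsigned win = (row[0] >> 7) & 1;  // prime with the bit at x = 0
    for (int x = 0; x < width; ++x) {
        int next = (x + 1 < width)
                 ? (row[(x + 1) >> 3] >> (7 - ((x + 1) & 7))) & 1
                 : 0;                  // off-image bits are treated as 0
        win = ((win << 1) | next) & 7; // low three bits now hold x-1, x, x+1
        outSums[x] = ((win >> 2) & 1) + ((win >> 1) & 1) + (win & 1);
    }
}
The full 3x3 filter repeats this over three rows and combines the three partial sums per output pixel.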

Related

FFT of large data (16GB) using Matlab

I am trying to compute a fast Fourier transform of a large chunk of data imported from a text file which is around 16 GB in size. I was trying to think of a way to compute its FFT in Matlab, but due to my computer's memory (8 GB) it is giving me an out-of-memory error. I tried using memmap and textscan, but was not able to apply them to get the FFT of the combined data.
Can anyone kindly guide me on how I should approach getting the Fourier transform? I am also trying to compute the Fourier transform (from the definition) using C++ code on a remote server, but it's taking a long time to execute. Can anyone give me some proper insight into how I should handle this large data set?
It depends on the resolution of the FFT that you require. If you only need an FFT of, say, 1024 points, then you can reshape your data into, or sequentially read it as, N x 1024 blocks. Once you have it in this format, you can add the output of each FFT result to a 1024-point complex accumulator.
If you need the same resolution after the FFT, then you need more memory, or a special FFT routine that is not included in Matlab (but I'm not sure if it is even mathematically possible to do a partial FFT by buffering small chunks through for full resolution).
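Sketched in C++ with FFTW (since the asker also mentions C++ on a server); 'data.bin' is a hypothetical raw file of doubles - the text file would need a parsing step first, and error handling is omitted:
#include <complex>
#include <cstdio>
#include <vector>
#include <fftw3.h>

int main() {
    const int N = 1024;                        // assumed FFT resolution
    std::vector<double> block(N);
    std::vector<std::complex<double>> out(N / 2 + 1), acc(N / 2 + 1, 0.0);
    fftw_plan plan = fftw_plan_dft_r2c_1d(
        N, block.data(), reinterpret_cast<fftw_complex*>(out.data()), FFTW_ESTIMATE);
    std::FILE* f = std::fopen("data.bin", "rb");
    size_t nblocks = 0;
    while (std::fread(block.data(), sizeof(double), N, f) == (size_t)N) {
        fftw_execute(plan);
        for (int k = 0; k <= N / 2; ++k)
            acc[k] += out[k];                  // accumulate each block's spectrum
        ++nblocks;
    }
    std::fclose(f);
    fftw_destroy_plan(plan);
    // acc[k] / nblocks is the averaged 1024-point spectrum.
    return 0;
}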
It may be better to implement the FFT with your own code.
The FFT algorithm has a "butterfly" operation, so you can split the whole computation into smaller blocks.
The file size is too large for a typical PC to handle, but the FFT doesn't need all the data at once. It can always start with a 2-point (maybe an 8-point is better) FFT, and you can build up by cascading the stages. That means you can read only a few points at a time, do some calculation, and save your data to disk; the next time you do another iteration, you read the saved data back from disk.
Depending on how you build the data structure, you can either store all the data in one single file and read/save it with pointers (in Matlab a pointer is merely a number), or you can store every single point in an individual file, generating billions of files and distinguishing them by file name.
The idea is that you dump your calculation to disk instead of memory. Of course it requires a comparable amount of disk space, but that is far more feasible than the same amount of RAM.
I can show you a piece of pseudo-code. Depending on the data structure of your original data (that 16 GB text file), the implementation will be different, but you can operate however you like since you own the file. I will start with 2-point FFTs and work through the 8-point example in the Wikipedia picture.
1. Do 2-point FFTs on x, generating y, the 3rd column of white circles from the left.
read x[0], x[4] from file 'origin'
y[0] = x[0] + x[4]*W(N,0);
y[1] = x[0] - x[4]*W(N,0);
save y[0], y[1] to file 'temp'
remove x[0], x[4], y[0], y[1] from memory
read x[2], x[6] from file 'origin'
y[2] = x[2] + x[6]*W(N,0);
y[3] = x[2] - x[6]*W(N,0);
save y[2], y[3] to file 'temp'
remove x[2], x[6], y[2], y[3] from memory
....
2. Do 2-point FFTs on y, generating z, the 5th column of white circles.
3. Do 2-point FFTs on z, generating the final result, X.
Basically the Cooley-Tukey FFT algorithm is designed to let you cut up the data and calculate piece by piece, so it's possible to handle large amounts of data. I know it's not the usual approach, but if you take a look at the Chinese version of that Wikipedia page, you may find a number of pictures that help you understand how it splits up the points.
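For reference, the butterfly those pseudo-code lines perform, written out in C++ (in-memory; the out-of-core variant just streams these pairs through files as described above):
#include <cmath>
#include <complex>

// One radix-2 butterfly: combine the pair (xa, xb) with the twiddle factor
// W(N, k) = exp(-2*pi*i*k/N), producing the two outputs of this stage.
void butterfly(std::complex<double> xa, std::complex<double> xb, int k, int N,
               std::complex<double>& y0, std::complex<double>& y1) {
    const double PI = std::acos(-1.0);
    const std::complex<double> W = std::polar(1.0, -2.0 * PI * k / N);
    y0 = xa + W * xb;
    y1 = xa - W * xb;
}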
I've encountered this same problem. I ended up finding a solution in a paper:
Extending sizes of effective convolution algorithms. It essentially involves loading shorter chunks, multiplying by a phase factor and FFT-ing, then loading the next chunk in the series. This gives a sampled subset of the total FFT of the full signal. The process is then repeated a number of times with different phase factors to fill in the remaining points. I will attempt to summarize here (adapted from Table II in the paper):
1. For a total signal f(j) of length N, decide on a number m of shorter chunks, each of length N/m, that you can store in memory (if needed, zero-pad the signal so that N is a multiple of m).
2. For beta = 0, 1, 2, ..., m - 1 do the following:
3. Divide the series into m subintervals of N/m successive points.
4. For each subinterval, multiply its jth element by exp(i*2*pi*j*beta/N). Here, j is indexed according to the position of the point relative to the first in the whole data stream.
5. Sum the first elements of each subinterval to produce a single number, sum the second elements, and so forth. This can be done as points are read from the file, so there is no need to have the full set of N points in memory.
6. Fourier transform the resultant series, which contains N/m points.
7. This gives F(k) for k = m*l + beta, with l = 0, ..., N/m - 1. Save these values to disk.
8. Go to step 2 and proceed with the next value of beta.
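In C++ the loop body might look like the following sketch (FFTW for the small transform; the negative phase sign matches FFTW's forward convention, which may differ from the paper's; the whole signal is in memory here only to keep the sketch short):
#include <cmath>
#include <complex>
#include <vector>
#include <fftw3.h>

// Computes F(k) for k = m*l + beta (step 7 above) using only N/m points of
// FFT workspace.
std::vector<std::complex<double>> chunkedBins(
        const std::vector<std::complex<double>>& signal, int m, int beta) {
    const int N = (int)signal.size();  // assumed to be a multiple of m
    const int M = N / m;
    const double PI = std::acos(-1.0);
    std::vector<std::complex<double>> series(M, 0.0);
    // Steps 3-5: phase-multiply and sum the m subintervals element-wise.
    // (The out-of-core version does this while streaming points from disk.)
    for (int j = 0; j < N; ++j)
        series[j % M] += signal[j] * std::polar(1.0, -2.0 * PI * j * beta / N);
    // Step 6: one small FFT of N/m points, in place.
    fftw_plan plan = fftw_plan_dft_1d(
        M, reinterpret_cast<fftw_complex*>(series.data()),
        reinterpret_cast<fftw_complex*>(series.data()),
        FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_execute(plan);
    fftw_destroy_plan(plan);
    return series;  // series[l] == F(m*l + beta)
}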

Storing Tile Data in Excess of 100 Million Tiles Per Layer, Multiple Layers

Problem: I am trying to store tile data for my map class. I had the idea of using a palette per layer; the palette would describe the data in the layer, which would be an array of bytes with each byte representing a tile type.
This means 1 layer of 100 million tiles would equal ~96MB. However, I overlooked how much data I can actually store in a byte, and of course it turns out I can only store 256 tile types. That results in texture sizes of square-root-of-256 * tile-size (in this case 256, as tile sizes are 16). 256*256 texture sizes are too small, as each palette can only have one texture, severely limiting the tiles I can have in a layer.
I am now stuck in a bind: if I use 2 bytes (a short) instead of 1 byte to store tile data, I will double my memory usage to ~192MB per layer, and I want 4 layers at the minimum, inflating the end product to 768MB of RAM used. I also can not describe the data within the data, as the array offset of each byte is also a description of its location.
Is there a way I could store this data more efficiently? The worst-case scenario will involve me saving all this to disk and buffering to memory from the disk, but I would prefer to keep it in memory.
I guess I could come up with something smart in a few hours, but I thought I would ask to see if there are any common methods I am unaware of to combat this problem.
I suggest representing your data in an array which maps to the two dimensional plane using a space filling curve such as the Hilbert curve.
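For the curve mapping itself, the standard bit-twiddling version (as given in the Wikipedia article on Hilbert curves; n is the grid side length, a power of two) is compact enough to quote:
// Rotate/flip a quadrant so the sub-curve's orientation is correct.
void rot(int n, int *x, int *y, int rx, int ry) {
    if (ry == 0) {
        if (rx == 1) {
            *x = n - 1 - *x;
            *y = n - 1 - *y;
        }
        int t = *x; *x = *y; *y = t;  // swap x and y
    }
}

// Convert (x, y) to its distance d along the Hilbert curve of side n.
int xy2d(int n, int x, int y) {
    int d = 0;
    for (int s = n / 2; s > 0; s /= 2) {
        int rx = (x & s) > 0;
        int ry = (y & s) > 0;
        d += s * s * ((3 * rx) ^ ry);
        rot(n, &x, &y, rx, ry);
    }
    return d;
}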
Then, compress this using a combination of Huffman coding and run-length encoding. This will be particularly effective if your data is often repeated locally (i.e. there are lots of sections which are all the same tile next to each other).
Do this compression in blocks of, say, 256 tiles. Then, have an array of offsets that indicates how far into the compressed data each block starts.
For example, the start of the second block (tile 256) might be at byte position 103, and the start of the third block (tile 512) at position 192.
Then, say, to access the 400th tile, you can work out that this is in the second block, so decompress the second block (in this case from byte 103 to byte 191) and from this get the 400 - 256 = 144th tile. Save (cache) this decompressed data for the moment; it's likely that if you're getting nearby tiles they'll also be in this decompressed block. Perhaps in your array of offsets you should also include which blocks have been recently cached, and where in the cache they are.
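A bare-bones sketch of that block-plus-offsets scheme (run-length only, without the Huffman stage; 256-tile blocks and all names are assumptions, and the per-block cache is omitted):
#include <cstdint>
#include <vector>

// Each run is stored as a (count, tile) byte pair; runs are capped at 255.
struct CompressedMap {
    std::vector<uint8_t> data;     // all blocks' runs, concatenated
    std::vector<size_t>  offsets;  // offsets[b] = start of block b in data
};

void compressBlock(const uint8_t* tiles, size_t n, CompressedMap& out) {
    out.offsets.push_back(out.data.size());
    for (size_t i = 0; i < n; ) {
        size_t run = 1;
        while (i + run < n && run < 255 && tiles[i + run] == tiles[i]) ++run;
        out.data.push_back((uint8_t)run);
        out.data.push_back(tiles[i]);
        i += run;
    }
}

uint8_t tileAt(const CompressedMap& m, size_t index) {
    size_t block = index / 256, rem = index % 256;
    size_t p = m.offsets[block];
    for (;;) {                     // walk the runs until we reach the tile
        size_t run = m.data[p];
        if (rem < run) return m.data[p + 1];
        rem -= run;
        p += 2;
    }
}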
If you wanted to allow modifications, you'd probably have to change your data structure from one large array to a vector of vectors, with an indicator for each vector of whether it is compressed or not. When doing modifications, uncompress blocks and modify them, and recompress the least recently modified blocks when memory is running out.
Or, you could just dump the whole structure to a file and memory map the file. This is much simpler but may be slower depending on the compressibility of your data and your access patterns due to additional I/O.

How to get ALL data from 2D Real to Complex FFT in Cuda

I am trying to do a 2D Real To Complex FFT using CUFFT.
I realize that I will do this and get W/2+1 complex values back (W being the "width" of my H*W matrix).
The question is: if I want to build out a full H*W version of this matrix after the transform, how do I go about copying some values from the H*(W/2+1) result matrix back to a full-size matrix to get both parts and the DC value in the right place?
Thanks
I'm not familiar with CUDA, so take that into consideration when reading my response. I am familiar with FFTs and signal processing in general, though.
It sounds like you start out with an H (rows) x W (cols) matrix, and that you are doing a 2D FFT that essentially does an FFT on each row, and you end up with an H x W/2+1 matrix. A W-wide FFT returns W values, but the CUDA function only returns W/2+1 because the spectrum of real data is conjugate-symmetric, so the negative frequency data is redundant.
So, if you want to reproduce the missing W/2-1 points, simply mirror the positive frequencies, taking the complex conjugate of each. For instance, if one of the rows is as follows:
Index  Data
0      12 + i
1      5 + 2i
2      6
3      2 - 3i
...
The 0 index is your DC power, the 1 index is the lowest positive frequency bin, and so forth. You would thus make your closest-to-DC negative frequency bin the conjugate of bin 1, i.e. 5 - 2i, the next closest 6, and so on. Where you put those values in the array is up to you. I would do it the way Matlab does it, with the negative frequency data after the positive frequency data.
I hope that makes sense.
There are two ways this can be achieved. You will have to write your own kernel to achieve either of them.
1) You will need to take the conjugate of the (half) data you get to find the other half.
2) Since you want full results anyway, it would be best to convert the input data from real to complex (by padding with zero imaginary parts) and perform the complex-to-complex transform.
From practice I have noticed that there is not much of a difference in speed either way.
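For option 1, the mirroring logic on the host side looks something like this sketch (plain C++ with std::complex; a real CUDA kernel would do the same per-element work in parallel; assumes even W and row-major H x (W/2+1) input as produced by cufftExecR2C):
#include <complex>
#include <cstddef>
#include <vector>

// Expand an H x (W/2+1) real-to-complex result to the full H x W spectrum
// using Hermitian symmetry: F(i, j) = conj(F((H - i) % H, (W - j) % W)).
void expandHermitian(const std::vector<std::complex<float>>& half,
                     std::vector<std::complex<float>>& full, int H, int W) {
    const int Wh = W / 2 + 1;
    full.assign((std::size_t)H * W, std::complex<float>(0.0f, 0.0f));
    for (int i = 0; i < H; ++i) {
        for (int j = 0; j < Wh; ++j)      // stored half: copy as-is
            full[(std::size_t)i * W + j] = half[(std::size_t)i * Wh + j];
        for (int j = Wh; j < W; ++j)      // missing half: mirror and conjugate
            full[(std::size_t)i * W + j] =
                std::conj(half[(std::size_t)((H - i) % H) * Wh + (W - j)]);
    }
}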
I actually searched the nVidia forums and found a kernel that someone had written that did just what I was asking, and that is what I used. If you search the CUDA forum for "redundant results fft" or similar you will find it.

Picture entropy calculation

I've run into a nasty problem with my recorder. Some people are still using it with analog tuners, and analog tuners have a tendency to spit out 'snow' if there is no signal present.
The problem is that when noise is fed into the encoder, it goes completely crazy, first consuming all the CPU and then ultimately freezing. Since the main point of the recorder is to stay up and running no matter what, I have to figure out how to proceed, so the encoder won't be exposed to data it can't handle.
So, the idea is to create an 'entropy detector': a simple and small routine that will go through the frame buffer data and calculate an entropy index, i.e. how random the data in the picture actually is.
The result from the routine would be a number that will be 0 for a completely black picture and 1 for a completely random picture - snow, that is.
The routine in itself should be forward-scanning only, with a few local variables that fit into registers nicely.
I could use the zlib or 7z API for such a task, but I would really like to cook up something on my own.
Any ideas?
PNG works this way (approximately): for each pixel, replace its value by its value minus the value of the pixel to its left. Do this from right to left, so earlier values aren't overwritten before they are used.
Then you can calculate the entropy (bits per character) by making a table of how often each value now appears, converting those absolute counts into relative frequencies p, and summing -p*log2(p) over all entries.
Oh, and you have to do this for each color channel (r, g, b) separately.
For the result, take the average of the bits per character across the channels and divide it by 8 (the maximum entropy, assuming you have 8 bits per color), giving a value between 0 and 1.
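Putting those pieces together, a sketch of the routine for one channel (forward-scanning, fixed-size table, normalized to [0, 1] as described; names are illustrative):
#include <cmath>
#include <cstdint>

// Entropy of one 8-bit channel after left-neighbour prediction, normalized
// to [0, 1]: 0 for a flat picture, approaching 1 for pure noise.
double channelEntropy(const uint8_t* px, int width, int height, int stride) {
    uint32_t hist[256] = {0};
    double total = 0.0;
    for (int y = 0; y < height; ++y) {
        const uint8_t* row = px + y * stride;
        for (int x = width - 1; x > 0; --x) {  // delta-code on the fly
            ++hist[(uint8_t)(row[x] - row[x - 1])];
            ++total;
        }
    }
    double bits = 0.0;
    for (int v = 0; v < 256; ++v) {
        if (hist[v] == 0) continue;
        double p = hist[v] / total;
        bits -= p * std::log2(p);              // -sum p * log2(p)
    }
    return bits / 8.0;                         // 8 bits = maximum entropy
}
Note this computes the deltas on the fly rather than rewriting the frame, so the buffer is left untouched.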

Converting an FFT to a spectogram

I have an audio file and I am iterating through the file and taking 512 samples at each step and then passing them through an FFT.
I have the data out as a block 514 floats long (using IPP's ippsFFTFwd_RToCCS_32f_I) with real and imaginary components interleaved.
My problem is what to do with these complex numbers once I have them. At the moment I'm doing, for each value:
const float realValue = buffer[(y * 2) + 0];
const float imagValue = buffer[(y * 2) + 1];
const float value = sqrt( (realValue * realValue) + (imagValue * imagValue) );
This gives something slightly usable, but I'd rather have some way of getting the values out in the range 0 to 1. The problem with the above is that the peaks end up coming back as around 9 or more. This means things get viciously saturated, and then there are other parts of the spectrogram that barely show up despite the fact that they appear to be quite strong when I run the audio through Audition's spectrogram. I fully admit I'm not 100% sure what the data returned by the FFT is (other than that it represents the frequency values of the 512-sample-long block I'm passing in). My understanding is especially lacking on what exactly the complex number represents.
Any advice and help would be much appreciated!
Edit: Just to clarify. My big problem is that the FFT values returned are meaningless without some idea of what the scale is. Can someone point me towards working out that scale?
Edit2: I get really nice looking results by doing the following:
size_t count2 = 0;
size_t max2 = kFFTSize + 2;
while (count2 < max2)
{
    const float realValue = buffer[count2 + 0];
    const float imagValue = buffer[count2 + 1];
    const float value = (log10f(sqrtf((realValue * realValue) + (imagValue * imagValue)) * rcpVerticalZoom) + 1.0f) * 0.5f;
    buffer[count2 >> 1] = value;
    count2 += 2;
}
To my eye this even looks better than most other spectrogram implementations I have looked at.
Is there anything MAJORLY wrong with what I'm doing?
The usual thing to do to get all of an FFT visible is to take the logarithm of the magnitude.
So, the position in the output buffer tells you what frequency was detected. The magnitude (L2 norm) of the complex number tells you how strong the detected frequency was, and the phase (arctangent) gives you information that is a lot more important in image space than audio space. Because the FFT is discrete, the frequencies run from 0 to the Nyquist frequency. In images, the first term (DC) is usually the largest, and so a good candidate for use in normalization if that is your aim. I don't know if that is also true for audio (I doubt it).
For each window of 512 samples, you compute the magnitude of the FFT as you did. Each value represents the magnitude of the corresponding frequency present in the signal.
mag
/\
|
|  !                 !
|  !        !        !
+--!---!----!----!---!--> freq
0         Fs/2      Fs
Now we need to figure out the frequencies.
Since the input signal is real-valued, the FFT is conjugate-symmetric around the middle (the Nyquist component), with the first term being the DC component. Knowing the signal sampling frequency Fs, the Nyquist frequency is Fs/2, and therefore for index k the corresponding frequency is k*Fs/512.
So for each window of length 512, we get the magnitudes at the specified frequencies. Grouping those over consecutive windows forms the spectrogram.
Just so people know, I've done a LOT of work on this whole problem. The main thing I've discovered is that the FFT requires normalisation after doing it.
To do this you average all the values of your window vector together to get a value somewhat less than 1 (or exactly 1 if you are using a rectangular window), then multiply that number by the FFT length; the product is simply the sum of the window samples.
Finally you divide the actual number returned by the FFT by that normalisation number. Your amplitude values should now be in the 0 to 1 range (so the log runs from -Inf up to 0). Log, etc., as you please; you will still be working with a known range.
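In code, that normalisation boils down to something like this sketch (hann512 stands in for whatever 512-sample window vector you use; the names are illustrative):
#include <numeric>
#include <vector>

// Normalisation factor for windowed FFT magnitudes: the sum of the window
// samples (i.e. mean * N). Dividing each bin's magnitude by this puts a
// full-scale sine near 0.5 and DC near 1.0 - a fixed, known range.
float windowNorm(const std::vector<float>& window) {
    return std::accumulate(window.begin(), window.end(), 0.0f);
}
// usage: float mag = sqrtf(re * re + im * im) / windowNorm(hann512);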
There are a few things that I think you will find helpful.
The forward FT will tend to give larger numbers in the output than in the input. You can think of it as all of the intensity at a certain frequency being displayed at one place rather than being distributed through the dataset. Does this matter? Probably not because you can always scale the data to fit your needs. I once wrote an integer based FFT/IFFT pair and each pass required rescaling to prevent integer overflow.
The real data that are your input are converted into something that is almost complex. As it turns out, buffer[0] and buffer[n/2] are real and independent. There is a good discussion of it here.
The input data are sound intensity values taken over time, equally spaced. They are said to be, appropriately enough, in the time domain. The output of the FT is said to be in the frequency domain because the horizontal axis is frequency. The vertical scale remains intensity. Although it isn't obvious from the input data, there is phase information in the input as well. Although all of the sound is sinusoidal, there is nothing that fixes the phases of the sine waves. This phase information appears in the frequency domain as the phases of the individual complex numbers, but often we don't care about it (and often we do too!). It just depends upon what you are doing. The calculation
const float value = sqrt((realValue * realValue) + (imagValue * imagValue));
retrieves the intensity information but discards the phase information. Taking the logarithm essentially just dampens the big peaks.
Hope this is helpful.
If you are getting strange results then one thing to check is the documentation for the FFT library to see how the output is packed. Some routines use a packed format where real/imaginary values are interleaved, or they may begin at the N/2 element and wrap around.
For a sanity check I would suggest creating sample data with known characteristics, e.g. Fs/2, Fs/4 (Fs = sample frequency), and comparing the output of the FFT routine with what you'd expect. Try creating both a sine and a cosine at the same frequency, as these should have the same magnitude in the spectrum, but different phases (i.e. the realValue/imagValue will differ, but the sum of squares should be the same).
If you're intending to use the FFT, though, then you really need to know how it works mathematically; otherwise you're likely to encounter other strange problems such as aliasing.
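In that spirit, a self-contained sanity check needs no FFT library at all - a single-bin DFT is enough to verify that the sine/cosine magnitudes match (N, the bin choice, and the test frequency Fs/4 are arbitrary):
#include <cmath>
#include <complex>
#include <cstdio>

// A sine and cosine at Fs/4 should yield the same magnitude in bin k = N/4,
// with phases 90 degrees apart. Single-bin DFT of each test signal.
int main() {
    const int N = 512;
    const double PI = std::acos(-1.0);
    const int k = N / 4;                      // target bin for frequency Fs/4
    std::complex<double> binSin = 0.0, binCos = 0.0;
    for (int n = 0; n < N; ++n) {
        const double w = -2.0 * PI * k * n / N;
        const std::complex<double> e(std::cos(w), std::sin(w));
        binSin += std::sin(2.0 * PI * n / 4.0) * e;   // sine at Fs/4
        binCos += std::cos(2.0 * PI * n / 4.0) * e;   // cosine at Fs/4
    }
    std::printf("|sin| = %f, |cos| = %f\n", std::abs(binSin), std::abs(binCos));
    // Expect both magnitudes ~= N/2 = 256, with a 90-degree phase difference.
    return 0;
}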