I want to upsample an array of images captured from a webcam with OpenCV, or the corresponding float arrays (pixel values don't need to be discrete integers). Unfortunately the upsampling ratio is not always an integer, so I can't figure out how to do it with simple linear interpolation.
Is there an easier way or a library to do this?
Well, I don't know of a library for frame-rate scaling.
But I can tell you that the most practical way to do it yourself is simply dropping or doubling frames.
Blending pictures by simple linear pixel interpolation will not improve quality; playback will still look jerky, and now blurry as well.
Proper frame-rate interpolation needs much more complicated algorithms.
Modern TVs have built-in hardware for that, and video-editing software such as After Effects has functions that do it.
These algorithms are able to create in-between pictures by motion analysis. But that is beyond the scope of a small solution.
So either keep searching for an existing library you can use, or do it by just dropping/doubling frames.
The ImageMagick MagickWand library will resize images using proper filtering algorithms - see the MagickResizeImage() function (and use the Sinc filter).
I am not 100% familiar with video capture, so I'm not sure what you mean by "pixel values don't need to be discrete integer". Does this mean the color information per pixel may not be integers?
I am assuming that by "the upsampling ratio is not always integer", you mean that you will upsample from one resolution to another, but you might not be doubling or tripling. For example, instead of 640x480 -> 1280x960, you may be doing 640x480 -> 800x600.
A simple algorithm might be:
For each pixel in the larger grid
Scale the x/y values to lie between 0 and 1 (divide x by width, y by height)
Multiply those scaled values by the width/height of the smaller grid -> xSmaller, ySmaller
Determine the four pixels that surround your point, via floating-point floor/ceiling functions
Get the x/y position of the point within that rectangle, between 0 and 1 (subtract the floor values from xSmaller, ySmaller) -> xInterp, yInterp
Start with black, and add your four colors, scaled by the xInterp/yInterp factors for each
You can make this faster for multiple frames by creating a lookup table to map pixels -> xInterp/yInterp values
I am sure there are much better algorithms out there than bilinear interpolation (bicubic, Lanczos, and many more). This seems like the sort of thing you'd want optimized at the processor level.
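The steps above amount to standard bilinear interpolation. A minimal pure-Python sketch for a single-channel (grayscale) grid of floats, assuming the source is a list of equal-length rows, might look like this:

```python
def resize_bilinear(src, new_w, new_h):
    """Resize a 2-D grid of float values to new_w x new_h using the
    four-neighbour weighted scheme described in the steps above."""
    old_h, old_w = len(src), len(src[0])
    out = []
    for j in range(new_h):
        # Map the destination row back into source coordinates.
        y = j * (old_h - 1) / max(new_h - 1, 1)
        y0 = int(y)
        y1 = min(y0 + 1, old_h - 1)
        fy = y - y0                      # yInterp
        row = []
        for i in range(new_w):
            x = i * (old_w - 1) / max(new_w - 1, 1)
            x0 = int(x)
            x1 = min(x0 + 1, old_w - 1)
            fx = x - x0                  # xInterp
            # Blend the four surrounding pixels by their fractional weights.
            top = src[y0][x0] * (1 - fx) + src[y0][x1] * fx
            bot = src[y1][x0] * (1 - fx) + src[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

small = [[0.0, 1.0],
         [2.0, 3.0]]
big = resize_bilinear(small, 3, 3)
print(big[1][1])  # centre pixel is the average of all four: 1.5
```

As the answer notes, a per-frame lookup table of (x0, x1, fx) and (y0, y1, fy) values would avoid recomputing the mapping for every frame of the same size.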
Use libswscale from the ffmpeg project. It is the most optimized and supports a number of different resampling algorithms.
I have been working on an algorithm which can detect seizure-inducing strobe lights in a video.
Currently, my code returns virtually every frame as capable of causing a seizure (3Hz flashes).
My code calculates the relative luminance of each pixel and sees how many times the luminance goes up then down, etc. or down then up, etc. by more than 10% within any given second.
Is there any way to do this without comparing each individual pixel against every other pixel within a second of it, while still returning only the correct frames?
An example of what I am trying to emulate: https://trace.umd.edu/peat
The common approach to solving this type of problem is to convert the frames to grayscale and then construct a cube containing frames from a 1 to 3 second time interval. From this cube, you can extract the time-varying characteristics of either individual pixels (noisy) or blocks (recommended). The resulting 1D curves can first be inspected manually to see if they actually show the 3Hz variation that you are looking for (sometimes, these variations are lost or distorted by the camera's auto-exposure settings). If you can see it, then you should be able to use an FFT to isolate and detect it automatically.
Convert the image to grayscale. Break the image up into blocks, maybe 16x16 or 64x64 or larger (experiment to see what works). Take the average luminance of each block over a minimum of 2/3 seconds. Create a wave of luminance over time. Do an fft on this wave and look for a minimum energy threshold around 3Hz.
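As a sketch of that last step, here is a pure-Python version that probes a single DFT bin rather than running a full FFT, just to stay dependency-free. The frame rate, synthetic luminance curve, and amplitude are all made up for illustration:

```python
import cmath
import math

def dft_magnitude(signal, freq_hz, fps):
    """Magnitude of the DFT of `signal` at the bin nearest freq_hz.
    Frames are assumed to be sampled uniformly at `fps` frames/second."""
    n = len(signal)
    k = round(freq_hz * n / fps)   # DFT bin index nearest freq_hz
    mean = sum(signal) / n         # remove the DC offset first
    acc = sum((s - mean) * cmath.exp(-2j * cmath.pi * k * t / n)
              for t, s in enumerate(signal))
    return abs(acc)

# Synthetic block-average luminance: a 3 Hz flash sampled at 30 fps for 2 s.
fps = 30
lum = [128 + 50 * math.sin(2 * math.pi * 3 * t / fps) for t in range(60)]

energy_3hz = dft_magnitude(lum, 3.0, fps)
energy_1hz = dft_magnitude(lum, 1.0, fps)
print(energy_3hz > 10 * energy_1hz)  # True: the 3 Hz component dominates
```

In practice you would compare the 3 Hz energy against an experimentally chosen threshold, as the answer above suggests, rather than against another bin.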
I am working on a microscope that streams live images via a built-in video camera to a PC, where further image processing can be performed on the streamed image. Any processing done on the streamed image must be done in "real-time" (minimal frames dropped).
We take the average of a series of static images to counter random noise from the camera to improve the output of some of our image processing routines.
My question is: how do I know if the image is no longer static - either the sample under inspection has moved or rotated/camera zoom-in or out - so I can reset the image series used for averaging?
I looked through some of the threads, and some ideas that seemed interesting:
Note: using Windows, C++ and Intel IPP. With IPP the image is a byte array (Ipp8u).
1. Hash the images, and compare the hashes (normal hash or perceptual hash?)
2. Use normalized cross correlation (IPP has many variations - which to use?)
Which do you guys think is suitable for my situation (speed)?
If your camera doesn't shake, you can, as inVader said, subtract images. A sum of the absolute values of all pixels in the difference image is then sometimes enough to tell whether the images are the same or different. However, if your noise, lighting level, etc. varies, this will not give you a good enough S/N ratio.
And in noisy conditions normal hashes are even more useless.
The best approach would be to identify that some feature of your object has changed, like its boundary (if it's regular) or its center of mass (if it's irregular). If you have a boundary position, you'll only need to analyze a single line of pixels, perpendicular to that boundary, to tell that the boundary has moved.
The center-of-mass position may be subject to frequent false negatives, but adding a total mass and/or moment of inertia may help.
If the camera shakes, you may have to align images before comparing (depending on comparison method and required accuracy, a single pixel misalignment might be huge), and that's where cross-correlation helps.
And further, you don't have to analyze every image. You can skip one, and if the next differs, discard both of them. That gives you twice as much time to analyze each image.
And if you are averaging images, you might just define an optimal amount of images you need and compare just the first and the last image in the sequence.
So, the simplest thing to try would be to take subsequent images, subtract them from each other and look at the difference. Then define some rules, including local and global thresholds for the difference, under which two images are considered equal. Simple subtraction of the bitmap/array data, looking for maxima and calculating the average difference across the whole thing, should be no problem to do in real time.
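A minimal sketch of that subtract-and-threshold idea, assuming grayscale frames stored as flat lists of 0-255 values; both threshold values here are purely illustrative and would need tuning for a real camera:

```python
def frames_match(prev, curr, pixel_thresh=25, mean_thresh=5.0):
    """Compare two grayscale frames (flat lists of 0-255 values).
    Returns True when they count as 'the same' under both a local rule
    (no single pixel differs by more than pixel_thresh) and a global
    rule (mean absolute difference stays below mean_thresh)."""
    diffs = [abs(a - b) for a, b in zip(prev, curr)]
    return max(diffs) <= pixel_thresh and sum(diffs) / len(diffs) <= mean_thresh

static = [100] * 64                                        # reference frame
noisy  = [100 + (1 if i % 2 else -1) for i in range(64)]   # sensor noise only
moved  = [100] * 32 + [180] * 32                           # object moved in

print(frames_match(static, noisy))   # True: noise stays below both thresholds
print(frames_match(static, moved))   # False: a region changed strongly
```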
If there are varying light conditions, or something moving in a predictable way (like a door opening and closing), then something more powerful, albeit slower, like Gaussian mixture models for background modeling, might be worth looking into. It is quite compute intensive, but can be parallelized pretty easily.
Motion-detection algorithms are what is used here.
http://www.codeproject.com/Articles/10248/Motion-Detection-Algorithms
http://www.codeproject.com/Articles/22243/Real-Time-Object-Tracker-in-C
First of all, I would take a series of images at a slow fps and downsample them to make them smaller, not by too much but enough to speed up the process.
Now you have several options:
You could compute a sum of absolute differences (SAD) of the two images by subtracting them, and use a threshold to decide whether the image has changed.
If you want to speed it up even further, I would suggest doing a progressive SAD using a small kernel and moving from the top of the image to the bottom. You can evaluate the cumulative amount of difference during the process and stop early once you are satisfied.
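A rough sketch of that early-exit idea, done strip by strip rather than with a sliding kernel, with made-up strip height and threshold values:

```python
def changed_progressive(prev, curr, width, strip_h=8, stop_thresh=1000):
    """Progressive SAD: accumulate absolute differences strip by strip,
    scanning top to bottom, and bail out as soon as the running total
    crosses stop_thresh. Frames are flat row-major grayscale lists.
    strip_h and stop_thresh are illustrative; tune them for your images."""
    total = 0
    height = len(prev) // width
    for y0 in range(0, height, strip_h):
        start = y0 * width
        end = min((y0 + strip_h) * width, len(prev))
        total += sum(abs(a - b) for a, b in zip(prev[start:end], curr[start:end]))
        if total > stop_thresh:
            return True        # early exit: no need to scan the rest
    return False

prev = [0] * 256    # 16x16 dark frame
curr = [50] * 256   # sample moved: everything got brighter
print(changed_progressive(prev, curr, 16))   # True (exits after first strip)
print(changed_progressive(prev, prev, 16))   # False
```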
I am writing an application in C++ that requires a little bit of image processing. Since I am completely new to this field I don't quite know where to begin.
Basically I have an image that contains a rectangle with several boxes. What I want is to be able to isolate that rectangle (x, y, width, height) as well as get the center coordinates of each of the boxes inside (18 total).
I was thinking of using a simple for-loop to loop through the pixels in the image until I find a pattern but I was wondering if there is a more efficient approach. I also want to see if I can do it efficiently without using big libraries like OpenCV.
Here are a couple example images, any help would be appreciated:
Also, what are some good resources where I could learn more about image processing like this?
The detection algorithm here can be fairly simple. Your box-of-squares (BOS) is always aligned with the edge of the image, and has a simple structure. Here's how I'd approach it.
Choose a colorspace. Assume RGB is OK for now, but it may work better in something else.
For each line
For each pixel, calculate the magnitude difference between the pixel and the pixel immediately below it. The magnitude difference is simply sqrt((X-x)^2 + (Y-y)^2 + (Z-z)^2), where X,Y,Z are the color coordinates of the first pixel, and x,y,z are the color coordinates of the pixel below it. For RGB, XYZ=RGB of course.
Calculate the maximum run length of consecutive difference magnitudes that are below a certain threshold magThresh. You may also choose a forgiving version of this: maximum run length, but allowing intrusions up to intrLen pixels long, each of which must be followed by a run of at least contLen matching pixels. This is to take care of possible line-to-line differences at the edges of the squares.
Find the largest set of consecutive lines that have the maximum run lengths above minWidth and below maxWidth.
Thus you've found the lines which contain the box, and by recomputing the run data from the difference step above, you'll know where the boxes are in horizontal coordinates.
Detecting box edges is done by repeating the same thing but scanning left-to-right within the box. At that point you'll have approximate box centroids that take no notice of bleeding between pixels.
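A sketch of the per-line step, using the magnitude difference defined above on (R, G, B) tuples; the threshold and the test scanlines are illustrative:

```python
def max_smooth_run(row_a, row_b, mag_thresh=10.0):
    """For one scanline pair, find the longest run of consecutive pixels
    whose colour-space distance to the pixel directly below stays under
    mag_thresh -- the run-length step of the scheme above."""
    best = run = 0
    for (R, G, B), (r, g, b) in zip(row_a, row_b):
        dist = ((R - r) ** 2 + (G - g) ** 2 + (B - b) ** 2) ** 0.5
        if dist < mag_thresh:
            run += 1
            best = max(best, run)
        else:
            run = 0    # an edge between the squares breaks the run
    return best

# A scanline pair that agrees for 5 pixels, then hits an edge, then 3 more.
a = [(200, 200, 200)] * 5 + [(10, 10, 10)] + [(200, 200, 200)] * 3
b = [(201, 199, 200)] * 5 + [(240, 240, 240)] + [(200, 200, 200)] * 3
print(max_smooth_run(a, b))  # 5
```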
This can all be accomplished by repeatedly running the image through various convolution kernels followed by thresholding, I'd think. The good thing is that both of those operations have very fast library implementations. You do not want to reimplement them by hand; it will likely be significantly slower.
If you insist on doing it yourself (personally I'd use OpenCV, it's industrial-strength and free), you're going to need an edge detection algorithm first. There are a good few out there on the internet, but be prepared for some frightening mathematics...
Many involve iterating over each pixel, lifting it and its neighbours' values into a matrix, and then convolving with a kernel matrix. Be aware that this has to be done for every pixel (in principle, though in your case you can stop at the first discovered rectangle) and for each colour channel, so it would be highly advisable to push this onto the GPU.
Why JPEG compression processes image by 8x8 blocks instead of applying Discrete Cosine Transform to the whole image?
8 X 8 was chosen after numerous experiments with other sizes.
The conclusions of experiments are:
1. Matrices larger than 8 x 8 are harder to do mathematical operations on (like transforms), are not supported by hardware, or take longer.
2. Matrices smaller than 8 x 8 don't carry enough information to continue along the pipeline, resulting in worse quality of the compressed image.
Because that would take "forever" to decode. I don't remember fully now, but I think you need at least as many coefficients as there are pixels in the block. If you code the whole image as a single block, then for every pixel you need to iterate through all the DCT coefficients.
I'm not very good at big-O calculations, but I guess the complexity would be O("forever"). ;-)
For modern video codecs I think they've started using 16x16 blocks instead.
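The cost argument can be made concrete with a back-of-the-envelope count: a naive 2-D DCT of a B-pixel block takes on the order of B x B multiply-adds, so for a fixed image the work grows linearly with block size. (Fast DCTs reduce this, but the trend is the same.)

```python
def naive_dct_ops(total_pixels, block_pixels):
    """Multiply-add count for a naive DCT applied block by block:
    each block of B pixels costs B*B ops, and there are total/B blocks."""
    blocks = total_pixels // block_pixels
    return blocks * block_pixels * block_pixels

pixels = 1920 * 1080
per_block = naive_dct_ops(pixels, 8 * 8)      # JPEG-style 8x8 blocks
whole     = naive_dct_ops(pixels, pixels)     # one giant block
print(whole // per_block)  # the whole-image transform costs 32400x more
```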
One good reason is that images (or at least the kind of images humans like to look at) have a high degree of information correlation locally, but not globally.
Every relatively smooth patch of skin, or piece of sky or grass or wall eventually ends in a sharp edge and is replaced by something entirely different. This means you still need a high frequency cutoff in order to represent the image adequately rather than just blur it out.
Now, because Fourier-like transforms such as the DCT "jumble" all the spatial information, you wouldn't be able to throw away any intermediate coefficients, nor the high-frequency components "you don't like".
There are of course other ways to discard visual noise and reconstruct edges at the same time, by preserving high-frequency components only where needed, or by iteratively reconstructing the image at finer levels of detail. You might want to look into scale-space representation and wavelet transforms.
What algorithms to use for image downsizing?
What is faster?
What algorithm is used for image resizing (especially downsizing, e.g. from a big 600x600 to a super small 6x6) by such giants as Flash, Silverlight, and HTML5?
Bilinear is the most widely used method and can be made to run about as fast as the nearest neighbor down-sampling algorithm, which is the fastest but least accurate.
The trouble with a naive implementation of bilinear sampling is that if you use it to reduce an image by more than half, you can run into aliasing artifacts similar to what you would encounter with nearest neighbor. The solution to this is to use a pyramid-based approach. Basically, if you want to reduce 600x600 to 30x30, you first reduce to 300x300, then 150x150, then 75x75, then 38x38, and only then use bilinear to reduce to 30x30.
When reducing an image by exactly half, the bilinear sampling algorithm becomes much simpler. Basically, for every other row and column of pixels:
y[i/2][j/2] = (x[i][j] + x[i+1][j] + x[i][j+1] + x[i+1][j+1]) / 4;
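A sketch of the pyramid idea combining that halving step with repeated reduction (pure Python, grayscale rows of floats; even dimensions are assumed at each step, and the final exact-size bilinear pass is left out):

```python
def halve(img):
    """Average each 2x2 block -- the simplified bilinear step above.
    img is a list of equal-length rows of floats with even dimensions."""
    return [[(img[i][j] + img[i + 1][j] + img[i][j + 1] + img[i + 1][j + 1]) / 4
             for j in range(0, len(img[0]), 2)]
            for i in range(0, len(img), 2)]

def pyramid_downsample(img, target_w):
    """Halve repeatedly while a further halving would not undershoot
    target_w (and the width stays even); a final bilinear resize would
    then bring the result to the exact target size."""
    while len(img[0]) % 2 == 0 and len(img[0]) // 2 >= target_w:
        img = halve(img)
    return img

img = [[float(x + y) for x in range(8)] for y in range(8)]
small = pyramid_downsample(img, 2)
print(len(small), len(small[0]))  # 2 2
```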
There is an excellent article at The Code Project showing the effects of various image filters.
For shrinking an image I suggest the bicubic algorithm; this has a natural sharpening effect, so detail in the image is retained at smaller sizes.
There's one special case: downsizing JPEGs by more than a factor of 8. A factor-of-8 rescale can be done directly on the raw JPEG data, without fully decompressing it, because JPEGs are stored as compressed blocks of 8x8 pixels with the block's average (DC) value first. As a result, it typically takes more time to read the file from disk or the network than it takes to downscale it.
Normally I would stick to a bilinear filter for scaling down. For resizing images to tiny sizes, though, you may be out of luck. Most icons are pixel-edited by hand to make them look their best.
Here is a good resource which explains the concepts quite well.