I am new to OpenCV and I am looking to fuse two images (panchromatic and multispectral) using OpenCV with C++. Note that I have already registered the reference image, so now I just need to fuse the reference and the sensed image. I could not find any functions that could help me with this. Did I miss something, or is there no direct way to fuse two images?
Please suggest any simple way to proceed with the fusion process.
Since you are trying to fuse together the panchromatic and multispectral images, you would need to:

Convert the input images into a suitable format (YUV works for me; HSI might too).
Fuse the luminance or intensity values of the two images, leaving the color channels untouched.
Combine the fused channel with the color information to produce the final image.

For example:
// 'ref' (registered multispectral) and 'trans' (panchromatic) are assumed to be
// 8-bit BGR images of the same size, loaded beforehand.
Mat tmp1, tmp2;
cvtColor(ref, tmp1, COLOR_BGR2GRAY);
cvtColor(trans, tmp2, COLOR_BGR2GRAY);

// Work in YUV so the luminance can be fused independently of the color.
cv::Mat yuv;
cvtColor(ref, yuv, COLOR_BGR2YUV);
vector<Mat> channels_ref;
split(yuv, channels_ref);

// Blend the two luminance images into the Y channel.
double alpha = 0.3;
double beta = 1 - alpha;
addWeighted(tmp1, alpha, tmp2, beta, 0.0, channels_ref[0]);

// Recombine the channels and convert back to BGR for display.
Mat output;
cv::merge(channels_ref, output);
cvtColor(output, output, COLOR_YUV2BGR);
imshow("Linear Blend", output);
waitKey(0);
I revisited this question after a long time and decided to have a go at it as there was no sample imagery available before. In the meantime, I have generated some - see later.
So, let's say you have a hi-res, panchromatic image with 10m resolution, something like this:
and a lo-res, multi-spectral image with 40m resolution of the same area, something like this:
Then, just using ImageMagick at the command-line for now (since it is installed on most Linux distros and is available for OSX and Windows), do what I was alluding to in the comments under your original question...
convert hi-res-panchromatic.tif \
\( lo-res-multispectral.tif -resize 400% -colorspace Lab -separate -delete 0 \) \
-set colorspace Lab -combine result.tif
So, that says... "Load up the hi-res image. Then, to one side, load the lo-res image and upsize it to 400% to account for the 40m resolution versus 10m resolution and convert it to Lab colorspace and separate the channels. Delete the Lightness (L) channel of the lo-res image. Now, returning to the main processing from the aside processing, we will have the hi-res image that we loaded first acting as the L channel along with the ab channels (i.e. colour information) of the lo-res image. Combine them from Lab back into RGB and save".
I see you haven't logged on in a year, so I will delay any OpenCV code-writing until anyone else expresses an interest in the question - but I hope the technique is understandable.
Note
As I don't happen to have any geo-registered panchromatic and multi-spectral imagery of the same place, I cheated somewhat... I took a single image and synthesised a panchromatic version using ImageMagick:
convert orig.tif -colorspace gray hi-res-panchromatic.tif
and I synthesised the lo-res multi-spectral image using:
convert orig.tif -resize 25% lo-res-multispectral.tif
Also, note that I just used Lab mode here to do the blending, because it is simpler, but in the comments I suggested using Principal Components Analysis. I may re-visit this again and implement that too...
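For anyone who wants to try the same Lab-replacement trick in OpenCV, here is a minimal C++ sketch. It is untested against real imagery and assumes 8-bit inputs, the file names used in the commands above, and the same 4:1 resolution ratio:

#include <opencv2/opencv.hpp>

int main() {
    // Hi-res panchromatic band and lo-res multispectral image.
    cv::Mat pan = cv::imread("hi-res-panchromatic.tif", cv::IMREAD_GRAYSCALE);
    cv::Mat ms  = cv::imread("lo-res-multispectral.tif", cv::IMREAD_COLOR);

    // Upsample the multispectral image to the panchromatic resolution (40m -> 10m).
    cv::Mat msUp;
    cv::resize(ms, msUp, pan.size(), 0, 0, cv::INTER_CUBIC);

    // Convert to Lab and replace the Lightness channel with the panchromatic band.
    cv::Mat lab;
    cv::cvtColor(msUp, lab, cv::COLOR_BGR2Lab);
    std::vector<cv::Mat> ch;
    cv::split(lab, ch);
    ch[0] = pan;
    cv::merge(ch, lab);

    // Back to BGR and save.
    cv::Mat result;
    cv::cvtColor(lab, result, cv::COLOR_Lab2BGR);
    cv::imwrite("result.tif", result);
    return 0;
}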
Related
I am trying to automate image conversion using ImageMagick CLI. One of the biggest problems with my image set is with tiny artifacts that should be cut out.
My images are generally consistent, with big objects (about 50% of image space) on a white background. Unfortunately, tiny artifacts sometimes just look bad and make trimming less efficient.
E.g. something like this:
In reality, the big object is not a solid color; that is just a simplified example. It is not necessarily a circle either; it can be a square, a rectangle, or something irregular.
I also cannot use any morphology like opening, closing, or erosion. Filters like Gaussian or median are also out of the question. I need to keep the big object untouched, since the highest possible quality is required.
An ideal solution would be something similar to the contours known from, for example, OpenCV, where I could find all the uniform objects and, if they don't meet certain rules (e.g. a size threshold of 5% of the whole image), fill them with white color.
Is there any similar mechanism in ImageMagick CLI? I've gone through the docs and haven't found a suitable solution to my problem.
Thanks in advance!
EDIT (ImageMagick version):
Version: ImageMagick 7.1.0-47 Q16-HDRI x86_64 20393 https://imagemagick.org
Copyright: (C) 1999 ImageMagick Studio LLC
License: https://imagemagick.org/script/license.php
Features: Cipher DPC HDRI Modules OpenMP(5.0)
Delegates (built-in): bzlib fontconfig freetype gslib heic jng jp2 jpeg lcms lqr ltdl lzma openexr png ps raw tiff webp xml zlib
Compiler: gcc (4.2)
EDIT (Real-life example):
As requested, here is a real-life example. A picture of a coin on a white background, but with some artifacts:
noise under the coin (slightly on the left)
dot under the coin (slightly on the right)
gray irregular shape in the top right corner
The objects might not necessarily be circles like coins, but we may assume that there will always be one object with a strong border (no white spaces on the border, like here) and that the rest is noise.
Here is one way to do that in ImageMagick 7.
First, threshold the image so that the background is white and the object(s) black; the exact threshold will likely be image dependent. NOTE: JPG is a lousy format for this, since solid colors are not truly solid due to the compression. If you can save your images in some non-lossy compressed or uncompressed format, that would be better.
Then decide on the largest area you need to remove, and use that with connected-components processing so that you have only two regions: one white background and one black object. This will be a mask. If you have several objects, that is fine also, but they all need to be black. I show the textual output listing the two regions. The mask is just the object with the noise removed.
Now use the original input, a white image and the mask to composite the first two images, so that where the mask is black the object is used, and where the mask is white the white image is used. Note that I create the white image by making a copy (clone) of the input and colorizing it 100% with white. The following is in Unix syntax.
Input:
magick coin.jpg -negate -threshold 2% -negate -type bilevel \
-define connected-components:verbose=true \
-define connected-components:area-threshold=1000 \
-define connected-components:mean-color=true \
-connected-components 4 mask.png
Objects (id: bounding-box centroid area mean-color):
0: 1000x1000+0+0 525.8,555.7 594824 gray(255)
44: 722x720+101+58 460.9,417.0 405176 gray(0)
magick coin.jpg \
\( +clone -fill white -colorize 100 \) \
mask.png \
-compose over -composite \
coin_result.png
Mask:
Result:
See https://imagemagick.org/script/connected-components.php
and https://imagemagick.org/Usage/compose/#compose and Composite Operator of Convert (-composite, -geometry) at https://imagemagick.org/Usage/layers/#convert
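For comparison, here is a rough OpenCV/C++ sketch of the same idea (threshold, keep only connected components above an area threshold, composite against white). The 250 threshold and the 1000 px area are assumptions to tune per image:

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("coin.jpg");

    // Threshold so the near-white background becomes black and objects white.
    cv::Mat gray, bin;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, bin, 250, 255, cv::THRESH_BINARY_INV);

    // Label 4-connected components and measure their areas.
    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(bin, labels, stats, centroids, 4);

    // Keep only components of at least 1000 px; everything else is noise.
    cv::Mat keep = cv::Mat::zeros(bin.size(), CV_8U);
    for (int i = 1; i < n; ++i)
        if (stats.at<int>(i, cv::CC_STAT_AREA) >= 1000)
            keep |= (labels == i);

    // Copy the original where the mask keeps it; leave white elsewhere.
    cv::Mat result(img.size(), img.type(), cv::Scalar::all(255));
    img.copyTo(result, keep);
    cv::imwrite("coin_result.png", result);
    return 0;
}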
I'm trying to add noise to an image and then denoise it, to see the difference in my object detection algorithm. I developed OpenCV code in C++ for detecting some objects in the image, and I would like to test the robustness of the code, so I tried to add some noise. That way I can check how the object detection rate changes when noise is added to the image. So, first I added some random Gaussian noise like this:
cv::Mat noise(src.size(), src.type());
cv::Scalar mean(10, 12, 34);    // per-channel mean
cv::Scalar sigma(1, 5, 50);     // per-channel standard deviation
cv::randn(noise, mean, sigma);
src += noise;
I got these images:
The original:
The noisy one:
So, is there a better noise model? And then, how do I denoise the image: are there any denoising algorithms?
OpenCV comes with the photo module, in which you can find an implementation of the Non-Local Means denoising algorithm. The documentation can be found here:
http://docs.opencv.org/3.0-beta/modules/photo/doc/denoising.html
As far as I know, it's the only suitable denoising algorithm available in both OpenCV 2.4 and OpenCV 3.x.
I'm not aware of any noise models in OpenCV other than randn. It shouldn't be a problem, however, to add a custom function that does that. There are some nice examples in Python (you should have no problem rewriting them in C++, as the OpenCV API remains roughly identical): How to add noise (Gaussian/salt and pepper etc) to image in Python with OpenCV
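For instance, a simple salt-and-pepper model in C++ (a quick sketch, not taken from the linked post; the fraction parameter and helper name are mine) could look like this:

#include <opencv2/opencv.hpp>

// Set a given fraction of pixels to pure black or pure white (salt-and-pepper noise).
void addSaltPepper(cv::Mat& img, double fraction)
{
    cv::RNG rng(cv::getTickCount());
    int count = static_cast<int>(fraction * img.total());
    for (int k = 0; k < count; ++k) {
        int r = rng.uniform(0, img.rows);
        int c = rng.uniform(0, img.cols);
        uchar v = rng.uniform(0, 2) ? 255 : 0;   // salt or pepper
        if (img.channels() == 1)
            img.at<uchar>(r, c) = v;
        else
            img.at<cv::Vec3b>(r, c) = cv::Vec3b(v, v, v);
    }
}

// Usage: addSaltPepper(src, 0.02); // corrupt about 2% of the pixels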
There's also one thing I don't understand: If you can generate noise, why would you denoise the image using some algorithm if you already have the original image without noise?
Check this tutorial; it might help you:
http://docs.opencv.org/trunk/d5/d69/tutorial_py_non_local_means.html
Especially this part:
OpenCV provides four variations of this technique.

cv2.fastNlMeansDenoising() - works with a single grayscale image
cv2.fastNlMeansDenoisingColored() - works with a color image
cv2.fastNlMeansDenoisingMulti() - works with an image sequence captured in a short period of time (grayscale images)
cv2.fastNlMeansDenoisingColoredMulti() - same as above, but for color images

Common arguments are:

h : parameter deciding filter strength. A higher h value removes noise better, but removes details of the image too. (10 is ok)
hForColorComponents : same as h, but for color images only. (normally the same as h)
templateWindowSize : should be odd. (recommended 7)
searchWindowSize : should be odd. (recommended 21)
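In C++ the colored variant is called the same way; a minimal sketch using the recommended parameters above, with src assumed to be your noisy BGR image:

#include <opencv2/photo.hpp>

cv::Mat denoised;
// h = 10, hForColorComponents = 10, templateWindowSize = 7, searchWindowSize = 21
cv::fastNlMeansDenoisingColored(src, denoised, 10, 10, 7, 21);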
And to add Gaussian noise to an image, maybe this thread will be helpful:
How to add Noise to Color Image - Opencv
I'm reading an image and doing some processing on the blue channel without changing the red or green channels.
When I finished processing the blue channel, I merged the three channels back into one RGB image. When I use imshow to view the channels, everything is alright, and I can see that the changes I've made only affect the blue channel; they do not affect the red or green ones.
Up to this point everything is alright!
But when I save the image using imwrite, the resulting image is slightly different: the changes made on the blue channel seem to get propagated to the red and green channels, as if imwrite were doing some kind of mean between the 3 channels:
Mat image = imread("image.jpg", IMREAD_COLOR);
vector<Mat> channels;
split(image, channels);
// Create some changes on channels[0] (the blue channel in BGR order)
merge(channels, image);
// Up to this point everything is alright
imwrite("modified.jpg", image); // Image changes when written
Is there any solution to avoid this behavior ?
JPG is a lossy format: https://en.wikipedia.org/wiki/JPEG
JPEG (/ˈdʒeɪpɛɡ/ JAY-peg) is a commonly used method of lossy compression for digital images, particularly for those images produced by digital photography. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality. JPEG typically achieves 10:1 compression with little perceptible loss in image quality.
Solution: Use a lossless format like PNG to save your image.
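For example, a minimal change to the snippet above (with a quality hint as a fallback if JPG is unavoidable):

// Lossless: pixel values are preserved exactly.
imwrite("modified.png", image);

// If JPG is required, raising the quality reduces (but never removes) the loss.
imwrite("modified.jpg", image, {cv::IMWRITE_JPEG_QUALITY, 100});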
I have an image as shown in the inset. I sampled it in Adobe Photoshop using the blue color as the image shows. The sampled image is shown in gray-scale on the left.
I know that OpenCV provides a similar method to sample images, the inRange() function. How can I find out the range of HSV values that Adobe checked for to sample my image? Since the resultant image is pretty much what I want, and I am not able to determine the range myself, it would be a great help if someone could guide me.
You can convert your image to HSV with cv::cvtColor(...); here is the documentation.
Then, according to Wikipedia, blue is near 240° on the hue channel of your image.
You can set something like maxHue = 270 and minHue = 180, or other values, to scan your image.
Maybe you should also set a minSaturation and a minValue to avoid black and white.
To find the best ranges you can link them to some sliders in a Qt GUI and change them until you get the same result as Photoshop...
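A minimal sketch of that approach with inRange. Note that OpenCV stores 8-bit hue as degrees/2, so the 180°-270° range above becomes 90-135; the saturation/value floors of 50 are assumptions to tune with the sliders:

cv::Mat hsv, mask;
cv::cvtColor(image, hsv, cv::COLOR_BGR2HSV);
// Hue 90-135 (i.e. 180°-270°); minimum S and V keep near-black/near-white pixels out.
cv::inRange(hsv, cv::Scalar(90, 50, 50), cv::Scalar(135, 255, 255), mask);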
Can anyone suggest a fast way of getting the foreground image?
Currently I am using the BackgroundSubtractorMOG2 class to do this, but it is very slow, and my task doesn't need such a complex algorithm.
I can get an image of the background at the beginning, and the camera position will not change, so I believe there is an easy way to do this.
I need to capture a blob of the object moving in front of the camera, and there will always be only one object.
I suggest the following simple solution:
Compute the difference matrix:
cv::absdiff(frame, background, absDiff);
This sets each pixel (i,j) in absDiff to |frame(i,j) - background(i,j)|. Each channel (e.g. R, G, B) is processed independently.
Convert the result to a single-channel grayscale image:
cv::cvtColor(absDiff, absDiffGray, cv::COLOR_BGR2GRAY);
Apply binary filter:
cv::threshold(absDiffGray, absDiffGrayThres, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
Here we used Otsu's method to determine an appropriate threshold level. If any noise remained after step 2, the binary filter removes it.
Apply blob detection to the absDiffGrayThres image. This can be one of the built-in OpenCV methods or manually written code that looks for the positions of pixels whose value is 255 (remember OpenCV's fast pixel retrieval operations); see the sketch below.
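For example, with cv::findContours (one of the built-in options mentioned above; this sketch keeps the single largest blob, matching the one-object assumption):

#include <algorithm>
#include <vector>

std::vector<std::vector<cv::Point>> contours;
cv::findContours(absDiffGrayThres, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
if (!contours.empty()) {
    // Pick the contour with the largest area and take its bounding box.
    auto largest = std::max_element(contours.begin(), contours.end(),
        [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
            return cv::contourArea(a) < cv::contourArea(b);
        });
    cv::Rect blob = cv::boundingRect(*largest);
}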
This process is fast enough to handle 640x480 RGB images at a frame rate of at least 30 fps on a fairly old Core 2 Duo 2.1 GHz with 4 GB RAM, without GPU support.
Hardware remark: be sure that your camera lens aperture is not set to auto-adjust. Imagine the following situation: you compute a background image at the beginning; then some object appears and covers a bigger part of the camera view. Less light reaches the lens and, because of automatic light adjustment, the camera increases the aperture; the background color changes, and the difference produces a blob in a place where there is actually no object.