CAP_PROP_WB_TEMPERATURE does not make any changes in OpenCV - C++

cap.open(0);
cap.set(CAP_PROP_AUTO_WB, 0);          // disable automatic white balance
cap.set(CAP_PROP_WB_TEMPERATURE, 10);  // try to set the colour temperature manually
I tried to set the white balance as above, but the capture does not change no matter what value I write to the temperature. Am I missing something?
I read on several forums that some properties have to be set within a certain interval. For example, CAP_PROP_AUTO_EXPOSURE expects one of two values: 0.75 turns auto exposure on and 0.25 turns it off. Once you set auto exposure off with 0.25, you can then set the exposure to any value you desire. But I did not see a similar convention documented for WB temperature.
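For reference, this is the exposure pattern I mean, sketched in Python (the 0.25/0.75 convention applies to the V4L2 backend, and 4600 is only an assumed in-range Kelvin value, since drivers may silently ignore out-of-range settings such as 10):

import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25)   # 0.25 = manual mode, 0.75 = auto (V4L2 convention)
cap.set(cv2.CAP_PROP_EXPOSURE, 0.01)        # a manual exposure value is now honoured

# The hope is that white balance follows the same pattern:
cap.set(cv2.CAP_PROP_AUTO_WB, 0)
cap.set(cv2.CAP_PROP_WB_TEMPERATURE, 4600)  # assumed Kelvin-scale value
print(cap.get(cv2.CAP_PROP_WB_TEMPERATURE)) # read back to check whether the driver accepted it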
Note:
Camera Model: DFM 27UR0135-ML - USB 3.0 color board camera
I don't think the camera lacks white-balance support, because the code below was working:
// White balance via the xphoto module
Ptr<xphoto::WhiteBalancer> wb = xphoto::createLearningBasedWB();
wb->balanceWhite(frame, frame);
But I didn't get good results with automatic white balancing, which is why I want to change parameters such as temperature, red, and blue manually.

Auto white balance is OFF at value 1 and ON at value 3, so to disable it:
cap.set(CAP_PROP_AUTO_WB, 1);

Related

Can I balance an extremely bright picture in Python? The picture is the result of thousands of pictures stitched together to form a panorama

My aim is to stitch one to two thousand images together. I find the key points in all the images, then I find the matches between them. Next, I find the homography between two images, taking into account the current homography and all the previous homographies. Finally, I warp the images based on the combined homography. (My code is written in Python 2.7.)
The issue I am facing is that when I overlay the warped images, they become extremely bright. The reason is that most of the area between two consecutive images is common/overlapping. So when I overlay them, the intensities of the common areas increase by a factor of 2, and the more images are overlaid, the brighter the values become, until eventually I get a matrix where all the pixels have the value 255.
Can I do something to adjust the brightness after every image I overlay?
I am combining/overlaying the images via the OpenCV function cv.addWeighted():
dst = cv.addWeighted(src1, alpha, src2, beta, gamma)
Here, I take alpha = beta = 1:
dst = cv.addWeighted(image1, 1, image2, 1, 0)
I also tried decreasing alpha and beta, but then another problem appears: by the time around 100 images have been overlaid, the first ones start to vanish, probably because their intensity approaches zero after being multiplied by 0.5 at every iteration. That call looked as follows (here I also set gamma to 5):
dst = cv.addWeighted(image1, 0.5, image2, 0.5, 5)
Can someone please help me solve the problem of images becoming extremely bright (when alpha = beta = 1) or vanishing after a certain point (when alpha and beta are both around 0.5)?
This is the code where I am overlaying the images:
images = []
for i in range(0, len(allWarpedImages)):
    for j in range(1, len(allWarpedImages[i])):
        allWarpedImages[i][0] = cv2.addWeighted(allWarpedImages[i][0], 1, allWarpedImages[i][j], 1, 0)
    images.append(allWarpedImages[i][0])
cv2.imwrite('/root/Desktop/thesis' + 'final.png', images[0])
When you stitch two images, the pixel values of the overlapping part should not simply add up. Ideally, two matching pixels have the same value (a spot in the first image should have the same value in the second image), so you just keep one of them.
In reality, two matching pixels may have slightly different values, so you can simply average them. Better still, adjust their exposure levels to match each other before stitching.
For many images stitched together, you will need to adjust all of their exposure levels to match. Equalizing exposure levels is a rather big topic; read about "histogram equalization" if you are not familiar with it yet.
Also, it is quite possible that there is high contrast across that many images, so you may need to make your stitched image an HDR (high dynamic range) image to prevent pixel value overflow/underflow.
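As a minimal sketch of the averaging idea (assuming all warped images are uint8 arrays of identical shape, with exact-zero pixels meaning "no data"; the function name is just for illustration):

import numpy as np

def blend_by_averaging(warped_images):
    acc = np.zeros(warped_images[0].shape, dtype=np.float64)    # running sum of pixel values
    count = np.zeros(warped_images[0].shape, dtype=np.float64)  # how many images cover each pixel
    for img in warped_images:
        mask = img > 0                        # valid (covered) pixels in this image
        acc += img.astype(np.float64) * mask
        count += mask
    count[count == 0] = 1                     # avoid division by zero where nothing was drawn
    return (acc / count).clip(0, 255).astype(np.uint8)

This replaces the ever-growing sum from cv.addWeighted with a per-pixel mean, so brightness stays constant no matter how many images overlap.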

How to manually set the white balance of a uEye camera?

How can I programmatically set the white balance of a uEye USB camera (from the manufacturer IDS) so that it works with no automatic white balance and pre-defined multipliers, given that the is_SetWhiteBalanceMultipliers() function is obsolete?
Some background: I work with a uEye USB2 camera (from IDS) connected to a Linux machine. I need to get an RGB image with pre-defined colors (of course, on a pre-defined scene) from the camera. For example, I want to configure the WB with a red multiplier of 1.25, green 1.0, and blue 2.0.
For this task, I am using the uEye SDK on Linux (header file ueye.h).
The manual (A: Camera basics > Camera parameters) states that the is_SetWhiteBalanceMultipliers() function is obsolete and suggests using the is_SetAutoParameter() function instead. It was easy to disable the auto white balance with is_SetAutoParameter(hCam, IS_SET_ENABLE_AUTO_WHITEBALANCE, 0, 0), but I am struggling to find a way to configure the red/green/blue multipliers. The parameters IS_SET_AUTO_WB_OFFSET and IS_SET_AUTO_WB_GAIN_RANGE work only when automatic white balance is engaged and do nothing when it is disabled. I will be grateful for any suggestions!
I had the same issue. I think you can achieve the old result using the function is_SetHardwareGain, to which you directly pass the master, red, green, and blue gains. In my case I disabled the white balance beforehand just to make sure it works. In this example, I wanted to set the RGB gains to [8%, 0%, 32%] and the master gain to 0% (not to be confused with gain factors: a 0% gain normally corresponds to a 1x gain factor):
double param1 = 0, param2 = 0;
is_SetColorCorrection(hCam, IS_CCOR_DISABLE, &param1);  // disable the color filter correction matrix
flagIDS = is_SetAutoParameter(hCam, IS_SET_ENABLE_AUTO_WHITEBALANCE, &param1, &param2);
param1 = WB_MODE_DISABLE;
flagIDS = is_SetAutoParameter(hCam, IS_SET_ENABLE_AUTO_SENSOR_WHITEBALANCE, &param1, &param2);
flagIDS = is_SetHardwareGain(hCam, 0, 8, 0, 32);        // master, red, green, blue gains in percent

Tuning background subtraction with OpenCV

My question is in the final paragraph.
I am trying to use one of OpenCV's background subtractors as a means of detecting human hands. The code that tries to do this is as follows:
cv::Ptr<cv::BackgroundSubtractor> pMOG2 = cv::createBackgroundSubtractorMOG2();
cv::Mat fgMaskMOG2;
pMOG2->apply(input, fgMaskMOG2, -1);  // -1 = automatically chosen learning rate
cv::namedWindow("FG Mask MOG 2");
cv::imshow("FG Mask MOG 2", fgMaskMOG2);
When I initially ran the program on my own test video, I was greeted with this (ignore the name of the rightmost window):
As you can see, no mask is detected for my moving hand at all, even though the background in my video is completely stationary (maybe one or two white pixels at a time showed up in the mask). So I tried a different video, one that many examples seem to use: moving traffic.
You can see it picked up on a moving car -very- slightly. I have tried (for both videos) setting the learning rate for the apply method to many values between 0 and 1, and there was not much variation at all from the results you can see above.
Have I missed anything in setting up the background subtraction, or are the videos particularly hard examples to deal with? Where, if anywhere, can I adjust the settings of the background subtraction to favour my setup? I will repeat that in both videos the camera is stationary.
My answer is in Python, but you can convert it and try it; accept if it works.
import cv2

cap = cv2.VideoCapture('video.mp4')  # placeholder source; use 0 for a camera

if not cap.isOpened():
    print("Error opening video stream or file")

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
min_thresh = 800      # smallest blob area (in pixels) to keep
max_thresh = 10000    # largest blob area (in pixels) to keep
fgbg = cv2.createBackgroundSubtractorMOG2()
connectivity = 4

# Read until the video is completed
while cap.isOpened():
    # Capture frame-by-frame
    ret, frame = cap.read()
    if ret:
        frame1 = frame.copy()
        fgmask = fgbg.apply(frame1)
        # Morphological opening removes small speckle noise from the mask
        fgmask = cv2.morphologyEx(fgmask, cv2.MORPH_OPEN, kernel)
        # Label connected foreground blobs and collect their stats
        output = cv2.connectedComponentsWithStats(fgmask, connectivity, cv2.CV_32S)
        for i in range(output[0]):
            x, y, w, h, area = output[2][i]
            if min_thresh <= area <= max_thresh:
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow('detection', frame)
        cv2.imshow('mask', fgmask)               # the two windows need different names
        if cv2.waitKey(25) & 0xFF == ord('q'):   # imshow needs waitKey to render
            break
    else:
        break

cap.release()
cv2.destroyAllWindows()
Tune cv2.createBackgroundSubtractorMOG2 by changing history and varThreshold, and by setting detectShadows=True. You can also change the kernel size to remove more noise, etc.
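For example (a sketch; these values are starting points to experiment with, not known-good settings for your videos):

import cv2

# history: how many past frames shape the background model
# varThreshold: lower values make the subtractor more sensitive to small changes
# detectShadows=True marks shadows as gray (127) instead of white (255) in the mask
fgbg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16, detectShadows=True)

# A larger kernel removes more noise from the mask, at the cost of fine detail
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))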
Try using the MOG subtractor instead of the MOG2 background subtractor; it might help you.
MOG is often the handier of the two, but it has been moved to the bgsegm module, which is part of the contrib packages. It is available on OpenCV's GitHub page itself:
https://github.com/Itseez/opencv_contrib
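Assuming an opencv-contrib build is installed, MOG is then a drop-in replacement (a sketch):

import cv2

# Requires the contrib modules (e.g. pip install opencv-contrib-python)
fgbg = cv2.bgsegm.createBackgroundSubtractorMOG()
# ...then use it exactly like MOG2:
# fgmask = fgbg.apply(frame)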

Image Fusion OpenCV

I am new to OpenCV, and I am looking to fuse two images (panchromatic and multispectral) using OpenCV with C++. Note that I have already registered the reference image, and now I just need to fuse the reference and the sensed image. I could not find any functions that could help me with this. Did I miss something, or is there no direct way to fuse two images?
Please suggest any simple way to proceed with the fusion process.
Since you are trying to fuse together the panchromatic and multispectral images, you would need to:
Convert the input images into a suitable format (YUV works for me; HSI might too).
Fuse the luminance or intensity values of the two images, leaving the color space untouched.
Combine the fused channel with the color information to produce the final image.
For example:
// ref   : the registered multispectral (colour) reference image
// trans : the panchromatic image, already registered to ref
Mat tmp1, tmp2, output;
cvtColor(ref, tmp1, CV_BGR2GRAY, 0);
cvtColor(trans, tmp2, CV_BGR2GRAY, 0);

// Convert the reference image to YUV and split the channels
cv::Mat yuv;
cvtColor(ref, yuv, CV_BGR2YUV, 3);
vector<Mat> channels_ref;
split(yuv, channels_ref);

// Fuse only the luminance (Y) channel as a weighted blend
double alpha = 0.3;
double beta = 1 - alpha;
addWeighted(tmp1, alpha, tmp2, beta, 0.0, channels_ref[0]);

// Recombine the fused luminance with the untouched colour channels
Mat merged[] = {channels_ref[0], channels_ref[1], channels_ref[2]};
cv::merge(merged, 3, output);
cvtColor(output, output, CV_YUV2BGR);
imshow("Linear Blend", output);
waitKey(0);
I revisited this question after a long time and decided to have a go at it as there was no sample imagery available before. In the meantime, I have generated some - see later.
So, let's say you have a hi-res, panchromatic image with 10m resolution something like this:
and a lo-res, multi-spectral image with 40m resolution of the same area, something like this:
Then, just using ImageMagick at the command-line for now (since it is installed on most Linux distros and is available for OSX and Windows), do what I was alluding to in the comments under your original question...
convert hi-res-panchromatic.tif \
\( lo-res-multispectral.tif -resize 400% -colorspace Lab -separate -delete 0 \) \
-set colorspace Lab -combine result.tif
So, that says: "Load up the hi-res image. Then, to one side, load the lo-res image, upsize it by 400% to account for the 40m versus 10m resolution, convert it to Lab colorspace, and separate the channels. Delete the Lightness (L) channel of the lo-res image. Now, returning from the aside to the main processing, the hi-res image we loaded first acts as the L channel alongside the a and b channels (i.e. the colour information) of the lo-res image. Combine them from Lab back into RGB and save."
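In OpenCV terms, the same Lab-swap pipeline might look like this (a rough Python sketch of the technique just described, not a tested implementation; the file names match the ImageMagick example above):

import cv2

pan = cv2.imread('hi-res-panchromatic.tif', cv2.IMREAD_GRAYSCALE)
ms = cv2.imread('lo-res-multispectral.tif')

# Upsample the multispectral image to the panchromatic resolution (the 400% step)
ms_up = cv2.resize(ms, (pan.shape[1], pan.shape[0]), interpolation=cv2.INTER_CUBIC)

# Swap the lightness channel for the panchromatic band, keeping the colour (a, b) channels
lab = cv2.cvtColor(ms_up, cv2.COLOR_BGR2LAB)
lab[:, :, 0] = pan
cv2.imwrite('result.tif', cv2.cvtColor(lab, cv2.COLOR_LAB2BGR))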
I see you haven't logged on in a year, so I will delay any OpenCV code-writing until anyone else expresses an interest in the question - but I hope the technique is understandable.
Note
As I don't happen to have any geo-registered panchromatic and multi-spectral imagery of the same place, I cheated somewhat... I took a single image and synthesised a panchromatic version using ImageMagick:
convert orig.tif -colorspace gray hi-res-panchromatic.tif
and I synthesised the lo-res multi-spectral image using:
convert orig.tif -resize 25% lo-res-multispectral.tif
Also, note that I just used Lab mode here to do the blending because it is simpler, but in the comments I suggested using Principal Components Analysis. I may revisit this and implement that too...

OpenCV - HSV range of values for tracking red color

Could you please tell me what the ranges of the Hue, Saturation, and Value channels are for intense red?
I am trying to use these values for color tracking, and I couldn't find a specific answer via Google.
You can map any color to OpenCV HSV. OpenCV actually uses a 180° hue cylinder (hue is stored as 0-179 so it fits in 8 bits), while ideally hue spans 360°; MS Paint, on the other hand, uses a 240° cylinder.
So to get an OpenCV hue value, open MS Paint, open the color mixer, read the hue value, and map it into OpenCV's range by multiplying it by 180/240. Saturation and Value in OpenCV each range from 0 to 255.
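Since red sits at the wrap-around point of the hue cylinder, tracking it usually takes two ranges combined (a sketch; the exact bounds, and the saturation/value floors that decide what counts as "intense", are assumptions to tune for your lighting):

import cv2
import numpy as np

frame = cv2.imread('image.png')  # placeholder input
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Red wraps around hue 0, so combine a low-hue band and a high-hue band
lower = cv2.inRange(hsv, np.array([0, 100, 100]), np.array([10, 255, 255]))
upper = cv2.inRange(hsv, np.array([170, 100, 100]), np.array([180, 255, 255]))
mask = cv2.bitwise_or(lower, upper)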
You are the only one who can answer this question, since we don't know your criteria for "intense red". Collect as many samples as you can, some of which you consider intense red and some which are close but just miss the cut. Convert them all to HSL. Study the pattern.
You might put together a small app that has sliders for the H, S, and L parameters and displays a block of color corresponding to the settings. That will tell you your limits very quickly.
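A minimal version of such a slider app in OpenCV Python (a sketch; the window and trackbar names are arbitrary, and it uses HSV rather than HSL since that is what OpenCV tracking code usually consumes):

import cv2
import numpy as np

def nothing(_):
    pass

cv2.namedWindow('picker')
cv2.createTrackbar('H', 'picker', 0, 179, nothing)
cv2.createTrackbar('S', 'picker', 255, 255, nothing)
cv2.createTrackbar('V', 'picker', 255, 255, nothing)

while True:
    h = cv2.getTrackbarPos('H', 'picker')
    s = cv2.getTrackbarPos('S', 'picker')
    v = cv2.getTrackbarPos('V', 'picker')
    # Fill a swatch with the chosen HSV colour and display it as BGR
    swatch = np.full((200, 300, 3), (h, s, v), dtype=np.uint8)
    cv2.imshow('picker', cv2.cvtColor(swatch, cv2.COLOR_HSV2BGR))
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break

cv2.destroyAllWindows()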