I have an image that was taken with uneven lighting: there is a light above and one below the camera, so the image is properly lit in the center (top to bottom) but quite dark on the left and right.
Is there a way to apply a brightening filter with a gradient, so that the closer a pixel is to the outer edge, the brighter it gets?
Solved it with the answer from Mannari this way:
* split the RGB image and switch to HLS, so we can work on lightness only
decompose3 (OriginalImage, ImageR, ImageG, ImageB)
trans_from_rgb (ImageR, ImageG, ImageB, ImageH, ImageL, ImageS, 'hls')
ImageWhite := ImageL
gen_rectangle1 (Rectangle, 0, 0, ImageHeight - 1, ImageWidth - 1)
* paint a white rectangle
paint_region (Rectangle, ImageWhite, ImageWhite, 255.0, 'fill')
* make sure BrightenWidth is a real number for the divisions below
tuple_real (BrightenWidth, BrightenWidth)
* two linear ramps, rising toward the left and right image borders
gen_image_gray_ramp (ImageGrayRampL, 0, -(255 / BrightenWidth), 128, 1, BrightenWidth / 2, ImageWidth, ImageHeight)
gen_image_gray_ramp (ImageGrayRampR, 0, 255 / BrightenWidth, 128, 1, ImageWidth - (BrightenWidth / 2), ImageWidth, ImageHeight)
add_image (ImageGrayRampL, ImageGrayRampR, ImageGrayRampRaw, 1, 0)
* scale the ramp and add the compensation to lightness and saturation
mult_image (ImageL, ImageGrayRampRaw, ImageComp, 0.003, 0)
add_image (ImageL, ImageComp, BrightenedImageL, 1, 0)
add_image (ImageS, ImageComp, BrightenedImageS, 1, 0)
trans_to_rgb (ImageH, BrightenedImageL, BrightenedImageS, ImageR, ImageG, ImageB, 'hls')
compose3 (ImageR, ImageG, ImageB, CompensatedImage)
Yes, of course.
You can find an example in the demo multi_image.dev.
Here is the demo code:
* This example demonstrates how to multiply two images using
* the operator 'mult_image'.
*
*
dev_close_window ()
dev_update_off ()
*
* Read an input image and generate a second input image
* by creating a gray value ramp
read_image (Scene00, 'autobahn/scene_00')
gen_image_gray_ramp (ImageGrayRamp, 0.5, 0.5, 128, 256, 256, 512, 512)
*
* Display the input images for the multiplication
dev_open_window_fit_image (Scene00, 0, 0, -1, -1, WindowHandle)
set_display_font (WindowHandle, 16, 'mono', 'true', 'false')
dev_display (Scene00)
disp_message (WindowHandle, 'Multiply the image with a gray value ramp', 'window', 12, 12, 'black', 'true')
disp_continue_message (WindowHandle, 'black', 'true')
stop ()
dev_display (ImageGrayRamp)
disp_message (WindowHandle, 'Created gray value ramp', 'window', 12, 12, 'black', 'true')
disp_continue_message (WindowHandle, 'black', 'true')
stop ()
*
* Multiply the images with factor 0.005 for the gray value
* adaptation and display the resulting image
mult_image (Scene00, ImageGrayRamp, ImageResult, 0.005, 0)
dev_display (ImageResult)
disp_message (WindowHandle, 'Resulting image of the product', 'window', 12, 12, 'black', 'true')
I think it's better to grab a reference photo (for example of a white sheet of paper) instead of creating a synthetic gradient image. A sketch of that idea follows.
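That reference-image approach is usually called flat-field correction. Here is a minimal OpenCV sketch of the idea, assuming you have captured a photo of a uniformly white sheet under the same lighting; the file names are placeholders:

#include <opencv2/opencv.hpp>

int main() {
    // Scene shot and a reference shot of a white sheet, same lighting.
    cv::Mat scene = cv::imread("scene.png");
    cv::Mat white = cv::imread("white_ref.png");
    CV_Assert(!scene.empty() && !white.empty() && scene.size() == white.size());

    cv::Mat sceneF, whiteF;
    scene.convertTo(sceneF, CV_32FC3);
    white.convertTo(whiteF, CV_32FC3);
    whiteF += 1.0f; // avoid division by zero in dark pixels

    // Per-pixel gain: mean brightness of the reference divided by its local
    // brightness, so the dark left/right edges get a gain > 1.
    cv::Mat gain;
    cv::divide(cv::Mat(scene.size(), CV_32FC3, cv::mean(whiteF)), whiteF, gain);

    cv::Mat corrected;
    cv::multiply(sceneF, gain, corrected);
    corrected.convertTo(corrected, CV_8UC3);
    cv::imwrite("corrected.png", corrected);
    return 0;
}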
When I run nppi and cv::cvtColor for color conversion, I get different results.
// *data_ptr = 1, 1, 1, 1  (two packed pixels, every byte set to 1)
cv::Mat image1(1, 2, CV_8UC2, data_ptr, 2*2); // 1 row, 2 cols, 2 bytes per pixel, step = 4
cv::Mat image2;
cv::cvtColor(image1, image2, cv::COLOR_YUV2RGB_UYVY);

NppiSize nppSize{2, 1};
nppiYUV422ToRGB_8u_C2C3R(
    (Npp8u*)data_ptr, 2*2, (Npp8u*)dst_data_ptr, 2*3, nppSize
);
// ------------ Results ------------
// opencv: 0, 153, 0, 0, 153, 0
// nppi: 0, 124, 0, 0, 124, 0
Does anyone know what's going on?
There are several YUV sub-formats.
Have you tried cv::COLOR_YUV2RGB_I420, cv::COLOR_YUV420p2BGR, etc.?
One of them might give you a result more similar to nppiYUV422ToRGB_8u_C2C3R.
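A small sketch for narrowing this down on the OpenCV side: build the same two-pixel buffer and print what each of OpenCV's packed 4:2:2 conversion codes produces, then compare against the nppi output. Note that even with a matching byte layout, the remaining numeric gap (153 vs. 124) may come from the two libraries using different YUV-to-RGB matrices, which swapping conversion codes alone will not fix.

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Two packed 4:2:2 pixels, every byte set to 1, as in the question.
    uchar data[4] = {1, 1, 1, 1};
    cv::Mat src(1, 2, CV_8UC2, data);

    // Candidate packed 4:2:2 layouts OpenCV knows about.
    struct { const char* name; int code; } codes[] = {
        {"COLOR_YUV2RGB_UYVY", cv::COLOR_YUV2RGB_UYVY},
        {"COLOR_YUV2RGB_YUY2", cv::COLOR_YUV2RGB_YUY2},
        {"COLOR_YUV2RGB_YVYU", cv::COLOR_YUV2RGB_YVYU},
    };
    for (const auto& c : codes) {
        cv::Mat rgb;
        cv::cvtColor(src, rgb, c.code);
        std::cout << c.name << ": " << cv::format(rgb, cv::Formatter::FMT_CSV) << "\n";
    }
    return 0;
}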
I am trying to use optical flow on some videos, but it doesn't work at all when I don't resize the video.
According to the documentation I have set the parameters as:
calcOpticalFlowFarneback(prevgray, current, flow, 0.5, 1, 10, 2, 5, 1.1, 0);
For videos that are scaled down or up, it works fine. But when the videos keep their original size, it does not work at all.
I have tried changing the parameters of the function:
calcOpticalFlowFarneback(prevgray, current, flow, 0.5, 1, 4, 2, 3, 1.1, 0);
//or
calcOpticalFlowFarneback(prevgray, current, flow, 0.5, 1, 50, 2, 5, 1.2, 0);
//or
calcOpticalFlowFarneback(prevgray, current, flow, 0.5, 1, 100, 20, 7, 1.2, 0);
...
But none of them makes any difference. The result for original-size videos is still no flow.
The Lucas-Kanade algorithm has exactly the same problem.
When I scale down 720 x 480 or other high-resolution videos to half size (360 x 240), the optical flow algorithms still work well. But they don't work at all for videos at their original size, no matter how I set the parameters.
How can I make Optical Flow work for videos without resizing the video?
According to this post, the problem was that current and prev were pointing to the same frame.
It was fixed by using frame.clone() instead of frame in the queue of frames.
deque<Mat> frames;
...
frames.push_back(frame.clone()); // clone, so the deque entry owns its own pixel buffer
...
current = frame;
prev = frames[frames.size() - 5]; // five frames back
...
calcOpticalFlowFarneback(prevgray, current, flow, 0.5, 1, 10, 2, 5, 1.1, 0);
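Pieced together into a self-contained loop, the fix might look like this; the five-frame lag comes from the snippet above, and the video path is a placeholder:

#include <opencv2/opencv.hpp>
#include <deque>

int main() {
    cv::VideoCapture cap("input.avi"); // placeholder path
    CV_Assert(cap.isOpened());

    std::deque<cv::Mat> frames;
    cv::Mat frame, prevgray, gray, flow;
    while (cap.read(frame)) {
        // Clone, otherwise every deque entry aliases the capture buffer.
        frames.push_back(frame.clone());
        if (frames.size() < 5)
            continue;
        cv::cvtColor(frames.front(), prevgray, cv::COLOR_BGR2GRAY); // five frames back
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::calcOpticalFlowFarneback(prevgray, gray, flow,
                                     0.5, 1, 10, 2, 5, 1.1, 0);
        frames.pop_front(); // keep only the last five frames
    }
    return 0;
}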
1. Some information: I would like to develop a kind of circle recognition with the help of OpenCV. I successfully set up a connection between Swift and Objective-C++, but strangely I have some problems with the circle recognition algorithm: not all of the circles in my image get detected!
2. Have a look at my code:
+(UIImage *)ConvertImage:(UIImage *)image {
    cv::Mat matImage;
    UIImageToMat(image, matImage);
    cv::Mat modImage;
    cv::medianBlur(matImage, matImage, 5);
    cv::cvtColor(matImage, modImage, CV_RGB2GRAY);
    cv::GaussianBlur(modImage, modImage, cv::Size(9, 9), 2, 2);
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(modImage, circles, CV_HOUGH_GRADIENT, 1, 1, 100, 50, 0, 0);
    // log the detected circles (x, y, radius)
    for (auto i = circles.begin(); i != circles.end(); ++i)
        std::cout << *i << ' ';
    // draw the detected center points and outlines into the image
    for (size_t i = 0; i < circles.size(); i++)
    {
        cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        cv::circle(matImage, center, 3, cv::Scalar(0, 255, 0), -1, 8, 0);
        cv::circle(matImage, center, radius, cv::Scalar(0, 0, 255), 3, 8, 0);
    }
    UIImage *binImg = MatToUIImage(matImage);
    return binImg;
}
As you can see in the image [click], this issue appears:
Only 3 of 7 circles get detected!
So in the docs I found the parameter explanations for this line:
cv::HoughCircles(modImage, circles, CV_HOUGH_GRADIENT, 1, 1, 100, 50, 0, 0);
dp = 1: The inverse ratio of resolution.
min_dist = modImage.rows/8: Minimum distance between detected centers.
param_1 = 200: Upper threshold for the internal Canny edge detector.
param_2 = 100: Threshold for center detection.
min_radius = 0: Minimum radius to be detected. If unknown, put zero as default.
max_radius = 0: Maximum radius to be detected. If unknown, put zero as default.
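For reference, those documented values map onto the call like this; the numbers are the tutorial's, not a tested fix for this particular image, and the snippet is meant as a drop-in replacement for the HoughCircles line in ConvertImage above:

cv::HoughCircles(modImage, circles, CV_HOUGH_GRADIENT,
                 1,                  // dp: inverse ratio of accumulator resolution
                 modImage.rows / 8,  // min_dist: a min_dist of 1 lets near-duplicate centers compete
                 200,                // param_1: upper Canny threshold
                 100,                // param_2: accumulator threshold, lower it to detect more circles
                 0, 0);              // min_radius / max_radius: 0 = unconstrained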
3. My question
How to get rid of the issue mentioned above?
Any help would be very appreciated :)
For issue number 2: the outline should be colored, not white!
What color should it be? At any rate, you draw that circle in your code with this line.
circle( matImage, center, radius, Scalar(0,0,255), 3, 8, 0 );
If you want to change the color, you can change the values you have declared in Scalar(0,0,255).
If you don't want the circle there at all, you can remove that line of code.
Your images seem to be noise-free. If the image will always contain circles, you can extract the contours and fit circles using least squares.
You can get the circle-fit equations here. It is a straightforward implementation: create a structure for the circle parameters (center and radius), fit the circle, store the parameters in the structure, and use them to draw the circle with OpenCV. A sketch follows below.
You can also generate points on the circle using the "ellipse2poly" function.
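A minimal sketch of that approach using the algebraic (Kasa) least-squares fit; fitCircle is a hypothetical helper name, and the thresholding assumes a clean, high-contrast image:

#include <opencv2/opencv.hpp>
#include <vector>
#include <cmath>

struct CircleFit { cv::Point2f center; float radius; };

// Algebraic least-squares fit: solve x^2 + y^2 + a*x + b*y + c = 0
// for (a, b, c); then center = (-a/2, -b/2), radius = sqrt(a^2/4 + b^2/4 - c).
static CircleFit fitCircle(const std::vector<cv::Point>& pts) {
    cv::Mat A((int)pts.size(), 3, CV_64F), rhs((int)pts.size(), 1, CV_64F);
    for (int i = 0; i < (int)pts.size(); ++i) {
        double x = pts[i].x, y = pts[i].y;
        A.at<double>(i, 0) = x;
        A.at<double>(i, 1) = y;
        A.at<double>(i, 2) = 1.0;
        rhs.at<double>(i) = -(x * x + y * y);
    }
    cv::Mat sol;
    cv::solve(A, rhs, sol, cv::DECOMP_SVD); // least-squares solution
    double a = sol.at<double>(0), b = sol.at<double>(1), c = sol.at<double>(2);
    return { cv::Point2f((float)(-a / 2), (float)(-b / 2)),
             (float)std::sqrt(a * a / 4 + b * b / 4 - c) };
}

int main() {
    cv::Mat gray = cv::imread("circles.png", cv::IMREAD_GRAYSCALE); // placeholder
    CV_Assert(!gray.empty());
    cv::Mat bin;
    cv::threshold(gray, bin, 128, 255, cv::THRESH_BINARY);
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);

    cv::Mat vis;
    cv::cvtColor(gray, vis, cv::COLOR_GRAY2BGR);
    for (const auto& cnt : contours) {
        if (cnt.size() < 5) continue; // too few points for a stable fit
        CircleFit f = fitCircle(cnt);
        cv::circle(vis, cv::Point(cvRound(f.center.x), cvRound(f.center.y)),
                   cvRound(f.radius), cv::Scalar(0, 0, 255), 2);
    }
    cv::imwrite("fitted.png", vis);
    return 0;
}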
The "red" color-detection is not working yet. The following code is supposed to detect a red bar from an input-image and return a mask-image showing a white bar at the corresponding location.
The corresponding HSV-values of the "red" bar in the inputRGBimage are : H = 177, S = 252, V = 244
cv::Mat findColor(cv::Mat inputRGBimage) {
cv::Mat imageHSV(inputRGBimage.rows, inputRGBimage.cols, CV_8UC3);
cv::Mat imgThreshold(inputRGBimage.rows, inputRGBimage.cols, CV_8UC1);
// convert input-image to HSV-image
cvtColor(inputRGBimage, imageHSV, cv::COLOR_BGR2HSV);
// for red: (H < 14)
// cv::inRange(imageHSV, cv::Scalar(0, 53, 185, 0), cv::Scalar(14, 255, 255, 0), imgThreshold);
// or (H > 165) (...closing HSV-circle)
cv::inRange(imageHSV, cv::Scalar(165, 53, 185, 0), cv::Scalar(180, 255, 255, 0), imgThreshold);
return imgThreshold;
}
The two images below show the inputRGBimage (top) and the returned imgThreshold (bottom). As you can see, the mask does not show the white bar at the expected "red" bar, but for some unknown reason at the "blue" bar. Why?
The following change to the cv::inRange line of code (i.e. H > 120) and its result again illustrate that the color detection is not acting as expected:
// or (H > 120) (...closing HSV-circle)
cv::inRange(imageHSV, cv::Scalar(120, 53, 185, 0), cv::Scalar(180, 255, 255, 0), imgThreshold);
As a third example: (H > 100):
// or (H > 100) (...closing HSV-circle)
cv::inRange(imageHSV, cv::Scalar(100, 53, 185, 0), cv::Scalar(180, 255, 255, 0), imgThreshold);
Why do my 3 code examples (decreasing the H value from 165 to 100) show the unexpected mask order "blue -> violet -> red -> orange" instead of the roughly expected HSV-wheel order "red -> violet -> blue -> green -> yellow -> orange"?
HSV in OpenCV has the ranges:
0 <= H <= 180,
0 <= S <= 255,
0 <= V <= 255 (not quite like in the illustrating graphic above, but the order of colors should be the same for OpenCV HSV colors, or not?)
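As a quick sanity check of the channel-order suspicion (see the answer below), converting a single known-red pixel shows where H lands; a minimal sketch:

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // One pure red pixel in B,G,R order. If the source bytes were actually
    // R,G,B, the same data would be read as blue and H would land near 120.
    cv::Mat px(1, 1, CV_8UC3, cv::Scalar(0, 0, 255));
    cv::Mat hsv;
    cv::cvtColor(px, hsv, cv::COLOR_BGR2HSV);
    cv::Vec3b v = hsv.at<cv::Vec3b>(0, 0);
    std::cout << (int)v[0] << " " << (int)v[1] << " " << (int)v[2] << "\n"; // 0 255 255
    return 0;
}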
Make sure that the image uses the channel order B, G, R. Also, for the color red you need to check two ranges of values, one around H=0 and the other around H=180. You could try this function:
cv::Mat findColor(const cv::Mat & inputBGRimage, int rng=15)
{
    // Make sure that your input image uses the channel order B, G, R (check not implemented).
    cv::Mat input = inputBGRimage.clone();
    cv::Mat imageHSV;//(input.rows, input.cols, CV_8UC3);
    cv::Mat imgThreshold, imgThreshold0, imgThreshold1;//(input.rows, input.cols, CV_8UC1);
    assert( ! input.empty() );

    // convert input-image to HSV-image
    cv::cvtColor( input, imageHSV, cv::COLOR_BGR2HSV );

    // In the HSV color space the color 'red' is located around the H-value 0 and also around the
    // H-value 180. That is why you need to threshold your image twice and then combine the results.
    cv::inRange(imageHSV, cv::Scalar( 0, 53, 185, 0), cv::Scalar(rng, 255, 255, 0), imgThreshold0);
    if ( rng > 0 )
    {
        cv::inRange(imageHSV, cv::Scalar(180-rng, 53, 185, 0), cv::Scalar(180, 255, 255, 0), imgThreshold1);
        cv::bitwise_or( imgThreshold0, imgThreshold1, imgThreshold );
    }
    else
    {
        imgThreshold = imgThreshold0;
    }
    return imgThreshold;
}
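A hedged usage example (the file name is a placeholder), showing the function applied to an image loaded from disk, which OpenCV reads in B, G, R order:

int main() {
    cv::Mat input = cv::imread("bars.png"); // placeholder path; imread yields B, G, R
    cv::Mat mask = findColor(input, 15);    // +/- 15 around H = 0 / 180
    cv::imwrite("red_mask.png", mask);
    return 0;
}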
Good luck! :)
On an image frame, I use
void ellipse(Mat& img, Point center, Size axes, double angle, double startAngle, double endAngle, const Scalar& color, int thickness=1, int lineType=8, int shift=0)
to draw an ellipse, and I want to set the ellipse color to green [RGB value: (165, 206, 94)].
So I set the parameter const Scalar& color to
cv::Scalar(94.0, 206.0, 165.0, 0.0); // as BGR order, supposing the value range is 0.0 - 255.0
cv::Scalar(94.0/255.0, 206.0/255.0, 165.0/255.0, 0.0); // supposing the value range is 0.0 - 1.0
I also tried the RGB alternative.
CV_RGB(165.0, 206.0, 94.0); // as RGB order, supposing the value range is 0.0 - 255.0
CV_RGB(165.0/255.0, 206.0/255.0, 94.0/255.0); // supposing the value range is 0.0 - 1.0
But the color displayed is white [RGB value (255, 255, 255)], not the desired green one.
What have I missed here? Any suggestions please. Thank you.
EDIT:
Let me put the whole related code here. According to OpenCV iOS - Video Processing, this is the CvVideoCamera config in - (void)viewDidLoad;:
self.videoCamera = [[CvVideoCamera alloc] initWithParentView:imgView];
[self.videoCamera setDelegate:self];
self.videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionFront;
self.videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPreset352x288;
self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
self.videoCamera.defaultFPS = 30;
self.videoCamera.grayscaleMode = NO;
[self.videoCamera adjustLayoutToInterfaceOrientation:UIInterfaceOrientationPortrait];
Then after [self.videoCamera start]; is called, the (Mat&)image is captured and can be processed in the CvVideoCameraDelegate method - (void)processImage:(Mat&)image;. Here is the code to draw an ellipse:
- (void)processImage:(Mat&)image {
    NSLog(@"image.type(): %d", image.type()); // got 24
    // image.convertTo(image, CV_8UC3); // tried converting the image type, but the result is the same with or without this line
    NSLog(@"image.type(): %d", image.type()); // also 24
    cv::Scalar colorScalar = cv::Scalar( 94, 206, 165 );
    cv::Point center( image.size().width*0.5, image.size().height*0.5 );
    cv::Size size( 100, 100 );
    cv::ellipse( image, center, size, 0, 0, 360, colorScalar, 4, 8, 0 );
}
Eventually, the ellipse is still white, not the desired green one.
Setting alpha to 255 can fix this problem:
Scalar(94, 206, 165, 255)
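Applied to the processImage code from the question, that is a one-line change of the drawing call:

cv::ellipse( image, center, size, 0, 0, 360, cv::Scalar(94, 206, 165, 255), 4, 8, 0 ); // alpha = 255, fully opaque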
As mrgloom correctly points out in the comments, it might be because of the type of your image [the Mat object where you want to draw, i.e. Mat &img in the ellipse() function].
cv::Scalar(94, 206, 165) is the desired green color for 8UC3-type images. Setting these values in a 32FC3 image will result in white.
You can use
src.convertTo(src, CV_8UC3);
where CV_8UC3 means 8-bit unsigned chars with 3 color channels. (Note that convertTo changes only the depth; the channel count stays the same as the input's.)
You can find more information in the OpenCV docs.
After that your ellipse should be green; if it doesn't help, post the whole code.
I was having a similar problem and managed to fix it by first converting the image to BGR. In your case the processImage function would look like this:
-(void)processImage:(Mat&)image
{
    cvtColor(image, image, CV_RGBA2BGR);
    cv::Scalar colorScalar = cv::Scalar( 94, 206, 165 );
    cv::Point center( image.size().width*0.5, image.size().height*0.5 );
    cv::Size size( 100, 100 );
    cv::ellipse( image, center, size, 0, 0, 360, colorScalar, 4, 8, 0 );
}
The only line I have added to your code is:
cvtColor(image, image, CV_RGBA2BGR);
If you also log the channel, depth, and type information in the above function as follows:
NSLog(#"Before conversion");
NSLog(#"channels %d", image.channels());
NSLog(#"depth %d", image.depth());
NSLog(#"type %d", image.type());
NSLog(#"element size %lu", image.elemSize());
cvtColor(image, image, CV_RGBA2BGR);
NSLog(#"After conversion");
NSLog(#"channels %d", image.channels());
NSLog(#"depth %d", image.depth());
NSLog(#"type %d", image.type());
NSLog(#"element size %lu", image.elemSize());
you will see before conversion:
channels 4
depth 0
type 24
element size 4
which I think is CV_8UC4, and after conversion it becomes:
channels 3
depth 0
type 16
element size 3
which is CV_8UC3.
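Those numeric codes follow directly from how OpenCV packs depth and channel count into the type value; a quick check:

#include <opencv2/core.hpp>
#include <iostream>

int main() {
    // type = depth + ((channels - 1) << 3), and CV_8U has depth 0,
    // so CV_8UC4 = 3 << 3 = 24 and CV_8UC3 = 2 << 3 = 16.
    std::cout << CV_8UC4 << "\n"; // 24
    std::cout << CV_8UC3 << "\n"; // 16
    return 0;
}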
I guess one of the reasons why it does not work without cvtColor is that the OpenCV drawing functions don't support alpha transparency when the target image is 4-channel, as mentioned in the OpenCV documentation. So by converting with CV_RGBA2BGR we take out the alpha channel. Having said that, I did not manage to get it to work with:
cvtColor(image, image, CV_RGBA2RGB);
With this, the red and blue colors are inverted in the image. So although the BGR conversion seems to work, I am not sure if that is the actual reason.