How to increase brightness in a specific part of an image using histogram equalization - C++

Here is my code for increasing brightness using histogram equalization, but it changes the overall brightness of the image. I need to increase the brightness only in a specific region.
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main( int argc, const char** argv {
Mat img = imread("MyPic.JPG", CV_LOAD_IMAGE_COLOR);
if (img.empty())
{
cout << "Image cannot be loaded..!!" << endl;
return -1;
}
cvtColor(img, img, CV_BGR2GRAY);
Mat img_hist_equalized;
equalizeHist(img, img_hist_equalized); //equalize the histogram
namedWindow("Original Image", CV_WINDOW_AUTOSIZE);
namedWindow("Histogram Equalized", CV_WINDOW_AUTOSIZE);
imshow("Original Image", img);
imshow("Histogram Equalized", img_hist_equalized);
waitKey(0); //wait for key press
destroyAllWindows(); //destroy all open windows
return 0;}
Input image:
Output I get:
But the output I expected:
The above code is based on histogram equalization. If there is any other approach, please specify it here.

The answer to your question lies in the method used to generate the last image with the expected output. Which commands of a graphical editor did you use to generate that result? Just repeat them with OpenCV.
You can try something like this:
increase the contrast,
reduce the number of colors to 4,
replace all three "near black" colors with real black, and replace the one "near white" color with real white.
You will have something like this (top image after second step, bottom image after third step):
Is it good enough for your task? Do you need better quality? Anyway, try to process your image in a graphical editor before coding, and only when you understand which operations you need to apply to your image, try to implement them using OpenCV; a rough sketch of the three steps above is given below.
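For illustration, here is a minimal OpenCV sketch of those three steps. The contrast gain, the 64-level quantization step, and the black/white cut-off are guesses, not the exact editor operations used for the expected-output image:
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"

using namespace cv;

int main()
{
    Mat img = imread("MyPic.JPG", 0); // load as grayscale

    // Step 1: increase contrast with a simple linear stretch (gain and offset are guesses)
    Mat contrasted;
    img.convertTo(contrasted, -1, 2.0, -128);

    // Step 2: coarsely quantize to about 4 gray levels (bins of roughly 64)
    Mat quantized = (contrasted / 64) * 64;

    // Step 3: send the darker levels to black and the brightest level to white
    Mat result;
    threshold(quantized, result, 191, 255, THRESH_BINARY);

    imshow("Result", result);
    waitKey(0);
    return 0;
}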
To understand the problem better, let's look at the histogram of your initial image:
We can see that the difference between the "expected black" and "expected white" colors is very small. Moreover, inside the "white circle" we can see pixels with the "expected black" colors. So changing the histogram is not enough to fix this mix of pixels; we need to analyze the surroundings of each pixel (in fact, I did that in the first step, when I increased the contrast of the image). We can discuss methods for it, but first of all we need more information and effort from you. Editing the palette without analyzing the surroundings will give you strange results like this:
So, you need to open any graphical editor and find a way to convert your input to the expected output.

Related

Finding connected components using OpenCV

I am trying to find and separate all edges in an edge-detected image using Python OpenCV. The edges can form a contour, but they don't have to. I just want all connected edge pixels to be grouped together. So technically the algorithm may procedurally sound like this:
For each edge pixel, find a neighbouring (connected) edge pixel and add it to the current subdivision of the image, until you can't find one anymore.
Then move on to the next unchecked edge pixel, start a new subdivision, and do 1) again (a rough sketch of this grouping is shown below).
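That procedure is essentially a flood fill / breadth-first search over edge pixels. A minimal sketch (in C++, the language used elsewhere on this page; it assumes a binary 8-bit edge image and 8-connectivity):
#include <opencv2/opencv.hpp>
#include <queue>
using namespace cv;

// Group 8-connected non-zero (edge) pixels; returns a label image where
// 0 means "not an edge" and 1..N identify the N groups found.
Mat groupEdgePixels(const Mat& edges)
{
    Mat labels = Mat::zeros(edges.size(), CV_32S);
    int current = 0;
    for (int y = 0; y < edges.rows; ++y)
        for (int x = 0; x < edges.cols; ++x)
        {
            if (edges.at<uchar>(y, x) == 0 || labels.at<int>(y, x) != 0)
                continue;               // background or already grouped
            ++current;                  // start a new subdivision
            std::queue<Point> q;
            q.push(Point(x, y));
            labels.at<int>(y, x) = current;
            while (!q.empty())
            {
                Point p = q.front(); q.pop();
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx)
                    {
                        Point n(p.x + dx, p.y + dy);
                        if (n.x < 0 || n.y < 0 || n.x >= edges.cols || n.y >= edges.rows)
                            continue;
                        if (edges.at<uchar>(n) != 0 && labels.at<int>(n) == 0)
                        {
                            labels.at<int>(n) = current; // same group as its neighbour
                            q.push(n);
                        }
                    }
            }
        }
    return labels;
}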
I have looked through cv2.findContours, but the results weren't satisfying, maybe because it is intended for contours (enclosed edges) rather than free-ended ones. Here are the results:
Original Edge Detected:
After Contour Processing:
I expected each of the five edges to be grouped into its own subdivision of the image, but apparently the cv2.findContours function breaks 2 of the edges even further into subdivisions, which I don't want.
Here is the code I used to save these 2 images:
import cv2

def contourForming(imgData):
    cv2.imshow('Edge', imgData)
    cv2.imwrite('EdgeOriginal.png', imgData)

    contours = cv2.findContours(imgData, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    cv2.imshow('Contours', imgData)
    cv2.imwrite('AfterFindContour.png', imgData)
    cv2.waitKey(0)
    pass
There are restrictions on my implementation, however. I have to use Python 2.7 and OpenCV 2; I cannot use any other version or language besides these. I say this because I know OpenCV has a connectedComponents function in C++. I could have used that, but the problem is that I cannot, due to certain limitations.
So, any idea how I should approach the problem?
Using findContours is the correct approach; you're simply using it wrong.
Take a closer look at the documentation:
Note: Source image is modified by this function.
Your "After Contour Processing" image is in fact the garbage result from findContours. Because of this, if you want the original image to be intact after the call to findContours, it's common practice to pass a cloned image to the function.
The meaningful result of findContours is in contours. You need to draw them using drawContours, usually on a new image.
This is the result I get:
with the following C++ code:
#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
    // Load the grayscale image
    Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);

    // Prepare the result image: 3 channels, same size as img, all black
    Mat3b res(img.rows, img.cols, Vec3b(0, 0, 0));

    // Call findContours on a clone, since the function modifies its input
    vector<vector<Point>> contours;
    findContours(img.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_NONE);

    // Draw each contour with a random color
    for (size_t i = 0; i < contours.size(); ++i)
    {
        drawContours(res, contours, (int)i, Scalar(rand() & 255, rand() & 255, rand() & 255));
    }

    // Show results
    imshow("Result", res);
    waitKey();

    return 0;
}
It should be fairly easy to port to Python (I'm sorry, but I can't give you Python code, since I cannot test it). You can also have a look at the OpenCV-Python tutorial to see how to correctly use findContours and drawContours.

Merging rgb and depth images from a kinect

I'm creating a vision algorithm that is implemented in a Simulink S-function (which is C++ code). I've accomplished everything I wanted except the alignment of the color and depth images.
My question is: how can I make the two images correspond to each other? In other words, how can I make a 3D image with OpenCV?
I know my question might be a little vague, so I will include my code, which should explain the question.
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int argc, char** argv)
{
// reading in the color and depth image
Mat color = imread("whitepaint_col.PNG", CV_LOAD_IMAGE_UNCHANGED);
Mat depth = imread("whitepaint_dep.PNG", CV_LOAD_IMAGE_UNCHANGED);
// show bouth the color and depth image
namedWindow("color", CV_WINDOW_AUTOSIZE);
imshow("color", color);
namedWindow("depth", CV_WINDOW_AUTOSIZE);
imshow("depth", depth);
// thershold the color image for the color white
Mat onlywhite;
inRange(color, Scalar(200, 200, 200), Scalar(255, 255, 255), onlywhite);
//display the mask
namedWindow("onlywhite", CV_WINDOW_AUTOSIZE);
imshow("onlywhite", onlywhite);
// apply the mask to the depth image
Mat nocalibration;
depth.copyTo(nocalibration, onlywhite);
//show the result
namedWindow("nocalibration", CV_WINDOW_AUTOSIZE);
imshow("nocalibration", nocalibration);
waitKey(0);
destroyAllWindows;
return 0;
}
Output of the program:
As can be seen in the output of my program, when I apply the onlywhite mask to the depth image, the quadcopter body does not consist of one color. The reason for this is that there is a mismatch between the two images.
I know that I need the calibration parameters of my camera, and I got these from the last person who worked with this setup. The calibration was done in Matlab, and it resulted in the following.
Matlab calibration results:
I have spent a lot of time reading the OpenCV page on Camera Calibration and 3D Reconstruction (I cannot include the link because of my Stack Exchange level).
But I cannot for the life of me figure out how I could accomplish my goal of adding the correct depth value to each colored pixel.
I tried using reprojectImageTo3D(), but I cannot figure out the Q matrix.
I also tried a lot of other functions from that page, but I cannot seem to get my inputs correct.
As far as I know, Matlab has very good support for Kinect (especially for v1). You may use a function named alignColorToDepth, as follows:
[alignedFlippedImage,flippedDepthImage] = alignColorToDepth(depthImage,colorImage,depthDevice)
The returned values are alignedFlippedImage (the registered RGB image) and flippedDepthImage (the registered depth image). These two images are aligned and ready for you to process.
You can find more at this MathWorks documentation page.
Hope it's what you need :)
As far as I can tell, you are missing the transformation between camera coordinate frames. The Kinect (v1 and v2) uses two separate camera systems to capture the depth and RGB data, and so there is a translation and rotation between them. You may be able to assume no rotation, but you will have to account for the translation to fix the misalignment you are seeing.
Try starting with this thread.
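For reference, here is a minimal sketch of that transformation, assuming pinhole models for both cameras. K_depth, K_rgb, R and t are placeholders that have to come from your Matlab calibration results, and the depth image is assumed to be 16-bit with values in millimetres:
#include <opencv2/opencv.hpp>
using namespace cv;

// Map a depth image into the RGB camera's pixel grid.
// K_depth and K_rgb are the 3x3 intrinsic matrices, R and t the rotation and
// translation from the depth camera to the RGB camera; all of these are
// placeholders that must be filled in from the calibration results.
Mat registerDepthToColor(const Mat& depth,        // CV_16U, depth in millimetres
                         const Matx33d& K_depth,
                         const Matx33d& K_rgb,
                         const Matx33d& R,
                         const Vec3d& t,
                         Size colorSize)
{
    Mat registered(colorSize, CV_16U, Scalar(0));
    for (int v = 0; v < depth.rows; ++v)
    {
        for (int u = 0; u < depth.cols; ++u)
        {
            ushort d = depth.at<ushort>(v, u);
            if (d == 0) continue;                 // no depth measurement here
            double z = static_cast<double>(d);

            // Back-project the depth pixel to a 3D point in the depth camera frame
            Vec3d p((u - K_depth(0, 2)) * z / K_depth(0, 0),
                    (v - K_depth(1, 2)) * z / K_depth(1, 1),
                    z);

            // Move the point into the RGB camera frame
            Vec3d q = R * p + t;
            if (q[2] <= 0) continue;

            // Project it into the RGB image
            int uc = cvRound(K_rgb(0, 0) * q[0] / q[2] + K_rgb(0, 2));
            int vc = cvRound(K_rgb(1, 1) * q[1] / q[2] + K_rgb(1, 2));
            if (uc >= 0 && uc < colorSize.width && vc >= 0 && vc < colorSize.height)
                registered.at<ushort>(vc, uc) = d; // depth now aligned with this color pixel
        }
    }
    return registered;
}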

How to use BackgroundSubtractorMOG2 for images

I am pretty new to OpenCV and I am stuck at the moment. I am dealing with images, not a video. Since I will have the same background in my project, I thought it would be easier to work with if I could remove the background. But first, I have to ask one thing: can I use BackgroundSubtractorMOG2 for images? I ask because it is listed under video analysis/motion analysis.
I read the documentation on opencv.org and looked through countless examples/tutorials but I am still having difficulty understanding how MOG2 works.
Quick question: what is the history parameter?
So, I have written some simple code and I get a foreground mask. What is the next step? How can I remove the background and be left with my object only? Shouldn't I load my background first, then the actual image, so that MOG2 can do the background subtraction?
I am using OpenCV 2.4.11.
Code:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/video/background_segm.hpp>

using namespace cv;
using namespace std;

// global variables
int history = 1;
float varThreshold = 16;
bool bShadowDetection = true;

Mat src;        // source image
Mat fgMaskMOG2; // fg mask generated by the MOG2 method
Ptr<BackgroundSubtractor> pMOG2; // MOG2 background subtractor

int main(int argc, char* argv[])
{
    // create GUI windows
    namedWindow("Source");
    namedWindow("FG Mask MOG 2");

    src = imread("bluePaper1.png", 1);

    // create the background subtractor object
    pMOG2 = new BackgroundSubtractorMOG2(history, varThreshold, bShadowDetection); // MOG2 approach
    pMOG2->setInt("nmixtures", 3);
    pMOG2->setDouble("fTau", 0.5);

    pMOG2->operator()(src, fgMaskMOG2);

    imshow("Source", src);
    imshow("FG Mask MOG 2", fgMaskMOG2);

    waitKey(0);
    return 0;
}
Source image:
fgMask that I get from MOG2:
The Mixture of Gaussians method learns the background from a history of frames taken by a fixed camera, so you cannot use it with only one image. The history parameter determines how many frames have an effect on the construction of the background.
Shadow detection is not a process that depends on the background-subtraction method; it should be implemented alongside it.
For example, in the MOG2 documentation we have:
The shadow is detected if the pixel is a darker version of the background. Tau is a threshold defining how much darker the shadow can be. Tau= 0.5 means that if a pixel is more than twice darker then it is not shadow
In the case of your example, the foreground can easily be obtained by a simple frame difference, and you can easily remove shadows with the mentioned rule.
You can obtain the foreground with the following steps (a rough sketch follows the list):
Subtract the given image from the known background and threshold the result to obtain the foreground mask.
AND the foreground mask with the given image to get your object, including possible shadows.
Remove pixels that are darker than their corresponding background pixel (the amount should be tuned), since those are likely shadow.
Do some post-processing, such as morphological operations and connected-component labeling, to get a better result.
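A rough sketch of these steps, assuming a plain shot of the empty background is available ("background.png" is a placeholder name), both images are read as grayscale, and the thresholds are guesses to be tuned:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    // Known empty background and the image containing the object
    Mat background = imread("background.png", 0); // grayscale
    Mat frame      = imread("bluePaper1.png", 0); // grayscale

    // 1. Frame difference + threshold -> rough foreground mask (30 is a guess)
    Mat diff, fgMask;
    absdiff(frame, background, diff);
    threshold(diff, fgMask, 30, 255, THRESH_BINARY);

    // 2. AND the mask with the image to get the object, including possible shadows
    Mat object;
    frame.copyTo(object, fgMask);

    // 3. Drop pixels that are a moderately darker version of the background
    //    (darker than the background but not more than twice as dark,
    //    following the Tau = 0.5 rule quoted above)
    Mat halfBg, darker, notTooDark, shadow;
    background.convertTo(halfBg, CV_8U, 0.5);
    darker     = (frame < background);
    notTooDark = (frame > halfBg);
    bitwise_and(darker, notTooDark, shadow);
    fgMask.setTo(0, shadow);

    // 4. Post-process with morphology to clean up the mask
    Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(5, 5));
    morphologyEx(fgMask, fgMask, MORPH_OPEN, kernel);
    morphologyEx(fgMask, fgMask, MORPH_CLOSE, kernel);

    imshow("Foreground mask", fgMask);
    waitKey(0);
    return 0;
}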

Extracting Background Image Using GrabCut

I have an image (a .jpg image), and I want to extract the background from the original image. I've googled a lot but have only found tutorials on extracting the foreground.
I've taken the code from another Stack Overflow question. The code is working fine for me, and I've successfully extracted the foreground (as per my requirements). Now I want to completely remove this foreground from the original image. I want something like this:
Background = Original Image - Foreground
The empty space can be filled with black or white. How can I achieve this?
I've tried using this technique:
Mat background = image2 - foreground;
but it gives a completely black image.
Code:
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    // Open another image
    Mat image;
    image = cv::imread("images/abc.jpg");
    Mat image2 = image.clone();

    // define bounding rectangle
    cv::Rect rectangle(40, 90, image.cols - 80, image.rows - 170);

    cv::Mat result;           // segmentation result (4 possible values)
    cv::Mat bgModel, fgModel; // the models (internally used)

    // GrabCut segmentation
    cv::grabCut(image,                  // input image
                result,                 // segmentation result
                rectangle,              // rectangle containing foreground
                bgModel, fgModel,       // models
                1,                      // number of iterations
                cv::GC_INIT_WITH_RECT); // use rectangle
    cout << "oks pa dito" << endl;

    // Get the pixels marked as likely foreground
    cv::compare(result, cv::GC_PR_FGD, result, cv::CMP_EQ);

    // Generate output image
    cv::Mat foreground(image.size(), CV_8UC3, cv::Scalar(255, 255, 255));
    //cv::Mat background(image.size(), CV_8UC3, cv::Scalar(255, 255, 255));
    image.copyTo(foreground, result); // bg pixels not copied

    // draw rectangle on original image
    cv::rectangle(image, rectangle, cv::Scalar(255, 255, 255), 1);

    imwrite("img_1.jpg", image);
    imwrite("Foreground.jpg", foreground);

    Mat background = image2 - foreground;
    imwrite("Background.jpg", background);

    return 0;
}
Note: I'm an OpenCV beginner and don't have much knowledge of it right now. I'd be very thankful if you could either post the complete code (as I need it) or just post the lines of code and tell me where they should be placed. Thanks.
P.S. This is my second question at StackOverflow.com. Apologies ... if I'm not following any convention.
Instead of copying all the pixels that are foreground, copy all the pixels that are not foreground. You can do this by using ~, which negates the mask:
image.copyTo(background,~result);
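In the context of the question's code, that might look roughly like this (a sketch; image2 is used because the rectangle was drawn on image, and the white fill is an assumption):
// Everything not marked as probable foreground is copied; the rest stays white
cv::Mat background(image2.size(), CV_8UC3, cv::Scalar(255, 255, 255));
image2.copyTo(background, ~result);
cv::imwrite("Background.jpg", background);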
What if you get the pixels marked as likely background instead:
// Get the pixels marked as likely background
cv::compare(result,cv::GC_PR_BGD,result,cv::CMP_EQ);
Edit: the above code misses the GC_BGD pixels. Although a more efficient answer was given, let's finish what we started:
// Get the pixels marked as background
cv::Mat result_a, result_b;
cv::compare(result, cv::GC_BGD, result_a, cv::CMP_EQ);
// Get the pixels marked as likely background
cv::compare(result, cv::GC_PR_BGD, result_b, cv::CMP_EQ);
// Final result
result = result_a + result_b;
Just a small suggestion: @William's answer can be written more concisely as:
result = result & 1;
in order to get the binary mask.
Maybe another example helps, in which I assumed that the middle portion of the image is definitely foreground.
So try this link: Example

How to detect the white gauge board for measuring the level of the water?

I'm working on a project where I need to measure the water level using a white gauge board. Currently my approach is:
segmenting the white gauge board,
measuring the water level against the gauge board.
But I'm stuck at segmenting the gauge board. I avoid color-based segmentation since I need it to be invariant to lighting changes, so I detect the edges using morphological operations instead. I've got this image:
The result from the morphological operations seems promising. The edges on the white gauge board are sharper than the others, but I still don't have any idea how to properly segment the board. Can you suggest an algorithm to segment the board? Or, if you have a different algorithm for measuring the water level, please suggest that instead.
Here is my code:
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

int main()
{
    cv::Mat src = cv::imread("image.jpg");
    if (!src.data)
        return -1;

    cv::Mat bw;
    cv::cvtColor(src, bw, CV_BGR2GRAY);
    cv::medianBlur(bw, bw, 3);

    cv::Mat dilated, eroded;
    cv::dilate(bw, dilated, cv::Mat());
    cv::erode(bw, eroded, cv::Mat());
    bw = dilated - eroded;

    cv::imshow("src", src);
    cv::imshow("bw", bw);
    cv::waitKey();
    return 0;
}
I'm using C++, but I'm open to other implementations in Matlab/Mathematica.
If the camera is indeed stationary, you can use this type of quick and dirty approach:
im = rgb2gray(imread('img.jpg'));
imr = imrotate(im, 1);
a = imr(100:342, 150);
plot(a)
The minima that are shown in the plot are from 10 (left) to 1 (right) in the scale of the indicator. You can use a peak detector to locate their positions and interpolate the water level found between them.
So, there's no real need for fancy image processing...
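For those who prefer staying in OpenCV, here is a rough C++ equivalent of this column-scan idea; the column index and row range mirror the Matlab snippet above and will differ for another camera position, and the darkness threshold is a guess:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>
using namespace cv;

int main()
{
    // Sample one column crossing the gauge board (column 150, rows 100..341)
    Mat gray = imread("img.jpg", IMREAD_GRAYSCALE);
    Mat profile = gray(Range(100, 342), Range(150, 151)).clone();
    GaussianBlur(profile, profile, Size(1, 5), 0); // smooth the 1-D intensity profile

    // Report local minima that are clearly dark; these correspond to the
    // graduation marks on the board (100 is a guessed darkness threshold)
    std::vector<int> minima;
    for (int r = 1; r < profile.rows - 1; ++r)
    {
        uchar prev = profile.at<uchar>(r - 1);
        uchar curr = profile.at<uchar>(r);
        uchar next = profile.at<uchar>(r + 1);
        if (curr < prev && curr <= next && curr < 100)
            minima.push_back(r + 100); // back to image row coordinates
    }
    for (size_t i = 0; i < minima.size(); ++i)
        std::cout << "graduation mark near row " << minima[i] << std::endl;
    return 0;
}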
Why are you segmenting the gauge board anyway? You just want to find it in the image, that's all. You don't need to find the relative location of segments. 5 is always going to be between 4 and 6.
As you've probably noticed, you can find the rough location of the gauge board by looking for an area with a high contrast level. Using matchTemplate you can then find the exact location. (Considering that the camera is fixed, you might be able to skip the first step and call matchTemplate directly).
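A minimal matchTemplate sketch under those assumptions; "gauge_template.jpg" is a hypothetical template cropped once from a reference image of the board:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    // Scene image and a cropped template of the gauge board
    Mat scene = imread("image.jpg", IMREAD_GRAYSCALE);
    Mat templ = imread("gauge_template.jpg", IMREAD_GRAYSCALE);

    // Slide the template over the scene and score every position
    Mat scores;
    matchTemplate(scene, templ, scores, TM_CCOEFF_NORMED);

    // The best match is the position with the highest score
    double maxVal;
    Point maxLoc;
    minMaxLoc(scores, 0, &maxVal, 0, &maxLoc);

    // Bounding box of the gauge board in the scene
    Rect board(maxLoc, templ.size());
    rectangle(scene, board, Scalar(255), 2);

    imshow("Gauge board location", scene);
    waitKey();
    return 0;
}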