Remapping image to different size and finding pixel location - C++

I am implementing an algorithm in which I have to use multi-resolution analysis. What the paper says is that I have to perform some processing at a lower scale, find some pixel locations, and then remap those pixels back to the original scale. I really don't understand the remapping function in OpenCV. If anyone could help, that would be great.
Thanks.

If you want to resize a picture in OpenCV, you can do this:
Mat img = imread(picturePath, CV_8UC1);
Mat resized(NEW_HEIGHT, NEW_WIDTH, CV_32FC1);
img.convertTo(img, CV_32FC1); // convert to float before resizing
resize(img, resized, resized.size());
If you want to access a specific pixel, use:
img.at<var>(row, col)
In a CV_32FC1 image replace "var" with "float"; for CV_8UC1 replace it with "uchar" (the type passed to at<> must match the Mat's element type).
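As for mapping pixel locations found at the lower scale back to the original scale: if the downsampling was a plain resize, it comes down to multiplying by the two scale factors. A minimal sketch of that idea (the variable names and the 0.25 factor are illustrative, not from any OpenCV remap API):

cv::Mat orig = cv::imread(picturePath, CV_8UC1);
cv::Mat lowres;
cv::resize(orig, lowres, cv::Size(), 0.25, 0.25); // process at quarter scale

// scale factors between the two resolutions
double fx = (double)orig.cols / lowres.cols;
double fy = (double)orig.rows / lowres.rows;

cv::Point lowResPt(10, 20);                 // location found at the low scale
cv::Point origPt(cvRound(lowResPt.x * fx),  // remapped to the original scale
                 cvRound(lowResPt.y * fy));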
Hope this will help.

Segmentation of the source image in OpenCV based on Canny edge outline attained from processing the said source image

I have a source image. I need a particular portion to be segmented from it and saved as another image. I have the Canny outline of the portion I need segmented out, but how do I use it to cut that portion from the source image? I have attached both the source image and the Canny edge outline. Please help me and suggest a solution.
EDIT-1: Alexander Kondratskiy, is this what you meant by filling the boundary?
EDIT-2: According to Kannat, I have done this.
Now how do I separate the regions that are outside and inside of the contour into two separate images?
EDIT-3: I thought of AND-ing the mask and the contour-lined source image. Since I am using C, I am having a little difficulty.
This is the code I use to AND them:
hsv_gray = cvCreateImage(cvSize(seg->width, seg->height), IPL_DEPTH_8U, 1);
cvCvtColor(seg, hsv_gray, CV_BGR2GRAY);
hsv_mask = cvCloneImage(hsv_gray);
IplImage* contourImg = cvCreateImage(cvSize(hsv_mask->width, hsv_mask->height), IPL_DEPTH_8U, 3);
IplImage* newImg = cvCreateImage(cvSize(hsv_mask->width, hsv_mask->height), IPL_DEPTH_8U, 3);
cvAnd(contourImg, hsv_mask, newImg, NULL);
I always get an error of mismatched size or type. I adjusted the size, but I can't seem to adjust the type, since one image (hsv_mask) has 1 channel and the others have 3 channels.
#kanat - I also tried your boundingRect but could not seem to get it right in C format.
Use cv::findContours on your second image to find the contour of the segment. Then use cv::boundingRect to find the bounding box of this segment. After that you can create a matrix and save the cropped bounding box from your second image into it (as I see it, that image is binary). To crop the needed region use this:
cv::getRectSubPix(your_image, BB.size(), cv::Point(BB.x + BB.width/2,
    BB.y + BB.height/2), new_image);
Then you can save new_image using cv::imwrite. That's it.
EDIT:
If you found only one contour then use this (otherwise iterate over the elements of the found contours). The following code shows the steps, but sorry, I can't test it now:
std::vector<std::vector<cv::Point>> contours;
// cv::findContours(..., contours, ...);
cv::Rect BB = cv::boundingRect(cv::Mat(contours[0])); // box around the first contour
cv::Mat new_image;
cv::getRectSubPix(your_image, BB.size(), cv::Point(BB.x + BB.width/2,
    BB.y + BB.height/2), new_image); // crop centered on the box
cv::imwrite("new_image_name.jpg", new_image);
You could fill the boundary created by the Canny edge detector and use that as an alpha mask on the original image.
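A minimal C++ sketch of that fill-and-mask idea, which would also cover the inside/outside split asked about in EDIT-2 (the filename and the Canny thresholds are placeholders, and this uses the C++ API rather than the C one):

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat source = cv::imread("source.jpg");
    cv::Mat gray, edges;
    cv::cvtColor(source, gray, cv::COLOR_BGR2GRAY);
    cv::Canny(gray, edges, 100, 200); // thresholds are illustrative

    // find the outer contours of the edge image and fill them into a mask
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    cv::Mat mask = cv::Mat::zeros(edges.size(), CV_8UC1);
    cv::drawContours(mask, contours, -1, cv::Scalar(255), -1); // thickness -1 fills

    // copy through the mask and its inverse to split the two regions
    cv::Mat inside, outside;
    source.copyTo(inside, mask);
    source.copyTo(outside, ~mask);
    cv::imwrite("inside.png", inside);
    cv::imwrite("outside.png", outside);
    return 0;
}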

Image resize vs Image pyramid for generating ROI for pedestrian detection - OpenCV

For an OpenCV driving-assistance application I want to generate ROIs as candidates for faster HoG classification of pedestrians. I'm running this on the GPU. I do not want to use the detectMultiScale function, as it scans the whole image (including the sky). Since the features are not scalable, which of the following functions should I use for resizing the images when generating the ROIs?
gpu::resize(const GpuMat& src, GpuMat& dst, Size dsize, double fx=0, double fy=0, int interpolation=INTER_LINEAR, Stream& stream=Stream::Null()) or
Image pyramids cv2.pyrUp(), cv2.pyrDown()
I couldn't find image pyramids in the OpenCV GPU library (2.4.9).
Can anyone please suggest?
Thanks
First, you could directly set a ROI by using cv::Rect (an OpenCV rectangle) to create the ROI image/submatrix, like this:
Mat image = imread("");
Rect region_of_interest = Rect(x, y, w, h);
Mat image_roi = image(region_of_interest);
But if you want to generate smaller downsampled images (fewer rows and columns), there are some differences between pyramids and resize:
- Pyramids first filter the image by convolving the whole image with a Gaussian kernel, and after that downsample it by rejecting the even rows and columns.
- The resize function does a geometric transformation, and you can choose the method used to interpolate the pixel values.
In practice: pyramids are a quick way to downsample to a quarter of the pixels per step (each side is halved); resize is more generic and can be used for upsampling too.
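A short C++ sketch of the two options side by side (the filename, the 0.75 factor, and the lower-half ROI are illustrative choices, not from the question):

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat frame = cv::imread("road.jpg");

    // pyramid step: Gaussian smoothing plus dropping the even rows and
    // columns; each call halves both dimensions
    cv::Mat half;
    cv::pyrDown(frame, half);

    // resize: arbitrary scale and an explicit interpolation method
    cv::Mat scaled;
    cv::resize(frame, scaled, cv::Size(), 0.75, 0.75, cv::INTER_LINEAR);

    // cropping candidates below the horizon first avoids scanning the sky
    cv::Rect lowerHalf(0, frame.rows / 2, frame.cols, frame.rows / 2);
    cv::Mat roi = frame(lowerHalf);
    return 0;
}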

OpenCV: Access pixels of grayscale image

I know this question may seem weird to experts, but I want to access the pixels of a grayscale image in OpenCV. I am using the following code:
cv::Mat img1 = cv::imread("bauckhage.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Now if I want to print a pixel (say at position 255,255) using img1.at<float>(255,255), I get 0, which is not actually a black pixel. I thought the image had been read as a 2D matrix.
Actually I want to do some calculation on each pixel, and for that I need to access each pixel explicitly.
Any help or suggestion will be appreciated.
You will have to do this:
int x = (int)(img.at<uchar>(j, i));
This is because of the way the image is stored and the way img.at works. A grayscale image is stored in CV_8UC1 format and hence is basically an array of uchar. The return value is therefore a uchar, which you typecast into an int to operate on it as ints.
Also, this is similar to your question:
accessing pixel value of gray scale image in OpenCV
float is the wrong type here.
if you read an image like this:
cv::Mat img1 = cv::imread("bauckhage.jpg", CV_LOAD_IMAGE_GRAYSCALE);
then its type will be CV_8U (uchar, a single grayscale byte per pixel). So, to access it:
uchar &pixel = img1.at<uchar>(row,col);
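A small self-contained sketch of per-pixel access with the correct type (the inversion is just a stand-in for whatever calculation you need; CV_LOAD_IMAGE_GRAYSCALE is the pre-3.0 name, newer versions spell it cv::IMREAD_GRAYSCALE):

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img1 = cv::imread("bauckhage.jpg", CV_LOAD_IMAGE_GRAYSCALE);

    for (int row = 0; row < img1.rows; ++row) {
        for (int col = 0; col < img1.cols; ++col) {
            uchar &pixel = img1.at<uchar>(row, col);
            pixel = 255 - pixel; // example per-pixel calculation: invert
        }
    }
    cv::imwrite("inverted.png", img1);
    return 0;
}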

OpenCV: HSV inRange returns binary image squashed and repeated

I'm having a bit of trouble with an image that I'm converting for colour recognition.
The function looks like this:
void PaintHSVWindow(cv::Mat img){
    cv::Mat HSV, threshold;
    cvtColor(img, HSV, COLOR_BGR2HSV);
    inRange(HSV, cv::Scalar(HMin, SMin, VMin), cv::Scalar(HMax, SMax, VMax), threshold);
    Mat erodeElement = getStructuringElement(MORPH_RECT, cv::Size(3, 3));
    Mat dilateElement = getStructuringElement(MORPH_RECT, cv::Size(8, 8));
    erode(threshold, threshold, erodeElement);
    dilate(threshold, threshold, dilateElement);
    cv::resize(threshold, threshold, cv::Size(360, 286));
    MyForm::setHSVWindow(threshold);
}
And the output looks as follows:
On the left is the input. On the right is supposed to be the same image converted to HSV, filtered between the given thresholds to find the yellow ball, eroded and dilated to remove the smaller contours, and displayed at half the size of the original image. Instead, it takes the expected image and squashes three copies of it into the same space.
Any guesses as to why this would happen?
UPDATE 1:
OK, since it appears that running findContours on the image on the right-hand side still gives me the proper output (i.e. the contours from the distorted, three-times-copied right-side image can be pasted into the right position on the left-side input image), I've decided to just take the distorted image and crop it for display purposes. It will only ever be used to find the contours of a given HSV range in an image, and if it serves that purpose, I'm happy.
As @Nallath comments, this is apparently a channel issue. According to the documentation, the output of inRange() should be a 1-channel CV_8U image: the logical AND, across channels, of the per-channel range tests.
Your result means that somewhere along the way threshold is being treated like a 3-channel plane-order image.
What version of OpenCV are you using?
I suggest that you display threshold after every step to find the place where this conversion happens. This might be a bug that should be reported.
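For what it's worth, a minimal way to do that inspection is to drop something like this after each call in PaintHSVWindow (the window name and the printout format are arbitrary; add #include <iostream> if it is not pulled in already):

// after inRange(), and again after erode(), dilate(), and resize():
std::cout << "after inRange: " << threshold.cols << "x" << threshold.rows
          << " channels=" << threshold.channels() << std::endl;
cv::imshow("after inRange", threshold);
cv::waitKey(0); // pause so the intermediate image can be inspected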

Best practice to detect if a Mat is black and white in OpenCV

I would like to know if the image I read is a Black and white or colored image.
I use Opencv for all my process.
In order to detect it, I currently read my image, convert it from BGR to gray (BGR2GRAY), and compare the histogram of the original (read as BGR) to the histogram of the second (known to be B&W).
In pseudo-code this looks like this:
cv::Mat img = cv::imread("img.png", -1);
cv::Mat bw;
cv::cvtColor(img, bw, CV_BGR2GRAY);
if (computeHistogram(img) == computeHistogram(bw))
    cout << "Black and white!" << endl;
Is there a better way to do it? I am looking for the lightest algorithm I can implement, and for best practices.
Thanks for the help.
Edit: I forgot to say that I convert my images to HSL in order to compare the luminance histograms.
Storing a grayscale image in RGB format makes all three channels equal: for every pixel of a grayscale image saved in RGB format we have R = G = B. So you can easily check this for your image.
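A minimal sketch of that check (isGrayscale is my own name, not a library function):

#include <opencv2/opencv.hpp>

// returns true when every pixel satisfies B == G == R, i.e. the
// 3-channel image carries only grayscale data
bool isGrayscale(const cv::Mat& img) {
    if (img.channels() == 1)
        return true; // already single-channel
    std::vector<cv::Mat> planes;
    cv::split(img, planes); // B, G, R planes
    return cv::countNonZero(planes[0] != planes[1]) == 0 &&
           cv::countNonZero(planes[1] != planes[2]) == 0;
}

This is also lighter than comparing histograms, and it avoids a pitfall of the histogram approach: two images that differ pixel-by-pixel can still have identical histograms.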