I wrote face & eye detection code.
The next step is to put an image at the coordinates of the detected eye (for example an eye patch or eye glasses).
I couldn't find the function for combining the source frame with the image I want to add.
Any suggestions?
Thanks
You can use cvCopy with a mask to do this. If the images do not have the same height and width, set the ROI of the destination image before calling cvCopy.
See OpenCV documentation:
cvCopy
cvSetImageROI
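A minimal sketch of that idea, written with the C++ Mat API (the equivalent of cvCopy/cvSetImageROI); the rectangle eyeRect and the file name eyepatch.png are placeholders for whatever your detector and overlay actually are:

#include <opencv2/opencv.hpp>

// Paste an overlay image into the frame at the detected eye rectangle.
// "eyeRect" is assumed to be the cv::Rect returned by the eye detector,
// and "eyepatch.png" is a placeholder file name.
void overlayOnEye(cv::Mat& frame, const cv::Rect& eyeRect)
{
    cv::Mat patch = cv::imread("eyepatch.png");      // image to add
    cv::resize(patch, patch, eyeRect.size());        // match the eye region

    // Optional mask: copy only where the mask is non-zero.
    cv::Mat mask(patch.size(), CV_8UC1, cv::Scalar(255));

    // The ROI is a view into the frame, so copying into it edits the frame.
    cv::Mat roi = frame(eyeRect);
    patch.copyTo(roi, mask);
}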
As we all know, we can use cv::getOptimalNewCameraMatrix() with alpha = 1 to get a new camera matrix, and then pass that new matrix to cv::undistort() to get the undistorted image. However, I find that the undistorted image is only as large as the original image, and part of it is covered by black.
So my question is: does this mean that original image pixels are lost? And is there any way to avoid losing pixels, or to get an output image larger than the original, with OpenCV?
cv::Mat NewKMatrixLeft = cv::getOptimalNewCameraMatrix(KMatrixLeft, DistMatrixLeft, cv::Size(image.cols, image.rows), 1);
cv::undistort(image, show_image, KMatrixLeft, DistMatrixLeft, NewKMatrixLeft);
The sizes of image and show_image are both 640*480; however, from my point of view the undistorted image should be larger than 640*480, since part of the 640*480 output is meaningless (black).
Thanks!
In order to correct distortion, you basically have to reverse the process that caused it. This means pixels are stretched and squashed along various directions, and in some cases this moves them away from the image edge. OpenCV handles this by inserting black pixels. There is nothing wrong with this approach; you can then choose how to crop the result to remove the black pixels at the edges.
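If you want to keep every source pixel instead of cropping, one option is to undistort into a larger output canvas by passing a bigger newImgSize to getOptimalNewCameraMatrix and remapping into it. A rough sketch, reusing the variables from the question; the doubled output size is an arbitrary assumption:

// Undistort into a canvas larger than the source so no pixels are pushed off
// the edge. The 2x output size is just a guess; pick whatever margin your lens needs.
cv::Size srcSize(image.cols, image.rows);
cv::Size bigSize(2 * image.cols, 2 * image.rows);   // assumed margin

cv::Mat newK = cv::getOptimalNewCameraMatrix(
    KMatrixLeft, DistMatrixLeft, srcSize, 1.0, bigSize);

cv::Mat map1, map2;
cv::initUndistortRectifyMap(KMatrixLeft, DistMatrixLeft, cv::Mat(),
                            newK, bigSize, CV_16SC2, map1, map2);

cv::Mat undistorted;
cv::remap(image, undistorted, map1, map2, cv::INTER_LINEAR);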
So far I've successfully calibrated a camera with OpenCV using the chessboard pattern from the tutorials, so I have the cameraMatrix, distCoeffs, rotation and translation vectors.
Now I'd like to display an image on top of my chessboard using the calibration parameters. How can I do that?
These are the steps I've done so far:
1 - Get the chessboard corners
2 - projectPoints to go from world coordinates (640x480) to the warped frame seen during the calibration
3 - getPerspectiveTransform to get the transformation from world to warped image
4 - warpPerspective to warp the image I'd like to display on top of the chessboard into image coordinates
5 - Create a mask on top of the chessboard
6 - Flip the image I'd like to display
7 - Finally, copy the warped image to the video frame inside the area delimited by the mask
Corners and mask are working fine, but I'm not quite sure about the rest of the process.
Can anyone help me?
See this post; I think my code will help you. It still has an error, but it should help you understand how to do it:
https://stackoverflow.com/questions/34785237/augmented-reality-projection-cube-error
Good luck.
P.S.: The code is in Python!
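A minimal C++ sketch of steps 3, 4, 5 and 7 from the question (the flip in step 6 is omitted). It assumes boardCorners already holds the four outer chessboard corners in the video frame, ordered the same way as the overlay corners:

#include <opencv2/opencv.hpp>

// Warp an overlay image onto the chessboard area of the frame.
// "boardCorners" is assumed to be the four outer board corners in the frame.
void overlayOnBoard(cv::Mat& frame, const cv::Mat& overlay,
                    const std::vector<cv::Point2f>& boardCorners)
{
    // Corners of the overlay image (source quad), in the same order as boardCorners.
    std::vector<cv::Point2f> srcQuad = {
        {0.f, 0.f},
        {(float)overlay.cols - 1, 0.f},
        {(float)overlay.cols - 1, (float)overlay.rows - 1},
        {0.f, (float)overlay.rows - 1}
    };

    // Step 3: transformation from the overlay image to the board's position in the frame.
    cv::Mat H = cv::getPerspectiveTransform(srcQuad, boardCorners);

    // Step 4: warp the overlay into frame coordinates.
    cv::Mat warped;
    cv::warpPerspective(overlay, warped, H, frame.size());

    // Step 5: mask covering the board area.
    cv::Mat mask = cv::Mat::zeros(frame.size(), CV_8UC1);
    std::vector<cv::Point> poly(boardCorners.begin(), boardCorners.end());
    cv::fillConvexPoly(mask, poly, cv::Scalar(255));

    // Step 7: copy the warped overlay into the frame only inside the mask.
    warped.copyTo(frame, mask);
}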
I'm trying to analyse some images which have a lot of noise around the outside of the image, but a clear circular centre with a shape inside. The centre is the part I'm interested in, but the outside noise is affecting my binary thresholding of the image.
To ignore the noise, I'm trying to set up a circular mask of known centre position and radius whereby all pixels outside this circle are changed to black. I figure that everything inside the circle will now be easy to analyse with binary thresholding.
I'm just wondering if someone might be able to point me in the right direction for this sort of problem please? I've had a look at this solution: How to black out everything outside a circle in Open CV but some of my constraints are different and I'm confused by the method in which source images are loaded.
Thank you in advance!
//First load your source image, here load as gray scale
cv::Mat srcImage = cv::imread("sourceImage.jpg", CV_LOAD_IMAGE_GRAYSCALE);
//Then define your mask image
cv::Mat mask = cv::Mat::zeros(srcImage.size(), srcImage.type());
//Define your destination image
cv::Mat dstImage = cv::Mat::zeros(srcImage.size(), srcImage.type());
//I assume you want to draw the circle at the center of your image, with a radius of 50
cv::circle(mask, cv::Point(mask.cols/2, mask.rows/2), 50, cv::Scalar(255, 0, 0), -1, 8, 0);
//Now you can copy your source image to destination image with masking
srcImage.copyTo(dstImage, mask);
Then do your further processing on your dstImage. (The original answer showed example images of the source, the grayscale input, the binary mask, and the final masked result.)
Since you are looking for a clear circular centre with a shape inside, you could use the Hough Transform to find that area; a careful selection of parameters will help you get it exactly.
A detailed tutorial is here:
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html
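A rough sketch of that approach; the blur size, Hough thresholds and radius limits are placeholders that will need tuning for your images:

#include <opencv2/opencv.hpp>

// Find the circular region with HoughCircles, then black out everything outside it.
cv::Mat gray = cv::imread("sourceImage.jpg", cv::IMREAD_GRAYSCALE);
cv::Mat blurred;
cv::GaussianBlur(gray, blurred, cv::Size(9, 9), 2);   // reduce noise before Hough

std::vector<cv::Vec3f> circles;
cv::HoughCircles(blurred, circles, cv::HOUGH_GRADIENT,
                 1,           // inverse accumulator resolution
                 gray.rows,   // min distance between centres (we expect one circle)
                 100, 30,     // Canny high threshold, accumulator threshold
                 50, 0);      // min/max radius (0 = no upper limit)

if (!circles.empty()) {
    cv::Point center(cvRound(circles[0][0]), cvRound(circles[0][1]));
    int radius = cvRound(circles[0][2]);

    cv::Mat mask = cv::Mat::zeros(gray.size(), CV_8UC1);
    cv::circle(mask, center, radius, cv::Scalar(255), -1);

    cv::Mat masked;
    gray.copyTo(masked, mask);   // black outside the detected circle
}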
For setting pixels outside a region black:
Create a mask image :
cv::Mat mask = cv::Mat::zeros(img_src.size(), img_src.type());
Mark the points inside with white color :
cv::circle( mask, center, radius, cv::Scalar(255,255,255),-1, 8, 0 );
You can now use bitwise_and and thus get an output image with only the pixels enclosed in the mask.
cv::bitwise_and(mask,img_src,output);
I have done a stereo calibration and I got validPixROI1 and 2 (the green border). Now I want to use StereoSGBM, but the ROIs from the calibration (from stereoRectify) are not the same size. Does anyone know how to solve this?
Actually I do something like this:
Rect roiLeft(...);
Rect roiRight(...);
Mat cLeft(rLeft, roiLeft);
//Mat cRight(rRight, roiRight); // not same size...
Mat cRight(rRight, roiLeft);
stereoBM(cLeft,cRight, dst);
If I crop my images with that ROI, will the picture middle point still be the same?
Here it works.
Why not run stereoBM on the (uncropped) calibrated images? Then you can use those ROIs afterwards to mask out the invalid bits of the result:
stereoBM(rLeft,rRight, disp);
//get intersection of both rois or use target image roi, if you know the target image
cv::Rect visibleRoi = roiLeft & roiRight;
cv::Mat cDisp(disp,visibleRoi);
Now you have no issues with different size inputs, or different centers and such.
Cheers
According to Wikipedia:
A point R at the intersection of the optical axis and the image plane. This point is referred to as the principal point or image center.
So I don't think the center will be same.
Refer to this site. Here, in one of the examples, the principal point is 302.71656, 242.33386 for a 640x480 pixel camera, which shows that the principal point and the image center are not the same.
Run the block matcher on the uncropped rectified images and then use:
cv::getValidDisparityROI(roi1, roi2, minDisparity, numberOfDisparities, SADWindowSize);
That call returns a cv::Rect that will be a bounding box for all the valid pixels in the left image and the disparity map. The valid pixels are only pixels that both cameras can "see" (caveat on occluded edges).
Once you have the disparity map the right image becomes useless.
Be aware that the ROIs returned from stereoRectify are just the valid pixels after the remap from the camera intrinsics.
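A rough sketch of that flow, reusing the names from the question (rLeft/rRight are the rectified images, roiLeft/roiRight the validPixROIs); the matcher parameters are placeholders:

// Compute disparity on the full rectified pair, then keep only the region both cameras see.
int numDisparities = 64, blockSize = 21;              // placeholder settings
cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(numDisparities, blockSize);

cv::Mat disp;
bm->compute(rLeft, rRight, disp);                     // inputs must be 8-bit grayscale

// Bounding box of pixels valid in both views for this matcher configuration.
cv::Rect validRoi = cv::getValidDisparityROI(roiLeft, roiRight,
                                             0 /*minDisparity*/,
                                             numDisparities, blockSize);
cv::Mat validDisp = disp(validRoi);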
I have been working on a college project using OpenCV. I made a simple program which detects faces by passing frames captured by a webcam to a face detection function.
On detection it draws black boxes on the detected faces. However, my project does not end here: I would like to clip out the detected faces as soon as possible, save them to an image, and then apply different image processing techniques [as per my need]. If this is too problematic I could use a simple image instead of frames captured by a webcam.
I am just clueless about how to go about clipping out the faces that get detected.
For C++ version you can check this tutorial from OpenCV documentation.
In the function detectAndDisplay you can see the line
Mat faceROI = frame_gray( faces[i] );
where faceROI is the clipped face, and you can save it to a file with the imwrite function:
imwrite("face.jpg", faceROI);
http://nashruddin.com/OpenCV_Region_of_Interest_(ROI)
Check this link; you can crop the image using the dimensions of the black box, resize it, and save it as a new file.
Could you grab the frame and crop the photo with the X,Y coordinates of each corner?