How will the Camera Intrinsics change if an image is cropped/resized? - computer-vision

I have a recorded camera ROS bag file from a Realsense camera. The camera intrinsics for the recorded setting are already known. The initial resolution of the image is 848*480. Because of a visual obstruction in the FOV of the camera, I would like to crop out the top of the image so it doesn't get picked up by the Visual SLAM algorithm I am using.
Since SLAM is heavily dependent on the camera intrinsics, I would like to know how the camera parameters f_x, f_y, c_x and c_y will change for:
Cropped Image
Resized Image (Image Scaling only)
There is no skew involved in the original camera parameters.
Will the new principal point c_x also change with the cropped image width?
I am a bit confused as to how to calculate the new camera parameters. Am I correct in assuming the following for Case 1 - the Cropped Case:

Cropping:
cx, cy are reduced by the number of pixels cropped off the left/top. Cropping the right/bottom edges has no effect.
Scaling:
fx,fy are multiplied by the scaling factor
cx,cy are multiplied by the scaling factor
Remember, the principal point need not be in the center of the image.
This assumes a top-left image origin. Off-by-one/half-pixel errors should be checked carefully against the specific scaling algorithm used, or just ignored.

For case no. 1, cx is the same as in the original image, but cy changes with the crop (it becomes newHeight/2 only if the principal point ends up at the image center).
For case no. 2, f changes as f * scale.
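A minimal sketch of both adjustments, using placeholder intrinsic values and assuming the crop removes pixels only from the top and left:

import numpy as np

# Hypothetical original intrinsics for the 848x480 stream (placeholder values)
fx, fy, cx, cy = 615.0, 615.0, 424.0, 240.0

# Case 1 - cropping: remove `top` rows and `left` columns.
# Focal lengths stay the same; only the principal point shifts.
top, left = 100, 0
cx_crop, cy_crop = cx - left, cy - top

# Case 2 - scaling by factors sx, sy (e.g. 0.5 to halve the resolution).
# All four parameters scale with the image.
sx, sy = 0.5, 0.5
fx_s, fy_s, cx_s, cy_s = fx * sx, fy * sy, cx * sx, cy * sy

K_cropped = np.array([[fx, 0, cx_crop], [0, fy, cy_crop], [0, 0, 1]])
K_scaled  = np.array([[fx_s, 0, cx_s],  [0, fy_s, cy_s],  [0, 0, 1]])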

Related

How to dedistort an image without pixel loss using opencv

As we all know, we can use the function cv::getOptimalNewCameraMatrix() with alpha = 1 to get a new camera matrix. Then we can use the function cv::undistort() with that new camera matrix to get the undistorted image. However, I find that the undistorted image is the same size as the original image, and part of it is covered by black.
So my question is: does this mean that original image pixels are lost? Is there any way to avoid pixel loss, or to get an output image larger than the original, with OpenCV?
cv::Mat NewKMatrixLeft = cv::getOptimalNewCameraMatrix(KMatrixLeft, DistMatrixLeft, cv::Size(image.cols, image.rows), 1);
cv::undistort(image, show_image, KMatrixLeft, DistMatrixLeft, NewKMatrixLeft);
The size of image and show_image are both 640*480. However, from my point of view, the undistorted image should be larger than 640*480 because part of it is meaningless.
Thanks!
In order to correct distortion, you basically have to reverse the process that caused the initial distortion. This implies that pixels are stretched and squashed along various directions to correct the distortion. In some cases, this would move the pixels away from the image edge. In OpenCV, this is handled by inserting black pixels. There is nothing wrong with this approach. You can then choose how to crop it to remove the black pixels at the edges.
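If you would rather keep every source pixel than crop, one option (just a sketch, not part of the original answer; all parameter values below are placeholders) is to pass a larger newImgSize to getOptimalNewCameraMatrix and remap into that bigger canvas:

import numpy as np
import cv2

# Placeholder intrinsics and distortion for a 640x480 camera
K = np.array([[500.0, 0, 320.0], [0, 500.0, 240.0], [0, 0, 1]])
dist = np.array([-0.2, 0.05, 0.0, 0.0, 0.0])
img = np.zeros((480, 640, 3), np.uint8)           # stands in for the real image

big = (800, 600)                                  # output canvas larger than the input
newK, _ = cv2.getOptimalNewCameraMatrix(K, dist, (640, 480), 1, big)
map1, map2 = cv2.initUndistortRectifyMap(K, dist, None, newK, big, cv2.CV_16SC2)
undistorted = cv2.remap(img, map1, map2, cv2.INTER_LINEAR)   # 800x600 result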

Radial Overexposure Fix Gradient OpenCV

I have a camera and a lamp.
The camera takes pictures automatically and the lamp is rigid.
Each of my pictures has a bright spot in the middle and gets darker towards the outside (linearly).
Is there an easy way to darken the middle, or brighten the outside to accommodate this (preferably with a gradient)?
I am using OpenCV with the C++ API.
Thank you for the help.
It's hard to say what exactly you want to do without an example. However, let's assume the effect is exactly the same in all images and you want to apply the same transformation to each of them.
You say the effect is linear, so let's assume you want to make the center darker by, say, 20% and the pixel furthest from the center brighter by 20%. Let's further assume the optical center is in the center of the image (which needn't be true in practice).
So you have an image cv::Mat img; that you want to manipulate, and I assume it contains data of type CV_32F (if it's not float- or double-valued, convert it; it can have more than one channel). You then create another cv::Mat:
//first, make a mask image to multiply the image with
cv::Mat mask = cv::Mat::zeros(img.rows, img.cols, CV_32F);
float maxdist = std::sqrt(img.rows*img.rows + img.cols*img.cols) / 2;
cv::Point2f center(img.cols*0.5, img.rows*0.5);
for (int j = 0; j < img.rows; ++j)
    for (int i = 0; i < img.cols; ++i)
    {
        cv::Point2f p(i, j);
        cv::Point2f diff(p - center);
        float dist(std::sqrt(diff.dot(diff)));
        float factor(0.8 + 0.4*dist/maxdist);
        mask.at<float>(j, i) = factor;
    }
//apply the transformation, to as many images as you like
img = img.mul(mask);
This doesn't check for overflows; you may or may not want to do that afterwards. But judging from your question, this would be a simple way to do it.

Rectangle detection / tracking using OpenCV

What I need
I'm currently working on an augmented-reality kind of game. The controller that the game uses (I'm talking about the physical input device here) is a mono-colored, rectangular piece of paper. I have to detect the position, rotation and size of that rectangle in the capture stream of the camera. The detection should be invariant to scale and invariant to rotation along the X and Y axes.
The scale invariance is needed in case that the user moves the paper away or towards the camera. I don't need to know the distance of the rectangle so scale invariance translates to size invariance.
The rotation invariance is needed in case the user tilts the rectangle along its local X and / or Y axis. Such a rotation changes the shape of the paper from rectangle to trapezoid. In this case, the object oriented bounding box can be used to measure the size of the paper.
What I've done
At the beginning there is a calibration step. A window shows the camera feed and the user has to click on the rectangle. On click, the color of the pixel the mouse is pointing at is taken as reference color. The frames are converted into HSV color space to improve color distinguishing. I have 6 sliders that adjust the upper and lower thresholds for each channel. These thresholds are used to binarize the image (using opencv's inRange function).
After that I'm eroding and dilating the binary image to remove noise and unite nearby chunks (using opencv's erode and dilate functions).
The next step is finding contours (using opencv's findContours function) in the binary image. These contours are used to detect the smallest oriented rectangles (using opencv's minAreaRect function). As final result I'm using the rectangle with the largest area.
A short conclusion of the procedure:
Grab a frame
Convert that frame to HSV
Binarize it (using the color that the user selected and the thresholds from the sliders)
Apply morph ops (erode and dilate)
Find contours
Get the smallest oriented bounding box of each contour
Take the largest of those bounding boxes as result
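A rough sketch of that pipeline, with the HSV bounds standing in for the values coming from the calibration click and the sliders:

import numpy as np
import cv2

def detect_rect(frame_bgr, lower_hsv, upper_hsv):
    # Binarize around the user-selected color, clean up with morph ops,
    # and return the largest oriented bounding box among the contours.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    binary = cv2.inRange(hsv, lower_hsv, upper_hsv)
    kernel = np.ones((5, 5), np.uint8)
    binary = cv2.dilate(cv2.erode(binary, kernel), kernel)
    contours = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    boxes = [cv2.minAreaRect(c) for c in contours]
    return max(boxes, key=lambda b: b[1][0] * b[1][1])  # ((cx, cy), (w, h), angle)

# Placeholder frame and HSV thresholds
frame = np.zeros((480, 640, 3), np.uint8)
box = detect_rect(frame, np.array([100, 80, 80], np.uint8), np.array([130, 255, 255], np.uint8))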
As you may have noticed, I don't take advantage of the knowledge about the actual shape of the paper, simply because I don't know how to use this information properly.
I've also thought about using the tracking algorithms of opencv. But there were three reasons that prevented me from using them:
Scale invariance: as far as I read about some of the algorithms, some don't support different scales of the object.
Movement prediction: some algorithms use movement prediction for better performance, but the object I'm tracking moves completely random and therefore unpredictable.
Simplicity: I'm just looking for a mono colored rectangle in an image, nothing fancy like car or person tracking.
Here is a - relatively - good catch (binary image after erode and dilate)
and here is a bad one
The Question
How can I improve the detection in general and especially to be more resistant against lighting changes?
Update
Here are some raw images for testing.
Can't you just use thicker material?
Yes, I can, and I already do (unfortunately I can't access these pieces at the moment). However, the problem still remains, even if I use material like cardboard. It isn't bent as easily as paper, but one can still bend it.
How do you get the size, rotation and position of the rectangle?
The minAreaRect function of opencv returns a RotatedRect object. This object contains all the data I need.
Note
Because the rectangle is mono-colored, there is no possibility to distinguish between top and bottom or left and right. This means that the rotation is always in the range [0, 180], which is perfectly fine for my purposes. The ratio of the two sides of the rect is always w:h > 2:1. If the rectangle were a square, the range of rotation would change to [0, 90], but this can be considered irrelevant here.
As suggested in the comments I will try histogram equalization to reduce brightness issues and take a look at ORB, SURF and SIFT.
I will update on progress.
The H channel in HSV space is the hue, and it is not sensitive to lighting changes. The red range is roughly [150, 180].
Based on that, I did the following:
Change into the HSV space, split the H channel, threshold and normalize it.
Apply morph ops (open)
Find contours, and filter them by some properties (width, height, area, ratio and so on).
PS. I cannot fetch the image you uploaded to Dropbox because of my network, so I just cropped the right side of your second image as the input.
imgname = "src.png"
img = cv2.imread(imgname)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
## Split the H channel in HSV, and get the red range
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h,s,v = cv2.split(hsv)
h[h<150]=0
h[h>180]=0
## normalize, do the open-morp-op
normed = cv2.normalize(h, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8UC1)
kernel = cv2.getStructuringElement(shape=cv2.MORPH_ELLIPSE, ksize=(3,3))
opened = cv2.morphologyEx(normed, cv2.MORPH_OPEN, kernel)
res = np.hstack((h, normed, opened))
cv2.imwrite("tmp1.png", res)
Now, we get the result as this (h, normed, opened):
Then find contours and filter them.
## The contour list is the second-to-last return value in both OpenCV 3 and 4
contours = cv2.findContours(opened, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
print(len(contours))
bboxes = []
rboxes = []
cnts = []
dst = img.copy()
for cnt in contours:
    ## Get the straight bounding rect
    bbox = cv2.boundingRect(cnt)
    x,y,w,h = bbox
    if w<30 or h < 30 or w*h < 2000 or w > 500:
        continue

    ## Draw rect
    cv2.rectangle(dst, (x,y), (x+w,y+h), (255,0,0), 1, 16)

    ## Get the rotated rect
    rbox = cv2.minAreaRect(cnt)
    (cx,cy), (w,h), rot_angle = rbox
    print("rot_angle:", rot_angle)

    ## backup
    bboxes.append(bbox)
    rboxes.append(rbox)
    cnts.append(cnt)
The result is like this:
rot_angle: -2.4540319442749023
rot_angle: -1.8476102352142334
Because of the blue rectangle tag in the source image, the card is split into two parts. But a clean image would have no problem.
I know it's been a while since I asked the question. I recently continued on the topic and solved my problem (although not through rectangle detection).
Changes
Using wood to strengthen my controllers (the "rectangles") like below.
Placed 2 ArUco markers on each controller.
How it works
Convert the frame to grayscale,
downsample it (to increase performance during detection),
equalize the histogram using cv::equalizeHist,
find markers using cv::aruco::detectMarkers,
correlate markers (if multiple controllers),
analyze markers (position and rotation),
compute result and apply some error correction.
It turned out that the marker detection is very robust to lighting changes and different viewing angles which allows me to skip any calibration steps.
I placed 2 markers on each controller to increase the detection robustness even more. Both markers have to be detected only once (to measure how they correlate). After that, it's sufficient to find only one marker per controller, as the other can be extrapolated from the previously computed correlation.
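A minimal sketch of the detection step (not the author's exact code; the calls assume the older cv2.aruco interface, which changed slightly in recent OpenCV versions):

import numpy as np
import cv2

frame = np.zeros((480, 640, 3), np.uint8)             # stands in for a camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.pyrDown(gray)                              # downsample to speed up detection
gray = cv2.equalizeHist(gray)

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()
corners, ids, rejected = cv2.aruco.detectMarkers(gray, aruco_dict, parameters=params)
# `corners` holds the 4 image points of each detected marker and `ids` the matching
# marker IDs, from which position and rotation of each controller can be derived.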
Here is a detection result in a bright environment:
in a darker environment:
and when hiding one of the markers (the blue point indicates the extrapolated marker position):
Failures
The initial shape detection that I implemented didn't perform well. It was very fragile to lighting changes. Furthermore, it required an initial calibration step.
After the shape detection approach, I tried SIFT and ORB in combination with brute-force and knn matchers to extract and locate features in the frames. It turned out that mono-colored objects don't provide many keypoints (what a surprise). The performance of SIFT was terrible anyway (approx. 10 fps @ 540p).
I drew some lines and other shapes on the controller, which resulted in more keypoints being available. However, this didn't yield huge improvements.

OpenCV measure rectangular image size

I have an app that finds an object in a frame and uses warpPerspective to correct the image to be square. In the course of doing so, you specify an output image size. However, I want to know how to do this without harming the object's apparent size. How can I unwarp the 4 corners of the image without changing the size of the image? I don't need the image itself; I just want to measure its height and width in pixels within the original image.
Get a transform matrix that will square up the corners.
std::vector<cv::Point2f> transformedPoints;
cv::Mat M = cv::getPerspectiveTransform(points, objectCorners);
cv::perspectiveTransform(points, transformedPoints, M);
This will square up the image, but in terms of the objectCorners coordinate system, which is -0.5f to 0.5f, not the original image plane.
BoundingRect almost does what I want.
cv::Rect boundingRectangle = cv::boundingRect(points);
But as the documentation states
The function calculates and returns the minimal up-right bounding rectangle for the specified point set.
And what I want is the bounding rectangle after it has been squared-up, not without squaring it up.
According to my understanding of your post, here is something that should help you.
OpenCV perspective transform example.
Update: if it still doesn't help you find the height and width within the image, take the minimum bounding rect of the points:
cv::RotatedRect box = cv::minAreaRect(cv::Mat(points));
As the minAreaRect reference on OpenCV's website states
Finds a rotated rectangle of the minimum area enclosing the input 2D point set.
You can call box.size and get the width and height.
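For example, a short Python sketch with made-up corner points (the question's code is C++, but the call is the same):

import numpy as np
import cv2

# Hypothetical 4 corners of the detected object in the original image
points = np.array([[120, 80], [520, 95], [540, 400], [100, 380]], dtype=np.float32)

box = cv2.minAreaRect(points)        # ((cx, cy), (w, h), angle)
(w, h) = box[1]
print("width: %.1f px, height: %.1f px" % (w, h))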

Stereo rectify - ROI have different sizes

I have done a stereo calibration and I got validPixROI1 and 2 (green border). Now I want to use StereoSGBM, but the ROIs from calibration (from stereoRectify) are not the same size. Does anyone know how to solve this?
Actually I do something like this:
Rect roiLeft(...);
Rect roiRight(...);
Mat cLeft(rLeft, roiLeft);
//Mat cRight(rRight, roiRight); // not same size...
Mat cRight(rRight, roiLeft);
stereoBM(cLeft, cRight, dst);
If I crop my images with that ROI, will the image middle point stay the same?
Here it works.
Why not run stereoBM on the (uncropped) calibrated images? Then you can use those ROIs afterwards to mask out the invalid bits of the result:
stereoBM(rLeft,rRight, disp);
//get intersection of both rois or use target image roi, if you know the target image
cv::Rect visibleRoi = roiLeft & roiRight;
cv::Mat cDisp(disp,visibleRoi);
Now you have no issues with different size inputs, or different centers and such.
Cheers
According to Wikipedia:
A point R at the intersection of the optical axis and the image plane. This point is referred to as the principal point or image center.
So I don't think the center will be same.
Refer to this site. In one of its examples the principal point is 302.71656, 242.33386 for a 640x480 pixel camera, which shows that the principal point and the image center are not the same.
Run the block matcher on the uncropped rectified images and then use:
cv::getValidDisparityROI(roi1, roi2, minDisparity, numberOfDisparities, SADWindowSize);
That call returns a cv::Rect that will be a bounding box for all the valid pixels in the left image and the disparity map. The valid pixels are only pixels that both cameras can "see" (caveat on occluded edges).
Once you have the disparity map the right image becomes useless.
Be aware that the ROIs returned from stereoRectify are just the valid pixels after the remap from the camera intrinsics.
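A small sketch of that workflow, with noise images standing in for the rectified pair and placeholder ROIs and matcher settings:

import numpy as np
import cv2

# Noise images stand in for the rectified pair; ROIs and settings are placeholders.
left = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
right = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
roi1, roi2 = (16, 8, 600, 460), (24, 6, 590, 465)   # validPixROI1/2 from stereoRectify
min_disp, num_disp, block_size = 0, 64, 15

matcher = cv2.StereoBM_create(numDisparities=num_disp, blockSize=block_size)
disp = matcher.compute(left, right)

# Bounding box of pixels valid in both views and in the disparity map
x, y, w, h = cv2.getValidDisparityROI(roi1, roi2, min_disp, num_disp, block_size)
valid_disp = disp[y:y+h, x:x+w]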