Fitting an Image to a ROI - c++

I have an ROI and an image. I have to fill the ROI with the image I have. The image should scale according to the ROI's shape and size, and it should fill the entire ROI without repeating. How can I achieve this using OpenCV? Is there any method in OpenCV for this?
Suppose this white section is my ROI, and this is my input image (images shown in the original post).
Is there any solution using ImageMagick?

Finding the optimal fit of one shape inside another is not trivial, but if you can settle for a suboptimal result, you can do the following:
import cv2
import numpy as np
from matplotlib import pyplot as plt

# Find the ROI contour and fit an ellipse to it
# (bg_img and fruit_alpha come from preprocessing that is omitted here)
bg_contours, bg_hierarchy = cv2.findContours(bg_img, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
bg_contour = bg_contours[0]
bg_ellipse = cv2.fitEllipse(bg_contour)

# Find the fruit contour, take its convex hull, and fit an ellipse to that
p_contours, p_hierarchy = cv2.findContours(fruit_alpha, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
pear_hull = cv2.convexHull(p_contours[0])
pear_ellipse = cv2.fitEllipse(pear_hull)

# Initial guess: the smaller of the two axis ratios, plus the shift that aligns the ellipse centers
min_ratio = min(bg_ellipse[1][0] / pear_ellipse[1][0], bg_ellipse[1][1] / pear_ellipse[1][1])
x_shift = bg_ellipse[0][0] - pear_ellipse[0][0] * min_ratio
y_shift = bg_ellipse[0][1] - pear_ellipse[0][1] * min_ratio
(Heuristic) Resize the fruit contour: start with an initial guess based on the ellipses and refine using the contour (this can be improved, but it is a non-trivial optimization problem; you can read more here):
r_contour = np.array([[[int(j) for j in i[0]]] for i in min_ratio * p_contours[max_c_ix]])
min_dist, bad_pt = GetMinDist(outer_contour=bg_contour, inner_contour=r_contour, offset=(int(x_shift), int(y_shift)))
mask_size = max(bg_ellipse[1][0], bg_ellipse[1][1])
scale = min_ratio * (mask_size + min_dist) / mask_size
r_contour = np.array([[[int(j) for j in i[0]]] for i in scale * p_contours[max_c_ix]])
Combine the images using the alpha channel:
combined = CombineImages(bg, fruit_rgb, fruit_alpha, scale, (int(x_shift), int(y_shift)))
Utility functions:
def GetMinDist(outer_contour, inner_contour, offset):
    min_dist = 10000
    bad_pt = (0, 0)
    for i_pt in inner_contour:
        pt = (i_pt[0][0] + int(offset[0]), i_pt[0][1] + int(offset[1]))
        dst = cv2.pointPolygonTest(outer_contour, pt, True)  # signed distance: negative means outside
        if dst < min_dist:
            min_dist = dst
            bad_pt = pt
    return min_dist, bad_pt
def CombineImages(mask_img, fruit_img, fruit_alpha, scale, offset):
    mask_height, mask_width, mask_dim = mask_img.shape
    combined_img = np.copy(mask_img)
    resized_fruit = np.zeros_like(mask_img)
    resized_alpha = np.zeros((mask_height, mask_width), fruit_alpha.dtype)
    f_height, f_width, f_dim = fruit_img.shape
    r_fruit = cv2.resize(fruit_img, (int(f_width * scale), int(f_height * scale)))
    r_alpha = cv2.resize(fruit_alpha, (int(f_width * scale), int(f_height * scale)))
    height, width, channels = r_fruit.shape
    # Paste the resized fruit (and its alpha) at the given offset inside full-size buffers
    roi_x_from = offset[0]
    roi_x_to = offset[0] + width
    roi_y_from = offset[1]
    roi_y_to = offset[1] + height
    resized_fruit[roi_y_from:roi_y_to, roi_x_from:roi_x_to, :] = r_fruit
    resized_alpha[roi_y_from:roi_y_to, roi_x_from:roi_x_to] = r_alpha
    # Copy fruit pixels over the background wherever alpha is non-zero
    for y in range(0, mask_height):
        for x in range(0, mask_width):
            if resized_alpha[y, x] > 0:
                combined_img[y, x, :] = resized_fruit[y, x, :]
    return combined_img
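As an aside, the per-pixel loop at the end can be replaced by a vectorized NumPy blend; this is a sketch assuming the same inputs and the same "copy where alpha is non-zero" behavior:

mask = resized_alpha > 0
combined_img[mask] = resized_fruit[mask]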
I hope that helps.
(I omitted parts of the code that do not contribute to the understanding of the flow.)

Related

How to fix the reprojection from EASE-2 grid product SMAP to geographic coordinates?

I have been working with SMAP satellite data, especially for moisture and soil properties.
I followed the idea of using GDAL to solve everything, making something similar to what is published in "Link to first approach to download SMAP data".
Modifying the code and testing:
import os
import h5py
import numpy as np
from osgeo import gdal, gdal_array, osr
# the file to download:
# https://n5eil01u.ecs.nsidc.org/SMAP/SPL4SMAU.003/2017.08.01/SMAP_L4_SM_aup_20170801T030000_Vv3030_001.h5
path = "/path/to/data"
h5File = h5py.File(os.path.join(path, "SMAP_L4_SM_aup_20170801T030000_Vv3030_001.h5"), 'r')
data = h5File.get('Analysis_Data/sm_rootzone_analysis')
lat = h5File.get("cell_lat")
lon = h5File.get("cell_lon")
np_data = np.array(data)
np_lat = np.array(lat)
np_lon = np.array(lon)
nrows, ncols = np_data.shape
xmin = np_lon.min()
xmax = np_lon.max()
ymin = np_lat.min()
ymax = np_lat.max()
xres = (xmax - xmin) / float(ncols)
yres = (ymax - ymin) / float(nrows)
geotransform = (xmin, xres, 0, ymax, 0, -yres)
dataFileOutput = path + "sm_rootzone_analysis.tif"
output_raster = gdal.GetDriverByName('GTiff').Create(dataFileOutput, ncols, nrows, 1, gdal.GDT_Float32) # Open the file
output_raster.SetGeoTransform(geotransform)
srs = osr.SpatialReference()
srs.ImportFromEPSG(4326)
output_raster.SetProjection(srs.ExportToWkt())
output_raster.GetRasterBand(1).WriteArray(np_data) # Writes my array to the raster
del output_raster
So, using this approach, the result is a global map with many projection problems, as in the image below (omitted), produced by the Python code above.
To compare against correct data, the same image was extracted from the h5 file using NASA's HEG software.
If the data is really on the EASE2 Global grid, you shouldn't assign EPSG:4326 as the coordinate system with lat/lon degrees in the geotransform.
If you convert the lat/lon coordinates to the EASE2 Grid at 9km, your geotransform should be something like:
geotransform = (-17367530.44516138, 9000, 0, 7314540.79258289, 0, -9000.0)
and the srs:
srs.ImportFromEPSG(6933)
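Putting both changes together, the raster-creation part of the script above becomes something like this (a sketch reusing the question's variables; the grid constants are the assumed 9 km EASE-2 Global values):

from osgeo import gdal, osr

geotransform = (-17367530.44516138, 9000, 0, 7314540.79258289, 0, -9000.0)  # EASE-2 Global, 9 km cells
output_raster = gdal.GetDriverByName('GTiff').Create(dataFileOutput, ncols, nrows, 1, gdal.GDT_Float32)
output_raster.SetGeoTransform(geotransform)
srs = osr.SpatialReference()
srs.ImportFromEPSG(6933)  # EASE-Grid 2.0 Global (cylindrical equal-area)
output_raster.SetProjection(srs.ExportToWkt())
output_raster.GetRasterBand(1).WriteArray(np_data)
del output_raster  # flush and close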

Labelling in MLP

I have a problem with the labeling in the MLP. At first I thought it would be the same as the SVM labeling, but after trying the code below:
Mat labels(numSamples, 3 , CV_32FC1, Scalar(3,0));
labels.rowRange(0, numcar - 1) = Scalar (1.0);
labels.rowRange(numcar, numcar + numbus - 1) = Scalar (2.0);
labels.rowRange(numcar + numbus, numSamples) = Scalar (3.0);
the prediction comes out as the same value even if I replace the image with another image. After searching, it turns out there is a difference: the labels must use a vector per sample, and I do not know how to build such labels because I am a newbie in this case.
Below is the code for training:
Mat layers = Mat(4, 1 ,CV_32SC1);
int sz = data.cols;
layers.row(0) = Scalar(sz);
layers.row(1) = Scalar(10);
layers.row(2) = Scalar(10);
layers.row(3) = Scalar(3);
CvANN_MLP mlp;
CvANN_MLP_TrainParams params;
CvTermCriteria criteria;
criteria.max_iter = 1000;
criteria.epsilon = 0.0001;
criteria.type = CV_TERMCRIT_ITER | CV_TERMCRIT_EPS;
params.train_method = CvANN_MLP_TrainParams::BACKPROP;
params.bp_dw_scale = 0.5f;
params.bp_moment_scale = 0.5f;
params.term_crit = criteria;
mlp.create(layers, CvANN_MLP::SIGMOID_SYM);
mlp.train(data , labels ,Mat(),Mat(),params);
and predictions
Mat response(1, 3, CV_32FC1);
mlp.predict (sample, response);
cout << response << endl;
Here I want to label cars, buses, and trucks.
Help me to solve this problem. Thanks for your attention.
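For reference, MLP training targets are usually one row per sample containing a one-hot vector for its class. A minimal sketch of the idea in Python/NumPy (the counts here are hypothetical, not the asker's):

import numpy as np

numcar, numbus, numtruck = 100, 80, 60  # hypothetical sample counts
num_samples = numcar + numbus + numtruck
labels = np.zeros((num_samples, 3), dtype=np.float32)
labels[:numcar, 0] = 1.0                 # car   -> [1, 0, 0]
labels[numcar:numcar + numbus, 1] = 1.0  # bus   -> [0, 1, 0]
labels[numcar + numbus:, 2] = 1.0        # truck -> [0, 0, 1]

At prediction time, the class is the index of the largest entry in the response, e.g. np.argmax(response).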

matplotlib legend at the bottom of the figure with twinx

I am trying to draw a legend under two plots (created using twinx). I want the legend drawn at the bottom center, aligned in 4 columns. So far, no success. How can I make the legend relative to the entire plot, not just a single axis object? Any help?
import matplotlib.pyplot as plt;
import numpy as np;
from matplotlib import rc;
filename = 'ml.pdf';
fig, ax1 = plt.subplots(frameon=False);
rc('mathtext', default='regular');
rc('lines',lw=2.6);
rc('lines',mew=2.4);
rc('text', usetex=True);
x = np.array([5,10,20,50]);
dp_g = np.array([23.43, 29.93, 36.50, 46.07]);
mr_g = np.array([25.33, 31.83, 38.39, 47.75]);
md_g = np.array([24.94, 31.33, 37.80, 47.10]);
sb_g = np.array([27.01, 34.86, 43.18, 54.35]);
lns1 = ax1.plot(x,dp_g,'bs:', label="MD\n($\lambda$=.8)");
lns2 = ax1.plot(x,mr_g,'bs--',label="MR\n($\lambda$=.1)");
lns3 = ax1.plot(x,md_g,'bs-.',label='MD');
lns4 = ax1.plot(x,sb_g,'bs-',label="SB\n($\gamma$=.1)");
ax1.set_ylabel('CG ($\times$ 100)',color='b',size=14);
ax1.set_ylim([20,57]);
ax1.set_xlim([4,51]);
ax1.set_xticks(x);
ax1.tick_params(axis='y', which=u'both', length=0, labelsize=14, colors='b');
ax1.tick_params(axis='x', which=u'both', length=0, labelsize=14);
ax2 = ax1.twinx();
dp_d = np.array([18.84, 19.55, 20.09, 20.08]);
mr_d = np.array([19.42, 19.73, 20.06, 20.04]);
md_d = np.array([19.02, 19.75, 20.28, 20.29]);
sb_d = np.array([20.81, 19.77, 19.20, 19.03]);
lns6 = ax2.plot(x,dp_d,'rv:',label="MD\n($\lambda$=.8)");
lns7 = ax2.plot(x,mr_d,'rv--',label="MR\n($\lambda$=.1)");
lns8 = ax2.plot(x,md_d,'rv-.',label='MD');
lns9 = ax2.plot(x,sb_d,'rv-',label="SB\n($\gamma$=.1)");
lns = lns1 + lns2 + lns3 + lns4 + lns6 + lns7 + lns8 + lns9;
labs = [l.get_label() for l in lns];
ax2.set_ylabel('LD ($\times$ 100)',color='r',size=14);
ax2.set_ylim([15,23]);
ax2.set_xlim([4,51]);
ax2.set_xticks(x);
ax2.tick_params(axis='y', which=u'both', length=0, labelsize=14, colors='r');
ax2.tick_params(axis='x', which=u'both', length=0, labelsize=14);
ax1.set_xlabel('\# of items',size=14);
borderaxespad=2.5, ncol = 1, fontsize='11.5');
lgd = ax1.legend(lns, labs, bbox_to_anchor=(1.01,1.0), loc='lower center', borderaxespad=2.5, ncol = 4, fontsize='14');
fig.savefig(filename,format='pdf',transparent=True, bbox_extra_artists=(lgd,), bbox_inches='tight');
Apart from the broken line borderaxespad=2.5, ncol = 1, fontsize='11.5');, I believe what you want to do is just remove the bbox_to_anchor=(1.01, 1.0) from the legend definition. Doing so will put the legend at the bottom center of the plot (however, the legend is very wide, so it will span the entire width of the plot).
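In code, the suggested change is roughly (a sketch reusing the question's lns and labs):

lgd = ax1.legend(lns, labs, loc='lower center', borderaxespad=2.5, ncol=4, fontsize='14')

If you instead want the legend below the axes rather than inside them, a common variant is to keep bbox_to_anchor but point it under the plot, e.g. bbox_to_anchor=(0.5, -0.2) with loc='upper center'.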

Rotate an image without cropping in OpenCV in C++

I'd like to rotate an image, but I can't obtain the rotated image without cropping.
My original image:
Now I use this code:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
// Compile with g++ code.cpp -lopencv_core -lopencv_highgui -lopencv_imgproc
int main()
{
cv::Mat src = cv::imread("im.png", CV_LOAD_IMAGE_UNCHANGED);
cv::Mat dst;
cv::Point2f pc(src.cols/2., src.rows/2.);
cv::Mat r = cv::getRotationMatrix2D(pc, -45, 1.0);
cv::warpAffine(src, dst, r, src.size()); // what size should I use?
cv::imwrite("rotated_im.png", dst);
return 0;
}
And obtain the following image:
But I'd like to obtain this:
My answer is inspired by the following posts / blog entries:
Rotate cv::Mat using cv::warpAffine offsets destination image
http://john.freml.in/opencv-rotation
Main ideas:
Adjusting the rotation matrix by adding a translation to the new image center
Using cv::RotatedRect to rely on existing OpenCV functionality as much as possible
Code tested with OpenCV 3.4.1:
#include "opencv2/opencv.hpp"
int main()
{
cv::Mat src = cv::imread("im.png", CV_LOAD_IMAGE_UNCHANGED);
double angle = -45;
// get rotation matrix for rotating the image around its center in pixel coordinates
cv::Point2f center((src.cols-1)/2.0, (src.rows-1)/2.0);
cv::Mat rot = cv::getRotationMatrix2D(center, angle, 1.0);
// determine bounding rectangle, center not relevant
cv::Rect2f bbox = cv::RotatedRect(cv::Point2f(), src.size(), angle).boundingRect2f();
// adjust transformation matrix
rot.at<double>(0,2) += bbox.width/2.0 - src.cols/2.0;
rot.at<double>(1,2) += bbox.height/2.0 - src.rows/2.0;
cv::Mat dst;
cv::warpAffine(src, dst, rot, bbox.size());
cv::imwrite("rotated_im.png", dst);
return 0;
}
Just try the code below; the idea is simple:
You need to create a blank image with the maximum size you're expecting while rotating at any angle. Here you should use Pythagoras as mentioned in the above comments.
Now copy the source image to the newly created image and pass it to warpAffine. Here you should use the centre of the newly created image for rotation.
After warpAffine, if you need to crop the exact image, translate the four corners of the source image in the enlarged image using the rotation matrix as described here.
Find the minimum x and minimum y for the top corner, and the maximum x and maximum y for the bottom corner, from the above result to crop the image.
This is the code:
int theta = 0;
Mat src,frame, frameRotated;
src = imread("rotate.png",1);
cout<<endl<<endl<<"Press '+' to rotate anti-clockwise and '-' for clockwise 's' to save" <<endl<<endl;
int diagonal = (int)sqrt(src.cols*src.cols+src.rows*src.rows);
int newWidth = diagonal;
int newHeight =diagonal;
int offsetX = (newWidth - src.cols) / 2;
int offsetY = (newHeight - src.rows) / 2;
Mat targetMat(newWidth, newHeight, src.type());
Point2f src_center(targetMat.cols/2.0F, targetMat.rows/2.0F);
while(1){
src.copyTo(frame);
double radians = theta * M_PI / 180.0;
double sin = abs(std::sin(radians));
double cos = abs(std::cos(radians));
frame.copyTo(targetMat.rowRange(offsetY, offsetY + frame.rows).colRange(offsetX, offsetX + frame.cols));
Mat rot_mat = getRotationMatrix2D(src_center, theta, 1.0);
warpAffine(targetMat, frameRotated, rot_mat, targetMat.size());
//Calculate the bounding rect to crop the exact image
//Reference:- https://stackoverflow.com/questions/19830477/find-the-bounding-rectangle-of-rotated-rectangle/19830964?noredirect=1#19830964
Rect bound_Rect(frame.cols,frame.rows,0,0);
int x1 = offsetX;
int x2 = offsetX+frame.cols;
int x3 = offsetX;
int x4 = offsetX+frame.cols;
int y1 = offsetY;
int y2 = offsetY;
int y3 = offsetY+frame.rows;
int y4 = offsetY+frame.rows;
Mat co_Ordinate = (Mat_<double>(3,4) << x1, x2, x3, x4,
y1, y2, y3, y4,
1, 1, 1, 1 );
Mat RotCo_Ordinate = rot_mat * co_Ordinate;
for(int i=0;i<4;i++){
    if(RotCo_Ordinate.at<double>(0,i)<bound_Rect.x)
        bound_Rect.x=(int)RotCo_Ordinate.at<double>(0,i); //access smallest x
    if(RotCo_Ordinate.at<double>(1,i)<bound_Rect.y)
        bound_Rect.y=(int)RotCo_Ordinate.at<double>(1,i); //access smallest y
}
for(int i=0;i<4;i++){
    if(RotCo_Ordinate.at<double>(0,i)>bound_Rect.width)
        bound_Rect.width=(int)RotCo_Ordinate.at<double>(0,i); //access largest x
    if(RotCo_Ordinate.at<double>(1,i)>bound_Rect.height)
        bound_Rect.height=(int)RotCo_Ordinate.at<double>(1,i); //access largest y
}
bound_Rect.width=bound_Rect.width-bound_Rect.x;
bound_Rect.height=bound_Rect.height-bound_Rect.y;
Mat cropedResult;
Mat ROI = frameRotated(bound_Rect);
ROI.copyTo(cropedResult);
imshow("Result", cropedResult);
imshow("frame", frame);
imshow("rotated frame", frameRotated);
char k=waitKey();
if(k=='+') theta+=10;
if(k=='-') theta-=10;
if(k=='s') imwrite("rotated.jpg",cropedResult);
if(k==27) break;
}
Cropped Image
Thanks Robula!
Actually, you do not need to compute sine and cosine twice.
import cv2

def rotate_image(mat, angle):
    # angle in degrees
    height, width = mat.shape[:2]
    image_center = (width / 2, height / 2)
    rotation_mat = cv2.getRotationMatrix2D(image_center, angle, 1.)
    # reuse the cosine and sine already stored in the rotation matrix
    abs_cos = abs(rotation_mat[0, 0])
    abs_sin = abs(rotation_mat[0, 1])
    bound_w = int(height * abs_sin + width * abs_cos)
    bound_h = int(height * abs_cos + width * abs_sin)
    rotation_mat[0, 2] += bound_w / 2 - image_center[0]
    rotation_mat[1, 2] += bound_h / 2 - image_center[1]
    rotated_mat = cv2.warpAffine(mat, rotation_mat, (bound_w, bound_h))
    return rotated_mat
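For example (using the file names from the question):

img = cv2.imread('im.png')
rotated = rotate_image(img, -45)
cv2.imwrite('rotated_im.png', rotated)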
Thanks @Haris! Here's the Python version:
import math
import cv2
import numpy as np

def rotate_image(image, angle):
    '''Rotate image "angle" degrees.
    How it works:
    - Creates a blank image that fits any rotation of the image. To achieve
      this, set the height and width to be the image's diagonal.
    - Copy the original image to the center of this blank image
    - Rotate using warpAffine, using the newly created image's center
      (the enlarged blank image center)
    - Translate the four corners of the source image in the enlarged image
      using homogenous multiplication of the rotation matrix.
    - Crop the image according to these transformed corners
    '''
    diagonal = int(math.sqrt(pow(image.shape[0], 2) + pow(image.shape[1], 2)))
    offset_x = (diagonal - image.shape[0]) // 2
    offset_y = (diagonal - image.shape[1]) // 2
    dst_image = np.zeros((diagonal, diagonal, 3), dtype='uint8')
    image_center = (diagonal / 2, diagonal / 2)
    R = cv2.getRotationMatrix2D(image_center, angle, 1.0)
    dst_image[offset_x:(offset_x + image.shape[0]),
              offset_y:(offset_y + image.shape[1]), :] = image
    dst_image = cv2.warpAffine(dst_image, R, (diagonal, diagonal), flags=cv2.INTER_LINEAR)
    # Calculate the rotated bounding rect from the four source corners
    x0 = offset_x
    x1 = offset_x + image.shape[0]
    x2 = offset_x
    x3 = offset_x + image.shape[0]
    y0 = offset_y
    y1 = offset_y
    y2 = offset_y + image.shape[1]
    y3 = offset_y + image.shape[1]
    corners = np.zeros((3, 4))
    corners[0, :] = [x0, x1, x2, x3]
    corners[1, :] = [y0, y1, y2, y3]
    corners[2, :] = 1
    c = np.dot(R, corners)
    x = int(c[0, 0])
    y = int(c[1, 0])
    left, right, up, down = x, x, y, y
    for i in range(4):
        x = int(c[0, i])
        y = int(c[1, i])
        if x < left: left = x
        if x > right: right = x
        if y < up: up = y
        if y > down: down = y
    h = down - up
    w = right - left
    cropped = np.zeros((w, h, 3), dtype='uint8')
    cropped[:, :, :] = dst_image[left:(left + w), up:(up + h), :]
    return cropped
Increase the image canvas (equally from the center without changing the image size) so that it can fit the image after rotation, then apply warpAffine:
Mat img = imread ("/path/to/image", 1);
double offsetX, offsetY;
double angle = -45;
double width = img.size().width;
double height = img.size().height;
Point2d center = Point2d (width / 2, height / 2);
Rect bounds = RotatedRect (center, img.size(), angle).boundingRect();
Mat resized = Mat::zeros (bounds.size(), img.type());
offsetX = (bounds.width - width) / 2;
offsetY = (bounds.height - height) / 2;
Rect roi = Rect (offsetX, offsetY, width, height);
img.copyTo (resized (roi));
center += Point2d (offsetX, offsetY);
Mat M = getRotationMatrix2D (center, angle, 1.0);
warpAffine (resized, resized, M, resized.size());
After searching around for a clean and easy-to-understand solution, and reading through the answers above trying to understand them, I eventually came up with a solution using trigonometry.
I hope this helps somebody :)
import cv2
import math

def rotate_image(mat, angle):
    height, width = mat.shape[:2]
    image_center = (width / 2, height / 2)
    rotation_mat = cv2.getRotationMatrix2D(image_center, angle, 1)
    radians = math.radians(angle)
    sin = math.sin(radians)
    cos = math.cos(radians)
    bound_w = int((height * abs(sin)) + (width * abs(cos)))
    bound_h = int((height * abs(cos)) + (width * abs(sin)))
    rotation_mat[0, 2] += ((bound_w / 2) - image_center[0])
    rotation_mat[1, 2] += ((bound_h / 2) - image_center[1])
    rotated_mat = cv2.warpAffine(mat, rotation_mat, (bound_w, bound_h))
    return rotated_mat
EDIT: Please refer to @Remi Cuingnet's answer (the one that reuses the cosine and sine already in the rotation matrix).
For a Python version that rotates an image while giving you control over the padded background region, you can use scipy.ndimage.rotate; with the default reshape=True the output canvas is expanded to fit the rotation, and mode controls how the padding is filled. Here is an example:
import cv2
import matplotlib.pyplot as plt
from skimage import io
from scipy import ndimage

image = io.imread('https://www.pyimagesearch.com/wp-content/uploads/2019/12/tensorflow2_install_ubuntu_header.jpg')
io.imshow(image)
plt.show()
rotated = ndimage.rotate(image, angle=234, mode='nearest')
rotated = cv2.resize(rotated, (image.shape[1], image.shape[0]))  # cv2.resize takes (width, height)
# rotated = cv2.cvtColor(rotated, cv2.COLOR_BGR2RGB)
# cv2.imwrite('rotated.jpg', rotated)
io.imshow(rotated)
plt.show()
If you have a rotation and a scaling of the image:
#include "opencv2/opencv.hpp"
#include <functional>
#include <vector>
bool compareCoords(cv::Point2f p1, cv::Point2f p2, char coord)
{
assert(coord == 'x' || coord == 'y');
if (coord == 'x')
return p1.x < p2.x;
return p1.y < p2.y;
}
int main(int argc, char** argv)
{
cv::Mat image = cv::imread("lenna.png");
float angle = 45.0; // degrees
float scale = 0.5;
cv::Mat_<float> rot_mat = cv::getRotationMatrix2D( cv::Point2f( 0.0f, 0.0f ), angle, scale );
// Image corners
cv::Point2f pA = cv::Point2f(0.0f, 0.0f);
cv::Point2f pB = cv::Point2f(image.cols, 0.0f);
cv::Point2f pC = cv::Point2f(image.cols, image.rows);
cv::Point2f pD = cv::Point2f(0.0f, image.rows);
std::vector<cv::Point2f> pts = { pA, pB, pC, pD };
std::vector<cv::Point2f> ptsTransf;
cv::transform(pts, ptsTransf, rot_mat );
using namespace std::placeholders;
float minX = std::min_element(ptsTransf.begin(), ptsTransf.end(), std::bind(compareCoords, _1, _2, 'x'))->x;
float maxX = std::max_element(ptsTransf.begin(), ptsTransf.end(), std::bind(compareCoords, _1, _2, 'x'))->x;
float minY = std::min_element(ptsTransf.begin(), ptsTransf.end(), std::bind(compareCoords, _1, _2, 'y'))->y;
float maxY = std::max_element(ptsTransf.begin(), ptsTransf.end(), std::bind(compareCoords, _1, _2, 'y'))->y;
float newW = maxX - minX;
float newH = maxY - minY;
cv::Mat_<float> trans_mat = (cv::Mat_<float>(2,3) << 0, 0, -minX, 0, 0, -minY);
cv::Mat_<float> M = rot_mat + trans_mat;
cv::Mat warpedImage;
cv::warpAffine( image, warpedImage, M, cv::Size(newW, newH) );
cv::imshow("lenna", image);
cv::imshow("Warped lenna", warpedImage);
cv::waitKey();
cv::destroyAllWindows();
return 0;
}
Thanks to everyone for this post; it has been super useful. However, I found some black lines on the left and top (using Rose's Python version) when rotating 90º. The problem seemed to be some int() roundings. In addition to that, I changed the sign of the angle to make it grow clockwise.
def rotate_image(image, angle):
    '''Rotate image "angle" degrees.
    How it works:
    - Creates a blank image that fits any rotation of the image. To achieve
      this, set the height and width to be the image's diagonal.
    - Copy the original image to the center of this blank image
    - Rotate using warpAffine, using the newly created image's center
      (the enlarged blank image center)
    - Translate the four corners of the source image in the enlarged image
      using homogenous multiplication of the rotation matrix.
    - Crop the image according to these transformed corners
    '''
    diagonal = int(math.ceil(math.sqrt(pow(image.shape[0], 2) + pow(image.shape[1], 2))))
    offset_x = (diagonal - image.shape[0]) // 2
    offset_y = (diagonal - image.shape[1]) // 2
    dst_image = np.zeros((diagonal, diagonal, 3), dtype='uint8')
    image_center = (float(diagonal - 1) / 2, float(diagonal - 1) / 2)
    R = cv2.getRotationMatrix2D(image_center, -angle, 1.0)
    dst_image[offset_x:(offset_x + image.shape[0]), offset_y:(offset_y + image.shape[1]), :] = image
    dst_image = cv2.warpAffine(dst_image, R, (diagonal, diagonal), flags=cv2.INTER_LINEAR)
    # Calculate the rotated bounding rect from the four source corners
    x0 = offset_x
    x1 = offset_x + image.shape[0]
    x2 = offset_x + image.shape[0]
    x3 = offset_x
    y0 = offset_y
    y1 = offset_y
    y2 = offset_y + image.shape[1]
    y3 = offset_y + image.shape[1]
    corners = np.zeros((3, 4))
    corners[0, :] = [x0, x1, x2, x3]
    corners[1, :] = [y0, y1, y2, y3]
    corners[2, :] = 1
    c = np.dot(R, corners)
    x = int(round(c[0, 0]))
    y = int(round(c[1, 0]))
    left, right, up, down = x, x, y, y
    for i in range(4):
        x = c[0, i]
        y = c[1, i]
        if x < left: left = x
        if x > right: right = x
        if y < up: up = y
        if y > down: down = y
    h = int(round(down - up))
    w = int(round(right - left))
    left = int(round(left))
    up = int(round(up))
    cropped = np.zeros((w, h, 3), dtype='uint8')
    cropped[:, :, :] = dst_image[left:(left + w), up:(up + h), :]
    return cropped
Go version (using gocv) of @robula's and @remi-cuingnet's answers:
func rotateImage(mat *gocv.Mat, angle float64) *gocv.Mat {
    height := mat.Rows()
    width := mat.Cols()
    imgCenter := image.Point{X: width / 2, Y: height / 2}
    rotationMat := gocv.GetRotationMatrix2D(imgCenter, -angle, 1.0)
    absCos := math.Abs(rotationMat.GetDoubleAt(0, 0))
    absSin := math.Abs(rotationMat.GetDoubleAt(0, 1))
    boundW := float64(height)*absSin + float64(width)*absCos
    boundH := float64(height)*absCos + float64(width)*absSin
    rotationMat.SetDoubleAt(0, 2, rotationMat.GetDoubleAt(0, 2)+(boundW/2)-float64(imgCenter.X))
    rotationMat.SetDoubleAt(1, 2, rotationMat.GetDoubleAt(1, 2)+(boundH/2)-float64(imgCenter.Y))
    gocv.WarpAffine(*mat, mat, rotationMat, image.Point{X: int(boundW), Y: int(boundH)})
    return mat
}
I rotate the same matrix in memory; make a new matrix if you don't want to alter it.
For anyone using the Emgu.CV or OpenCvSharp wrapper in .NET, there's a C# implementation of Lars Schillingmann's answer:
Emgu.CV:
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
public static class MatExtension
{
/// <summary>
/// <see>https://stackoverflow.com/questions/22041699/rotate-an-image-without-cropping-in-opencv-in-c/75451191#75451191</see>
/// </summary>
public static Mat Rotate(this Mat src, float degrees)
{
degrees = -degrees; // counter-clockwise to clockwise
var center = new PointF((src.Width - 1) / 2f, (src.Height - 1) / 2f);
var rotationMat = new Mat();
CvInvoke.GetRotationMatrix2D(center, degrees, 1, rotationMat);
var boundingRect = new RotatedRect(new(), src.Size, degrees).MinAreaRect();
rotationMat.Set(0, 2, rotationMat.Get<double>(0, 2) + (boundingRect.Width / 2f) - (src.Width / 2f));
rotationMat.Set(1, 2, rotationMat.Get<double>(1, 2) + (boundingRect.Height / 2f) - (src.Height / 2f));
var rotatedSrc = new Mat();
CvInvoke.WarpAffine(src, rotatedSrc, rotationMat, boundingRect.Size);
return rotatedSrc;
}
/// <summary>
/// <see>https://stackoverflow.com/questions/32255440/how-can-i-get-and-set-pixel-values-of-an-emgucv-mat-image/69537504#69537504</see>
/// </summary>
public static unsafe void Set<T>(this Mat mat, int row, int col, T value) where T : struct =>
_ = new Span<T>(mat.DataPointer.ToPointer(), mat.Rows * mat.Cols * mat.ElementSize)
{
[(row * mat.Cols) + col] = value
};
public static unsafe T Get<T>(this Mat mat, int row, int col) where T : struct =>
new ReadOnlySpan<T>(mat.DataPointer.ToPointer(), mat.Rows * mat.Cols * mat.ElementSize)
[(row * mat.Cols) + col];
}
OpenCvSharp:
OpenCvSharp already has a Mat.Set<> method that functions the same as mat.at<> in the original OpenCV, so we don't have to copy these methods from "How can I get and set pixel values of an EmguCV Mat image?":
using OpenCvSharp;
public static class MatExtension
{
/// <summary>
/// <see>https://stackoverflow.com/questions/22041699/rotate-an-image-without-cropping-in-opencv-in-c/75451191#75451191</see>
/// </summary>
public static Mat Rotate(this Mat src, float degrees)
{
degrees = -degrees; // counter-clockwise to clockwise
var center = new Point2f((src.Width - 1) / 2f, (src.Height - 1) / 2f);
var rotationMat = Cv2.GetRotationMatrix2D(center, degrees, 1);
var boundingRect = new RotatedRect(new(), new Size2f(src.Width, src.Height), degrees).BoundingRect();
rotationMat.Set(0, 2, rotationMat.Get<double>(0, 2) + (boundingRect.Width / 2f) - (src.Width / 2f));
rotationMat.Set(1, 2, rotationMat.Get<double>(1, 2) + (boundingRect.Height / 2f) - (src.Height / 2f));
var rotatedSrc = new Mat();
Cv2.WarpAffine(src, rotatedSrc, rotationMat, boundingRect.Size);
return rotatedSrc;
}
}
Also, you may want to mutate the src param instead of returning a new clone of it during rotation; for that, you can just set the dst param of WarpAffine() to the same Mat as src (see: c++, opencv: Is it safe to use the same Mat for both source and destination images in filtering operation?):
CvInvoke.WarpAffine(src, src, rotationMat, boundingRect.Size);
This is called in-place mode: https://answers.opencv.org/question/24/do-all-opencv-functions-support-in-place-mode-for-their-arguments/
See also: Can the OpenCV function cvtColor be used to convert a matrix in place?
If it is just to rotate 90 degrees, maybe this code could be useful.
Mat img = imread("images.jpg");
Mat rt(img.cols, img.rows, img.type()); // a 90-degree rotation swaps width and height
Point2f pc(img.cols / 2.0, img.rows / 2.0);
Mat r = getRotationMatrix2D(pc, 90, 1);
// shift so the rotated content lands inside the swapped-size canvas
r.at<double>(0, 2) += (img.rows - img.cols) / 2.0;
r.at<double>(1, 2) += (img.cols - img.rows) / 2.0;
warpAffine(img, rt, r, rt.size());
imshow("rotated", rt);
Hope it's useful.
By the way, for 90º rotations only, here is a more efficient + accurate function:
def rotate_image_90(image, angle):
    # only multiples of 90 are handled; any other angle returns the image unchanged
    angle = -angle
    rotated_image = image
    if angle == 0:
        pass
    elif angle == 90:
        rotated_image = np.rot90(rotated_image)
    elif angle == 180 or angle == -180:
        rotated_image = np.rot90(rotated_image, 2)
    elif angle == -90:
        rotated_image = np.rot90(rotated_image, 3)
    return rotated_image
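Equivalently, since np.rot90 accepts a repetition count k, the whole dispatch collapses to a one-liner (a sketch; assumes angle is a multiple of 90):

def rotate_image_90(image, angle):
    return np.rot90(image, k=(-angle // 90) % 4)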

OpenCV StereoRectifyUncalibrated to 3D Point Cloud

I have some code that works out all of the parts up to calculating values with cv::stereoRectifyUncalibrated. However, I am not sure where to go from there to get a 3D Point cloud from it.
I have code that works with the calibrated version that gives me a Q matrix and I then use that with reprojectImageTo3D and StereoBM to give me a point cloud.
I want to compare the results of the two different methods as sometimes I will not be able to calibrate the camera.
http://blog.martinperis.com/2012/01/3d-reconstruction-with-opencv-and-point.html I think this will be helpful. It has code which converts a disparity image to a point cloud using PCL and shows it in a 3D viewer.
Since you have Q, two images, and other camera params (from calibration), you should use reprojectImageTo3D to get the depth map.
Use StereoBM or StereoSGBM to get the disparity map, then use that disparity map and Q to get the depth image:
StereoBM sbm;
sbm.state->SADWindowSize = 9;
sbm.state->numberOfDisparities = 112;
sbm.state->preFilterSize = 5;
sbm.state->preFilterCap = 61;
sbm.state->minDisparity = -39;
sbm.state->textureThreshold = 507;
sbm.state->uniquenessRatio = 0;
sbm.state->speckleWindowSize = 0;
sbm.state->speckleRange = 8;
sbm.state->disp12MaxDiff = 1;
sbm(g1, g2, disp); // g1 and g2 are two gray images
reprojectImageTo3D(disp, Image3D, Q, true, CV_32F);
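For reference, the same two steps with the modern Python API look roughly like this (a sketch; g1 and g2 are assumed to be rectified grayscale images and Q a valid 4x4 reprojection matrix):

import cv2
import numpy as np

stereo = cv2.StereoBM_create(numDisparities=112, blockSize=9)
disp = stereo.compute(g1, g2).astype(np.float32) / 16.0  # StereoBM returns fixed-point disparities (4 fractional bits)
points3d = cv2.reprojectImageTo3D(disp, Q, handleMissingValues=True)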
And this code basically converts the depth map to a point cloud:
pcl::PointCloud<pcl::PointXYZRGB>::Ptr point_cloud_ptr (new pcl::PointCloud<pcl::PointXYZRGB>);
double px, py, pz;
uchar pr, pg, pb;
// Q03, Q13, Q23, Q32, Q33 are the corresponding elements of the 4x4 reprojection
// matrix Q, presumably extracted beforehand (e.g. Q03 = Q.at<double>(0,3))
for (int i = 0; i < img_rgb.rows; i++)
{
    uchar* rgb_ptr = img_rgb.ptr<uchar>(i);
    uchar* disp_ptr = img_disparity.ptr<uchar>(i);
    double* recons_ptr = recons3D.ptr<double>(i);
    for (int j = 0; j < img_rgb.cols; j++)
    {
        //Get 3D coordinates
        uchar d = disp_ptr[j];
        if ( d == 0 ) continue; //Discard bad pixels
        double pw = -1.0 * static_cast<double>(d) * Q32 + Q33;
        px = static_cast<double>(j) + Q03;
        py = static_cast<double>(i) + Q13;
        pz = Q23;
        // Normalize the points
        px = px / pw;
        py = py / pw;
        pz = pz / pw;
        //Get RGB info
        pb = rgb_ptr[3*j];
        pg = rgb_ptr[3*j+1];
        pr = rgb_ptr[3*j+2];
        //Insert info into point cloud structure
        pcl::PointXYZRGB point;
        point.x = px;
        point.y = py;
        point.z = pz;
        uint32_t rgb = (static_cast<uint32_t>(pr) << 16 |
                        static_cast<uint32_t>(pg) << 8 |
                        static_cast<uint32_t>(pb));
        point.rgb = *reinterpret_cast<float*>(&rgb);
        point_cloud_ptr->points.push_back (point);
    }
}
point_cloud_ptr->width = (int) point_cloud_ptr->points.size();
point_cloud_ptr->height = 1;
//Create visualizer
boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer;
viewer = createVisualizer( point_cloud_ptr );