How to crop a triangle - C++

I am working on a "Face Normalization" project.
What I have done so far:
Face detection
Facial landmark detection (68 landmarks)
Split the face into several triangles by connecting landmarks (Delaunay triangulation --> AAM)
Created a 3D model of a generic face (consisting of 68 points, matching the landmarks) and ran a Delaunay triangulation on it as well
What I need to do now:
I know all the landmark coordinates and all the 3D coordinates, so I want to crop each triangle in 2D and put it in its right place on the generic 3D model to generate a 3D model of the detected face.
Questions:
1.) Does anyone know a way to crop a single triangle when all three of its coordinates are known?
2.) And what kind of transformation do I have to use to "copy" the cropped triangle to its right place on the generic 3D model?
I am programming in C++ and used dlib and OpenCV for the facial landmark detection; on the 3D side I am working with OpenGL.
EDIT:
Maybe it is better to "see" the problem. This is what I already have,
and now I just want to crop all these triangles separately. So how can I crop a triangle (when I know all 3 coordinates) from a picture and save it in another window?

In order to crop a triangle, we need to use the warpAffine method.
http://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/warp_affine/warp_affine.html
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
using namespace cv;
using namespace std;
/// Global variables
char* source_window = "Source image";
char* warp_window = "Warp";
char* warp_rotate_window = "Warp + Rotate";
/** #function main */
int main( int argc, char** argv )
{
Point2f srcTri[3];
Point2f dstTri[3];
Mat rot_mat( 2, 3, CV_32FC1 );
Mat warp_mat( 2, 3, CV_32FC1 );
Mat src, warp_dst, warp_rotate_dst;
/// Load the image
src = imread( argv[1], 1 );
/// Set the dst image the same type and size as src
warp_dst = Mat::zeros( src.rows, src.cols, src.type() );
/// Set your 3 points to calculate the Affine Transform
srcTri[0] = Point2f( 0,0 );
srcTri[1] = Point2f( src.cols - 1, 0 );
srcTri[2] = Point2f( 0, src.rows - 1 );
dstTri[0] = Point2f( src.cols*0.0, src.rows*0.33 );
dstTri[1] = Point2f( src.cols*0.85, src.rows*0.25 );
dstTri[2] = Point2f( src.cols*0.15, src.rows*0.7 );
/// Get the Affine Transform
warp_mat = getAffineTransform( srcTri, dstTri );
/// Apply the Affine Transform just found to the src image
warpAffine( src, warp_dst, warp_mat, warp_dst.size() );
/** Rotating the image after Warp */
/// Compute a rotation matrix with respect to the center of the image
Point center = Point( warp_dst.cols/2, warp_dst.rows/2 );
double angle = -50.0;
double scale = 0.6;
/// Get the rotation matrix with the specifications above
rot_mat = getRotationMatrix2D( center, angle, scale );
/// Rotate the warped image
warpAffine( warp_dst, warp_rotate_dst, rot_mat, warp_dst.size() );
/// Show what you got
namedWindow( source_window, CV_WINDOW_AUTOSIZE );
imshow( source_window, src );
namedWindow( warp_window, CV_WINDOW_AUTOSIZE );
imshow( warp_window, warp_dst );
namedWindow( warp_rotate_window, CV_WINDOW_AUTOSIZE );
imshow( warp_rotate_window, warp_rotate_dst );
/// Wait until user exits the program
waitKey(0);
return 0;
}
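The tutorial above warps the whole image; to crop just one triangle when its three vertex coordinates are known, one common alternative is to fill a mask with the triangle and copy through it. A minimal sketch along those lines (the file name and the triangle points are placeholders):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

int main()
{
  cv::Mat src = cv::imread("face.jpg"); // placeholder file name

  // the three landmark coordinates of the triangle (placeholders)
  cv::Point tri[3] = { cv::Point(120, 80), cv::Point(200, 90), cv::Point(160, 160) };

  // restrict the work to the triangle's bounding box
  cv::Rect roi = cv::boundingRect(std::vector<cv::Point>(tri, tri + 3));

  // build a mask that is white inside the triangle only
  cv::Mat mask = cv::Mat::zeros(roi.size(), CV_8UC1);
  cv::Point shifted[3];
  for (int i = 0; i < 3; ++i)
      shifted[i] = tri[i] - roi.tl();
  cv::fillConvexPoly(mask, shifted, 3, cv::Scalar(255));

  // copy only the triangular region into its own image and show it
  cv::Mat cropped = cv::Mat::zeros(roi.size(), src.type());
  src(roi).copyTo(cropped, mask);
  cv::imshow("cropped triangle", cropped);
  cv::waitKey(0);
  return 0;
}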

Related

How to always rotate an image to horizontal using minAreaRect, C++

My images:
Requirement:
I am not able to understand how the axis is decided so that the image always ends up horizontal.
Algorithm:
Read the image
Find the external contour
Draw the contours
Use the external contour to detect minAreaRect (a bounding box will not help in my case)
Get the rotation matrix and rotate the image
Extract the required patch from the rotated image
My code:
// read the image
cv::Mat img = cv::imread("90.jpeg");

// findContours needs a single-channel binary image, so convert to
// grayscale and threshold first (a plain clone of the colour image
// would make findContours fail)
cv::Mat contourOutput;
cv::cvtColor(img, contourOutput, cv::COLOR_BGR2GRAY);
cv::threshold(contourOutput, contourOutput, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

// detect external contours (real images will have noise, although the example images don't)
std::vector<std::vector<cv::Point> > contours;
cv::findContours(contourOutput, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

// keep the index of the largest contour
double largest_area = 0;
int largest_contour_index = 0;
for (size_t i = 0; i < contours.size(); i++) {
    double area = cv::contourArea(contours[i]);
    if (area > largest_area) {
        largest_area = area;
        largest_contour_index = static_cast<int>(i);
    }
}

// draw the largest contour
cv::drawContours(img, contours, largest_contour_index, cv::Scalar(255, 0, 0), 2);

// detect the minimum-area rect to get the angle and centre
cv::RotatedRect box = cv::minAreaRect(cv::Mat(contours[largest_contour_index]));

// when the reported angle is below -45 degrees, width and height come
// out swapped, so adjust the angle and swap the sides together
double angle = box.angle;
cv::Size box_size = box.size;
if (angle < -45) {
    angle += 90;
    std::swap(box_size.width, box_size.height);
}

// create the rotation matrix
cv::Mat rot_mat = cv::getRotationMatrix2D(box.center, angle, 1);

// apply the transformation
cv::Mat rotated;
cv::warpAffine(img, rotated, rot_mat, img.size(), cv::INTER_CUBIC);

// get the cropped image
cv::Mat cropped;
cv::getRectSubPix(rotated, box_size, box.center, cropped);

// display the image
cv::namedWindow("image2", cv::WINDOW_NORMAL);
cv::imshow("image2", cropped);
cv::waitKey(0);
The idea is to compute the rotated bounding box angle using minAreaRect, then deskew the image with getRotationMatrix2D and warpAffine. One final step is to rotate by 90 degrees if we are working with a vertical image. Here are the results, before (left) and after (right), with the angle of rotation:
-39.999351501464844
38.52387619018555
1.6167902946472168
1.9749339818954468
I implemented it in Python, but you can adapt the same approach to C++; a rough C++ sketch follows the Python listing below.
Code
import cv2
import numpy as np

# Load image, grayscale, and Otsu's threshold
image = cv2.imread('4.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]

# Compute rotated bounding box
coords = np.column_stack(np.where(thresh > 0))
angle = cv2.minAreaRect(coords)[-1]

# Determine rotation angle
if angle < -45:
    angle = -(90 + angle)
else:
    angle = -angle
print(angle)

# Rotate image to deskew
(h, w) = image.shape[:2]
center = (w // 2, h // 2)
M = cv2.getRotationMatrix2D(center, angle, 1.0)
rotated = cv2.warpAffine(image, M, (w, h), flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_REPLICATE)

# Vertical image so rotate to horizontal
h, w, _ = rotated.shape
if h > w:
    rotated = cv2.rotate(rotated, cv2.ROTATE_90_CLOCKWISE)

cv2.imshow('rotated', rotated)
cv2.imwrite('rotated.png', rotated)
cv2.waitKey()
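A rough C++ adaptation of the same idea (a sketch, not a line-for-line port: findNonZero yields (x, y) points while np.where yields (row, col) pairs, so the intermediate angle can differ by 90 degrees, but the final portrait check compensates):

#include <opencv2/opencv.hpp>

int main()
{
    // load image, grayscale, and Otsu's threshold
    cv::Mat image = cv::imread("4.png");
    cv::Mat gray, thresh;
    cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, thresh, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    // rotated bounding box of all foreground pixels
    std::vector<cv::Point> coords;
    cv::findNonZero(thresh, coords);
    double angle = cv::minAreaRect(coords).angle;

    // same angle convention as the Python version
    angle = (angle < -45) ? -(90 + angle) : -angle;

    // rotate the image around its centre to deskew
    cv::Point2f center(image.cols / 2.0f, image.rows / 2.0f);
    cv::Mat M = cv::getRotationMatrix2D(center, angle, 1.0);
    cv::Mat rotated;
    cv::warpAffine(image, rotated, M, image.size(), cv::INTER_CUBIC, cv::BORDER_REPLICATE);

    // vertical image, so rotate to horizontal
    if (rotated.rows > rotated.cols)
        cv::rotate(rotated, rotated, cv::ROTATE_90_CLOCKWISE);

    cv::imshow("rotated", rotated);
    cv::imwrite("rotated.png", rotated);
    cv::waitKey();
    return 0;
}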

OpenCV cv::circle comes out gray on iPhone UI

Currently I am using OpenCV to process images from an AVCaptureSession. The app takes these images and draws circles (via cv::circle) on the detected blobs. The tracking works, but when I draw the circle it comes out as a gray, distorted circle when it should be green. Is it that OpenCV's drawing functions don't work properly in iOS apps, or is there something I can do to fix it?
Any help would be appreciated.
Here is a screen shot: (Ignore that giant green circle on the bottom)
The cv::circle is drawn around the outside of the black circle.
Here is where I converted the CMSampleBuffer into a cv::Mat:
CVImageBufferRef pixelBuff = CMSampleBufferGetImageBuffer(sampleBuffer);
cv::Mat cvMat;
CVPixelBufferLockBaseAddress(pixelBuff, 0);
int bufferWidth = (int)CVPixelBufferGetWidth(pixelBuff);
int bufferHeight = (int)CVPixelBufferGetHeight(pixelBuff);
// bytes per row can include padding, so pass it as the Mat step
// to avoid a skewed image
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuff);
unsigned char *pixel = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuff);
cvMat = cv::Mat(bufferHeight, bufferWidth, CV_8UC4, pixel, bytesPerRow);
cv::Mat grayMat;
cv::cvtColor(cvMat, grayMat, CV_BGRA2GRAY); // the buffer is 4-channel BGRA
CVPixelBufferUnlockBaseAddress(pixelBuff, 0);
This is the cv::circle call:
if (keypoints.size() > 0) {
    cv::Point p(keypoints[0].pt.x, keypoints[0].pt.y);
    printf("x: %f, y: %f\n", keypoints[0].pt.x, keypoints[0].pt.y);
    cv::circle(cvMat, p, keypoints[0].size/2, cv::Scalar(0,255,0), 2, 8, 0);
}
Keypoints is the vector of blobs that have been detected.
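One detail worth checking (an assumption based on the usual kCVPixelFormatType_32BGRA capture format, not a confirmed fix for this exact setup): cvMat is 4-channel BGRA, and a 3-element cv::Scalar leaves the fourth (alpha) component at 0, which can render as a washed-out gray stroke once the frame is displayed. A sketch of the call with an explicit opaque alpha:

// green with fully opaque alpha on a BGRA (CV_8UC4) mat
cv::circle(cvMat, p, keypoints[0].size / 2, cv::Scalar(0, 255, 0, 255), 2, 8, 0);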

How to find correspondence of 3D points and 2D points

I have a set of 3D points in world coordinates and their respective correspondences with 2D points in an image. I want to find a matrix that gives me the transformation between these sets of points. How can I do this in OpenCV?
cv::solvePnP() is what you are looking for: it finds an object pose from 3D-2D point correspondences and returns a rotation vector (rvec) that, together with the translation vector (tvec), brings points from the model coordinate system to the camera coordinate system.
You can use solvePnP for this:
// camMatrix based on img size
int max_d = std::max(img.rows, img.cols);
Mat camMatrix = (Mat_<double>(3,3) <<
    max_d, 0,     img.cols/2.0,
    0,     max_d, img.rows/2.0,
    0,     0,     1.0);

// 2d -> 3d correspondences
vector<Point2d> pts2d = ...
vector<Point3d> pts3d = ...

// estimate the pose; the zero Mat passed for the distortion
// coefficients assumes an undistorted image
Mat rvec, tvec;
solvePnP(pts3d, pts2d, camMatrix, Mat(1, 4, CV_64F, 0.0), rvec, tvec, false, SOLVEPNP_EPNP);

// convert the rotation vector to a 3x3 rotation matrix
Mat rotM(3, 3, CV_64F);
Rodrigues(rvec, rotM);

// append tvec as a row of the transposed rotation matrix
Mat rotMT = rotM.t();
rotMT.push_back(tvec.reshape(1, 1));

// transpose back and multiply: the result is the 3x4 projection matrix K * [R|t]
return camMatrix * rotMT.t();
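To sanity-check the recovered pose, you can reproject the 3D points with cv::projectPoints and compare them against the original 2D points; a minimal sketch (variable names follow the snippet above):

// reproject the 3D model points using the estimated pose;
// an empty Mat for the distortion coefficients matches the solvePnP call above
vector<Point2d> reprojected;
projectPoints(pts3d, rvec, tvec, camMatrix, Mat(), reprojected);

// mean reprojection error in pixels; small values indicate a good fit
double err = 0.0;
for (size_t i = 0; i < pts2d.size(); ++i) {
    Point2d d = pts2d[i] - reprojected[i];
    err += std::sqrt(d.x * d.x + d.y * d.y);
}
err /= pts2d.size();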

How to create a semi-transparent shape?

I would like to know how to draw semi-transparent shapes in OpenCV, similar to those in the image below (from http://tellthattomycamera.wordpress.com/)
I don't need those fancy circles, but I would like to be able to draw a rectangle, e.g., on a 3-channel color image and specify its transparency, something like
rectangle (img, Point (100,100), Point (300,300), Scalar (0,125,125,0.4), CV_FILLED);
where 0,125,125 is the color of the rectangle and 0.4 specifies the transparency.
However, OpenCV doesn't have this functionality built into its drawing functions. How can I draw shapes in OpenCV so that the original image being drawn on is partially visible through the shape?
The image below illustrates transparency using OpenCV. You need to do an alpha blend between the image and the rectangle. Below is the code for one way to do this.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

int main( int argc, char** argv )
{
    cv::Mat image = cv::imread("IMG_2083s.png");
    cv::Mat roi = image(cv::Rect(100, 100, 300, 300));
    cv::Mat color(roi.size(), CV_8UC3, cv::Scalar(0, 125, 125));
    double alpha = 0.3;
    // alpha-blend the solid colour patch into the ROI in place
    cv::addWeighted(color, alpha, roi, 1.0 - alpha, 0.0, roi);
    cv::imshow("image", image);
    cv::waitKey(0);
    return 0;
}
In OpenCV 3 this code worked for me:
cv::Mat source = cv::imread("IMG_2083s.png");
cv::Mat overlay;
double alpha = 0.3;
// copy the source image to an overlay
source.copyTo(overlay);
// draw a filled, yellow rectangle on the overlay copy
cv::rectangle(overlay, cv::Rect(100, 100, 300, 300), cv::Scalar(0, 125, 125), -1);
// blend the overlay with the source image
cv::addWeighted(overlay, alpha, source, 1 - alpha, 0, source);
Source/Inspired by: http://bistr-o-mathik.org/2012/06/13/simple-transparency-in-opencv/
Adding to Alexander Taubenkorb's answer, you can draw random (semi-transparent) shapes by replacing the cv::rectangle line with the shape you want to draw.
For example, if you want to draw a series of semi-transparent circles, you can do it as follows:
cv::Mat source = cv::imread("IMG_2083s.png"); // load the source image
cv::Mat overlay; // overlay matrix; we'll copy the source image into it and draw on the copy
double alpha = 0.3; // opacity: 0 means fully transparent, 1 means fully opaque
source.copyTo(overlay); // we'll draw the shapes on overlay and then blend it with the original image

// change this section to draw the shapes you want to draw;
// circles is a vector of points containing the centre of each circle
vector<Point>::const_iterator points_it;
for (points_it = circles.begin(); points_it != circles.end(); ++points_it)
    circle(overlay, *points_it, 1, Scalar(0, 255, 255), -1); // Scalar(0, 255, 255), not (0, 255, 255): a plain parenthesised list is the comma operator and collapses to a single value

// blend the overlay (alpha opacity) with the source image (1 - alpha opacity)
cv::addWeighted(overlay, alpha, source, 1 - alpha, 0, source);
For C++, I personally like the readability of overloaded operators for scalar multiplication and matrix addition:
... same initial lines as other answers above ...
// blend the overlay with the source image
source = source * (1.0 - alpha) + overlay * alpha;

Projecting light and shadows on a surface

Hi, I am trying to extract the lighting and shadow from one surface and apply it to another type of surface. I convert the image to HSV, extract the hue component, and plot it, which seems to give me a good indication of where the lighting and shadows are. However, when I swap the hue component of the original image with that of my final image, I get all sorts of greens and blues that are not desired. Are there any other techniques that can be used to project shadow and lighting?
cv::cvtColor( img0, hsv, CV_BGR2HSV );
// split() allocates the destination planes itself,
// so explicit create() calls are not needed
cv::Mat components[3];
cv::split(hsv, components);
...
cv::cvtColor( drawing, hsv_output, CV_BGR2HSV );
cv::Mat components_output[3];
cv::split(hsv_output, components_output);

// blend the hue planes of the two images
components_output[0] = 0.5 * components_output[0] + 0.5 * components[0];

// write the blended plane back into channel 0 (hue) of hsv_output
int ch[] = {0, 0};
cv::mixChannels(&components_output[0], 1, &hsv_output, 1, ch, 1);
cv::cvtColor( hsv_output, drawing, CV_HSV2BGR );
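If the aim is to carry lighting rather than colour, one variation worth trying (a sketch, assuming 8-bit HSV mats and the same variable names as above) is to blend the V (value) planes instead: V encodes brightness, while hue encodes colour, which is where the stray greens and blues come from.

// blend the value (brightness) planes instead of the hue planes
components_output[2] = 0.5 * components_output[2] + 0.5 * components[2];

// source channel 0 (of the single-channel plane) -> channel 2 (V) of hsv_output
int ch[] = {0, 2};
cv::mixChannels(&components_output[2], 1, &hsv_output, 1, ch, 1);
cv::cvtColor(hsv_output, drawing, CV_HSV2BGR);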