Initialize a vector of Rect with some Rect objects - OpenCV C++

I'm working on a face tracking project with a Kalman filter. Basically, I want to store the result of my tracking application (int x, int y, int width, int height) in a vector of rectangles, i.e. each face will be stored in a Rect, and all the Rects will be stored in a vector of Rect.
The following code is what I tried to do:
Rect faceTracked(Estimated_int.at<int>(0, 0), Estimated_int.at<int>(1, 0),
                 Estimated_int.at<int>(2, 0), Estimated_int.at<int>(3, 0));
std::vector<Rect> facesVector;
facesVector[i] = faceTracked;
Where "Estimated_int" is the 4x1 result matrix of the Kalman filter. When I run this code, the following error is displayed in the Android Studio Logcat, and then the app crashes:
11-21 17:36:43.729 10735-11321/com.example.android.ndkopencvtest1 A/libc: Fatal signal 11 (SIGSEGV), code 1, fault addr 0x0 in tid 11321 (Thread-5)
That error only happens when the statement facesVector[i] = faceTracked is called. What am I doing wrong? The entire function code is shown below:
void trackFace(Mat& frame, std::vector<Rect> faces) {
    for (size_t i = 0; i < faces.size(); i++) {
        X = A * X_p;
        transpose(A, A_transpose);
        P = A * P_p * A_transpose;
        if (faces.size() > 0) {
            Mat Z = (Mat_<float>(4, 1) << faces[i].x, faces[i].y,
                     faces[i].x + faces[i].width, faces[i].y + faces[i].height);
            Y = Z - H * X;
            transpose(H, H_transpose);
            S = H * P * H_transpose + R;
            invert(S, S_inverse);
            K = P * H_transpose * S_inverse;
            X_p = X + K * Y;
            Estimated = H * X_p;
            P_p = (Ident - K * H) * P;
            Mat Estimated_int = (Mat_<int>(4, 1) << cvRound(Estimated.at<float>(0, 0)),
                                 cvRound(Estimated.at<float>(1, 0)),
                                 cvRound(Estimated.at<float>(2, 0)),
                                 cvRound(Estimated.at<float>(3, 0)));
            rectangle(frame, Point(Estimated_int.at<int>(0, 0), Estimated_int.at<int>(1, 0)),
                      Point(Estimated_int.at<int>(2, 0), Estimated_int.at<int>(3, 0)),
                      Scalar(255, 255, 102, 255), 2, 8, 0);
            Rect faceTracked(Estimated_int.at<int>(0, 0), Estimated_int.at<int>(1, 0),
                             Estimated_int.at<int>(2, 0), Estimated_int.at<int>(3, 0));
            std::vector<Rect> facesVector;
            facesVector[i] = faceTracked;
        }
    }
}
Edit: all the matrices were properly initialized in a header file. This was tested before and it works.
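A likely cause of the SIGSEGV: facesVector is default-constructed just before the assignment, so it is empty, and facesVector[i] writes out of bounds (undefined behavior). Below is a minimal sketch of one way to collect the results instead, assuming the vector is meant to accumulate one Rect per detected face and should therefore outlive the loop; the corner-to-size conversion is also an assumption, since Estimated_int appears to hold two corners while Rect's constructor expects (x, y, width, height).
std::vector<Rect> facesVector;                   // declared once, before the loop, so results accumulate
for (size_t i = 0; i < faces.size(); i++) {
    // ... Kalman filter update as in the question ...
    Rect faceTracked(Estimated_int.at<int>(0, 0),
                     Estimated_int.at<int>(1, 0),
                     Estimated_int.at<int>(2, 0) - Estimated_int.at<int>(0, 0),   // width  = x2 - x1
                     Estimated_int.at<int>(3, 0) - Estimated_int.at<int>(1, 0));  // height = y2 - y1
    facesVector.push_back(faceTracked);          // grows the vector; no out-of-bounds write
}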

Related

Snake active contour algorithm with C++ and OpenCV 3

I am trying to implement the snake algorithm for active contours using C++ and OpenCV 3. I am working with the version that uses gradient descent. As a base test, I am trying to draw the contour of a lip. This is the base image.
This is the evolution of the contour without external forces (alpha = 0.001, beta = 3, step-size=0.3).
When I add the external force, this is the result.
As the external force I have used just edge detection with the Sobel derivative.
This is the code I use for the point update.
array<Mat, 2> edges = edgeMatrices(croppedImage);
const float ALPHA = 0.001, BETA = 3, GAMMA = 0.3, // Gamma is the step size.
            a = GAMMA * ALPHA, b = GAMMA * BETA;
const uint16_t CYCLES = 1000;
const float p = b, q = -a - 4 * b, r = 1 + 2 * a + 6 * b;
Mat pMatrix = pentadiagonalMatrix(POINTS_NUM, p, q, r).inv();
for (uint16_t i = 0; i < CYCLES; ++i) {
    // Extract the x and y derivatives for the current points.
    auto externalForces = external(edges, x, y);
    x = pMatrix * (x + GAMMA * externalForces[0]);
    y = pMatrix * (y + GAMMA * externalForces[1]);
    // Draw the points.
    if (i % 200 == 0 && i > 0)
        drawPoints(croppedImage, x, y, { 0.2f * i, 0.2f * i, 0 });
}
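pentadiagonalMatrix() is not shown in the question; for context, here is a minimal sketch of how such a matrix is commonly assembled for a closed snake, using the coefficients p, q and r defined above (this is an assumption about what the helper does, not the asker's actual implementation):
// Hypothetical sketch: r on the diagonal, q at offsets +/-1, p at offsets +/-2,
// with indices wrapped around because the contour is closed.
Mat pentadiagonalMatrix(int n, float p, float q, float r) {
    Mat m = Mat::zeros(n, n, CV_32FC1);
    for (int i = 0; i < n; ++i) {
        m.at<float>(i, i) = r;
        m.at<float>(i, (i + 1) % n) = q;
        m.at<float>(i, (i + n - 1) % n) = q;
        m.at<float>(i, (i + 2) % n) = p;
        m.at<float>(i, (i + n - 2) % n) = p;
    }
    return m;
}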
This is the code for computing the derivatives.
array<Mat, 2> edgeMatrices(Mat &img) {
    // Convert the image to grayscale.
    Mat gray;
    cvtColor(img, gray, COLOR_BGR2GRAY);
    // Apply the Sobel filter.
    Mat grad_x, grad_y, blurred_x, blurred_y;
    int scale = 1;
    int delta = 0;
    int ddepth = CV_16S;
    int kernSize = 3;
    Sobel(gray, grad_x, ddepth, 1, 0, kernSize, scale, delta, BORDER_DEFAULT);
    Sobel(gray, grad_y, ddepth, 0, 1, kernSize, scale, delta, BORDER_DEFAULT);
    GaussianBlur(grad_x, blurred_x, Size(5, 5), 30);
    GaussianBlur(grad_y, blurred_y, Size(5, 5), 30);
    return { blurred_x, blurred_y };
}
array<Mat, 2> external(array<Mat, 2> &edgeMat, Mat &x, Mat &y) {
    array<Mat, 2> ext;
    ext[0] = { Size{ 1, POINTS_NUM }, CV_32FC1 };
    ext[1] = { Size{ 1, POINTS_NUM }, CV_32FC1 };
    for (size_t i = 0; i < POINTS_NUM; ++i) {
        ext[0].at<float>(0, i) = -edgeMat[0].at<short>(y.at<float>(0, i), x.at<float>(0, i));
        ext[1].at<float>(0, i) = -edgeMat[1].at<short>(y.at<float>(0, i), x.at<float>(0, i));
    }
    return ext;
}
As you can see, the contour points converge in a very strange way and not towards the edge of the lip (which is the result I would expect).
I am not able to tell whether it is an implementation error, a parameter-tuning problem, or whether it is just normal behaviour and I have misunderstood something about the algorithm.
I have some doubts about the derivative matrices; I think they should be regularized in some way, but I am not sure which way is right. Can someone help me?
The only implementations I have found use the greedy method.
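On the regularization doubt: one common adjustment (an assumption about what may help here, not a verified fix) is to rescale the blurred Sobel responses to a fixed range before sampling them, so the external force stays comparable in magnitude to the internal-energy step:
// Hedged sketch: normalize each gradient map to [-1, 1] before using it as an
// external force. If this is used, external() must read the maps with at<float>
// instead of at<short>, since they are converted to CV_32FC1 here.
array<Mat, 2> normalizedEdges(const array<Mat, 2> &edges) {
    array<Mat, 2> out;
    for (size_t k = 0; k < 2; ++k) {
        Mat f;
        edges[k].convertTo(f, CV_32FC1);       // promote the CV_16S Sobel output to float
        double minVal, maxVal;
        minMaxLoc(f, &minVal, &maxVal);
        double scale = std::max(std::abs(minVal), std::abs(maxVal));
        if (scale > 0)
            f /= scale;                        // keep the sign, bound the magnitude to 1
        out[k] = f;
    }
    return out;
}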

Hough Circular Transform

I'm trying to implement the Hough transform using the gradient direction. I know that there is an implementation in OpenCV, but I want to do it myself.
I'm using Sobel to get the X and Y gradients. Then for every pixel:
magnitude ---> sqrt(sobelX^2 + sobelY^2)
direction --> atan2(sobelY, sobelX) * 180/PI
If the magnitude is higher than 220 (so almost black), the pixel is treated as an edge.
The direction is then used in the circle equation.
But the results are not acceptable. Any help?
I know there are cv::cartToPolar and cv::polarToCart, but I want to optimize the code so that all equations are calculated on the fly, without extra loops.
cv::Mat sobelX, sobelY;
Sobel(mat, sobelX, CV_32F, 1, 0, kernelSize, 1, 0, cv::BORDER_REPLICATE);
Sobel(mat, sobelY, CV_32F, 0, 1, kernelSize, 1, 0, cv::BORDER_REPLICATE);
//cv::Canny(mat, mat, 100, 200, kernelSize, false);
debug::showImage("sobelX", sobelX);
debug::showImage("SobelY", sobelY);
debug::showImage("MAT", mat);
cv::Mat magnitudeMap, angleMap;
magnitudeMap = cv::Mat::zeros(mat.rows, mat.cols, mat.type());
angleMap = cv::Mat::zeros(mat.rows, mat.cols, mat.type());
std::vector<cv::Mat> hough_spaces(max);
for (int i = 0; i < max; ++i)
{
    hough_spaces[i] = cv::Mat::zeros(mat.rows, mat.cols, mat.type());
}
for (int x = 0; x < mat.rows; ++x)
{
    for (int y = 0; y < mat.cols; ++y)
    {
        const float magnitude = sqrt(sobelX.at<uchar>(x, y) * sobelX.at<uchar>(x, y) + sobelY.at<uchar>(x, y) * sobelY.at<uchar>(x, y));
        const float theta = atan2(sobelY.at<uchar>(x, y), sobelX.at<uchar>(x, y)) * 180 / CV_PI;
        magnitudeMap.at<uchar>(x, y) = magnitude;
        if (magnitude > 225) //mat.at<const uchar>(x,y) == 255)
        {
            for (int radius = min; radius < max; ++radius)
            {
                const int a = x - radius * cos(theta); //lookup::cosArray[static_cast<int>(theta)]; //+ 0.5f;
                const int b = y - radius * sin(theta); //lookup::sinArray[static_cast<int>(theta)]; //+ 0.5f;
                if (a >= 0 && a < hough_spaces[radius].rows && b >= 0 && b < hough_spaces[radius].cols) {
                    hough_spaces[radius].at<uchar>(a, b) += 10;
                }
            }
        }
    }
}
debug::showImage("magnitude", magnitudeMap);
for (int radius = min; radius < max; ++radius)
{
    double min_f, max_f;
    cv::Point min_loc, max_loc;
    cv::minMaxLoc(hough_spaces[radius], &min_f, &max_f, &min_loc, &max_loc);
    if (max_f >= treshold)
    {
        circles.emplace_back(cv::Point3f(max_loc.x, max_loc.y, radius));
        // debug::showImage(std::to_string(radius).c_str(), hough_spaces[radius]);
    }
}
circles.shrink_to_fit();
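Two things in the inner loop look like likely culprits (observations about the posted code, not a tested fix): the Sobel results are requested as CV_32F but read back with at<uchar>, and theta is converted to degrees although cos() and sin() expect radians. A minimal sketch of the per-pixel part with those two points adjusted, under the added assumption that the accumulator mats are CV_32F as well:
// Hedged sketch: read the float Sobel output as float and keep theta in radians.
const float gx = sobelX.at<float>(x, y);
const float gy = sobelY.at<float>(x, y);
const float magnitude = std::sqrt(gx * gx + gy * gy);
const float theta = std::atan2(gy, gx);                // radians, as cos()/sin() expect
if (magnitude > 225.0f)
{
    for (int radius = min; radius < max; ++radius)
    {
        const int a = static_cast<int>(x - radius * std::cos(theta));
        const int b = static_cast<int>(y - radius * std::sin(theta));
        if (a >= 0 && a < hough_spaces[radius].rows && b >= 0 && b < hough_spaces[radius].cols)
            hough_spaces[radius].at<float>(a, b) += 10.0f;   // assumes CV_32F accumulators
    }
}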

Shrinking images using recursion incorrect positioning

I am writing code that takes an image and creates subimages within the original image. I am doing this recursively with a rectangle class that keeps track of the starting and stopping positions of each subimage. The trouble starts after the 3rd recursive call: the new subimages are placed in the wrong spots. They should be shrinking as they approach the top-right corner of the image. I have stepped through the debugger and watched the start and stop positions change with each call, and they do reflect movement toward the top-right corner. The only place I think the error could be is where I create the new rectangle rRight that is passed into the recursive call.
int main()
{
    CImage original("test256.gif");
    Rectangle rPrev(0, 0, original.getRows(), original.getCols());
    Rectangle r(0, 0, (original.getRows() / 2), (original.getCols() / 2));
    CImage final = fractal(original, r, rPrev);
    final.output("output.gif");
    system("PAUSE");
    return 0;
}
CImage fractal(CImage &origin, Rectangle &r, Rectangle &rPrev)
{
    if (r.getX2() - r.getX1() > 0 && r.getY2() - r.getY1() > 0)
    {
        drawTopLeft(origin, r, rPrev);
        Rectangle rRight(0, r.getY2(), (r.getX2() / 2), (r.getY2() + ((r.getY2() - r.getY1()) / 2)));
        fractal(origin, rRight, r);
    }
    return origin;
}
void drawTopLeft(CImage &origin, Rectangle &r, Rectangle &rPrev)
{
    for (int row = rPrev.getX1(); row < rPrev.getX2(); row += 2)
    {
        for (int col = rPrev.getY1(); col < rPrev.getY2(); col += 2)
        {
            pixel p1 = origin.getPixel(row, col);
            pixel p2 = origin.getPixel(row + 1, col);
            pixel p3 = origin.getPixel(row, col + 1);
            pixel p4 = origin.getPixel(row + 1, col + 1);
            int avgRed = (p1.red + p2.red + p3.red + p4.red) / 4;
            int avgGreen = (p1.green + p2.green + p3.green + p4.green) / 4;
            int avgBlue = (p1.blue + p2.blue + p3.blue + p4.blue) / 4;
            origin.setPixel((row / 2) + r.getX1(), (col / 2) + r.getY1(), avgRed, avgGreen, avgBlue);
        }
    }
}
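One thing worth checking (an observation about the posted code, under the assumption that Rectangle stores row/column start and stop positions as in main): setPixel() uses (row / 2) + r.getX1() and (col / 2) + r.getY1(), which only lands in the right place while rPrev starts at row/column 0. From the third call onward rPrev no longer starts at 0, so the halved coordinates are no longer relative to the source rectangle. A sketch of the write with the source offset removed first:
// Hedged sketch: make the halved coordinates relative to rPrev before
// translating them into r, so placement still works when rPrev does not
// start at (0, 0).
origin.setPixel(((row - rPrev.getX1()) / 2) + r.getX1(),
                ((col - rPrev.getY1()) / 2) + r.getY1(),
                avgRed, avgGreen, avgBlue);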

Anisotropic Diffusion

I have converted this Matlab Anisotropic Diffusion code to C++ but I am not getting the desired output. All I am getting is a black image. Can someone please check my code and give any suggestions? Below is my code:
const double lambda = 1 / 7;
const double k = 30;
const int iter = 1;
int ahN[3][3] = { {0, 1, 0}, {0, -1, 0}, {0, 0, 0} };
int ahS[3][3] = { {0, 0, 0}, {0, -1, 0}, {0, 1, 0} };
int ahE[3][3] = { {0, 0, 0}, {0, -1, 1}, {0, 0, 0} };
int ahW[3][3] = { {0, 0, 0}, {1, -1, 0}, {0, 0, 0} };
int ahNE[3][3] = { {0, 0, 1}, {0, -1, 0}, {0, 0, 0} };
int ahSE[3][3] = { {0, 0, 0}, {0, -1, 0}, {0, 0, 1} };
int ahSW[3][3] = { {0, 0, 0}, {0, -1, 0}, {1, 0, 0} };
int ahNW[3][3] = { {1, 0, 0}, {0, -1, 0}, {0, 0, 0} };
Mat hN = Mat(3, 3, CV_32FC1, &ahN);
Mat hS = Mat(3, 3, CV_32FC1, &ahS);
Mat hE = Mat(3, 3, CV_32FC1, &ahE);
Mat hW = Mat(3, 3, CV_32FC1, &ahW);
Mat hNE = Mat(3, 3, CV_32FC1, &ahNE);
Mat hSE = Mat(3, 3, CV_32FC1, &ahSE);
Mat hSW = Mat(3, 3, CV_32FC1, &ahSW);
Mat hNW = Mat(3, 3, CV_32FC1, &ahNW);
void anisotropicDiffusion(Mat &output, int width, int height) {
    // mat initialisation
    Mat nablaN, nablaS, nablaW, nablaE, nablaNE, nablaSE, nablaSW, nablaNW;
    Mat cN, cS, cW, cE, cNE, cSE, cSW, cNW;
    // depth of filters
    int ddepth = -1;
    // center pixel distance
    double dx = 1, dy = 1, dd = sqrt(2);
    double idxSqr = 1.0 / (dx * dx), idySqr = 1.0 / (dy * dy), iddSqr = 1 / (dd * dd);
    for (int i = 0; i < iter; i++) {
        // filters
        filter2D(output, nablaN, ddepth, hN);
        filter2D(output, nablaS, ddepth, hS);
        filter2D(output, nablaW, ddepth, hW);
        filter2D(output, nablaE, ddepth, hE);
        filter2D(output, nablaNE, ddepth, hNE);
        filter2D(output, nablaSE, ddepth, hSE);
        filter2D(output, nablaSW, ddepth, hSW);
        filter2D(output, nablaNW, ddepth, hNW);
        // exponential flux
        cN = nablaN / k;
        cN.mul(cN);
        cN = 1.0 / (1.0 + cN);
        //exp(-cN, cN);
        cS = nablaS / k;
        cS.mul(cS);
        cS = 1.0 / (1.0 + cS);
        //exp(-cS, cS);
        cW = nablaW / k;
        cW.mul(cW);
        cW = 1.0 / (1.0 + cW);
        //exp(-cW, cW);
        cE = nablaE / k;
        cE.mul(cE);
        cE = 1.0 / (1.0 + cE);
        //exp(-cE, cE);
        cNE = nablaNE / k;
        cNE.mul(cNE);
        cNE = 1.0 / (1.0 + cNE);
        //exp(-cNE, cNE);
        cSE = nablaSE / k;
        cSE.mul(cSE);
        cSE = 1.0 / (1.0 + cSE);
        //exp(-cSE, cSE);
        cSW = nablaSW / k;
        cSW.mul(cSW);
        cSW = 1.0 / (1.0 + cSW);
        //exp(-cSW, cSW);
        cNW = nablaNW / k;
        cNW.mul(cNW);
        cNW = 1.0 / (1.0 + cNW);
        //exp(-cNW, cNW);
        output = output + lambda * (idySqr * cN.mul(nablaN) + idySqr * cS.mul(nablaS) +
                                    idxSqr * cW.mul(nablaW) + idxSqr * cE.mul(nablaE) +
                                    iddSqr * cNE.mul(nablaNE) + iddSqr * cSE.mul(nablaSE) +
                                    iddSqr * cNW.mul(nablaNW) + iddSqr * cSW.mul(nablaSW));
    }
}
Resolved in C#; it should be easy to translate to C++.
You need these variables:
IMAGE[height, width] = integer array with the stored image
height = height of the image in pixels
width = width of the image in pixels
/// <summary>Perona & Malik anisotropic diffusion filter (squared formula).</summary>
/// <param name="image">Image data</param>
/// <param name="dt">Heat diffusion value. Higher = more rapid convergence.</param>
/// <param name="lambda">The shape of the diffusion coefficient g(), controlling the Perona-Malik diffusion g(delta) = 1 / (1 + delta^2 / lambda^2). Higher = more blurred image & more noise removed</param>
/// <param name="iterations">Determines the maximum number of iteration steps of the filter. Higher = slower & more noise removed</param>
private void PeronaMalik(int[,] image, double dt, int lambda, int iterations)
{
    try
    {
        //test parameters
        if (dt < 0)
            throw new Exception("DT negative value not allowed");
        if (lambda < 0)
            throw new Exception("lambda must be greater than 0");
        if (iterations <= 0)
            throw new Exception("Iterations must be greater than 0");
        //Make temp image
        int[,] temp = new int[height, width];
        Array.Copy(image, temp, image.Length);
        //Precalculate tables (for speed up)
        double[] precal = new double[512];
        double lambda2 = lambda * lambda;
        for (int f = 0; f < 512; f++)
        {
            int diff = f - 255;
            precal[f] = -dt * diff * lambda2 / (lambda2 + diff * diff);
        }
        //Apply the filter
        for (int n = 0; n < iterations; n++)
        {
            for (int h = 0; h < height; h++)
                for (int w = 0; w < width; w++)
                {
                    int current = temp[h, w];
                    int px = w - 1;
                    int nx = w + 1;
                    int py = h - 1;
                    int ny = h + 1;
                    if (px < 0)
                        px = 0;
                    if (nx >= width)
                        nx = width - 1;
                    if (py < 0)
                        py = 0;
                    if (ny >= height)
                        ny = height - 1;
                    image[h, w] = (int)(precal[255 + current - temp[h, px]] +
                                        precal[255 + current - temp[h, nx]] +
                                        precal[255 + current - temp[py, w]] +
                                        precal[255 + current - temp[ny, w]]) +
                                  temp[h, w];
                }
        }
    }
    catch (Exception ex) { throw new Exception(ex.Message + "\r\nIn PeronaMalik"); }
}
The solution above is for equation 2. If you want equation 1 (the exponential one), you can change the formula in the precal table to this:
precal[f] = -dt * diff * Math.Exp(-(diff * diff / lambda2));
Looks like you need to assign the multiplication result:
Mat C = A.mul(B);
And
int ahN[3][3] ....
should be
float ahN[3][3] ....
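Putting those two points together, here is a hedged sketch of how the start of the flux computation in the question might look once the kernels hold float data and the mul() results are actually assigned (Mat::mul() returns an expression; it does not modify the matrix in place). As a separate observation, const double lambda = 1 / 7; is integer division in C++ and evaluates to 0, so the whole update term vanishes; writing 1.0 / 7 avoids that.
float ahN[3][3] = { {0, 1, 0}, {0, -1, 0}, {0, 0, 0} };   // float data to match CV_32FC1
const double lambda = 1.0 / 7;                            // avoid integer division
// ...
cN = nablaN / k;
cN = cN.mul(cN);                                          // assign the squared term
cN = 1.0 / (1.0 + cN);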

Principle range of object orientation using image moments

I am trying to extract the angle of a shape in my image using moments in OpenCV/C++. I am able to extract the angle, but the issue is that its principal range is 180 degrees, which makes the orientation of the object ambiguous with respect to 180-degree rotations. The code I am using to extract the angle currently is:
findContours(frame, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
vector<vector<Point2i> > hull(contours.size());
int maxArea = 0;
int maxI = -1;
int M20 = 0;
int M02 = 0;
int M11 = 0;
for (int i = 0; i < contours.size(); i++)
{
    convexHull(contours[i], hull[i], false);
    approxPolyDP(hull[i], contourVertices, arcLength(hull[i], true) * 0.1, true);
    shapeMoments = moments(hull[i], false);
    if (shapeMoments.m00 <= areaThreshold || shapeMoments.m00 >= MAX_AREA)
        continue;
    if (contourVertices.size() <= 3 || contourVertices.size() >= 7)
        continue;
    if (shapeMoments.m00 >= maxArea)
    {
        maxArea = shapeMoments.m00;
        maxI = i;
    }
}
if (maxI == -1)
    return false;
fabricContour = hull[maxI];
approxPolyDP(hull[maxI], contourVertices, arcLength(hull[maxI], true) * 0.02, true);
shapeMoments = moments(hull[maxI], false);
centerOfMass = Point2f(shapeMoments.m10 / shapeMoments.m00, shapeMoments.m01 / shapeMoments.m00);
drawContours(contourFrame, hull, maxI, Scalar(24, 35, 140), CV_FILLED, CV_AA);
drawContours(originalFrame, hull, maxI, Scalar(255, 0, 0), 8, CV_AA);
circle(contourFrame, centerOfMass, 4, Scalar(0, 0, 0), 10, 8, 0);
posX = centerOfMass.x;
posY = centerOfMass.y;
M11 = shapeMoments.mu11 / shapeMoments.m00;
M20 = shapeMoments.mu20 / shapeMoments.m00;
M02 = shapeMoments.mu02 / shapeMoments.m00;
num = double(2) * M11;
den = M20 - M02;
angle = (int(-1 * (180 / (2 * M_PI)) * atan2(num, den)) + 45 + 180) % 180;
//angle = int(-1*(180/(2*M_PI))*atan2(num, den));
area = shapeMoments.m00;
Is there any way I can remove the ambiguity from this extracted angle? I tried using the third-order moments, but they do not seem to be very reliable.
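For what it's worth, one common way to resolve the 180-degree ambiguity (a sketch under the assumption that the shape is noticeably asymmetric along its principal axis, not a drop-in for the code above) is to look at the sign of the third central moment of the points projected onto the estimated axis; cv::Moments already provides mu30, mu21, mu12 and mu03:
// Hedged sketch: resolve the 180-degree ambiguity of a principal-axis angle
// using the skewness of the projection onto that axis. thetaRad is the axis
// angle in radians; m holds the cv::Moments of the shape.
double disambiguateAngle(double thetaRad, const cv::Moments &m)
{
    const double c = std::cos(thetaRad), s = std::sin(thetaRad);
    // Third central moment of the projection u = x*cos(theta) + y*sin(theta).
    const double skew = m.mu30 * c * c * c + 3.0 * m.mu21 * c * c * s
                      + 3.0 * m.mu12 * c * s * s + m.mu03 * s * s * s;
    // Flip by 180 degrees when the skewness points the "wrong" way; which sign
    // is treated as correct is a convention and may need to be inverted.
    return (skew < 0.0) ? thetaRad + CV_PI : thetaRad;
}
If the shape is close to symmetric along that axis, the skewness is near zero and noisy, which may be why the third-order moments seemed unreliable so far.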