Wrong cv::face FacemarkLBF instantiation - C++

I am using OpenCV 4.4.0 on Ubuntu 20.04 with the latest opencv_contrib extra modules installed. For detecting face landmarks (based on this tutorial) I use the following #include and namespace lines for the extra face module:
#include <opencv2/face.hpp>
using namespace cv::face;
The face.hpp file is found (so I assume the opencv_contrib modules are installed correctly), but, for example, the line
Ptr<facemark> facemark = FacemarkLBF::create();
throws an error
error: ‘facemark’ was not declared in this scope
I have already tried installing the extra modules both with cmake-gui and with the cmake terminal command; the results are the same. I assume there is an error related to the namespace cv::face. Any ideas on what kind of mistake I am making here?
The minimal code is here:
#include <opencv2/opencv.hpp>
#include <opencv2/face.hpp>
#include "drawLandmarks.hpp"
using namespace std;
using namespace cv;
using namespace cv::face;
int main(int argc,char** argv)
{
// Load Face Detector
CascadeClassifier faceDetector("haarcascade_frontalface_alt2.xml");
// Create an instance of Facemark
Ptr<facemark> facemark = FacemarkLBF::create();
// Load landmark detector
facemark->loadModel("lbfmodel.yaml");
// Set up webcam for video capture
VideoCapture cam(0);
// Variable to store a video frame and its grayscale
Mat frame, gray;
// Read a frame
while(cam.read(frame))
{
// Find face
vector<Rect> faces;
// Convert frame to grayscale because
// faceDetector requires grayscale image.
cvtColor(frame, gray, COLOR_BGR2GRAY);
// Detect faces
faceDetector.detectMultiScale(gray, faces);
// Variable for landmarks.
// Landmarks for one face is a vector of points
// There can be more than one face in the image. Hence, we
// use a vector of vector of points.
vector<vector<Point2f>> landmarks;
// Run landmark detector
bool success = facemark->fit(frame,faces,landmarks);
if(success)
{
// If successful, render the landmarks on the face
for(int i = 0; i < landmarks.size(); i++)
{
drawLandmarks(frame, landmarks[i]);
}
}
// Display results
imshow("Facial Landmark Detection", frame);
// Exit loop if ESC is pressed
if (waitKey(1) == 27) break;
}
return 0;
}

As @john and @idclev 463035818 have suggested, the correct way to create an instance of Facemark (here named facemark) is
Ptr<FacemarkLBF> facemark = FacemarkLBF::create();
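For completeness, a minimal sketch of the fixed instantiation (assuming OpenCV 4.x with the face contrib module and lbfmodel.yaml next to the executable); since FacemarkLBF derives from Facemark, a base-class pointer compiles as well:
#include <opencv2/opencv.hpp>
#include <opencv2/face.hpp>
using namespace cv;
using namespace cv::face;
int main()
{
    // The template argument must be a declared type; the original error came from
    // the lowercase, undeclared name 'facemark' being used as the template argument.
    Ptr<FacemarkLBF> facemark = FacemarkLBF::create();
    // Ptr<Facemark> facemark = FacemarkLBF::create();  // base-class pointer also works
    facemark->loadModel("lbfmodel.yaml");  // loadModel()/fit() are declared on the Facemark base class
    return 0;
}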

Related

Unable to detect ArUco markers with OpenCV 3.1.0

I am trying to write a simple C++ routine that first writes a predefined dictionary of ArUco markers (e.g. 4x4_100) to a folder and then detects ArUco markers in a specific image selected from that folder, using OpenCV 3.1 and Visual Studio 2017. I have compiled all the OpenCV-contrib libraries required to use ArUco markers. My routine builds without any errors, but I am having trouble detecting the markers even after supplying all the correct arguments (e.g. image, dictionary, etc.) to the built-in aruco::detectMarkers function. Could you please help me understand what's wrong with my approach? Below is a minimal working example; the test image is attached here as "4x4Marker_40.jpg":
#include "opencv2\core.hpp"
#include "opencv2\imgproc.hpp"
#include "opencv2\imgcodecs.hpp"
#include "opencv2\aruco.hpp"
#include "opencv2\highgui.hpp"
#include <sstream>
#include <fstream>
#include <iostream>
using namespace cv;
using namespace std;
// Function to write ArUco markers
void createArucoMarkers()
{
// Define variable to store the output markers
Mat outputMarker;
// Choose a predefined Dictionary of markers
Ptr< aruco::Dictionary> markerDictionary = aruco::getPredefinedDictionary(aruco::PREDEFINED_DICTIONARY_NAME::DICT_4X4_50);
// Write each of the markers to a '.jpg' image file
for (int i = 0; i < 50; i++)
{
aruco::drawMarker(markerDictionary, i, 500, outputMarker, 1);
ostringstream convert;
string imageName = "4x4Marker_";
convert << imageName << i << ".jpg";
imwrite(convert.str(), outputMarker);
}
}
// Main body of the routine
int main(int argv, char** argc)
{
createArucoMarkers();
// Read a specific image
Mat frame = imread("4x4Marker_40.jpg", CV_LOAD_IMAGE_UNCHANGED);
// Define variables to store the output of marker detection
vector<int> markerIds;
vector<vector<Point2f>> markerCorners, rejectedCandidates;
// Define a Dictionary type variable for marker detection
Ptr<aruco::Dictionary> markerDictionary = aruco::getPredefinedDictionary(aruco::PREDEFINED_DICTIONARY_NAME::DICT_4X4_50);
// Detect markers
aruco::detectMarkers(frame, markerDictionary, markerCorners, markerIds);
// Display the image
namedWindow("Webcam", CV_WINDOW_AUTOSIZE);
imshow("Webcam", frame);
// Draw detected markers on the displayed image
aruco::drawDetectedMarkers(frame, markerCorners, markerIds);
cout << "\nmarker ID is:\t"<<markerIds.size();
waitKey();
}
There are a few problems in your code:
You are displaying the image with imshow before calling drawDetectedMarkers so you'll never see the detected marker.
You are displaying the size of the markerIds vector instead of the value contained within it.
(This is the main problem) Your marker has no white space around it so it's impossible to detect.
One suggestion: use forward slashes, not backslashes, in your #include statements. Forward slashes work everywhere; backslashes only work on Windows.
This worked on my machine. Note that I loaded the image as a color image to make it easier to see the results of drawDetectedMarkers.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/aruco.hpp>
#include <opencv2/highgui.hpp>
#include <sstream>
#include <fstream>
#include <iostream>
using namespace cv;
using namespace std;
// Function to write ArUco markers
void createArucoMarkers()
{
// Create image to hold the marker plus surrounding white space
Mat outputImage(700, 700, CV_8UC1);
// Fill the image with white
outputImage = Scalar(255);
// Define an ROI to write the marker into
Rect markerRect(100, 100, 500, 500);
Mat outputMarker(outputImage, markerRect);
// Choose a predefined Dictionary of markers
Ptr< aruco::Dictionary> markerDictionary = aruco::getPredefinedDictionary(aruco::PREDEFINED_DICTIONARY_NAME::DICT_4X4_50);
// Write each of the markers to a '.jpg' image file
for (int i = 0; i < 50; i++)
{
//Draw the marker into the ROI
aruco::drawMarker(markerDictionary, i, 500, outputMarker, 1);
ostringstream convert;
string imageName = "4x4Marker_";
convert << imageName << i << ".jpg";
// Note we are writing outputImage, not outputMarker
imwrite(convert.str(), outputImage);
}
}
// Main body of the routine
int main(int argv, char** argc)
{
createArucoMarkers();
// Read a specific image
Mat frame = imread("4x4Marker_40.jpg", CV_LOAD_IMAGE_COLOR);
// Define variables to store the output of marker detection
vector<int> markerIds;
vector<vector<Point2f>> markerCorners, rejectedCandidates;
// Define a Dictionary type variable for marker detection
Ptr<aruco::Dictionary> markerDictionary = aruco::getPredefinedDictionary(aruco::PREDEFINED_DICTIONARY_NAME::DICT_4X4_50);
// Detect markers
aruco::detectMarkers(frame, markerDictionary, markerCorners, markerIds);
// Display the image
namedWindow("Webcam", CV_WINDOW_AUTOSIZE);
// Draw detected markers on the displayed image
aruco::drawDetectedMarkers(frame, markerCorners, markerIds);
// Show the image with the detected marker
imshow("Webcam", frame);
// If a marker was identified, show its ID
if (markerIds.size() > 0) {
cout << "\nmarker ID is:\t" << markerIds[0] << endl;
}
waitKey(0);
}

Contours OpenCV C++ Assertion failed (scn == 3 || scn == 4) [duplicate]

I have read a lot of other solutions, but I am still confused about what I should do with mine...
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <stdio.h>
using namespace std;
using namespace cv;
int main(int argc, const char** argv)
{
//create the cascade classifier object used for the face detection
CascadeClassifier face_cascade;
//use the haarcascade_frontalface_alt.xml library
face_cascade.load("haarcascade_frontalface_alt.xml");
//setup video capture device and link it to the first capture device
VideoCapture captureDevice;
captureDevice.open(0);
//setup image files used in the capture process
Mat captureFrame;
Mat grayscaleFrame;
//create a window to present the results
namedWindow("outputCapture", 1);
//create a loop to capture and find faces
while (true)
{
//capture a new image frame
captureDevice >> captureFrame;
//convert captured image to gray scale and equalize
cvtColor(captureFrame, grayscaleFrame, CV_BGR2GRAY);
equalizeHist(grayscaleFrame, grayscaleFrame);
//create a vector array to store the face found
std::vector<Rect> faces;
//find faces and store them in the vector array
face_cascade.detectMultiScale(grayscaleFrame, faces, 1.1, 3, CV_HAAR_FIND_BIGGEST_OBJECT | CV_HAAR_SCALE_IMAGE, Size(30, 30));
//draw a rectangle for all found faces in the vector array on the original image
for (int i = 0; i < faces.size(); i++)
{
Point pt1(faces[i].x + faces[i].width, faces[i].y + faces[i].height);
Point pt2(faces[i].x, faces[i].y);
rectangle(captureFrame, pt1, pt2, cvScalar(0, 255, 0, 0), 1, 8, 0);
}
//print the output
imshow("outputCapture", captureFrame);
//pause for 33ms
waitKey(33);
}
return 0;
}
This is the error:
OpenCV Error: Assertion failed in unknown function
The error seems to happen after "captureDevice >> captureFrame;". Please guide me; this line is supposed to grab an image from the camera.
It seems like VideoCapture can't grab a frame from your camera. Add this code to check the result of the frame grab:
//create a loop to capture and find faces
while (true)
{
//capture a new image frame
captureDevice >> captureFrame;
if (captureFrame.empty())
{
cout << "Failed to grab frame" << endl;
break;
}
...
If the problem is with VideoCapture, check that you have installed the drivers for your camera.
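If you want to rule out the camera itself first, here is a minimal standalone sketch (assuming the default device index 0) that only checks whether the device opens and one frame can be grabbed:
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main()
{
    VideoCapture captureDevice;
    captureDevice.open(0);  // try 1, 2, ... if index 0 is not your camera
    if (!captureDevice.isOpened())
    {
        cout << "Could not open the capture device" << endl;
        return -1;
    }
    Mat captureFrame;
    captureDevice >> captureFrame;
    if (captureFrame.empty())
    {
        cout << "Device opened but no frame was grabbed" << endl;
        return -1;
    }
    cout << "Grabbed a " << captureFrame.cols << "x" << captureFrame.rows << " frame" << endl;
    return 0;
}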
Okay, I think I know what happened. I tried running the program and it worked perfectly.
You have probably not linked the required DLLs. Make sure (if you are using Windows) that your opencv/bin directory is included in your PATH environment variable. Here is my CMakeLists.txt file to make things easier for you:
cmake_minimum_required(VERSION 2.6 FATAL_ERROR)
project(SO_example)
FIND_PACKAGE(OpenCV REQUIRED)
SET (SOURCES
#VTKtoPCL.h
main.cpp
)
add_executable(so_example ${SOURCES})
target_link_libraries(so_example ${OpenCV_LIBS})

Watershed segmentation OpenCV Xcode

I am currently working through code from the OpenCV cookbook (OpenCV 2 Computer Vision Application Programming Cookbook): Chapter 5, Segmenting images using watersheds, page 131.
Here is my main code:
#include "opencv2/opencv.hpp"
#include <string>
using namespace cv;
using namespace std;
class WatershedSegmenter {
private:
cv::Mat markers;
public:
void setMarkers(const cv::Mat& markerImage){
markerImage.convertTo(markers, CV_32S);
}
cv::Mat process(const cv::Mat &image){
cv::watershed(image,markers);
return markers;
}
};
int main ()
{
cv::Mat image = cv::imread("/Users/yaozhongsong/Pictures/IMG_1648.JPG");
// Eliminate noise and smaller objects
cv::Mat fg;
cv::erode(binary,fg,cv::Mat(),cv::Point(-1,-1),6);
// Identify image pixels without objects
cv::Mat bg;
cv::dilate(binary,bg,cv::Mat(),cv::Point(-1,-1),6);
cv::threshold(bg,bg,1,128,cv::THRESH_BINARY_INV);
// Create markers image
cv::Mat markers(binary.size(),CV_8U,cv::Scalar(0));
markers= fg+bg;
// Create watershed segmentation object
WatershedSegmenter segmenter;
// Set markers and process
segmenter.setMarkers(markers);
segmenter.process(image);
imshow("a",image);
std::cout<<".";
cv::waitKey(0);
}
However, it doesn't work. How could I initialize a binary image? And how could I make this segmentation code work?
I am not very clear about this part of the book.
Thanks in advance!
There are a couple of things that should be mentioned about your code:
Watershed expects the input and the output image to have the same size;
You probably want to get rid of the const parameters in the methods;
Notice that the result of watershed ends up in markers, not in image as your code suggests; in particular, you need to grab the return value of process()!
This is your code, with the fixes above:
// Usage: ./app input.jpg
#include "opencv2/opencv.hpp"
#include <string>
using namespace cv;
using namespace std;
class WatershedSegmenter{
private:
cv::Mat markers;
public:
void setMarkers(cv::Mat& markerImage)
{
markerImage.convertTo(markers, CV_32S);
}
cv::Mat process(cv::Mat &image)
{
cv::watershed(image, markers);
markers.convertTo(markers,CV_8U);
return markers;
}
};
int main(int argc, char* argv[])
{
cv::Mat image = cv::imread(argv[1]);
cv::Mat binary;// = cv::imread(argv[2], 0);
cv::cvtColor(image, binary, CV_BGR2GRAY);
cv::threshold(binary, binary, 100, 255, THRESH_BINARY);
imshow("originalimage", image);
imshow("originalbinary", binary);
// Eliminate noise and smaller objects
cv::Mat fg;
cv::erode(binary,fg,cv::Mat(),cv::Point(-1,-1),2);
imshow("fg", fg);
// Identify image pixels without objects
cv::Mat bg;
cv::dilate(binary,bg,cv::Mat(),cv::Point(-1,-1),3);
cv::threshold(bg,bg,1, 128,cv::THRESH_BINARY_INV);
imshow("bg", bg);
// Create markers image
cv::Mat markers(binary.size(),CV_8U,cv::Scalar(0));
markers= fg+bg;
imshow("markers", markers);
// Create watershed segmentation object
WatershedSegmenter segmenter;
segmenter.setMarkers(markers);
cv::Mat result = segmenter.process(image);
result.convertTo(result,CV_8U);
imshow("final_result", result);
cv::waitKey(0);
return 0;
}
I took the liberty of using Abid's input image for testing and this is what I got:
Below is a simplified version of your code, and it works fine for me. Check it out:
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;
using namespace std;
int main ()
{
Mat image = imread("sofwatershed.jpg");
Mat binary = imread("sofwsthresh.png",0);
// Eliminate noise and smaller objects
Mat fg;
erode(binary,fg,Mat(),Point(-1,-1),2);
// Identify image pixels without objects
Mat bg;
dilate(binary,bg,Mat(),Point(-1,-1),3);
threshold(bg,bg,1,128,THRESH_BINARY_INV);
// Create markers image
Mat markers(binary.size(),CV_8U,Scalar(0));
markers= fg+bg;
markers.convertTo(markers, CV_32S);
watershed(image,markers);
markers.convertTo(markers,CV_8U);
imshow("a",markers);
waitKey(0);
}
Below is my input image:
Below is my output image:
See the code explanation here: Simple watershed Sample in OpenCV
I had the same problem as you, following the exact same code sample from the cookbook (great book, by the way).
For context, I was coding under Visual Studio 2013 and OpenCV 2.4.8. After a lot of searching and no solutions, I decided to change the IDE.
It's still Visual Studio, but the 2010 version, and now it works!
Be careful how you configure Visual Studio with OpenCV. Here's a great tutorial for installation here.
Good day to all.

OpenCV gives an error when using the imgproc functions

When I compile and run this code, I get an error. It compiles fine, but when I try to run it, it gives the following error:
The application has requested the Runtime to terminate in an unusual way.
This is the code:
#include <opencv2/opencv.hpp>
#include <string>
int main() {
cv::VideoCapture c(0);
double rate = 10;
bool stop(false);
cv::Mat frame;
cv::namedWindow("Hi!");
int delay = 1000/rate;
cv::Mat corners;
while(!stop){
if(!c.read(frame))
break;
cv::cornerHarris(frame,corners,3,3,0.1);
cv::imshow("Hi!",corners);
if(cv::waitKey(delay)>=0)
stop = true;
}
return 0;
}
BTW, I get the same error when using the Canny edge detector.
Your corners matrix is declared as a variable, but no memory is allocated to it. The same goes for your frame variable. First you have to create a matrix big enough to hold the image.
I suggest you first take a look at cvCreateImage so you can learn how basic images are created and handled, before you start working with video streams.
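As a side note, here is a tiny sketch of what explicit allocation looks like in the C++ API (the C++ counterpart of cvCreateImage is simply the cv::Mat constructor); the size and type are only illustrative:
#include <opencv2/core/core.hpp>
#include <iostream>
int main()
{
    // Pre-allocate a single-channel float image explicitly: rows, cols, type, initial value
    cv::Mat corners(480, 640, CV_32FC1, cv::Scalar(0));
    std::cout << "allocated " << corners.rows << "x" << corners.cols << std::endl;
    return 0;
}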
Make sure the capture is ready and the image is OK:
if(!c.isOpened())
break;
if(!c.read(frame))
break;
if(frame.empty())
break;
You need to convert the image to grayscale before you use the corner detector:
cv::Mat frameGray;
cv::cvtColor(frame, frameGray, CV_BGR2GRAY);
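For reference, a rough sketch that combines both suggestions (the camera index, window name, and Harris parameters are taken from the question); cornerHarris writes a CV_32F response map, so it is normalized to 8-bit here purely so imshow displays something visible:
#include <opencv2/opencv.hpp>
int main() {
    cv::VideoCapture c(0);
    if (!c.isOpened())
        return -1;                              // camera could not be opened
    double rate = 10;
    int delay = static_cast<int>(1000 / rate);
    cv::namedWindow("Hi!");
    cv::Mat frame, gray, corners, display;
    while (true) {
        if (!c.read(frame) || frame.empty())
            break;                              // frame grab failed
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);   // cornerHarris needs a single-channel image
        cv::cornerHarris(gray, corners, 3, 3, 0.1);      // response map is CV_32FC1
        cv::normalize(corners, display, 0, 255, cv::NORM_MINMAX, CV_8U);  // rescale for display
        cv::imshow("Hi!", display);
        if (cv::waitKey(delay) >= 0)
            break;
    }
    return 0;
}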