Error in HOG descriptor - C++

I am using the HOG descriptor for feature extraction, with Visual Studio 2012 and OpenCV 2.4.9. I am getting a runtime error in the hog.compute function.
int main()
{
    Mat img_raw = imread("p1.jpg", 1); // load as color image
    Mat img;
    cvtColor(img_raw, img, CV_RGB2GRAY);

    HOGDescriptor hog;
    vector<float> descriptor;
    vector<Point> locations;
    hog.compute(img, descriptor, Size(32,32), Size(0,0), locations);

    cout << "HOG descriptor size is " << hog.getDescriptorSize() << endl;
    cout << "img dimensions: " << img.cols << " width x " << img.rows << " height" << endl;
    cout << "Found " << descriptor.size() << " descriptor values" << endl;
    cout << "Nr of locations specified : " << locations.size() << endl;
    return 0;
}
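One common cause of a crash in exactly this snippet (a guess, since the actual error message is not shown): if p1.jpg cannot be found, imread returns an empty Mat and the following cvtColor / hog.compute calls fail at runtime. The default HOGDescriptor also uses a 64x128 window, so the input has to be at least that large. A minimal sketch with those checks added:
Mat img_raw = imread("p1.jpg", 1); // load as color image
if (img_raw.empty())               // imread returns an empty Mat on failure
{
    cerr << "Could not load p1.jpg" << endl;
    return -1;
}

Mat img;
cvtColor(img_raw, img, CV_RGB2GRAY);

HOGDescriptor hog;                 // default window size is 64x128
if (img.cols < hog.winSize.width || img.rows < hog.winSize.height)
    resize(img, img, hog.winSize); // or reject images smaller than the window

vector<float> descriptor;
vector<Point> locations;
hog.compute(img, descriptor, Size(32, 32), Size(0, 0), locations);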

Related

Surface Matching never gets any results with any data, why?

I'm trying to get the Surface Matching sample code from surface_matching working. I'm able to compile the code, and then I run it with ./surface_matching /home/surface_matching/coke.ply /home/surface_matching/01.ply.
I'm running the same code sample as in the link. The code is the following:
#include "opencv2/surface_matching.hpp"
#include <iostream>
#include "opencv2/surface_matching/ppf_helpers.hpp"
#include "opencv2/core/utility.hpp"
using namespace std;
using namespace cv;
using namespace ppf_match_3d;
static void help(const string& errorMessage)
{
cout << "Program init error : "<< errorMessage << endl;
cout << "\nUsage : ppf_matching [input model file] [input scene file]"<< endl;
cout << "\nPlease start again with new parameters"<< endl;
}
int main(int argc, char** argv)
{
// welcome message
cout << "****************************************************" << endl;
cout << "* Surface Matching demonstration : demonstrates the use of surface matching"
" using point pair features." << endl;
cout << "* The sample loads a model and a scene, where the model lies in a different"
" pose than the training.\n* It then trains the model and searches for it in the"
" input scene. The detected poses are further refined by ICP\n* and printed to the "
" standard output." << endl;
cout << "****************************************************" << endl;
if (argc < 3)
{
help("Not enough input arguments");
exit(1);
}
#if (defined __x86_64__ || defined _M_X64)
cout << "Running on 64 bits" << endl;
#else
cout << "Running on 32 bits" << endl;
#endif
#ifdef _OPENMP
cout << "Running with OpenMP" << endl;
#else
cout << "Running without OpenMP and without TBB" << endl;
#endif
string modelFileName = (string)argv[1];
string sceneFileName = (string)argv[2];
Mat pc = loadPLYSimple(modelFileName.c_str(), 1);
// Now train the model
cout << "Training..." << endl;
int64 tick1 = cv::getTickCount();
ppf_match_3d::PPF3DDetector detector(0.025, 0.05);
detector.trainModel(pc);
int64 tick2 = cv::getTickCount();
cout << endl << "Training complete in "
<< (double)(tick2-tick1)/ cv::getTickFrequency()
<< " sec" << endl << "Loading model..." << endl;
// Read the scene
Mat pcTest = loadPLYSimple(sceneFileName.c_str(), 1);
// Match the model to the scene and get the pose
cout << endl << "Starting matching..." << endl;
vector<Pose3DPtr> results;
tick1 = cv::getTickCount();
// orig detector.match(pcTest, results, 1.0/40.0, 0.05);
detector.match(pcTest, results, 1.0/40.0, 0.05);
tick2 = cv::getTickCount();
cout << endl << "PPF Elapsed Time " <<
(tick2-tick1)/cv::getTickFrequency() << " sec" << endl;
// Get only first N results
int N = 2;
vector<Pose3DPtr> resultsSub(results.begin(),results.begin()+N);
// Create an instance of ICP
ICP icp(100, 0.005f, 2.5f, 8);
int64 t1 = cv::getTickCount();
// Register for all selected poses
cout << endl << "Performing ICP on " << N << " poses..." << endl;
icp.registerModelToScene(pc, pcTest, resultsSub);
int64 t2 = cv::getTickCount();
cout << endl << "ICP Elapsed Time " <<
(t2-t1)/cv::getTickFrequency() << " sec" << endl;
cout << "Poses: " << endl;
// debug first five poses
for (size_t i=0; i<resultsSub.size(); i++)
{
Pose3DPtr result = resultsSub[i];
cout << "Pose Result " << i << endl;
result->printPose();
if (i==0)
{
Mat pct = transformPCPose(pc, result->pose);
writePLY(pct, "para6700PCTrans.ply");
}
}
return 0;
}
It runs but never gets any results. This is the error I got:
$ ./surface_matching /home/admini/surface_matching/coke.ply /home/admini/surface_matching/01.ply
****************************************************
* Surface Matching demonstration : demonstrates the use of surface matching using point pair features.
* The sample loads a model and a scene, where the model lies in a different pose than the training.
* It then trains the model and searches for it in the input scene. The detected poses are further refined by ICP
* and printed to the standard output.
****************************************************
Running on 64 bits
Running without OpenMP and without TBB
Training...
Training complete in 0.100169 sec
Loading model...
Starting matching...
Segmentation fault (core dumped)
Training completes and matching starts, but then there is a segmentation fault. Here is the object model file:
And the scene file:
What can be the problem?
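One thing worth checking (an assumption, not a confirmed diagnosis): loadPLYSimple(file, 1) loads the clouds with normals, and the PPF detector works on Nx6 rows (x, y, z, nx, ny, nz). If 01.ply contains no normals, or NaN values, the matcher can crash. A small validation sketch that could be run on pc and pcTest right after they are loaded:
// Hedged sketch: sanity-check a cloud before handing it to trainModel()/match()
static bool checkCloud(const Mat& cloud, const string& name)
{
    if (cloud.empty())
    {
        cout << name << ": no points were loaded" << endl;
        return false;
    }
    if (cloud.cols < 6)
    {
        cout << name << ": only " << cloud.cols
             << " columns per point; normals seem to be missing" << endl;
        return false;
    }
    if (!checkRange(cloud)) // rejects NaN/Inf coordinates or normals
    {
        cout << name << ": cloud contains NaN or Inf values" << endl;
        return false;
    }
    return true;
}

// usage, after the two loadPLYSimple() calls:
//   if (!checkCloud(pc, "model") || !checkCloud(pcTest, "scene")) return 1;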

Saving detected facial landmarks into a file

I'm trying to export facial landmark points from a webcam or video using dlib into a file. I can display all detected landmarks on the terminal, but only the first and second landmark points (x, y) are being saved to the output file, not all of the detected landmarks.
#include <dlib/opencv.h>
#include <opencv2/highgui/highgui.hpp>
#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_processing/render_face_detections.h>
#include <dlib/image_processing.h>
#include <dlib/gui_widgets.h>
#include <fstream>

using namespace dlib;
using namespace std;

int main()
{
    try
    {
        cv::VideoCapture cap(0);
        if (!cap.isOpened())
        {
            cerr << "Unable to connect to camera" << endl;
            return 1;
        }

        image_window win;
        frontal_face_detector detector = get_frontal_face_detector();
        shape_predictor pose_model;
        deserialize("shape_predictor_68_face_landmarks.dat") >> pose_model;

        while (!win.is_closed())
        {
            // Grab a frame
            cv::Mat temp;
            cap >> temp;
            cv_image<bgr_pixel> cimg(temp);

            std::vector<rectangle> faces = detector(cimg);
            std::vector<full_object_detection> shapes;
            for (unsigned long i = 0; i < faces.size(); ++i)
            {
                full_object_detection shape = pose_model(cimg, faces[i]);
                cout << "number of parts: " << shape.num_parts() << endl;
                cout << "pixel position of first part: " << shape.part(0) << endl;
                cout << "pixel position of second part: " << shape.part(1) << endl;
                shapes.push_back(pose_model(cimg, faces[i]));
                const full_object_detection& d = shapes[0];

                ofstream outputfile;
                outputfile.open("data1.txt");
                outputfile << shape.part(0).x() << " " << shape.part(0).y() << endl;
                outputfile << shape.part(1).x() << " " << shape.part(1).y() << endl;
            }

            win.clear_overlay();
            win.set_image(cimg);
            win.add_overlay(render_face_detections(shapes));
        }
    }
    catch (serialization_error& e)
    {
        cout << "You need dlib's default face landmarking model file to run this example." << endl;
        cout << "You can get it from the following URL: " << endl;
        cout << "   http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2" << endl;
        cout << endl << e.what() << endl;
    }
    catch (exception& e)
    {
        cout << e.what() << endl;
    }
}
Correct me if I'm wrong, but you want to save all landmarks while you only write:
ofstream outputfile;
outputfile.open("data1.txt");
outputfile << shape.part(0).x() << " " << shape.part(0).y() << endl;
outputfile << shape.part(1).x() << " " << shape.part(1).y() << endl;
You are also not closing the file properly, and reopening it on every iteration overwrites its contents. Try a for statement over all the parts instead.
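A minimal sketch of what the suggested fix could look like, keeping the names from the question (the file is opened once, outside the frame loop, so it is not truncated on every detection):
#include <fstream> // needed for std::ofstream

// before the while (!win.is_closed()) loop:
std::ofstream outputfile("data1.txt");

// inside the per-face loop, replacing the two hard-coded part(0)/part(1) lines:
for (unsigned long i = 0; i < faces.size(); ++i)
{
    full_object_detection shape = pose_model(cimg, faces[i]);
    for (unsigned long k = 0; k < shape.num_parts(); ++k)
        outputfile << shape.part(k).x() << " " << shape.part(k).y() << endl;
    outputfile << endl; // blank line between faces/frames
    shapes.push_back(shape);
}
// outputfile is flushed and closed automatically when it goes out of scope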

Issue with imread. Strange float values

I'm trying to simply load an image (TIFF) and display a pixel value.
If I open the image using ImageJ the values are 32-bit float, but opening the same image using OpenCV I get really strange float values, e.g. 4.2039e-44.
If I read the value of a specific pixel as "int", the value shown is correct. Below is the code I use to test, and here is a link to the image: https://goo.gl/Wmv9xE.
Thanks in advance.
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/core/core.hpp>
#include <iostream>

int main(int argc, char **argv) {
    std::string imageFile = "image.tiff";
    cv::Mat image;
    image = cv::imread(imageFile, CV_LOAD_IMAGE_ANYDEPTH); // Read the file
    if (!image.data) // Check for invalid input
    {
        std::cout << "Could not open or find the image: " << imageFile << std::endl;
        return -1;
    }
    std::cout << "Image:" << image.rows << " x " << image.cols << " Channels: " << image.channels() << " Depth: " << image.depth() << std::endl;
    std::cout << "Value at 0,0: " << image.at<float>(0,1) << std::endl; // Strange Value
    std::cout << "Value at 0,0: " << image.at<int>(0,1) << std::endl;   // Correct Value
    return (EXIT_SUCCESS);
}
Update:
Trying to move forward with the code, I decided to create a function that converts the data to "int" as it is read from the file.
As a temporary solution this works, but I'm still looking for the reason why the data is being loaded incorrectly.
cv::Mat openImage(std::string filename);
cv::Mat convertToInt(cv::Mat source);

int main(int argc, char **argv) {
    std::string imageFile = "/home/slepicka/XSConfig/image.tiff";
    cv::Mat image = openImage(imageFile);
    if (!image.data) // Check for invalid input
    {
        std::cout << "Could not open or find the image: " << imageFile << std::endl;
        return -1;
    }
    std::cout << "Image:" << image.rows << " x " << image.cols << " Channels: " << image.channels() << " Depth: " << image.depth() << " Type: " << image.type() << std::endl;
    std::cout << "Value at 0,0: " << image.at<int>(0,1) << std::endl; // Correct Value
    //std::cout << "Data: " << image << std::endl;
    return (EXIT_SUCCESS);
}

cv::Mat openImage(std::string filename) {
    cv::Mat imageLoad = cv::imread(filename, CV_LOAD_IMAGE_ANYDEPTH);
    if (imageLoad.type() == CV_32F) {
        return convertToInt(imageLoad);
    }
    return imageLoad;
}

cv::Mat convertToInt(cv::Mat source) {
    int r, c;
    cv::Mat converted;
    converted.create(source.rows, source.cols, CV_32SC1);
    for (r = 0; r < source.rows; r++) {
        for (c = 0; c < source.cols; c++) {
            converted.at<int>(r, c) = source.at<int>(r, c);
        }
    }
    return converted;
}
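A likely explanation (an assumption, since the printed depth is not shown): the TIFF actually stores 32-bit integer samples, so imread returns a CV_32S Mat, and image.at<float> then reinterprets the raw integer bits as an IEEE float, which is exactly how a small integer turns into a denormal value like 4.2039e-44. If a real float image is wanted, cv::Mat::convertTo converts by value instead of reinterpreting bits:
// Sketch, assuming the loaded Mat is CV_32S as the correct at<int>() readings suggest
if (image.depth() == CV_32S) {
    cv::Mat imageFloat;
    image.convertTo(imageFloat, CV_32F); // converts values, e.g. 30 -> 30.0f
    std::cout << "Value at 0,1: " << imageFloat.at<float>(0, 1) << std::endl;
}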

OpenCV copyTo assert error

While copying one Mat into a region of interest of another, I came across an error I've never seen before. Googling it didn't turn up many results, and none of them seemed relevant.
I have included a screenshot of the error as well as some properties of the Mats.
This is the code:
std::cout << "size height,width: " << size.height << ", " << size.width << std::endl;
cv::Mat tempResult(size.width, size.height, result.type());
std::cout << "tempResult cols,rows: " << tempResult.cols << ", " << tempResult.rows << std::endl;
std::cout << "tempResult type: " << tempResult.type() << std::endl;
std::cout << "tempResult channels: " << tempResult.channels() << std::endl;
std::cout << "result cols,rows: " << result.cols << ", " << result.rows << std::endl;
std::cout << "result type: " << result.type() << std::endl;
std::cout << "result channels: " << result.channels() << std::endl;
cv::Rect rect(0, 0, result.cols-1, result.rows-1);
std::cout << "rect size: " << rect.size() << std::endl;
result.copyTo(tempResult(rect));
The cv::Mat::operator()(cv::Rect roi) method extracts a submatrix with the same size as the cv::Rect roi.
But you defined a cv::Rect with one row and one column missing, so the matrix returned by tempResult(rect) is smaller than the matrix result. cv::Mat::copyTo raises the assertion because the source and the destination ROI do not have the same size.
To fix this:
cv::Rect rect(0, 0, result.cols, result.rows);
For cv::Rect, the format is (x, y, width, height), not (x1, y1, x2, y2). That, in my opinion, is why you get the error.
If so, you will need to change rect to:
cv::Rect rect(0, 0, result.cols, result.rows);
If not (i.e. you really mean rect(x, y, width-1, height-1)), you can do this instead:
result(rect).copyTo(tempResult(rect));
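A small self-contained sketch of the rule both answers rely on: when copying into a submatrix, the source must have exactly the size of the destination ROI. The matrix sizes below are made up for illustration:
#include <opencv2/core/core.hpp>
#include <iostream>

int main()
{
    cv::Mat result(100, 200, CV_8UC3, cv::Scalar(0, 0, 255));  // source: 100 rows x 200 cols
    cv::Mat tempResult(300, 400, CV_8UC3, cv::Scalar::all(0)); // note: Mat(rows, cols, ...) order

    // ROI with exactly the source size -> copy succeeds
    cv::Rect rect(0, 0, result.cols, result.rows);             // Rect is (x, y, width, height)
    result.copyTo(tempResult(rect));

    // If a smaller rect is really intended, crop the source as well
    cv::Rect smaller(0, 0, result.cols - 1, result.rows - 1);
    result(smaller).copyTo(tempResult(smaller));

    std::cout << "copied into ROI of size " << rect.size() << std::endl;
    return 0;
}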

reshaping a matrix failed in OpenCV 2.4.3

I am using OpenCV 2.4.3 to create and reshape a matrix like this:
cv::Mat testMat = cv::Mat::zeros ( 500, 200, CV_8UC3 );
std::cout << "size of testMat: " << testMat.rows << " x " << testMat.cols << std::endl;
testMat.reshape ( 0, 1 );
std::cout << " size of reshaped testMat: " << testMat.rows << " x " << testMat.cols << std::endl;
Then from the output, I see no change in the reshaped testMat. I have used "reshape" many times in older versions of OpenCV, but with this new version I can't see any change. Is this a bug, or am I using it incorrectly here?
reshape does not modify the matrix in place; it returns a new Mat header (the underlying data is shared), so you need to assign the result:
cv::Mat testMat = cv::Mat::zeros ( 500, 200, CV_8UC3 );
std::cout << "size of testMat: " << testMat.rows << " x " << testMat.cols << std::endl;
cv::Mat result = testMat.reshape ( 0, 1 );
std::cout << " size of original testMat: " << testMat.rows << " x " << testMat.cols << std::endl;
std::cout << " size of reshaped testMat: " << result.rows << " x " << result.cols << std::endl;