I'm not sure whether to call this a problem or not, but I'm trying to use the FaceRecognizer from the OpenCV tutorial, and it mostly works fine.
Steps:
1) Load the CSV file (containing the paths of all training images) and train the algorithm.
2) Load the XML file for face detection (Haar cascade).
3) Open the camera and crop the face.
4) Predict.
It works fine when I show a familiar face, but when I show an unknown face it should give me -1; instead it gives me an approximation, the closest face label.
I know it's supposed to work like that, it's about proximity, and that is why there is a threshold, but I can't set it. I'm using OpenCV 3.1, and I set num_components and the threshold in the creation method:
int num_components = 10;
double threshold = 10.0;
Ptr<FaceRecognizer> model = createEigenFaceRecognizer(num_components,threshold );
printf("thresh= %d", model->getThreshold());
It prints thresh = 0. Even when I use 0 and 0.0 it's the same, I never get -1, and there is no set("threshold", val) method in this version.
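One thing worth checking first: getThreshold() returns a double, so printing it with "%d" is undefined behaviour and will typically show 0 regardless of the real value; "%f" is the matching specifier. Beyond that, the rejection behaviour the threshold is meant to control can be sketched in plain Python (a hypothetical nearest-neighbour helper, not the OpenCV API):

```python
import math

def predict(query, gallery, threshold):
    # Nearest-neighbour search over labelled features; mimics the
    # idea behind FaceRecognizer::predict with a distance threshold.
    best_label, best_dist = -1, math.inf
    for label, feature in gallery.items():
        d = math.dist(query, feature)
        if d < best_dist:
            best_label, best_dist = label, d
    # Unknown face: even the closest match is too far away.
    return best_label if best_dist <= threshold else -1
```

With a tight threshold this returns -1 for anything far from every training sample; with a huge threshold it always returns the closest label, which is exactly the behaviour described above.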
Any ideas? Or correct me if I'm getting things wrong.
Thank you in advance.
Related
I am trying to get surface normals from my Kinect2 data using PCL in ROS, but I am having trouble visualizing the normal data.
I am using the following viewer to view the real-time point cloud.
I have added the point-normal code from PCL to this code to calculate and visualize the normals.
I am getting the following runtime errors:
ERROR: In /home/chandan_main/Downloads/VTK-7.1.0/Rendering/OpenGL2/vtkOpenGLPolyDataMapper.cxx, line 1794
vtkOpenGLPolyDataMapper (0xa1ea5e0): failed after UpdateShader 1 OpenGL errors detected
0 : (1281) Invalid value
[pcl::IntegralImageNormalEstimation::setInputCloud] Input dataset is not organized (height = 1).
[pcl::IntegralImageNormalEstimation::initCompute] Input dataset is not organized (height = 1).
[addPointCloudNormals] The number of points differs from the number of normals!
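The IntegralImageNormalEstimation messages are the real problem: that class only works on organized clouds, i.e. clouds that keep the sensor's width x height grid layout, and a cloud with height = 1 has lost that layout (for example after most filters, or after loading from an unordered PCD). A plain-Python sketch of the precondition it enforces (a hypothetical helper, not the PCL API):

```python
def check_organized(width, height):
    # An organized point cloud preserves the sensor's 2-D grid,
    # so its height must be greater than 1.
    if height == 1:
        raise ValueError("Input dataset is not organized (height = 1)")
    return width * height  # number of points expected on the grid
```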
I am able to get the normals now. I just used
while (!viewer->wasStopped())
{
    viewer->spinOnce(100);
    boost::this_thread::sleep(boost::posix_time::microseconds(100000));
}
because I was trying to get the normals in real time, which was showing errors. I also rebuilt the VTK library, which had issues.
I am trying to run SLAM algorithm (ElasticFusion) using my custom .klg file.
I tried the following two ways:
The first way was to build the .klg file manually from separate depth and RGB image (.png) files and their timestamp information. I tried the conversion script on the 'freiburg1_desk' sequence dataset and then ran ElasticFusion, and I got a good result and point cloud. But when I recorded the environment with my own device, following the same steps, I did not get the desired result or point cloud. The result I get with live logging is much better; I guess it is because of the code that I am using to convert the depth images:
import numpy as np

def convert_depth(depth):
    # Clamp to the raw range, apply the tangent approximation,
    # then convert metres to millimetres.
    np.clip(depth, 0, 2**10 - 1, depth)
    depth2 = depth / 2842.5
    depth2 += 1.1863
    depth = np.tan(depth2)
    depth *= 0.1236
    depth -= 0.037
    depth *= 1000
    # depth = 0.1236 * math.tan(depth / 2842.5 + 1.1863);
    return depth.astype(np.uint16)
I got the above formula from here:
A better approximation is given by Stéphane Magnenat in this post: distance = 0.1236 * tan(rawDisparity / 2842.5 + 1.1863) in meters. Adding a final offset term of -0.037 centers the original ROS data.
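As a scalar sanity check, that approximation with the offset term can be written directly (a sketch using only the constants quoted above):

```python
import math

def raw_disparity_to_metres(raw):
    # Kinect-1 raw disparity -> depth in metres, including the
    # -0.037 offset term quoted above.
    return 0.1236 * math.tan(raw / 2842.5 + 1.1863) - 0.037
```

Multiplying the result by 1000 and casting to uint16 gives the millimetre depth values that the vectorized NumPy code produces.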
The second way I tried was using this Logger, which is suggested by Thomas Whelan (ElasticFusion). I ran the Logger without any errors:
Number devices connected: 1
1. device on bus 001:14 is a Xbox NUI Camera (2AE) from Microsoft (45E) with serial id 'A00366911101042A'
searching for device with index = 1
Opened 'Xbox NUI Camera' on bus 1:14 with serial number 'A00366911101042A'
But I am getting a black screen for both the depth and RGB images.
I am using Ubuntu 16.04 and a Kinect v1. Any kind of suggestion or help will be appreciated.
Solved
The second way worked after re-installing OpenNI. Probably in the previous runs the Logger was somehow not able to find OpenNI for streaming the depth and RGB images.
I'm trying to add noise to an image and then denoise it, to see the difference in my object detection algorithm. I developed OpenCV code in C++ for detecting some objects in an image, and I would like to test the robustness of that code by adding noise and checking how the detection rate changes. So first I added some random Gaussian noise like this:
cv::Mat noise(src.size(), src.type());
cv::Scalar mean(10, 12, 34);  // per-channel mean
cv::Scalar sigma(1, 5, 50);   // per-channel standard deviation
cv::randn(noise, mean, sigma);
src += noise;
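For reference, the same per-channel Gaussian noise can be sketched in NumPy (the means and sigmas mirror the C++ snippet; the function name is illustrative):

```python
import numpy as np

def add_gaussian_noise(img, mean=(10, 12, 34), sigma=(1, 5, 50), rng=None):
    # Draw per-channel Gaussian noise, add it to a uint8 BGR image,
    # and clip back into the valid 0..255 range.
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(mean, sigma, size=img.shape)
    noisy = img.astype(np.float64) + noise
    return np.clip(noisy, 0, 255).astype(np.uint8)
```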
I got these images:
The original:
The noisy one:
So is there a better model for the noise? And then, how do I denoise the image? Are there any denoising algorithms?
OpenCV comes with Photo package in which you can find an implementation of Non-local Means Denoising algorithm. The documentation can be found here:
http://docs.opencv.org/3.0-beta/modules/photo/doc/denoising.html
As far as I know, it's the only suitable denoising algorithm available in both OpenCV 2.4 and OpenCV 3.x.
I'm not aware of any noise models in OpenCV other than randn. It shouldn't be a problem, however, to add a custom function that does that. There are some nice examples in Python (you should have no problem rewriting them in C++, as the OpenCV API remains roughly identical): How to add noise (Gaussian/salt and pepper etc) to image in Python with OpenCV
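One of the models mentioned in that thread, salt-and-pepper noise, is simple enough to sketch here (NumPy, grayscale; a hypothetical helper, not an OpenCV function):

```python
import numpy as np

def salt_and_pepper(img, amount=0.05, rng=None):
    # Set a random fraction of pixels to pure white (salt)
    # or pure black (pepper).
    rng = np.random.default_rng() if rng is None else rng
    noisy = img.copy()
    corrupted = rng.random(img.shape) < amount
    salt = rng.random(img.shape) < 0.5
    noisy[corrupted & salt] = 255
    noisy[corrupted & ~salt] = 0
    return noisy
```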
There's also one thing I don't understand: if you can generate the noise yourself, why would you denoise the image with some algorithm when you already have the original image without the noise?
Check this tutorial, it might help you.
http://docs.opencv.org/trunk/d5/d69/tutorial_py_non_local_means.html
Especially this part:
OpenCV provides four variations of this technique.
cv2.fastNlMeansDenoising() - works with a single grayscale image.
cv2.fastNlMeansDenoisingColored() - works with a color image.
cv2.fastNlMeansDenoisingMulti() - works with an image sequence captured in a short period of time (grayscale images).
cv2.fastNlMeansDenoisingColoredMulti() - same as above, but for color images.
Common arguments are:
h : parameter deciding filter strength. Higher h value removes noise better, but removes details of the image also. (10 is ok)
hForColorComponents : same as h, but for color images only. (normally same as h)
templateWindowSize : should be odd. (recommended 7)
searchWindowSize : should be odd. (recommended 21)
And to add Gaussian noise to an image, maybe this thread will be helpful:
How to add Noise to Color Image - Opencv
I am trying to implement a face detection program with webcam input using the Viola-Jones face detector in OpenCV, and it works fine except that it becomes about 10 times slower when no face is detected in the frame.
This is really weird, because if there is no face in the frame, most of the windows should be rejected in the earlier stages of the cascade, so I would guess it should be slightly faster (not slower!).
I am using the detectMultiScale function (not cvHaarDetectObjects) for various reasons, but I don't think that should matter in any way.
Can anyone give me an advice on this problem please?
Thanks in advance.
Did you try adding the min and max size of the face rectangle to be detected?
You can also check your pyramid scale value: it must be > 1, and if detection is too slow, try a higher value; the detection will not be as good, but it will be faster.
cv::CascadeClassifier cascade;
// ... init classifier
cv::Mat grayImage;
// ... set image
std::vector<cv::Rect> results;
cv::Size minSize(60, 60);
cv::Size maxSize(80, 80);
int minNeighbors = 3;
float pyramidScale = 1.1f;
cascade.detectMultiScale(grayImage, results, pyramidScale, minNeighbors,
                         0, minSize, maxSize);
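To see why the size bounds matter, here is a rough Python sketch of how many detector scales survive them (the 24-pixel base window is the classic Haar face model size; the numbers are illustrative, not the OpenCV implementation):

```python
def surviving_scales(base_win=24, img_side=640, scale=1.1,
                     min_size=0, max_size=10**9):
    # Enumerate the window sizes a multi-scale detector would visit
    # and keep only those inside [min_size, max_size].
    sizes, s = [], 1.0
    while base_win * s <= img_side:
        size = base_win * s
        if min_size <= size <= max_size:
            sizes.append(size)
        s *= scale
    return sizes
```

With the 60-80 pixel bounds above only a handful of scales remain, while an unbounded search over a 640-pixel frame visits a few dozen; that difference accounts for most of the speed-up.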
I'm trying to develop (using C++ / MSVS 12.0) a function that discovers which pixels (from a raster image) have their centre inside a polygon (previously populated from a shapefile). I'm using GDAL 1.11.0 (just installed, using devinstall), built from source with the option INCLUDE_OGR_FRMTS=YES. I can use GDAL and most OGR functions without problems. However, when I use the following code:
if (polygon->Contains(tmpPoint))
I receive the error message: ERROR 6: GEOS support not enabled
Does anybody know how to solve this issue?
I’m using:
#include "ogrsf_frmts.h"
and my function is declared:
void FindPixels(GDALDataset *image, OGRLayer *poLayer, OGRPolygon *polygon)
and part of my code is:
OGRPoint *tmpPoint = NULL;
OGRSpatialReference *spatialReference = NULL;
spatialReference = polygon->getSpatialReference();
tmpPoint = new OGRPoint();
tmpPoint->assignSpatialReference(spatialReference);
// loop begins:
tmpPoint->setX(imgTLX + (j * imgRes) + imgResHalf);
tmpPoint->setY(imgTLY - (i * imgRes) - imgResHalf);
if (polygon->Contains(tmpPoint))
Thanks in advance!
MB
Use GDALRasterizeLayers to burn the image of the polygon onto a raster. This way you will find all the pixels that fall inside the polygon, or not. The default is to burn a pixel only if its centre intersects a polygon.
If the source layer has multiple polygons, you may need to distinguish them, either by setting an attribute filter or by burning an attribute ID field (although this won't work if the polygons overlap).
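If rasterizing isn't an option, the pixel-centre test itself is easy to implement without GEOS. Here is an even-odd ray-casting sketch in plain Python (a stand-in for what Contains would decide, not the GDAL/OGR API):

```python
def centre_in_polygon(x, y, poly):
    # Even-odd ray casting: count how many polygon edges a ray going
    # right from (x, y) crosses; an odd count means the point is inside.
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (x2 - x1) * (y - y1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

Looping over the pixel centres (imgTLX + j * imgRes + imgResHalf, imgTLY - i * imgRes - imgResHalf) and calling this instead of Contains reproduces the centre-inside rule, though GDALRasterizeLayers will be much faster for large rasters.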