I am working on detecting a 2D barcode on a PCB board. The environment is Visual Studio 2012.
We have run into some problems and can't filter out the 2D barcode image successfully.
Loading the figure: the original image size is 1600x1200.
After loading the figure, we run a series of processing steps:
1. Find the threshold value with an auto-threshold method.
2. Binarize the image with that threshold.
3. Apply morphological opening to clean up the image.
Opening:
dst = open(src, element) = dilate(erode(src, element))
4. Filter out all rectangles except the squares.
Then we get a collection of squares. As the following image shows, after steps 1-4 we can find the squares on the image.
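For reference, a minimal sketch of steps 1-4 in OpenCV C++ (Otsu's method stands in for the auto-threshold step, and the area and side-ratio limits in the square test are illustrative values, not our exact ones):

#include <opencv2/imgproc/imgproc.hpp>
#include <vector>
using namespace cv;

std::vector<std::vector<Point> > findSquares(const Mat& gray)
{
    // Steps 1-2: auto-threshold (Otsu) and binarization in one call
    Mat bw;
    threshold(gray, bw, 0, 255, THRESH_BINARY | THRESH_OTSU);

    // Step 3: opening = dilate(erode(src, element))
    Mat element = getStructuringElement(MORPH_RECT, Size(3, 3));
    morphologyEx(bw, bw, MORPH_OPEN, element);

    // Step 4: keep only contours whose bounding box is roughly square
    std::vector<std::vector<Point> > contours;
    std::vector<std::vector<Point> > squares;
    findContours(bw.clone(), contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
    for (size_t i = 0; i < contours.size(); i++)
    {
        RotatedRect box = minAreaRect(Mat(contours[i]));
        if (box.size.height > 0 && box.size.area() > 400)
        {
            float ratio = box.size.width / box.size.height;
            if (ratio > 0.9f && ratio < 1.1f)
                squares.push_back(contours[i]);
        }
    }
    return squares;
}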
5. Compare each square against a similar Data Matrix template by histogram analysis (the pieces are put together in the sketch after 5.3).
5.1 Calculate the histogram:
void calcHist( const Mat* images, int nimages,
const int* channels, InputArray mask,
OutputArray hist, int dims, const int* histSize,
const float** ranges, bool uniform=true, bool accumulate=false );
5.2 Normalize the value range of an array
void normalize( InputArray src, OutputArray dst, double alpha=1, double beta=0,
int norm_type=NORM_L2, int dtype=-1, InputArray mask=noArray());
5.3 Compare two histograms with correlation.
double compareHist( InputArray H1, InputArray H2, int method );  // we use method = CV_COMP_CORREL
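Put together, step 5 looks roughly like this (a sketch; 8-bit grayscale input, 64 bins, and min-max normalization to [0, 1] are assumptions):

double compareToTemplate(const Mat& candidate, const Mat& templ)
{
    // 5.1: one-dimensional grayscale histograms, 64 bins over [0, 256)
    int histSize = 64;
    float range[] = { 0, 256 };
    const float* ranges[] = { range };
    int channels[] = { 0 };
    Mat histA, histB;
    calcHist(&candidate, 1, channels, Mat(), histA, 1, &histSize, ranges);
    calcHist(&templ, 1, channels, Mat(), histB, 1, &histSize, ranges);

    // 5.2: bring both histograms into a common value range
    normalize(histA, histA, 0, 1, NORM_MINMAX, -1, Mat());
    normalize(histB, histB, 0, 1, NORM_MINMAX, -1, Mat());

    // 5.3: correlation; 1.0 means identical, values near 0 mean unrelated
    return compareHist(histA, histB, CV_COMP_CORREL);
}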
6. After this processing we still can't pick the correct square out of the collection.
6.1 We adjusted the histogram bins from 256 down to 64/32, but the results are not robust; the correlation values are very low, even below 0.5.
6.2 We also tried EMD (Earth Mover's Distance) to estimate the similarity of two squares, but it does not solve the problem either (a sketch of how we call it follows below).
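For reference, cv::EMD works on signatures rather than raw histograms; each signature row holds a weight followed by the bin coordinate. A minimal sketch of how 6.2 can be wired up, assuming 1D grayscale histograms of type CV_32F:

// Convert a 1D histogram (bins x 1, CV_32F) into an EMD signature:
// each row is [weight, bin coordinate].
Mat histToSignature(const Mat& hist)
{
    Mat sig(hist.rows, 2, CV_32F);
    for (int i = 0; i < hist.rows; i++)
    {
        sig.at<float>(i, 0) = hist.at<float>(i); // weight of bin i
        sig.at<float>(i, 1) = (float)i;          // coordinate of bin i
    }
    return sig;
}

// Lower EMD means more similar distributions (0 = identical).
float emdDistance(const Mat& histA, const Mat& histB)
{
    Mat sigA = histToSignature(histA);
    Mat sigB = histToSignature(histB);
    return EMD(sigA, sigB, CV_DIST_L2);
}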
[[Question]]:
Could you share some suggestions to improve our detection method?
Why not use libraries?
the OpenCV datamatrix module
zxing-cpp
libdmtx
Otherwise, you can study the code in these libs and try to optimise your own code.
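For example, libdmtx can locate and decode a Data Matrix straight from raw pixels. A minimal sketch against its C API (DmtxPack8bppK assumes an 8-bit grayscale buffer; adjust the packing constant to your layout, and add error handling):

#include <dmtx.h>
#include <string>

// 'pixels' points to an 8-bit grayscale buffer of width*height bytes.
// Returns true and fills 'decoded' if a Data Matrix was found.
bool decodeDataMatrix(unsigned char* pixels, int width, int height,
                      std::string& decoded)
{
    DmtxImage* img = dmtxImageCreate(pixels, width, height, DmtxPack8bppK);
    DmtxDecode* dec = dmtxDecodeCreate(img, 1);
    DmtxRegion* reg = dmtxRegionFindNext(dec, NULL); // search for one barcode region
    bool found = false;
    if (reg != NULL)
    {
        DmtxMessage* msg = dmtxDecodeMatrixRegion(dec, reg, DmtxUndefined);
        if (msg != NULL)
        {
            decoded.assign((const char*)msg->output, msg->outputIdx);
            dmtxMessageDestroy(&msg);
            found = true;
        }
        dmtxRegionDestroy(&reg);
    }
    dmtxDecodeDestroy(&dec);
    dmtxImageDestroy(&img);
    return found;
}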
Related
I am doing Inverse Perspective Mapping using OpenCV C++. I am following this code to get the desired result; please have a look at the result.
I am using the OpenCV C++ remap function. In addition to the current result, I need to know how to project a pixel from the source image to the destination image, i.e. if I click on the pixel (320, 140), how would I get the corresponding pixel, i.e. (0, 0), in the destination picture?
void remap(InputArray src, OutputArray dst, InputArray map1, InputArray map2, int interpolation, int borderMode=BORDER_CONSTANT, const Scalar& borderValue=Scalar())
I have calculated the arguments map1 and map2. I guess I have to use them, but I don't know how.
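One thing worth knowing here: map1/map2 store, for every destination pixel, the source pixel it is sampled from, so they encode the dst-to-src direction. To go the other way (a clicked source pixel to its destination pixel), you can apply the forward homography to the point; a sketch assuming H is the 3x3 src-to-dst homography the maps were built from (if yours goes the other way, use H.inv()):

// Map a single source-image point into the IPM (destination) image,
// assuming H is the 3x3 forward homography (src -> dst).
Point2f projectPoint(const Point2f& srcPt, const Mat& H)
{
    std::vector<Point2f> src(1, srcPt), dst(1);
    perspectiveTransform(src, dst, H); // applies H with the perspective division
    return dst[0];
}

// e.g. projectPoint(Point2f(320, 140), H) should give the destination pixel.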
If I want to crop an image centered at (x, y) with window size ws using
void cvWarpAffine(const CvArr* src, CvArr* dst, const CvMat* map_matrix,
int flags=CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS, CvScalar fillval=cvScalarAll(0) )
in OpenCV, how should I set the parameters of the transformation matrix? I am confused about how to set the default parameters for the transformations other than the crop.
P.S. In my situation I also want to crop windows centered at points on the image edge, which requires padding to fill the fixed window, so using cv::Rect would be more complicated because of the edge handling.
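Not an authoritative answer, but one way to set the matrix, sketched with the C++ warpAffine (the same 2x3 matrix works with cvWarpAffine): use a pure translation and let the border mode supply the padding when the window sticks out past the edge:

// Crop a ws x ws window centered at (x, y); pixels outside the source
// are filled with black, so edge-centered crops get padded automatically.
Mat cropCentered(const Mat& src, float x, float y, int ws)
{
    // Pure translation: identity rotation/scale, shifted so that the
    // source point (x, y) lands at the center (ws/2, ws/2) of the output.
    Mat M = (Mat_<double>(2, 3) << 1, 0, ws / 2.0 - x,
                                   0, 1, ws / 2.0 - y);
    Mat dst;
    warpAffine(src, dst, M, Size(ws, ws), INTER_LINEAR,
               BORDER_CONSTANT, Scalar::all(0));
    return dst;
}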
I'm working on code that computes dense SIFT features from a set of images, based on SIFT flow: http://people.csail.mit.edu/celiu/SIFTflow/
I'd like to try building a FLANN index on these images by comparing the "energy" between each image in SIFT flow representation.
I have the code to compute the energy from here: http://richardt.name/publications/video-deanaglyph/
Is there a way to create my own distance function for the indexing?
RELATED NOTE:
I was finally able to get an alternate (but not custom) distance function working with flann::Index. The trick is you need to use flann::GenericIndex like so:
flann::GenericIndex<cvflann::ChiSquareDistance<int>> flannIndex(descriptors, cvflann::KDTreeIndexParams());
But you need to give it CV_32S descriptors.
And if you use knnSearch with this alternate distance function, you have to provide a CV_32S results Mat and a CV_32F distances Mat.
Here's my full code in case it's helpful (not a lot of documentation out there):
#include <opencv2/flann/flann.hpp> // FLANN header (OpenCV 2.4 layout)

Mat samples;
loadDescriptors(samples); // loading descriptors from .yml file
samples *= 100000; // scaling up my descriptors to be int
samples.convertTo(samples, CV_32S); // convert float to int
// create flann index
flann::GenericIndex<cvflann::ChiSquareDistance<int>> flannIndex(samples, cvflann::KDTreeIndexParams());
// NOTE lack of distance type in constructor parameters
// (unlike flann::index)
// now try knnSearch
int k=10; // find 10 nearest neighbors
Mat results(1,10,CV_32S), dists(1,10,CV_32F);
// (1,10) Mats for the output, types CV_32S and CV_32F
Mat responseHistogram = samples.row(60);
// choose a row from the descriptors Mat
// to find nearest neighbors for
flannIndex.knnSearch(responseHistogram, results, dists, k, cvflann::SearchParams(200));
cout << results << endl;
cout << dists << endl;
flannIndex.save(ofToDataPath("indexChi2.txt"));
Using Chi Squared actually seems to work better for me than L2 distance. My feature vectors are BoW histograms in this case.
I am using OpenCV to do some dense feature extraction. For example, the code
DenseFeatureDetector detector(12.f, 1, 0.1f, 10);
I don't really understand the parameters in the above constructor. What do they mean? Reading the OpenCV documentation about it does not help much either. In the documentation the arguments are:
DenseFeatureDetector( float initFeatureScale=1.f, int featureScaleLevels=1,
float featureScaleMul=0.1f,
int initXyStep=6, int initImgBound=0,
bool varyXyStepWithScale=true,
bool varyImgBoundWithScale=false );
What are they supposed to do? I.e., what is the meaning of scale, initFeatureScale, featureScaleLevels, etc.? How do you know the grid or grid spacing for the dense sampling?
I'm using OpenCV with the dense detector too, and I think I can help you with something. I'm not sure about what I'm going to say, but experience has taught me the following.
When I use the dense detector I pass it the grayscale image. The detector applies a series of threshold filters, where OpenCV uses a minimum gray value as the threshold to transform the image: pixels with a gray level above the threshold become black points and the others white points. This action is repeated in a loop where the threshold gets bigger and bigger. So the parameter initFeatureScale determines the first threshold used in this loop, featureScaleLevels indicates how many times the threshold is increased from one loop iteration to the next, and featureScaleMul is the multiplication factor used to calculate the next threshold.
Anyway, if you are looking for your optimal parameters for the dense detector to detect particular points, I can offer a program I made for that. It is released on GitHub. It is a program where you can test several detectors (the dense detector is one of them) and check how they behave when you change their parameters, thanks to a user interface that lets you change the detector parameters while the program is running. You will see how the detected points change. To try it, just click on the link and download the files. You will need almost all the files to run the program.
Apologies in advance; I'm predominantly using Python, so I'll avoid embarrassing myself by referring to C++.
DenseFeatureDetector populates a vector with KeyPoints to pass to compute feature descriptors. These keypoints have a point vector and their scale set. In the documentation, scale is the pixel radius of the keypoint.
KeyPoints are evenly spaced across the width and height of the image matrix passed to DenseFeatureDetector.
Now to the arguments:
initFeatureScale
Sets the initial KeyPoint feature radius in pixels (as far as I am aware this has no effect).
featureScaleLevels
Number of scales over which we wish to make keypoints.
featureScaleMultiplier
Scale adjustment for initFeatureScale over featureScaleLevels; this scale adjustment can also be applied to the border (initImgBound) and the step size (initXyStep). So when we set featureScaleLevels > 1, this multiplier is applied to successive scales to adjust the feature scale, step, and boundary around the image.
initXyStep
Moving column and row step in pixels. Self-explanatory, I hope.
initImgBound
Row/col bounding region to ignore around the image (pixels). So a 100x100 image with an initImgBound of 10 would create keypoints in the central 80x80 portion of the image.
varyXyStepWithScale
Boolean; if we have multiple featureScaleLevels, do we want to adjust the step size using featureScaleMultiplier?
varyImgBoundWithScale
Boolean; as varyXyStepWithScale, but applied to the border.
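Putting the arguments together, a minimal usage sketch with the values from the question (grayImage stands in for your input cv::Mat):

// initFeatureScale=12, featureScaleLevels=1, featureScaleMul=0.1f, initXyStep=10:
// a single scale level, so keypoints land on a 10-pixel grid, each with size 12.
DenseFeatureDetector detector(12.f, 1, 0.1f, 10);
std::vector<KeyPoint> keypoints;
detector.detect(grayImage, keypoints);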
Here is the DenseFeatureDetector source code from detectors.cpp in the OpenCV 2.4.3 source, which will probably explain better than my words:
DenseFeatureDetector::DenseFeatureDetector( float _initFeatureScale, int _featureScaleLevels,
                                            float _featureScaleMul, int _initXyStep,
                                            int _initImgBound, bool _varyXyStepWithScale,
                                            bool _varyImgBoundWithScale ) :
    initFeatureScale(_initFeatureScale), featureScaleLevels(_featureScaleLevels),
    featureScaleMul(_featureScaleMul), initXyStep(_initXyStep), initImgBound(_initImgBound),
    varyXyStepWithScale(_varyXyStepWithScale), varyImgBoundWithScale(_varyImgBoundWithScale)
{}

void DenseFeatureDetector::detectImpl( const Mat& image, vector<KeyPoint>& keypoints, const Mat& mask ) const
{
    float curScale = static_cast<float>(initFeatureScale);
    int curStep = initXyStep;
    int curBound = initImgBound;
    for( int curLevel = 0; curLevel < featureScaleLevels; curLevel++ )
    {
        for( int x = curBound; x < image.cols - curBound; x += curStep )
        {
            for( int y = curBound; y < image.rows - curBound; y += curStep )
            {
                keypoints.push_back( KeyPoint(static_cast<float>(x), static_cast<float>(y), curScale) );
            }
        }
        curScale = static_cast<float>(curScale * featureScaleMul);
        if( varyXyStepWithScale ) curStep = static_cast<int>( curStep * featureScaleMul + 0.5f );
        if( varyImgBoundWithScale ) curBound = static_cast<int>( curBound * featureScaleMul + 0.5f );
    }
    KeyPointsFilter::runByPixelsMask( keypoints, mask );
}
You might expect a call to compute to calculate additional KeyPoint characteristics (e.g. angle) using the relevant keypoint detection algorithm, based on the KeyPoints generated by DenseFeatureDetector. Unfortunately this isn't the case for SIFT under Python; I've not looked at the other feature detectors, nor at the behaviour in C++.
Also note that DenseFeatureDetector is not in OpenCV 3.2 (I'm unsure at which release it was removed).
I'm working in OpenCV C++ on filtering image colors. I want to filter the image using my own matrix. See this code:
string img = "c:/Test/tes.jpg";
Mat im = imread(img);
Then I want to filter/multiply it with my matrix (this matrix can be replaced with another 3x3 matrix):
Mat filter = (Mat_<double>(3, 3) << 17.8824,   43.5161,   4.11935,
                                    3.45565,   27.1554,   3.86714,
                                    0.0299566, 0.184309,  1.46709);
How do I multiply the img matrix by my own matrix? I still don't understand how to multiply a 3-channel (RGB) matrix by another (single-channel) matrix so that the result is an image with new colors.
You should take a look at the OpenCV documentation. You could use this function:
filter2D(InputArray src, OutputArray dst, int ddepth, InputArray kernel, Point anchor=Point(-1,-1), double delta=0, int borderType=BORDER_DEFAULT )
which would give you something like this in your code:
Mat output;
filter2D(im, output, -1, filter);
About your question on the 3-channel matrix: it is specified in the documentation:
kernel – convolution kernel (or rather a correlation kernel), a single-channel floating point matrix; if you want to apply different kernels to different channels, split the image into separate color planes using split() and process them individually.
So by default your "filter" matrix will be applied equally to each color plane.
EDIT: You can find a fully functional example on the OpenCV site: http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/filter_2d/filter_2d.html
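One caveat: filter2D performs a spatial convolution, applying the same single-channel kernel to each color plane independently. If the intent is instead to mix the three channels at every pixel (multiply each pixel's 3-element color vector by the 3x3 matrix), cv::transform does exactly that; a minimal sketch reusing im and filter from the question (note OpenCV stores channels as BGR):

// dst(i,j) = filter * src(i,j): every pixel's 3-element channel vector
// is multiplied by the 3x3 matrix.
Mat imFloat, recolored;
im.convertTo(imFloat, CV_32FC3); // work in float to avoid 8-bit saturation
transform(imFloat, recolored, filter);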