Color sequence recognition using opencv - c++

What would be a possible machine vision solution for reliable color recognition using OpenCV?
I must check if the color sequence of the connector below is correct.
Is it better to use a color recognition technique or a pattern matching technique?
Is there any better approach to solve this?
The image below shows a connector with colored wires; how can I check that the wires are in the correct sequence?

I suggest doing the following steps (with simple code illustrations):
convert to the Lab color space;
https://en.wikipedia.org/wiki/Lab_color_space/
cv::cvtColor(img, img, cv::COLOR_BGR2Lab);
take a subimage which contains only the wires
img = img(cv::Rect(x,y,width,height)); // ROI containing only the wires
compute the mean value of each column to get a 1D vector of values
std::vector<cv::Vec3f> aggregatedVector;
for(int i = 0; i < img.cols; i++)
{
    cv::Vec3f sum = cv::Vec3f(0,0,0);
    for(int j = 0; j < img.rows; j++)
    {
        sum[0] += img.at<cv::Vec3b>(j,i)[0];
        sum[1] += img.at<cv::Vec3b>(j,i)[1];
        sum[2] += img.at<cv::Vec3b>(j,i)[2];
    }
    sum = sum / img.rows;
    aggregatedVector.push_back(sum);
}
extract the uniform fields (for example, using the gradient) and get a vector with 20
values
std::vector<cv::Vec3f> fields;
cv::Vec3f mean(0,0,0);
int counter = 0;
for(int i = 0; i < (int)aggregatedVector.size(); i++)
{
    mean += aggregatedVector[i];
    counter++;
    if(i + 1 == (int)aggregatedVector.size() ||
       cv::norm(aggregatedVector[i+1] - aggregatedVector[i]) > /*thresh here*/)
    {
        fields.push_back(mean / (double)counter);
        mean = cv::Vec3f(0,0,0);
        counter = 0;
    }
}
compute the color distance between each calculated field and the corresponding reference value and accumulate the error
double totalError = 0;
for(int i = 0; i < (int)fields.size(); i++)
{
    totalError += cv::norm(reference[i] - fields[i]);
}
Then you can make a decision based on the total error value. Have fun!
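Putting the steps together, a minimal end-to-end sketch might look like the following; the file name, the ROI, the jump threshold and the reference Lab colors are placeholders you would replace with values for your own setup:
// Minimal sketch: check the wire color order against a reference sequence.
// All concrete values (file name, ROI, threshold, reference) are placeholders.
#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>

int main()
{
    cv::Mat img = cv::imread("connector.png");           // hypothetical input image
    if (img.empty()) return 1;

    cv::cvtColor(img, img, cv::COLOR_BGR2Lab);            // Lab makes color distances more perceptual
    cv::Mat roi = img(cv::Rect(100, 50, 400, 40));        // placeholder ROI containing only the wires

    // 1) Mean Lab value of each column.
    std::vector<cv::Vec3f> columns;
    for (int c = 0; c < roi.cols; ++c)
    {
        cv::Vec3f sum(0, 0, 0);
        for (int r = 0; r < roi.rows; ++r)
            sum += cv::Vec3f(roi.at<cv::Vec3b>(r, c));
        columns.push_back(sum * (1.0f / roi.rows));
    }

    // 2) Split the 1D profile into uniform fields wherever the color jumps.
    std::vector<cv::Vec3f> fields;
    cv::Vec3f mean(0, 0, 0);
    int count = 0;
    const double jumpThresh = 15.0;                       // placeholder threshold
    for (size_t i = 0; i < columns.size(); ++i)
    {
        mean += columns[i];
        ++count;
        bool lastColumn = (i + 1 == columns.size());
        if (lastColumn || cv::norm(columns[i + 1] - columns[i]) > jumpThresh)
        {
            fields.push_back(mean * (1.0f / count));
            mean = cv::Vec3f(0, 0, 0);
            count = 0;
        }
    }

    // 3) Compare against a reference sequence (placeholder Lab values, in wire order).
    std::vector<cv::Vec3f> reference = { /* expected Lab value per wire */ };
    if (fields.size() != reference.size()) { std::cout << "wrong wire count\n"; return 1; }

    double totalError = 0;
    for (size_t i = 0; i < fields.size(); ++i)
        totalError += cv::norm(reference[i] - fields[i]);
    std::cout << "total color error: " << totalError << "\n";
    return 0;
}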

Related

OpenCV 3.1 Stitch images in order they were taken

I am building an Android app to create panoramas. The user captures a set of images and those images
are sent to my native stitch function that was based on https://github.com/opencv/opencv/blob/master/samples/cpp/stitching_detailed.cpp.
Since the images are in order, I would like to match each image only to the next image in the vector.
I found an Intel article that was doing just that with the following code:
vector<MatchesInfo> pairwise_matches;
BestOf2NearestMatcher matcher(try_gpu, match_conf);
Mat matchMask(features.size(),features.size(),CV_8U,Scalar(0));
for (int i = 0; i < num_images -1; ++i)
{
matchMask.at<char>(i,i+1) =1;
}
matcher(features, pairwise_matches,matchMask);
matcher.collectGarbage();
The problem is, this won't compile. I'm guessing it's because I'm using OpenCV 3.1.
Then I found somewhere that this code would do the same:
int range_width = 2;
BestOf2NearestRangeMatcher matcher(range_width, try_cuda, match_conf);
matcher(features, pairwise_matches);
matcher.collectGarbage();
And for most of my samples this works fine. However, sometimes, especially when I'm stitching
a large set of images (around 15), some objects appear on top of each other and in places they shouldn't.
I've also noticed that the "beginning" (left side) of the end result is not the first image in the vector either,
which is strange.
I am using "orb" as features_type and "ray" as ba_cost_func. It seems like I can't use SURF on OpenCV 3.1.
The rest of my initial parameters look like this:
bool try_cuda = false;
double compose_megapix = -1; //keeps resolution for final panorama
float match_conf = 0.3f; //0.3 default for orb
string ba_refine_mask = "xxxxx";
bool do_wave_correct = true;
WaveCorrectKind wave_correct = detail::WAVE_CORRECT_HORIZ;
int blend_type = Blender::MULTI_BAND;
float blend_strength = 5;
double work_megapix = 0.6;
double seam_megapix = 0.08;
float conf_thresh = 0.5f;
int expos_comp_type = ExposureCompensator::GAIN_BLOCKS;
string seam_find_type = "dp_colorgrad";
string warp_type = "spherical";
So could anyone enlighten me as to why this is not working and how I should match my features? Any help or direction would be much appreciated!
TL;DR: I want to stitch images in the order they were taken, but the code above is not working for me. How can I do that?
So I found out that the issue here is not the order in which the images are stitched, but rather the rotation that is estimated for the camera parameters in the Homography Based Estimator and the Bundle Ray Adjuster.
Those rotation angles are estimated assuming a camera that only rotates about itself, while my use case involves a user rotating the camera (which means there will be some translation too).
Because of that (I guess) the horizontal angles (around the Y axis) are highly overestimated, which means the algorithm considers the set of images to cover >= 360 degrees, resulting in overlapped areas that shouldn't overlap.
I still haven't found a solution for that problem, though.
matcher() takes a UMat as the mask instead of a Mat object, so try the following code:
vector<MatchesInfo> pairwise_matches;
BestOf2NearestMatcher matcher(try_gpu, match_conf);
Mat matchMask(features.size(),features.size(),CV_8U,Scalar(0));
for (int i = 0; i < num_images -1; ++i)
{
matchMask.at<char>(i,i+1) =1;
}
UMat umask = matchMask.getUMat(ACCESS_READ);
matcher(features, pairwise_matches, umask);
matcher.collectGarbage();
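For reference, the mask is an N x N matrix in which a non-zero entry at (i, j) tells the matcher that image pair (i, j) should be matched, so setting only the (i, i+1) entries restricts matching to consecutive images.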

Logistic regression for fault detection in an image

Basically, I want to detect a fault in an image using logistic regression. I'm hoping to get some feedback on my approach, which is as follows:
For training:
Take a small section of the image marked "bad" and "good"
Greyscale them, then break them up into a series of 5*5 pixel segments
Calculate the histogram of pixel intensities for each of these segments (a rough sketch of this step is shown after the list)
Pass the histograms along with the labels to the Logistic Regression class for training
Break the whole image into 5*5 segments and predict "good"/"bad" for each segment.
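For context, here is a rough sketch of the described feature extraction (not the poster's actual code; the function name and the full 256-bin histogram are my own choices, whereas the poster mentions 255-element arrays):
// Rough sketch of the described feature extraction: grayscale the image,
// cut it into 5x5 patches, and build a normalized intensity histogram per
// patch to use as the feature vector.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<std::vector<float>> ExtractPatchHistograms(const cv::Mat& bgr, int patch = 5)
{
    cv::Mat gray;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);

    std::vector<std::vector<float>> features;
    for (int y = 0; y + patch <= gray.rows; y += patch)
    {
        for (int x = 0; x + patch <= gray.cols; x += patch)
        {
            cv::Mat cell = gray(cv::Rect(x, y, patch, patch));

            std::vector<float> hist(256, 0.0f);        // zero-initialized (the answer below notes this zeroing was the original bug)
            for (int r = 0; r < cell.rows; ++r)
                for (int c = 0; c < cell.cols; ++c)
                    hist[cell.at<uchar>(r, c)] += 1.0f;

            for (float& bin : hist)                    // normalize so patches are comparable
                bin /= static_cast<float>(patch * patch);

            features.push_back(hist);
        }
    }
    return features;
}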
Using the sigmoid function, the logistic regression hypothesis is:
1 / (1 + e^(-xθ))
where x is the input vector and theta (θ) is the weight vector. I use gradient descent to train the model. My code for this is:
void LogisticRegression::Train(float **trainingSet,float *labels, int m)
{
float tempThetaValues[m_NumberOfWeights];
for (int iteration = 0; iteration < 10000; ++iteration)
{
// Reset the temp values for theta.
memset(tempThetaValues,0,m_NumberOfWeights*sizeof(float));
float error = 0.0f;
// For each training set in the example
for (int trainingExample = 0; trainingExample < m; ++trainingExample)
{
float * x = trainingSet[trainingExample];
float y = labels[trainingExample];
// Partial derivative of the cost function.
float h = Hypothesis(x) - y;
for (int i =0; i < m_NumberOfWeights; ++i)
{
tempThetaValues[i] += h*x[i];
}
float cost = h-y; //Actual J(theta), Cost(x,y), keeps giving NaN use MSE for now
error += cost*cost;
}
// Update the weights using batch gradient desent.
for (int theta = 0; theta < m_NumberOfWeights; ++theta)
{
m_pWeights[theta] = m_pWeights[theta] - 0.1f*tempThetaValues[theta];
}
printf("Cost on iteration[%d] = %f\n",iteration,error);
}
}
Where sigmoid and the hypothesis are calculated using:
float LogisticRegression::Sigmoid(float z) const
{
return 1.0f/(1.0f+exp(-z));
}
float LogisticRegression::Hypothesis(float *x) const
{
float z = 0.0f;
for (int index = 0; index < m_NumberOfWeights; ++index)
{
z += m_pWeights[index]*x[index];
}
return Sigmoid(z);
}
And the final prediction is given by:
int LogisticRegression::Predict(float *x)
{
return Hypothesis(x) > 0.5f;
}
As we are using a histogram of intensities, the input and weight arrays are 255 elements. My hope is to use it on something like a picture of an apple with a bruise and use it to identify the bruised parts. The (normalized) histograms for the whole bruised and good apple training sets look something like this:
For the "good" sections of the apple (y=0):
For the "bad" sections of the apple (y=1):
I'm not 100% convinced that using the intensities alone will produce the results I want, but even so, using it on a clearly separable data set isn't working either. To test it I passed it a labeled, completely white image and a completely black image. I then ran it on the small image below:
Even on this image it fails to identify any segments as being black.
Using MSE I see that the cost converges downwards to a point where it remains: for the black-and-white test it starts at about 250 and settles at 100. The apple chunks start at about 4000 and settle at 1600.
What I can't tell is where the issues are.
Is the approach sound but the implementation broken? Is logistic regression the wrong algorithm for this task? Is gradient descent not robust enough?
I forgot to answer this... Basically, the problem was in my histograms, which weren't being memset to 0 when they were generated. As to the overall question of whether logistic regression on greyscale images was a good solution, the answer is no: greyscale just didn't provide enough information for good classification. Using all colour channels was a bit better, but I think the complexity of the problem I was trying to solve (bruises in apples) was a bit much for simple logistic regression on its own. You can see the results on my blog here.
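In concrete terms, the fix described above amounts to zero-initializing each histogram buffer before counts are accumulated into it; a minimal illustration (assuming `segment` is one of the 5*5 grayscale patches, not the poster's actual code):
// Minimal illustration of the fix described above: make sure every histogram
// starts at zero before counts are accumulated (segment is assumed to be a
// grayscale cv::Mat patch).
float histogram[256] = {};                       // zero-initialize all bins (the missing memset)
for (int r = 0; r < segment.rows; ++r)
    for (int c = 0; c < segment.cols; ++c)
        histogram[segment.at<uchar>(r, c)] += 1.0f;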

OpenCV video stabilization

I'm trying to implement video stabilization using the OpenCV videostab module. I need to do it on a stream, so I'm trying to get the motion between two frames. After studying the documentation, I decided to do it this way:
estimator = new cv::videostab::MotionEstimatorRansacL2(cv::videostab::MM_TRANSLATION);
keypointEstimator = new cv::videostab::KeypointBasedMotionEstimator(estimator);
bool res;
auto motion = keypointEstimator->estimate(this->firstFrame, thisFrame, &res);
std::vector<float> matrix(motion.data, motion.data + (motion.rows*motion.cols));
where firstFrame and thisFrame are fully initialized frames. The problem is that the estimate method always returns a matrix like this:
In this matrix only the last value (matrix[8]) changes from frame to frame. Am I using the videostab objects correctly, and how can I apply this matrix to a frame to get the result?
I am new to OpenCV but here is how I have solved this issue.
The problem lies in the line:
std::vector<float> matrix(motion.data, motion.data + (motion.rows*motion.cols));
For me, the motion matrix is of type 64-bit double (check yours from here) and copying it into std::vector<float> matrix of type 32-bit float messes up the values.
To solve this issue, try replacing the above line with code that reads the elements using their actual type:
std::vector<float> matrix;
for (auto row = 0; row < motion.rows; row++) {
    for (auto col = 0; col < motion.cols; col++) {
        // read as double (the matrix is CV_64F) and narrow to float afterwards
        matrix.push_back(static_cast<float>(motion.at<double>(row, col)));
    }
}
I have tested it by running the estimator on a duplicate set of points and it gives the expected results, with most entries close to 0.0 and matrix[0], matrix[4] and matrix[8] being 1.0 (using the author's code with this setting gave the same erroneous values as the author's picture shows).
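As for applying the matrix to a frame, one possible way (a sketch, not part of the answer above) is to warp the current frame by the inverse of the estimated 3x3 motion so it lines up with the previous frame; this assumes `motion` maps firstFrame to thisFrame:
// Sketch: apply the inverse of the estimated 3x3 motion to the current frame.
cv::Mat motion64;
motion.convertTo(motion64, CV_64F);          // make the element type explicit
cv::Mat inverseMotion = motion64.inv();      // undo the estimated motion

cv::Mat stabilized;
cv::warpPerspective(thisFrame, stabilized, inverseMotion, thisFrame.size());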

Uniform histogram implementation in c++

I am using the code of bytefish in order to calculate Local Binary Pattern (LBP) spatial uniform histograms for an image. I am using the spatial_histogram function, which calculates the histogram of local patches of the image. Every calculated patch histogram has size 256, so the final Mat hist has size 1x(n*256). What I am trying to understand is how I can convert that histogram implementation to a uniform histogram implementation. The implemented histogram code is the following:
template <typename _Tp>
void lbp::histogram_(const Mat& src, Mat& hist, int numPatterns) {
    hist = Mat::zeros(1, numPatterns, CV_32SC1);
    for(int i = 0; i < src.rows; i++) {
        for(int j = 0; j < src.cols; j++) {
            int bin = src.at<_Tp>(i,j);
            hist.at<int>(0,bin) += 1;
        }
    }
}
The uniform process is based on the following paper (on local binary patterns): here.
A local binary pattern is called uniform if the binary pattern contains at most two bitwise transitions from 0 to 1 or vice versa when the bit pattern is considered circular.
[edit2] color reduction
That is easy: it is just recoloring via a table uniform[256]; it has nothing to do with uniform histograms!
create a translation (recoloring) table for each possible color;
for 8-bit grayscale that is 256 colors, for example:
BYTE table[256] = {
0,1,2,3,4,58,5,6,7,58,58,58,8,58,9,10,11,58,58,58,58,58,58,58,12,58,58,58,13,58,
14,15,16,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,17,58,58,58,58,58,58,58,18,
58,58,58,19,58,20,21,22,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,
58,58,58,58,58,58,58,58,58,58,58,58,23,58,58,58,58,58,58,58,58,58,58,58,58,58,
58,58,24,58,58,58,58,58,58,58,25,58,58,58,26,58,27,28,29,30,58,31,58,58,58,32,58,
58,58,58,58,58,58,33,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,34,58,58,58,58,
58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,
58,35,36,37,58,38,58,58,58,39,58,58,58,58,58,58,58,40,58,58,58,58,58,58,58,58,58,
58,58,58,58,58,58,41,42,43,58,44,58,58,58,45,58,58,58,58,58,58,58,46,47,48,58,49,
58,58,58,50,51,52,58,53,54,55,56,57 };
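For completeness, a table like the one above can also be generated directly from the definition quoted in the question (a pattern is uniform if it has at most two circular 0/1 transitions); a sketch:
// Sketch: build the 256-entry uniform-LBP recoloring table from the definition.
// Uniform patterns (at most two circular 0<->1 transitions) get labels 0..57,
// every non-uniform pattern maps to the shared bin 58.
#include <array>

std::array<unsigned char, 256> makeUniformTable()
{
    std::array<unsigned char, 256> table{};
    unsigned char nextLabel = 0;
    for (int p = 0; p < 256; ++p)
    {
        int transitions = 0;
        for (int bit = 0; bit < 8; ++bit)
        {
            int a = (p >> bit) & 1;
            int b = (p >> ((bit + 1) % 8)) & 1;   // circular neighbor
            if (a != b) ++transitions;
        }
        table[p] = (transitions <= 2) ? nextLabel++ : 58;
    }
    return table;
}
Assigning labels in ascending pattern order like this should reproduce the hand-written table above, since there are exactly 58 uniform 8-bit patterns.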
You can also compute it programmatically, like table[i]=(58*i)/255; for a linear distribution, but I suggest something more like this recoloring based on the histogram, for example:
//hist[256] - already computed classic histogram
//table[59] - wanted recolor table
void compute_table(int *table,int *hist)
{
int i,c,threshold=1;
for (c=-1,i=0;i<256;i++)
if (hist[i]>threshold) { c++; table[i]=c; }
else table[i]=58;
}
set the threshold by area size or color count or whatever ...
recolor the colors
color_59 = table[color_256]; either recolor the source image or just translate the color value before it is used in the histogram computation
That is all.
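Applied to the histogram code from the question, the recoloring is then a one-line change (a sketch, assuming `src` holds 8-bit LBP codes and `table` is the 59-label table above):
// Sketch: 59-bin uniform-LBP histogram, assuming `table` maps 0..255 -> 0..58
// and `src` is a CV_8UC1 image of LBP codes.
hist = Mat::zeros(1, 59, CV_32SC1);
for (int i = 0; i < src.rows; i++)
    for (int j = 0; j < src.cols; j++)
        hist.at<int>(0, table[src.at<uchar>(i, j)]) += 1;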
[edit1] LBP
I do not think it is a good idea to compute a histogram for LBP at all
I would compute the min and max color for the sub-image region you work with
then convert colors to binary
if (color>=(max+min)/2) color=1; else color=0;
now shift+or them to form the LBP vector
4x4 LBP example:
LBP =color[0][0];
LBP<<=1; LBP|=color[0][1];
LBP<<=1; LBP|=color[0][2];
...
LBP<<=1; LBP|=color[3][3];
you can do step #3 directly in step #2 (a loop form of these steps is sketched below)
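A loop form of steps 1-3 for a single 4x4 patch might look like this (a sketch; `patch` is assumed to be a 4x4 CV_8UC1 cv::Mat):
// Sketch: steps 1-3 for one 4x4 patch (assumes `patch` is CV_8UC1, 4x4).
double minC, maxC;
cv::minMaxLoc(patch, &minC, &maxC);                      // step 1: min and max color
const double mid = (minC + maxC) / 2.0;

unsigned int LBP = 0;
for (int r = 0; r < patch.rows; ++r)
    for (int c = 0; c < patch.cols; ++c)
    {
        int bit = (patch.at<uchar>(r, c) >= mid) ? 1 : 0;   // step 2: binarize
        LBP = (LBP << 1) | bit;                             // step 3: shift + or
    }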
[original answer] - now obsolete
A histogram is the probability/occurrence/count of each distinct color (shade)
a uniform histogram means that each color has almost the same count/area in the whole image
by bins I assume you mean distinct colors, not the sub-images
To combine sub-histograms
just add them together, or use a single hist array: init it once and then just accumulate into it, so you have something like:
??? hist=Mat::zeros(1, numPatterns, CV_32SC1);
template <typename _Tp>
void lbp::histogram_(const Mat& src, Mat& hist, int numPatterns, bool init) {
    if (init) hist = Mat::zeros(1, numPatterns, CV_32SC1);
    for(int i = 0; i < src.rows; i++) {
        for(int j = 0; j < src.cols; j++) {
            int bin = src.at<_Tp>(i,j);
            hist.at<int>(0,bin) += 1;
        }
    }
}
Set init to true for the first patch call and false for all the rest. numPatterns is the max used color + 1, or the max possible color count (not the distinct color count).
If you want to save only the used colors
then you also need to remember the color: int hist[][2], hists=0; or use some dynamic list template for that. The hist computation will change (and will be much slower):
take the color
test if it is already present: hist[i][0]==color
if yes, increment its counter: hist[i][1]++;
if not, add a new color: hist[hists][0]=color; hist[hists][1]=1; hists++;
This will save space only if the used colors are fewer than half of the possible ones. To improve performance you can compute hist normally and convert it to this list afterwards in the same manner (instead of via the increment part, of course). A sketch of this idea using a standard container follows below.
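One simple "dynamic list" for this is std::map, which stores only the colors that actually occur (a sketch, not the only option):
// Sketch: sparse histogram that stores only colors actually present.
#include <map>
#include <opencv2/opencv.hpp>

std::map<int, int> sparseHistogram(const cv::Mat& src)   // src assumed CV_8UC1
{
    std::map<int, int> hist;                             // color -> count
    for (int i = 0; i < src.rows; ++i)
        for (int j = 0; j < src.cols; ++j)
            hist[src.at<uchar>(i, j)]++;                  // inserts 0 on first use, then increments
    return hist;
}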

what is the fastest way to run a method on all pixels in opencv (c++)

I have several tasks to do on each pixel in opencv. I am using a construct like this:
for(int row = 0; row < inputImage.rows; ++row)
{
    uchar* p = inputImage.ptr(row);
    for(int col = 0; col < inputImage.cols*3; col+=3)
    {
        int blue  = *(p+col);   // points to each pixel B,G,R value in turn, assuming a CV_8UC3 colour image
        int green = *(p+col+1);
        int red   = *(p+col+2);
        // process pixel
    }
}
This is working, but I am wondering if there is any faster way to do this? This solution doesn't use any SIMD or any parallel processing from OpenCV.
What is the best way to run a method over all pixels of an image in OpenCV?
If the Mat is continuous, i.e. the matrix elements are stored continuously without gaps at the end of each row, which can be checked using Mat::isContinuous(), you can treat them as one long row. Thus you can do something like this:
const uchar *ptr = inputImage.ptr<uchar>(0);
for (size_t i=0; i<inputImage.rows*inputImage.cols; ++i){
int blue = ptr[3*i];
int green = ptr[3*i+1];
int red = ptr[3*i+2];
// process pixel
}
As said in the documentation, this approach, while being very simple, can boost the performance of a simple element operation by 10-20 percent, especially if the image is rather small and the operation is quite simple.
PS: If you need more speed, you will need to make full use of the GPU to process pixels in parallel.
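If you want OpenCV itself to parallelize the per-pixel work on the CPU, cv::Mat::forEach (available since OpenCV 3.0) runs a callable over every element and may use multiple threads internally; a sketch for a CV_8UC3 image:
// Sketch: let OpenCV iterate (and potentially parallelize) over all pixels
// of a CV_8UC3 image using Mat::forEach.
#include <opencv2/opencv.hpp>

void processAllPixels(cv::Mat& inputImage)   // assumed CV_8UC3
{
    inputImage.forEach<cv::Vec3b>([](cv::Vec3b& pixel, const int* /*position*/)
    {
        int blue  = pixel[0];
        int green = pixel[1];
        int red   = pixel[2];
        // process pixel (write back through `pixel` if needed)
        (void)blue; (void)green; (void)red;
    });
}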