I'm trying to implement video stabilization using the OpenCV videostab module. I need to do it on a live stream, so I'm trying to estimate the motion between two frames. After reading the documentation, I decided to do it this way:
estimator = new cv::videostab::MotionEstimatorRansacL2(cv::videostab::MM_TRANSLATION);
keypointEstimator = new cv::videostab::KeypointBasedMotionEstimator(estimator);
bool res;
auto motion = keypointEstimator->estimate(this->firstFrame, thisFrame, &res);
std::vector<float> matrix(motion.data, motion.data + (motion.rows*motion.cols));
Here firstFrame and thisFrame are fully initialized frames. The problem is that the estimate method always returns a matrix like this:
In this matrix only the last value (matrix[8]) changes from frame to frame. Am I using the videostab objects correctly, and how can I apply this matrix to a frame to get the result?
I am new to OpenCV but here is how I have solved this issue.
The problem lies in the line:
std::vector<float> matrix(motion.data, motion.data + (motion.rows*motion.cols));
For me, the motion matrix is of type 64-bit double (check yours from here), and copying its raw data into a std::vector<float> of 32-bit floats messes up the values.
To solve this issue, try replacing the above line with:
std::vector<float> matrix;
for (auto row = 0; row < motion.rows; row++) {
    for (auto col = 0; col < motion.cols; col++) {
        // motion is CV_64F here, so read the elements as double and narrow explicitly
        matrix.push_back(static_cast<float>(motion.at<double>(row, col)));
    }
}
I have tested this by running the estimator on a duplicate set of points, and it gives the expected result: most entries close to 0.0 and matrix[0], matrix[4] and matrix[8] equal to 1.0 (using the author's code as posted, this setup gave the same erroneous values that the author's picture shows).
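Regarding the second part of the question (how to apply this matrix to a frame): as far as I can tell, the 3x3 matrix returned by estimate(firstFrame, thisFrame) maps firstFrame coordinates to thisFrame coordinates, so warping thisFrame with its inverse should align it back onto firstFrame. A minimal sketch of that idea (my assumption, not tested against the videostab internals):
cv::Mat stabilized;
// warp the current frame back into the first frame's coordinate system;
// motion is the 3x3 matrix returned by keypointEstimator->estimate(...)
cv::warpPerspective(thisFrame, stabilized, motion.inv(), thisFrame.size());
// for the pure-translation model (MM_TRANSLATION), cv::warpAffine with the
// top 2x3 part of the matrix would work just as well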
I am using TensorFlow for an image classification problem in C++.
I created the graph and tried using the example code here.
When I give an image (.jpeg) as the input to the main function (the string image variable in main), it works fine. But I have the pixel values of an image (luminance values only) in a 2D vector std::vector<std::vector<int>> vec2d. How can I give this vector as input for making a prediction?
I created a tensor as follows, but I cannot figure out how to fit it into the existing code.
tensorflow::Tensor input(tensorflow::DT_FLOAT,
                         tensorflow::TensorShape({32, 32}));
auto input_map = input.tensor<float, 2>();
for (int b = 0; b < 32; b++) {
    for (int c = 0; c < 32; c++) {
        input_map(b, c) = vec2d[b][c];
    }
}
Or is there a built-in way to pass pixel values in TensorFlow?
I do not want to create an image from the pixel values and re-read it. I already tried that and it works, but the file read/write operations take time, and time is critical in my system.
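What I was hoping for is something along these lines, i.e. give the tensor the 4-D shape the graph presumably expects ({batch, height, width, channels}) and feed it straight into Session::Run instead of the file-reading branch, but I am not sure this is correct (input_layer and output_layer are the node names from the example code):
tensorflow::Tensor input(tensorflow::DT_FLOAT,
                         tensorflow::TensorShape({1, 32, 32, 1}));
auto input_map = input.tensor<float, 4>();
for (int b = 0; b < 32; b++) {
    for (int c = 0; c < 32; c++) {
        input_map(0, b, c, 0) = static_cast<float>(vec2d[b][c]);
    }
}
std::vector<tensorflow::Tensor> outputs;
tensorflow::Status run_status =
    session->Run({{input_layer, input}}, {output_layer}, {}, &outputs);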
I'm trying to perform Bundle Adjustment (BA) on a sequence of stereo images (class Step) taken with the same camera.
Each Step has left & right images (rectified and synchronized), the generated depth map, keypoints + descriptors of the left image, and two 4x4 matrices: one transforming local (image plane) coordinates to global (3D world) coordinates, and its inverse (T_L2G and T_G2L respectively).
The steps are registered with respect to the 1st image.
I'm trying to run BA on the result to refine the transformations, using PBA (https://grail.cs.washington.edu/projects/mcba/).
Code for setting up the cameras:
for (int i = 0; i < steps.size(); i++)
{
    Step& step = steps[i];
    cv::Mat& T_G2L = step.T_G2L;
    cv::Mat R;
    cv::Mat t;
    T_G2L(cv::Rect(0, 0, 3, 3)).copyTo(R);
    T_G2L(cv::Rect(3, 0, 1, 3)).copyTo(t);
    CameraT camera;
    // Camera Parameters
    camera.SetFocalLength((double)m_focalLength); // Same camera, global focal length
    camera.SetTranslation((float*)t.data);
    camera.SetMatrixRotation((float*)R.data);
    if (i == 0)
    {
        camera.SetConstantCamera();
    }
    camera_data.push_back(camera);
}
Then I generate global keypoints by running feature detection and matching on all image pairs (currently using SURF).
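Roughly, the pairwise matching step looks like this (simplified; leftA and leftB stand for the left images of two steps, and it assumes OpenCV is built with the opencv_contrib xfeatures2d module):
cv::Ptr<cv::xfeatures2d::SURF> surf = cv::xfeatures2d::SURF::create();
std::vector<cv::KeyPoint> kpsA, kpsB;
cv::Mat descA, descB;
surf->detectAndCompute(leftA, cv::noArray(), kpsA, descA);
surf->detectAndCompute(leftB, cv::noArray(), kpsB, descB);

// keep only distinctive matches (ratio test) before linking the
// pairwise matches into global keypoints
cv::BFMatcher matcher(cv::NORM_L2);
std::vector<std::vector<cv::DMatch>> knn;
matcher.knnMatch(descA, descB, knn, 2);
std::vector<cv::DMatch> goodMatches;
for (const auto& m : knn)
    if (m.size() == 2 && m[0].distance < 0.7f * m[1].distance)
        goodMatches.push_back(m[0]);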
Then, Generating BA points data:
for (size_t i = 0; i < globalKps.size(); i++)
{
    cv::Point3d& globalPoint = globalKps[i].AbsolutePoint;
    cv::Point3f globalPointF((float)globalPoint.x, (float)globalPoint.y, (float)globalPoint.z);
    int num_obs = 0;
    std::vector<std::pair<int /*stepID*/, int /*KP_ID*/>>& localKps = globalKps[i].LocalKeypoints;
    if (localKps.size() >= 2)
    {
        Point3D pointData;
        pointData.SetPoint((float*)&globalPointF);
        // For this point, set all the measurements
        for (size_t j = 0; j < localKps.size(); j++)
        {
            int& stepID = localKps[j].first;
            int& kpID = localKps[j].second;
            int cameraID = stepsLUT[stepID];
            Step& step = steps[cameraID];
            cv::Point3d p3d = step.KeypointToLocal(kpID);
            Point2D measurement = Point2D(p3d.x, p3d.y);
            measurements.push_back(measurement);
            camidx.push_back(cameraID);
            ptidx.push_back((int)point_data.size());
        }
        point_data.push_back(pointData);
    }
}
Then, Running BA:
ParallelBA pba(ParallelBA::PBA_CPU_FLOAT);
pba.SetFixedIntrinsics(true); // Same camera with known intrinsics
pba.SetCameraData(camera_data.size(), &camera_data[0]); //set camera parameters
pba.SetPointData(point_data.size(), &point_data[0]); //set 3D point data
pba.SetProjection(measurements.size(), &measurements[0], &ptidx[0], &camidx[0]);//set the projections
pba.SetNextBundleMode(ParallelBA::BUNDLE_ONLY_MOTION);
pba.RunBundleAdjustment(); //run bundle adjustment; camera_data/point_data will be modified
Then comes the part where I'm facing the problems: extracting the data back from PBA:
for (int i = 1 /*First camera is stationary*/; i < camera_data.size(); i++)
{
    Step& step = steps[i];
    CameraT& camera = camera_data[i];
    int type = CV_32F;
    cv::Mat t(3, 1, type);
    cv::Mat R(3, 3, type);
    cv::Mat T_L2G = cv::Mat::eye(4, 4, type);
    cv::Mat T_G2L = cv::Mat::eye(4, 4, type);
    camera.GetTranslation((float*)t.data);
    camera.GetMatrixRotation((float*)R.data);
    t.copyTo(T_G2L(TranslationRect));
    R.copyTo(T_G2L(RotationRect));
    cv::invert(T_G2L, T_L2G);
    step.SetTransformation(T_L2G); // Step expects local 2 global transformation
}
Everything runs the way I expect it to. PBA reports a relatively small initial error (I'm currently testing with a small number of pair-wise registered images, so the error shouldn't be too large), and after the run it reports a smaller one. (It converges quickly, usually in fewer than 3 iterations.)
However, when I dump the keypoints using the newly found transformations, the clouds seem to have moved further apart from each other.
(I've also tried swapping T_G2L & T_L2G to "bring them closer". That doesn't work.)
I'm wondering if there's something I'm missing using it.
the clouds seem to have moved further apart from each other
This appears not to be a PBA-specific problem, but a general bundle adjustment problem.
When performing bundle adjustment, you need to constrain the cloud: at least 7 constraints for the 7 degrees of freedom. Otherwise, your cloud will drift along the 3 axes, around the 3 rotations, and in scale.
In local BA the border points are held fixed. In full BA there are usually designated points, like the origin, plus an extra pair that fixes the scale and orientation.
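As a concrete illustration (my own assumption, using only the PBA API already shown in the question): one simple way to pin all 7 gauge degrees of freedom is to hold the first two cameras constant instead of only the first one; the fixed baseline between them then anchors the scale as well.
for (int i = 0; i < steps.size(); i++)
{
    CameraT camera;
    // ... set focal length, rotation and translation as in the question ...
    if (i < 2)                       // camera 0 pins the origin and orientation;
        camera.SetConstantCamera();  // camera 1 additionally pins the scale
    camera_data.push_back(camera);
}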
What would be a possible machine vision solution for correct color recognition using OpenCV?
I must check whether the color sequence of the connector below is correct.
Is it better to use a color recognition technique or a pattern matching technique?
Is there any better approach to solve this?
The image below shows a connector with colored wires; how can I check that the sequence of wires is correct?
I suggest the following steps (with simple code illustrations):
convert to the Lab color space;
https://en.wikipedia.org/wiki/Lab_color_space/
cv::cvtColor(img,img,CV_BGR2Lab);
take a subimage which contains only the wires
img = img(cv::Rect(x,y,width,height)); // detect wires
compute the mean value of each column to get a 1D vector of values
std::vector<cv::Vec3f> aggregatedVector;
for (int i = 0; i < img.cols; i++)
{
    cv::Vec3f sum = cv::Vec3f(0, 0, 0);
    for (int j = 0; j < img.rows; j++)
    {
        sum[0] += img.at<cv::Vec3b>(j, i)[0];
        sum[1] += img.at<cv::Vec3b>(j, i)[1];
        sum[2] += img.at<cv::Vec3b>(j, i)[2];
    }
    sum = sum / img.rows;
    aggregatedVector.push_back(sum);
}
extract the uniform fields, using for example the gradient between neighbouring columns, and get a vector with 20 values (one per wire)
std::vector<cv::Vec3f> fields;
cv::Vec3f mean(0, 0, 0);
int counter = 0;
const double thresh = 10.0; // placeholder value - tune for your image
for (int i = 0; i < (int)aggregatedVector.size(); i++)
{
    mean += aggregatedVector[i];
    counter++;
    // a colour jump between neighbouring columns marks the border of a wire;
    // the last column always closes the current field
    if (i + 1 == (int)aggregatedVector.size() ||
        cv::norm(aggregatedVector[i + 1] - aggregatedVector[i]) > thresh)
    {
        fields.push_back(mean / counter);
        mean = cv::Vec3f(0, 0, 0);
        counter = 0;
    }
}
compute the color distance between each calculated field and the reference colors
double totalError = 0;
for (int i = 0; i < (int)fields.size(); i++)
{
    totalError += cv::norm(reference[i] - fields[i]);
}
Then you can make a decision based on the error values. Have fun!
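For completeness, here is a sketch of one possible decision step (assuming reference holds the expected Lab colours in wire order; the tolerance is an assumed value to tune on real samples):
bool sequenceOk = true;
const double maxColorDist = 25.0; // assumed tolerance in Lab units, tune on real data
for (size_t i = 0; i < fields.size() && i < reference.size(); i++)
{
    if (cv::norm(reference[i] - fields[i]) > maxColorDist)
    {
        sequenceOk = false;
        std::cout << "wire " << i << " has an unexpected colour" << std::endl;
    }
}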
I am building an Android app to create panoramas. The user captures a set of images and those images
are sent to my native stitch function that was based on https://github.com/opencv/opencv/blob/master/samples/cpp/stitching_detailed.cpp.
Since the images are in order, I would like to match each image only to the next image in the vector.
I found an Intel article that was doing just that with the following code:
vector<MatchesInfo> pairwise_matches;
BestOf2NearestMatcher matcher(try_gpu, match_conf);
Mat matchMask(features.size(),features.size(),CV_8U,Scalar(0));
for (int i = 0; i < num_images - 1; ++i)
{
    matchMask.at<char>(i, i + 1) = 1;
}
matcher(features, pairwise_matches,matchMask);
matcher.collectGarbage();
Problem is, this won't compile. I'm guessing it's because I'm using OpenCV 3.1.
Then I found somewhere that this code would do the same:
int range_width = 2;
BestOf2NearestRangeMatcher matcher(range_width, try_cuda, match_conf);
matcher(features, pairwise_matches);
matcher.collectGarbage();
And for most of my samples this works fine. However, sometimes, especially when I'm stitching a large set of images (around 15), some objects appear on top of each other and in places they shouldn't.
I've also noticed that the "beginning" (left side) of the end result is not the first image in the vector either, which is strange.
I am using "orb" as features_type and "ray" as ba_cost_func. It seems I can't use SURF on OpenCV 3.1.
The rest of my initial parameters look like this:
bool try_cuda = false;
double compose_megapix = -1; //keeps resolution for final panorama
float match_conf = 0.3f; //0.3 default for orb
string ba_refine_mask = "xxxxx";
bool do_wave_correct = true;
WaveCorrectKind wave_correct = detail::WAVE_CORRECT_HORIZ;
int blend_type = Blender::MULTI_BAND;
float blend_strength = 5;
double work_megapix = 0.6;
double seam_megapix = 0.08;
float conf_thresh = 0.5f;
int expos_comp_type = ExposureCompensator::GAIN_BLOCKS;
string seam_find_type = "dp_colorgrad";
string warp_type = "spherical";
So could anyone enlighten me as to why this is not working and how I should match my features? Any help or direction would be much appreciated!
TL;DR: I want to stitch images in the order they were taken, but the code above is not working for me. How can I do that?
So I found out that the issue here is not with the order in which the images are stitched, but rather with the rotation that is estimated for the camera parameters in the Homography Based Estimator and the Bundle Ray Adjuster.
Those rotation angles are estimated assuming a purely self-rotating camera, while my use case involves a user rotating the camera (which means there will be some translation too).
Because of that (I guess) the horizontal angles (around the Y axis) are highly overestimated, which means the algorithm considers the set of images to cover >= 360 degrees, resulting in overlapping areas that shouldn't overlap.
I still haven't found a solution for that problem, though.
matcher() takes a UMat as the mask instead of a Mat object, so try the following code:
vector<MatchesInfo> pairwise_matches;
BestOf2NearestMatcher matcher(try_gpu, match_conf);
Mat matchMask(features.size(),features.size(),CV_8U,Scalar(0));
for (int i = 0; i < num_images - 1; ++i)
{
    matchMask.at<char>(i, i + 1) = 1;
}
UMat umask = matchMask.getUMat(ACCESS_READ);
matcher(features, pairwise_matches, umask);
matcher.collectGarbage();
I have several tasks to do on each pixel in OpenCV. I am using a construct like this:
for (int row = 0; row < inputImage.rows; ++row)
{
    uchar* p = inputImage.ptr(row);
    for (int col = 0; col < inputImage.cols * 3; col += 3)
    {
        int blue  = *(p + col);     // points to each pixel's B, G, R value in turn, assuming a CV_8UC3 colour image
        int green = *(p + col + 1);
        int red   = *(p + col + 2);
        // process pixel
    }
}
This works, but I am wondering if there is a faster way to do it? This solution doesn't use any SIMD or parallel processing features of OpenCV.
What is the best way to run a method over all pixels of an image in OpenCV?
If the Mat is continuous, i.e. the matrix elements are stored contiguously without gaps at the end of each row, which you can check with Mat::isContinuous(), you can treat it as one long row. Thus you can do something like this:
CV_Assert(inputImage.isContinuous()); // fall back to the row-wise loop otherwise
const uchar *ptr = inputImage.ptr<uchar>(0);
for (size_t i = 0; i < (size_t)inputImage.rows * inputImage.cols; ++i) {
    int blue  = ptr[3 * i];
    int green = ptr[3 * i + 1];
    int red   = ptr[3 * i + 2];
    // process pixel
}
As the documentation says, this approach, while very simple, can boost the performance of a simple element operation by 10-20 percent, especially if the image is rather small and the operation is simple.
PS: If you need more speed, you will have to make full use of the GPU to process pixels in parallel.
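As an addition (not from the answer above, just another option): if you stay on the CPU, cv::Mat::forEach runs a per-pixel functor through OpenCV's internal parallel framework, so the loop body is spread across cores without manual threading. A sketch for the CV_8UC3 case:
inputImage.forEach<cv::Vec3b>(
    [](cv::Vec3b& pixel, const int* /*position*/)
    {
        int blue  = pixel[0];
        int green = pixel[1];
        int red   = pixel[2];
        // process pixel (the body must be thread-safe)
    });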