I think I should be using cv::remap to remove the distortion, but I can't figure out what the maps (const Mat& map1, const Mat& map2) should be to achieve this.
Should I be using cv::initUndistortRectifyMap to find those values? If so, I'd really appreciate an example. I do not have the intrinsic camera parameters and don't know how to calculate them. Thanks.
If you are looking to remove the distortion caused by the camera lens, you should take a look at this answer I wrote some time ago, which has instructions and references on how to do proper camera calibration.
I also suggest this post, which has good info on the procedure as well and uses the C++ interface of OpenCV.
[Images: thresholded image, BGR image, fitted thresholded image]
Hi all. I'm working on a computer vision project using the OpenCV C++ interface. My goal is to track a moving deformable object that is marked with a colored tape. By processing each frame of the video I'm able to effectively isolate the color (as you can see in the thresholded image) and track the object's trajectory, movement and shape in the BGR image.
My problem is that I need to derive an equation or polynomial that describes the current shape of the tracked object.
Is there an effective way to do this? I have no idea how to approach the problem.
Thanks in advance,
Cheers!
If your final goal is to detect your shape in its various forms, I think you want to read about the Active Shape Model: https://en.wikipedia.org/wiki/Active_shape_model
If you just want a polynomial fit of the shape at each instant in time, I would follow the suggestion of Cherkesgiller Tural and read about 2D curve fitting.
If I understood correctly:
I would start by fitting a polygon to your shape. A common method for that is alpha shapes.
You can also try an optimization approach, which is enormously powerful because you can design your cost function and constraints however you want. But it is computationally very costly (depending on the algorithm).
Have a look at this thread; it might help you.
I can't find anything similar to a Savitzky–Golay polynomial fit in OpenCV. This is a standard smoothing operation, though, so it seems like something the library should have. Does anybody know of an equivalent? Using C++, for what it's worth.
Thanks!
-Tim
It is not clear whether you need to fit or to smooth; you mentioned both. If you need to smooth using OpenCV, you can try a Kalman filter (which both fits, in its way, and smooths), a 2D smoothing function applied to your 1D data, or a separable convolution where your 1D smoothing kernel is supplied as kernelX only (the fastest way to smooth).
OpenCV is a near-real-time image and video processing library that contains solvers for the most common tasks in that domain, and polynomial fitting is not among them yet. But if you really need fitting (not just smoothing), you can set up the polynomial least-squares matrix equation and compute the answer yourself quite simply, since OpenCV's Mat class provides inv() (inverse) and t() (transpose).
I've been trying to understand how to compute a projection matrix using image points and object points (3D and 2D points), but I can't seem to find a clear understanding of how you'd do this. I have a function as follows:
void calculateprojectionmatrix(Mat image_points, Mat object_points, Mat projection_matrix)
I've tried researching solutions for this (preferably in a C++ implementation), but can't find any clear explanation, and no background resources I can find seem to shed enough light on the topic. Any help would be greatly appreciated!
Given that you're using OpenCV, you may want to scan some of the OpenCV docs.
Say, cv::initCameraMatrix2D?
You might want to read Find 2D-3D correspondence of 4 non-coplanar points
I have a matrix (vector of vectors) with several points (measurements from sensors) that are supposed to represent walls. All the walls are parallel/perpendicular.
I want to fit these points to the respective walls. I thought of using RANSAC, but I can't find an easy way to run it on the matrix in C++ without pulling in visualization code, as with the Point Cloud Library.
Do I have to write my own RANSAC, or does an implementation already exist?
You may try the RANSAC implementations in the OpenCV library. If they are not enough, take the code (it is open source) and modify it to fit the details of your problem.
You could also add some pictures here so we can better understand the details of your issue.
The Point Cloud Library has a RANSAC implementation for 3D that you can use in your own application. It can identify planes too.
I am rather new to C++ and openFrameworks. I am beginning to play with manipulating objects using the Lucas–Kanade technique. I am having some success pushing objects around, but unfortunately I cannot figure out how to rotate them properly, or even how to detect when rotational movement is occurring.
Does anyone have any pointers or tips they would like to share?
Many thanks,
N
Optical flow calculations won't, on their own, help you detect things like "rotational movement". All the optical flow calculation does is look at changes pixel by pixel, while what you mean by rotation is a larger aggregate of pixel change. An algorithm would need to detect something like "all the pixels on the edge of the object are flowing in a (counter-)clockwise direction". That is very difficult to do, and I don't think there's anything in openFrameworks or OpenCV that will do it for you out of the box.
Are you trying to detect rotation of an object in the image, or rotation-like movements in the image that will affect a virtual object? If it's the former, there are OpenCV techniques for identifying objects and then tracking them, including things like rotation. The things to research are "opencv object tracking" and "opencv object motion analysis".
Computing the 2x3 affine transformation matrix of your motion could be a solution. The affine transformation matrix captures translational and rotational movement as well as scaling. If you are using OpenCV, then cv::getAffineTransform is what you are looking for: you can feed it three tracked feature point pairs directly.