Doing some research, I found that in Matlab they have this function to create linear structuring elements with a certain angle:
strel('line',len,deg)
The documentation says it creates a linear structuring element that is symmetric with respect to the neighborhood center, with approximate length len and angle deg. Basically, it is a Mat like this, with different sizes and angles:
I'm trying to create a similar structuring element at different angles, but I couldn't find an equivalent function in OpenCV for C++. Is there a way of doing that?
I appreciate any help. Thanks in advance.
The closest function OpenCV has is getStructuringElement. Unfortunately, the only shapes it supports are rectangle, cross and ellipse.
Therefore, it is probably the easiest to create/estimate it yourself.
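For example, you can rasterize the line yourself and wrap the result in a cv::Mat. Here is a sketch in plain C++ (the `lineStrel` name and the dense-sampling rasterization are my own choices, and the discretization may not match MATLAB's strel pixel-for-pixel):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <vector>

// Build a binary line-shaped kernel of approximate length `len` at angle
// `deg` (degrees, counterclockwise from horizontal), symmetric about the
// center -- a rough stand-in for MATLAB's strel('line', len, deg). The grid
// can be copied into a CV_8U cv::Mat and used with cv::dilate/cv::erode.
std::vector<std::vector<unsigned char>> lineStrel(int len, double deg) {
    const double kPi = 3.14159265358979323846;
    const double theta = deg * kPi / 180.0;
    const int half = (len - 1) / 2;                    // half-length in pixels
    const int dx = (int)std::lround(half * std::cos(theta));
    const int dy = (int)std::lround(half * std::sin(theta));
    const int rows = 2 * std::abs(dy) + 1;
    const int cols = 2 * std::abs(dx) + 1;
    std::vector<std::vector<unsigned char>> k(rows, std::vector<unsigned char>(cols, 0));
    const int cy = rows / 2, cx = cols / 2;
    // Sample the segment from (-dx,-dy) to (+dx,+dy) densely and round to
    // grid cells (simpler than Bresenham and good enough for a kernel).
    const int steps = 2 * std::max(std::abs(dx), std::abs(dy));
    if (steps == 0) { k[cy][cx] = 1; return k; }
    for (int i = 0; i <= steps; ++i) {
        const double t = -1.0 + 2.0 * i / steps;       // t runs over [-1, 1]
        k[cy + (int)std::lround(t * dy)][cx + (int)std::lround(t * dx)] = 1;
    }
    return k;
}
```

For instance, `lineStrel(5, 0)` yields a 1x5 row of ones and `lineStrel(5, 90)` a 5x1 column, matching the degenerate rectangle cases of getStructuringElement; angles in between give the tilted lines that getStructuringElement cannot produce.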
I need to reproduce the logic and values that MATLAB's interp1 gives with the 'pchip' or 'cubic' flags, and I'm unable to find a fitting replacement in OpenCV, short of implementing the cubic interpolation from Numerical Recipes myself (as noted in another question, MATLAB's implementation is based on De Boor's algorithm).
We need to interpolate the values of a 1D vector of doubles based on our sample points. Linear interpolation did not yield sufficient results: it is not smooth enough and produces a discontinuous gradient at the joint points.
I came across this on OpenCV's website, but it seems to be bicubic only and to operate on images, whereas I need 1D cubic interpolation.
Does anyone know any other function or simple solution on OpenCV's side for this issue? Any suggestion would help, thanks.
Side note: we are using OpenCV 4.3.0.
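For reference, the shape-preserving scheme behind 'pchip' is small enough to hand-roll if no OpenCV equivalent turns up. A sketch (the `pchip` name is mine; interior slopes follow the Fritsch-Carlson weighted harmonic mean that MATLAB documents, but the endpoint slopes here are simplified one-sided secants, so values near the ends may differ slightly from MATLAB's):

```cpp
#include <cmath>
#include <vector>

// Shape-preserving piecewise cubic interpolation in the spirit of MATLAB's
// interp1(..., 'pchip'). Knots in `x` must be strictly increasing; `xq` is
// assumed to lie within [x.front(), x.back()].
double pchip(const std::vector<double>& x, const std::vector<double>& y, double xq) {
    const int n = (int)x.size();
    std::vector<double> h(n - 1), delta(n - 1), d(n);
    for (int i = 0; i < n - 1; ++i) {
        h[i] = x[i + 1] - x[i];
        delta[i] = (y[i + 1] - y[i]) / h[i];           // secant slopes
    }
    d[0] = delta[0];                                   // simplified endpoints
    d[n - 1] = delta[n - 2];
    for (int i = 1; i < n - 1; ++i) {
        if (delta[i - 1] * delta[i] <= 0.0) {
            d[i] = 0.0;  // local extremum: flat slope preserves shape
        } else {
            // Fritsch-Carlson weighted harmonic mean of neighboring slopes.
            const double w1 = 2.0 * h[i] + h[i - 1];
            const double w2 = h[i] + 2.0 * h[i - 1];
            d[i] = (w1 + w2) / (w1 / delta[i - 1] + w2 / delta[i]);
        }
    }
    // Locate the interval containing xq.
    int i = 0;
    while (i < n - 2 && xq > x[i + 1]) ++i;
    const double t = (xq - x[i]) / h[i];
    // Cubic Hermite basis functions.
    const double h00 = (1 + 2 * t) * (1 - t) * (1 - t);
    const double h10 = t * (1 - t) * (1 - t);
    const double h01 = t * t * (3 - 2 * t);
    const double h11 = t * t * (t - 1);
    return h00 * y[i] + h10 * h[i] * d[i] + h01 * y[i + 1] + h11 * h[i] * d[i + 1];
}
```

Because the slope is zeroed at local extrema, the interpolant never overshoots the data, which is exactly what makes 'pchip' gradient-friendly compared with a plain cubic spline.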
Please understand that I'm fairly new to OpenCV.
What I have is a vector holding a 2D point cloud, with float values as x and y coordinates. All I want is some way to calculate the outer contour of this cloud. Determining a bounding rectangle and a convex hull was no problem, since the respective functions worked directly with my vector. I was expecting findContours() to be no different, but it seems I was wrong. Literally every tutorial on findContours() teaches me how to load an image into a cv::Mat object, and no one talks about how this is supposed to work with a 2D point cloud, which in theory is not that different from a binary image. I understand that findContours() expects its first argument to be a special type of matrix, but I have no idea how to modify my vector to get the desired result. I've tried to instantiate a matrix with cv::Mat(vector<Point2f>), which works in itself but, unfortunately, results in an exception when I pass it to findContours(). Please help!
I am working with GCC on a Raspberry Pi 3 with Raspbian btw.
findContours will only find contours in an image.
The outer contour of a point cloud isn't well defined - how do you know not to go 'in' between two dots to connect to a dot closer to the middle?
It only makes sense if you have some length scale that you consider an edge. The simplest way with OpenCV is to calculate the convex hull and then look at the convexity defects, adding any defects you consider part of the outline into the point list at that position.
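Another practical route, if the points are dense enough: rasterize the cloud into a binary occupancy grid and run findContours on that, since a grid of marked cells *is* the binary image findContours wants. A sketch in plain C++ (the `rasterize` name and the `cell`/`pad` parameters are my own; you would copy the rows into a CV_8U cv::Mat, optionally cv::dilate it to close gaps, and then call cv::findContours):

```cpp
#include <algorithm>
#include <limits>
#include <utility>
#include <vector>

// Turn a float 2D point cloud into a binary occupancy grid -- the kind of
// single-channel 8-bit "image" cv::findContours expects. `cell` is the grid
// resolution in world units; pick it around the typical point spacing so
// neighboring points merge into one connected blob. `pad` adds a border of
// empty cells so contours are not clipped at the grid edge.
std::vector<std::vector<unsigned char>>
rasterize(const std::vector<std::pair<float, float>>& pts, float cell, int pad = 1) {
    float minx = std::numeric_limits<float>::max(), miny = minx;
    float maxx = std::numeric_limits<float>::lowest(), maxy = maxx;
    for (const auto& p : pts) {
        minx = std::min(minx, p.first);  maxx = std::max(maxx, p.first);
        miny = std::min(miny, p.second); maxy = std::max(maxy, p.second);
    }
    const int cols = (int)((maxx - minx) / cell) + 1 + 2 * pad;
    const int rows = (int)((maxy - miny) / cell) + 1 + 2 * pad;
    std::vector<std::vector<unsigned char>> grid(rows, std::vector<unsigned char>(cols, 0));
    for (const auto& p : pts) {
        const int cx = (int)((p.first - minx) / cell) + pad;   // column
        const int cy = (int)((p.second - miny) / cell) + pad;  // row
        grid[cy][cx] = 255;  // mark occupied, like a white pixel
    }
    return grid;
}
```

The choice of `cell` is where the answer's "length scale" comes in: it decides which gaps between dots the contour is allowed to dip into.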
I have a 3D image data obtained from a 3D OCT scan. The data can be represented as I(x,y,z) which means there is an intensity value at each voxel.
I am writing an algorithm that involves finding the image's gradient in the x, y and z directions in C++. I've already written code in C++ using OpenCV for the 2D case and want to extend it to 3D with minimal changes to my existing code.
I am familiar with 2D gradients using the Sobel or Scharr operators. My search brought me to this post, answers to which recommend ITK and the Point Cloud Library. However, these libraries have many more capabilities than I need. Since I am not very experienced with C++, they would require a fair amount of reading, which time doesn't permit. Moreover, they don't use the cv::Mat object, and if I use anything other than cv::Mat, my whole code might have to be changed.
Can anyone help me with this please?
Update 1: Possible solution using kernel separability
Based on #Photon's answer, I'm updating the question.
From what #Photon says, I get an idea of how to construct a Sobel kernel in 3D. However, even if I construct a 3x3x3 cube, how do I apply it in OpenCV? The convolution operations in OpenCV, such as filter2D, only work in 2D.
There may be one way. Since the Sobel kernel is separable, we can break the 3D convolution into convolutions in lower dimensions. Comments 20 and 21 of this link say the same thing. Now, we can separate the 3D kernel, but even then filter2D cannot be used, since the image is still 3D. Is there a way to break down the image as well? There is an interesting post that hints at something like this. Any further ideas on this?
Since the Sobel operator is separable, it's easy to envision how to add a 3rd dimension.
For example, when you look at the filter definition for Gx in the link you posted, you see that it multiplies the surrounding pixels by coefficients whose sign depends on the relative X position and whose magnitude depends on the offset in Y.
When you extend to 3D, the Gx gradient is calculated the same way, but you work on a 3x3x3 cube: the coefficient sign keeps the same definition, and the magnitude now depends on the offset in Y, Z, or both.
The other gradients (Gy, Gz) are computed the same way, each around its own axis.
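To make the separable idea concrete: with the volume stored flat in a std::vector (x varying fastest), each axis pass is just a 3-tap 1D convolution, so Gx is a [-1,0,1] derivative along x followed by [1,2,1] smoothing along y and then z. A sketch without OpenCV (the `conv1D`/`sobel3D_Gx` names and the replicate-border handling are my choices; the same buffer could equally be viewed slice-by-slice as cv::Mat for the 2D passes):

```cpp
#include <algorithm>
#include <vector>

// Flat index into a volume stored x-fastest, then y, then z.
inline int idx(int x, int y, int z, int nx, int ny) { return (z * ny + y) * nx + x; }

// Correlate a 3-tap kernel along one axis (0 = x, 1 = y, 2 = z),
// replicating the border voxels.
std::vector<float> conv1D(const std::vector<float>& v, int nx, int ny, int nz,
                          int axis, const float k[3]) {
    std::vector<float> out(v.size());
    for (int z = 0; z < nz; ++z)
        for (int y = 0; y < ny; ++y)
            for (int x = 0; x < nx; ++x) {
                float s = 0.f;
                for (int t = -1; t <= 1; ++t) {
                    int xx = x, yy = y, zz = z;
                    if (axis == 0) xx = std::max(0, std::min(nx - 1, x + t));
                    if (axis == 1) yy = std::max(0, std::min(ny - 1, y + t));
                    if (axis == 2) zz = std::max(0, std::min(nz - 1, z + t));
                    s += k[t + 1] * v[idx(xx, yy, zz, nx, ny)];
                }
                out[idx(x, y, z, nx, ny)] = s;
            }
    return out;
}

// 3D Sobel Gx: derivative [-1,0,1] along x, smoothing [1,2,1] along y and z.
// Gy and Gz are identical except the derivative kernel moves to that axis.
std::vector<float> sobel3D_Gx(const std::vector<float>& v, int nx, int ny, int nz) {
    const float d[3] = {-1.f, 0.f, 1.f};
    const float s[3] = {1.f, 2.f, 1.f};
    auto r = conv1D(v, nx, ny, nz, 0, d);
    r = conv1D(r, nx, ny, nz, 1, s);
    return conv1D(r, nx, ny, nz, 2, s);
}
```

On a linear ramp I(x,y,z) = x, the interior response is 2 (central difference) times 4 times 4 (the two smoothing kernels each sum to 4), i.e. 32, which is a quick sanity check for the implementation.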
I've been trying to understand how to compute a projection matrix from image points and object points (2D and 3D points), but I can't seem to find a clear explanation of how to do it. I have a function as follows:
void calculateprojectionmatrix(Mat image_points, Mat object_points, Mat projection_matrix)
I've tried researching solutions for this (preferably in a C++ implementation), but can't find any clear explanation, and no background resources I can find seem to shed enough light on the topic. Any help would be greatly appreciated!
Given that you're using OpenCV, you may want to scan some of the OpenCV docs.
Say, like initCameraMatrix2D ?
You might want to read Find 2D-3D correspondence of 4 non-coplanar points
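If you want to see what such a function does internally, the classic method is the Direct Linear Transform (DLT). Below is a sketch in plain C++ (the `dlt` and `solveLinear` names are mine, not OpenCV API). It fixes P(2,3) = 1 to remove the scale ambiguity, which breaks down if that entry of the true matrix is near zero, and it skips the point normalization you would want with noisy data; in OpenCV you could instead build the homogeneous system and solve it with cv::SVD::solveZ.

```cpp
#include <array>
#include <cmath>
#include <utility>
#include <vector>

// Solve the square system A x = b by Gaussian elimination with partial
// pivoting (A is only 11x11 here, so no fancy linear algebra is needed).
std::vector<double> solveLinear(std::vector<std::vector<double>> A, std::vector<double> b) {
    const int n = (int)A.size();
    for (int c = 0; c < n; ++c) {
        int piv = c;
        for (int r = c + 1; r < n; ++r)
            if (std::fabs(A[r][c]) > std::fabs(A[piv][c])) piv = r;
        std::swap(A[c], A[piv]); std::swap(b[c], b[piv]);
        for (int r = c + 1; r < n; ++r) {
            const double f = A[r][c] / A[c][c];
            for (int k = c; k < n; ++k) A[r][k] -= f * A[c][k];
            b[r] -= f * b[c];
        }
    }
    std::vector<double> x(n);
    for (int r = n - 1; r >= 0; --r) {
        double s = b[r];
        for (int k = r + 1; k < n; ++k) s -= A[r][k] * x[k];
        x[r] = s / A[r][r];
    }
    return x;
}

// DLT: estimate the 3x4 projection matrix P from n >= 6 non-degenerate
// 3D-2D correspondences. Each point contributes two linear equations in the
// 11 unknowns (P(2,3) is pinned to 1); the stack is solved in least squares
// via the normal equations.
std::array<std::array<double, 4>, 3> dlt(const std::vector<std::array<double, 3>>& X,
                                         const std::vector<std::array<double, 2>>& uv) {
    std::vector<std::vector<double>> M;
    std::vector<double> rhs;
    for (size_t i = 0; i < X.size(); ++i) {
        const double x = X[i][0], y = X[i][1], z = X[i][2];
        const double u = uv[i][0], v = uv[i][1];
        M.push_back({x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z}); rhs.push_back(u);
        M.push_back({0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z}); rhs.push_back(v);
    }
    // Normal equations: (M^T M) p = M^T rhs.
    std::vector<std::vector<double>> MtM(11, std::vector<double>(11, 0.0));
    std::vector<double> Mtb(11, 0.0);
    for (size_t r = 0; r < M.size(); ++r)
        for (int a = 0; a < 11; ++a) {
            Mtb[a] += M[r][a] * rhs[r];
            for (int c = 0; c < 11; ++c) MtM[a][c] += M[r][a] * M[r][c];
        }
    const auto p = solveLinear(MtM, Mtb);
    return {{{p[0], p[1], p[2], p[3]}, {p[4], p[5], p[6], p[7]}, {p[8], p[9], p[10], 1.0}}};
}
```

Note the degenerate cases: all object points coplanar (use a homography or calibration routine instead) or fewer than six correspondences.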
I have a matrix (a vector of vectors) containing several points (measurements from sensors) that are supposed to represent walls. All the walls are parallel or perpendicular to each other.
I want to fit these points to the respective walls. I thought of using RANSAC, but I can't find an easy way to apply it to the matrix in C++ without pulling in visualization code, as with the Point Cloud Library.
Do I have to write my own RANSAC or does this exist?
You may try the RANSAC implementations in the OpenCV library. If they are not enough, take the code (it is open source) and modify it to fit the details of your problem.
Or you could add some pictures here to help us better understand your issue.
The Point Cloud Library has a RANSAC implementation for 3D that you can use in your own application; it can identify planes, too.
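If you'd rather avoid both libraries: because the walls are axis-aligned, a hand-rolled RANSAC becomes very small. Each hypothesis is a single sampled point proposing either the vertical line x = px or the horizontal line y = py, and the model with the most points within a tolerance wins. A sketch (the `fitWall`/`WallFit` names are mine, and this assumes the walls really are axis-aligned in the sensor frame; if they are merely parallel/perpendicular at an unknown rotation, estimate that angle first):

```cpp
#include <cmath>
#include <cstdlib>
#include <utility>
#include <vector>

struct WallFit {
    bool vertical;  // true: wall is x = c; false: wall is y = c
    double c;       // fitted wall coordinate
    int inliers;    // number of points within tol of the wall
};

// Minimal RANSAC for one axis-aligned wall. After finding a wall, remove
// its inliers from `pts` and call again to extract the remaining walls.
WallFit fitWall(const std::vector<std::pair<double, double>>& pts,
                double tol, int iters, unsigned seed = 42) {
    std::srand(seed);  // fixed seed keeps runs reproducible
    WallFit best{true, 0.0, -1};
    for (int it = 0; it < iters; ++it) {
        const auto& s = pts[std::rand() % pts.size()];  // random sample point
        for (int axis = 0; axis < 2; ++axis) {          // try both orientations
            const double c = (axis == 0) ? s.first : s.second;
            int count = 0;
            for (const auto& p : pts) {
                const double d = (axis == 0) ? std::fabs(p.first - c)
                                             : std::fabs(p.second - c);
                if (d <= tol) ++count;
            }
            if (count > best.inliers) best = {axis == 0, c, count};
        }
    }
    return best;
}
```

A common refinement is to refit each wall with the mean (or median) coordinate of its inliers after RANSAC has picked them out.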