Obtain gradient orientation in OpenCV - C++

I need to find the orientation of the gradient in an image. I have already obtained Gx, Gy and the total gradient:
//Scharr( src_gray, grad_x, ddepth, 1, 0, scale, delta, BORDER_DEFAULT );
Sobel( img, grad_x, ddepth, 1, 0, 3, scale, delta, BORDER_DEFAULT );
convertScaleAbs( grad_x, abs_grad_x ); // gradient in X
/// Gradient Y
//Scharr( src_gray, grad_y, ddepth, 0, 1, scale, delta, BORDER_DEFAULT );
Sobel( img, grad_y, ddepth, 0, 1, 3, scale, delta, BORDER_DEFAULT );
convertScaleAbs( grad_y, abs_grad_y ); // gradient in Y
/// Total Gradient (approximate)
addWeighted( abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad ); // gradient magnitude
Now I need to find the orientation of the gradient, but I can't find any code for it. I know the theory, but I don't know how to put it into practice.
Does anyone know how I can get the orientation of the gradient?
Thanks for your time.
EDIT: I tried to use this:
Mat modulo;
Mat orientacion;
cartToPolar(abs_grad_x,abs_grad_y,modulo,orientacion);
But it gives me an error:
OpenCV Error: Assertion failed (X.size == Y.size && type == Y.type() && (depth == CV_32F || depth == CV_64F)) in cartToPolar, file C:\OpenCV246PC\opencv\modules\core\src\mathfuncs.cpp, line 448
I tried changing the depth to CV_32F, but the resulting gradient image is not correct.
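For reference, here is a minimal sketch of one way to get the orientation (my own illustration, not code from the post): run Sobel with a floating-point output depth and pass the signed grad_x/grad_y, not their absolute values, to cartToPolar. Variable names follow the snippet above.
cv::Mat grad_x32, grad_y32, magnitude, orientation;
// CV_32F output keeps the sign of the derivatives, which cartToPolar
// requires (it asserts CV_32F or CV_64F input).
cv::Sobel( img, grad_x32, CV_32F, 1, 0, 3 );
cv::Sobel( img, grad_y32, CV_32F, 0, 1, 3 );
// Per-pixel magnitude and angle; the last argument asks for degrees (0..360).
cv::cartToPolar( grad_x32, grad_y32, magnitude, orientation, true );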

Related

How do I draw a green rectangle with OpenCV?

I'm really new to C++, so my apologies for such a question. I've been trying these out, but they don't seem to work.
(I'm executing a template matching function in OpenCV - https://docs.opencv.org/3.4/de/da9/tutorial_template_matching.html)
Edit: Here is the code for the image, template and mask I used:
cv::Mat image = cv::Mat(height, width, CV_16UC1, image); // image in short
cv::Mat temp;
image.convertTo(temp, CV_32F); // convert image to 32-bit float
cv::Mat image_template = cv::Mat(t_width, t_height, CV_32F, t_image); // template
cv::Mat mask_template = cv::Mat(t_width, t_height, CV_32F, m_image); // mask
cv::Mat img_display, result;
temp.copyTo(img_display); // image to display
int result_cols = temp.cols - image_template.cols + 1;
int result_rows = temp.rows - image_template.rows + 1;
result.create(result_rows, result_cols, CV_32FC1);
// all the other code
matchTemplate(temp, image_template, result, 0, mask_template);
normalize( result, result, 0, 1, cv::NORM_MINMAX, -1, cv::Mat());
// localize the minimum and maximum values in the result matrix
double minVal;
double maxVal;
cv::Point minLoc;
cv::Point maxLoc;
cv::Point matchLoc;
minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc, cv::Mat());
// for match_method TM_SQDIFF we take lowest values
matchLoc = minLoc;
// display source image and result matrix , draw rectangle around highest possible matching area
cv::rectangle( img_display, matchLoc, cv::Point( matchLoc.x + image_template.cols, matchLoc.y + image_template.rows), cv::Scalar::all(255), 2, 8, 0);
cv::rectangle( result, matchLoc, cv::Point(matchLoc.x + image_template.cols, matchLoc.y + image_template.rows), cv::Scalar::all(255), 2, 8, 0);
This is the given code:
cv::rectangle( img_display, matchLoc, cv::Point( matchLoc.x + templ.cols , matchLoc.y + templ.rows ), cv::Scalar::all(0), 2, 8, 0 );
I tried changing it with the following code snippets, but they don't seem to work.
cv::rectangle( img_display, matchLoc, cv::Point( matchLoc.x + templ.cols , matchLoc.y + templ.rows ), cv::Scalar(0,255,0) , 2, 8, 0 );
This doesn't work either
rectangle(ref, maxloc, Point(maxloc.x + tpl.cols, maxloc.y + tpl.rows), CV_RGB(0,255,0), 2);
Do let me know where I am wrong!
First of all, you are trying to scale your pixel values to the 0-255 range, but you can't do that directly because your image is a float image (32FC1); float images are expected to hold values in the 0.0-1.0 range for display.
You need to convert your image to 8-bit (CV_8UC1 or CV_8UC3) to colorize it easily. But this approach also has the problems mentioned here: the OpenCV matchTemplate function always returns its result in 32FC1 format, so it is awkward to draw colored overlays directly on that result image.
On your source image you can draw rectangles in any color you want, but not while it is a single-channel float image. You can also check this link.
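To make that concrete, here is a minimal sketch (my own illustration, not part of the original answer) of converting the 32FC1 result to an 8-bit, 3-channel image so that a colored rectangle actually shows up in color; matchLoc and image_template are the variables from the question.
cv::Mat result_8u, result_bgr;
// Scale the float result into 0..255 and convert to 8-bit.
cv::normalize(result, result_8u, 0, 255, cv::NORM_MINMAX);
result_8u.convertTo(result_8u, CV_8U);
// Give it three channels so a green rectangle can be green.
cv::cvtColor(result_8u, result_bgr, cv::COLOR_GRAY2BGR);
// OpenCV stores channels in BGR order, so green is Scalar(0, 255, 0).
cv::rectangle(result_bgr, matchLoc,
              cv::Point(matchLoc.x + image_template.cols, matchLoc.y + image_template.rows),
              cv::Scalar(0, 255, 0), 2);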
Alternatively, you can use the CV_RGB macro, which lets you specify colors in RGB order; it simply reorders the values into a cv::Scalar in OpenCV's default BGR channel order. If it is not already available from the headers you include, define it as
#define CV_RGB(r, g, b) cv::Scalar((b), (g), (r))
at the top of your code, and then draw your green rectangle this way:
rectangle(frame, Point(startX, startY), Point(endX, endY), CV_RGB(0, 255, 0), 2);

OpenCV Mat object copying speed up

I recently switched from opencv-python to the C++ version of OpenCV because I want to speed up my real-time video-processing app with CUDA. I am new to C++, so some memory-management aspects were unclear to me while optimizing my code.
For example, I have a filter chain like this:
void apply_blue_edgess(cv::Mat& matrix, cv::Mat& mask, cv::Mat& inverted_mask) {
    cv::Mat gray_image, blured, canny, canny_3d, in_range_mask;
    cv::cvtColor( matrix, gray_image, CV_BGR2GRAY );
    cv::GaussianBlur( gray_image, blured, cv::Size( 5, 5 ), 0, 0 );
    cv::Canny(blured, canny, 0, 100);
    cv::cvtColor( canny, canny_3d, CV_GRAY2BGR );
    cv::inRange(canny_3d, cv::Scalar(255,255,255), cv::Scalar(255,255,255), in_range_mask);
    canny_3d.setTo(cv::Scalar(0, 171, 255), in_range_mask);
    cv::GaussianBlur( canny_3d, matrix, cv::Size( 5, 5 ), 0, 0 );
    cv::bitwise_and(matrix, mask, matrix);
}
Is it OK to create a new Mat object at every step of the filter chain (gray_image, blured, canny, canny_3d, in_range_mask)? Is such continuous memory allocation bad for performance? If so, how should I write similar functions?
As was suggested in the comment section, I ended up writing a functor wrapper:
struct blue_edges_filter {
    blue_edges_filter(int width, int height)   // note: cv::Mat takes (rows, cols), i.e. (height, width)
        : gray_image(height, width, CV_8UC1),
          blured(height, width, CV_8UC1),
          canny(height, width, CV_8UC1),
          canny_3d(height, width, CV_8UC3),
          in_range_mask(height, width, CV_8UC3),
          internal_mask_matrix(height, width, CV_8UC3),
          external_mask_matrix(height, width, CV_8UC3)
    { }
    void operator()(cv::Mat& matrix, cv::Mat& mask, cv::Mat& inverted_mask) {
        cv::bitwise_and(matrix, mask, internal_mask_matrix);
        cv::bitwise_and(matrix, inverted_mask, external_mask_matrix);
        cv::cvtColor( matrix, gray_image, CV_BGR2GRAY );
        cv::GaussianBlur( gray_image, blured, cv::Size( 5, 5 ), 0, 0 );
        cv::Canny(blured, canny, 0, 100);
        cv::cvtColor( canny, canny_3d, CV_GRAY2BGR );
        cv::inRange(canny_3d, cv::Scalar(255,255,255), cv::Scalar(255,255,255), in_range_mask);
        canny_3d.setTo(cv::Scalar(0, 171, 255), in_range_mask);
        cv::GaussianBlur( canny_3d, matrix, cv::Size( 5, 5 ), 0, 0 );
        cv::bitwise_and(matrix, mask, matrix);
    }
private:
    cv::Mat gray_image, blured, canny, canny_3d, in_range_mask;
    cv::Mat internal_mask_matrix, external_mask_matrix;
};
//Usage
blue_edges_filter apply_blue_edgess(1024, 576);
apply_blue_edgess(matrix, mask, inverted_mask);
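For context, a rough sketch of how such a functor might be driven from a capture loop; the VideoCapture source and the mask images here are my assumptions, not from the post:
blue_edges_filter apply_blue_edgess(1024, 576);   // buffers allocated once
cv::VideoCapture cap(0);                          // assumed camera source
cv::Mat frame;
cv::Mat mask = cv::Mat::zeros(576, 1024, CV_8UC3);          // placeholder mask
cv::Mat inverted_mask = cv::Mat::zeros(576, 1024, CV_8UC3); // placeholder inverted mask
while (cap.read(frame)) {
    apply_blue_edgess(frame, mask, inverted_mask);   // internal Mats are reused every frame
    cv::imshow("filtered", frame);
    if (cv::waitKey(1) == 27) break;                 // Esc to quit
}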
You can reuse memory without repeated allocation by creating temporary images:
void apply_blue_edgess(cv::Mat& matrix, cv::Mat& mask, cv::Mat& inverted_mask)
{
    cv::Mat tmp[2];
    int srcInd = 1;
    auto InvInd = [&]() -> int { return srcInd ? 0 : 1; };
    cv::cvtColor( matrix, tmp[InvInd()], CV_BGR2GRAY );
    srcInd = InvInd();
    cv::GaussianBlur( tmp[srcInd], tmp[InvInd()], cv::Size( 5, 5 ), 0, 0 );
    srcInd = InvInd();
    cv::Canny(tmp[srcInd], tmp[InvInd()], 0, 100);
    srcInd = InvInd();
    cv::cvtColor( tmp[srcInd], tmp[InvInd()], CV_GRAY2BGR );
    srcInd = InvInd();
    cv::inRange(tmp[srcInd], cv::Scalar(255,255,255), cv::Scalar(255,255,255), tmp[InvInd()]);
    tmp[srcInd].setTo(cv::Scalar(0, 171, 255), tmp[InvInd()]);
    cv::GaussianBlur( tmp[srcInd], matrix, cv::Size( 5, 5 ), 0, 0 );
    cv::bitwise_and(matrix, mask, matrix);
}

OpenCV filter2D negative values in C++

I'm trying to implement Histogram of Oriented Gradients on some video frames in C++. I used filter2D to convolve the frame image, yet it seems that the resulting values are floored at 0. How do I get filter2D to give negative values as well?
Here's a snippet of code:
// Function that gets the histogram of gradients for a single video file
int HOG(string filename)
{
    static int frames_read = 0;
    VideoCapture cap(filename);
    if(!cap.isOpened()) // check if we succeeded
        return -1;
    Mat image;
    namedWindow(filename, 1);
    // Read through frames of video
    for(;;)
    {
        Mat frame;
        float histogram[NUM_BINS * SPACIAL_X * SPACIAL_Y] = {0};
        cap >> frame; // Get a new frame from camera
        if(frame.empty())
            break;
        cvtColor(frame, image, CV_BGR2GRAY);
        // Set up gradient kernels
        float kernelX[9] = {0, 0, 0, -1.0, 0, 1.0, 0, 0, 0};
        float kernelY[9] = {0, -1.0, 0, 0, 0, 0, 0, 1.0, 0};
        Mat filterX(3, 3, CV_32F, kernelX);
        Mat filterY(3, 3, CV_32F, kernelY);
        Mat gradientX;
        Mat gradientY;
        // Apply gradients
        filter2D(image, gradientX, CV_32F, filterX, Point(-1, 1), 0, BORDER_DEFAULT);
        filter2D(image, gradientY, CV_32F, filterY, Point(-1, 1), 0, BORDER_DEFAULT);
    }
    return 0;
}
Your code looks OK and should produce negative results as well as positive ones. How are you checking that there are no negative results? Perhaps you are converting the floating-point images to gray level (i.e. unsigned char); that would indeed clip all negative values.
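As a quick check (my suggestion, not part of the original answer), inspect the raw CV_32F output before any conversion; minMaxLoc will show whether negative values are really there:
double minX, maxX;
cv::minMaxLoc(gradientX, &minX, &maxX);
std::cout << "gradientX range: [" << minX << ", " << maxX << "]" << std::endl; // needs <iostream>
// A negative minX confirms filter2D kept the sign; converting to CV_8U
// (e.g. for display or grayscale saving) would clip those values to 0.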
It is easier to get the same results by using the Sobel function, which is dedicated to computing image gradients:
Sobel(image, gradientX, CV_32F, 1, 0, 1);
Sobel(image, gradientY, CV_32F, 0, 1, 1);

Unable to imwrite to PNG even though imshow works

Does anyone know why, even though I can imshow the image stored in grad, I am unable to write it using imwrite? I searched the web and it seems it might be a floating-point issue, but I don't know how to get rid of the floating-point values in the image matrix.
int main( int argc, char** argv ) {
    cv::Mat src, src_gray;
    cv::Mat grad;
    const char* window_name = "Sobel Demo - Simple Edge Detector";
    int scale = 1;
    int delta = 0;
    int ddepth = CV_16S;
    int c;
    /// Load an image
    src = imread("C:/Users/Qi Han/Dropbox/44.jpg" );
    if( !src.data ) return -1;
    GaussianBlur( src, src, Size(3,3), 0, 0, BORDER_DEFAULT );
    /// Convert it to gray
    cvtColor( src, src_gray, CV_RGB2GRAY );
    /// Create window
    namedWindow( window_name, CV_WINDOW_AUTOSIZE );
    /// Generate grad_x and grad_y
    Mat grad_x, grad_y;
    Mat abs_grad_x, abs_grad_y;
    /// Gradient X
    //Scharr( src_gray, grad_x, ddepth, 1, 0, scale, delta, BORDER_DEFAULT );
    Sobel( src_gray, grad_x, ddepth, 1, 0, 3, scale, delta, BORDER_DEFAULT );
    convertScaleAbs( grad_x, abs_grad_x );
    /// Gradient Y
    //Scharr( src_gray, grad_y, ddepth, 0, 1, scale, delta, BORDER_DEFAULT );
    Sobel( src_gray, grad_y, ddepth, 0, 1, 3, scale, delta, BORDER_DEFAULT );
    convertScaleAbs( grad_y, abs_grad_y );
    /// Total Gradient (approximate)
    addWeighted( abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad );
    imshow( window_name, grad );
    imwrite("C:/Users/Qi Han/Dropbox/aftsobel.png", grad);
    return 0;
}
Try writing a BMP image instead, or use Mat::convertTo and cvtColor to convert the image before saving.
From the imwrite documentation:
[...] Only 8-bit (or 16-bit unsigned (CV_16U) in case of PNG, JPEG 2000, and TIFF) single-channel or 3-channel (with ‘BGR’ channel order) images can be saved using this function. If the format, depth or channel order is different, use Mat::convertTo() , and cvtColor() to convert it before saving. [...]
Read the docs of imwrite:
Only 8-bit (or 16-bit unsigned (CV_16U) in case of PNG, JPEG 2000, and TIFF) single-channel or 3-channel (with ‘BGR’ channel order) images can be saved using this function. If the format, depth or channel order is different, use Mat::convertTo() , and cvtColor() to convert it before saving.
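A minimal sketch of that conversion (my illustration, based on the quoted documentation; variable names follow the question's code). Note that after convertScaleAbs and addWeighted, grad should already be CV_8UC1, so an explicit conversion mainly matters if the pipeline is changed to keep a floating-point or 16-bit result:
cv::Mat grad_8u;
grad.convertTo(grad_8u, CV_8U);          // truncate to 8-bit if grad is not CV_8U already
// For a float-valued gradient you would normally rescale first, e.g.:
// cv::normalize(grad, grad_8u, 0, 255, cv::NORM_MINMAX, CV_8U);
bool ok = cv::imwrite("C:/Users/Qi Han/Dropbox/aftsobel.png", grad_8u);
if (!ok) std::cerr << "imwrite failed" << std::endl;  // needs <iostream>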

How to extract a numerical gradient matrix from an image matrix in OpenCV

I am looking for a way to get the numerical gradient of a matrix. The same function exists in MATLAB (http://www.mathworks.com/help/techdoc/ref/gradient.html), but I couldn't find an equivalent in OpenCV. I want to port this to C++ using OpenCV.
Should I use Sobel for the horizontal and vertical gradients, or is there another function or way to do it?
Mat grad_x, grad_y;
Mat abs_grad_x, abs_grad_y;
/// Gradient X
Sobel( mat, grad_x, CV_32F, 1, 0, 3);
imshow("xx",grad_x);
convertScaleAbs( grad_x, abs_grad_x );
/// Gradient Y
Sobel( mat, grad_y, CV_32F, 0, 1, 3);
convertScaleAbs( grad_y, abs_grad_y );
/// Total Gradient (approximate)
Mat res;
addWeighted( abs_grad_x, 0.5, abs_grad_y, 0.5, 0, res );
[EDIT]
Solution:
Mat grad_x, abs_grad_x, grad_y, abs_grad_y;
int type = CV_64F;
Mat Gradient = Mat::zeros(input.size(), type); // accumulator for the squared gradients
/// Gradient X
Sobel( input, grad_x, type, 1, 0, 3);
convertScaleAbs(grad_x, abs_grad_x);
cv::accumulateSquare(abs_grad_x, Gradient);
/// Gradient Y
Sobel(input, grad_y, type, 0, 1, 3);
convertScaleAbs(grad_y, abs_grad_y);
cv::accumulateSquare(abs_grad_y, Gradient);
imshow("gradient Mag", Gradient);
You can find the gradient calculation here. Just as you said, the example calculates the Sobel gradients in the horizontal and vertical directions.
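If the goal is to match MATLAB's gradient() (unit-spaced central differences rather than the weighted 3x3 Sobel kernels), a rough sketch could look like the following; the [-0.5, 0, 0.5] kernels and BORDER_REPLICATE handling are my assumptions, and the image borders will still differ slightly from MATLAB, which uses one-sided differences there. mat is the input matrix from the snippet above.
cv::Mat src_f, gx, gy;
mat.convertTo(src_f, CV_32F);                              // work in floating point to keep the sign
cv::Mat kx = (cv::Mat_<float>(1, 3) << -0.5f, 0.f, 0.5f);  // d/dx, central difference
cv::Mat ky = (cv::Mat_<float>(3, 1) << -0.5f, 0.f, 0.5f);  // d/dy, central difference
cv::filter2D(src_f, gx, CV_32F, kx, cv::Point(-1, -1), 0, cv::BORDER_REPLICATE);
cv::filter2D(src_f, gy, CV_32F, ky, cv::Point(-1, -1), 0, cv::BORDER_REPLICATE);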