polarToCart and cartToPolar functions in opencv - c++

Hey, I have a circular image that I want to convert to Cartesian coordinates in OpenCV.
I've successfully done this in MATLAB; however, I want to do it in OpenCV.
After some digging on the internet, I figured out there are actually functions called logPolar, polarToCart and cartToPolar. However, the official OpenCV documentation lacks information on how to use them. Since I don't really understand the parameters those functions take, I couldn't really use them.
So could someone give me (actually, I think a lot of people are looking for this) an appropriate example of how to use those functions, please?
Just in case, I am sharing my sample image too.
Thanks in advance.

If you're using OpenCV 3, you probably want linearPolar.
Note that for both versions you need a separate src and dst image (they do not work in place):
#include "opencv2/opencv.hpp" // needs imgproc, imgcodecs & highgui
Mat src = imread("my.png", 0); // read a grayscale img
Mat dst; // empty.
linearPolar(src, dst, Point(src.cols/2, src.rows/2), 120, INTER_CUBIC);
imshow("linear", dst);
waitKey();
Or logPolar:
logPolar(src, dst, Point(src.cols/2, src.rows/2), 40, INTER_CUBIC);
[Edit:]
If you're still using OpenCV 2.4, you can only use the arcane C-API functions and need IplImage conversions (not recommended):
Mat src=...;
Mat dst(src.size(), src.type()); // yes, you need to preallocate here
IplImage ipsrc = src; // new header, points to the same pixels
IplImage ipdst = dst;
cvLogPolar( &ipsrc, &ipdst, cvPoint2D32f(src.cols/2,src.rows/2), 40, CV_INTER_CUBIC);
// result is in dst, no need to release ipdst (and please don't do so.)
(polarToCart and cartToPolar work on point coordinates, not images.)
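For completeness, a minimal sketch of the coordinate-based pair, using the same includes as above (the input values are just illustrative):
std::vector<float> xs = {1.f, 0.f, -1.f};
std::vector<float> ys = {0.f, 1.f,  0.f};
Mat mag, ang;
cartToPolar(xs, ys, mag, ang, true);  // true = angles in degrees
Mat x2, y2;
polarToCart(mag, ang, x2, y2, true);  // recovers the original (x, y) pairs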

Related

OpenCv, get image information

I am playing around with an open-source OpenCV application. With the provided image sets it works great, but when I attempt to pass it a live camera stream, or even recorded frames from that camera stream, it crashes. I assume this has to do with the cv::Mat type, differing image channels, or some conversion that I am not doing.
The provided dataset is grayscale, 8 bit, and so are my images.
The application expects grayscale (CV_8U).
My question is:
Given one of the (working) provided images and one of my recorded (not working) images, what is the best way to compare them using OpenCV, to find out what difference might be causing my crashes?
Thank you.
I have tried:
Commenting out this code (which gave assertion errors):
if (mImGray.channels() == 3)
{
    cvtColor(mImGray, mImGray, CV_BGR2GRAY);
    cvtColor(imGrayRight, imGrayRight, CV_BGR2GRAY);
}
else if (mImGray.channels() == 4)
{
    cvtColor(mImGray, mImGray, CV_BGRA2GRAY);
    cvtColor(imGrayRight, imGrayRight, CV_BGRA2GRAY);
}
And replacing it with:
cv::Mat TempL;
mImGray.convertTo(TempL, CV_8U);
cvtColor(TempL, mImGray, CV_BayerGR2BGR);
cvtColor(mImGray, mImGray, CV_BGR2GRAY);
And the program crashes with no error...
You can try this code:
if (mImGray.depth() != CV_8U)
    mImGray.convertTo(mImGray, CV_8U);
if (mImGray.channels() == 3)
{
    cvtColor(mImGray, mImGray, COLOR_BGR2GRAY);
}
Or you can define a new Mat with the create function and use that.
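To answer the comparison part directly, here is a minimal sketch that dumps the Mat properties which typically differ between a working and a crashing input (the helper and variable names are just illustrative):
#include <iostream>
#include <string>
#include <opencv2/opencv.hpp>

// print the properties that most often explain this kind of crash
void describe(const cv::Mat& m, const std::string& name)
{
    std::cout << name
              << ": " << m.cols << "x" << m.rows
              << ", channels=" << m.channels()
              << ", depth=" << m.depth()        // 0 == CV_8U
              << ", type=" << m.type()
              << ", step=" << m.step
              << ", continuous=" << m.isContinuous() << std::endl;
}
// e.g. describe(providedImg, "provided"); describe(recordedImg, "recorded");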

OpenCV: Why create multiple Mat objects to transform an image's format?

I have not worked with OpenCV for a while, so please bear with my beginner questions. Something caught my curiosity as I was looking through OpenCV tutorials and sample code.
Why do people create multiple Mat images when going through multiple transformations? Here is an example:
Mat mat, gray, thresh, equal;
mat = imread("E:/photo.jpg");
cvtColor(mat, gray, CV_BGR2GRAY);
equalizeHist(gray, equal);
threshold(equal, thresh, 50, 255, THRESH_BINARY);
And here is an example that uses only two Mat images:
Mat mat, process;
mat = imread("E:/photo.jpg");
cvtColor(mat, process, CV_BGR2GRAY);
equalizeHist(process, process);
threshold(process, process, 50, 255, THRESH_BINARY);
Is there anything different between the two examples? Also, another beginner question: will OpenCV run faster when it only creates two Mat images, or will it still be the same?
Thank you in advance.
The question comes down to whether you still need the unequalized image later in the code. If you want to further process the gray image, then the first option is better. If not, use the second option.
Some functions might not work in-place; specifically, ones that transform the matrix to a different format, either by changing its dimensions (such as copyMakeBorder) or number of channels (such as cvtColor).
For your use case, the two blocks of code perform the same number of calculations, so the speed wouldn't change at all. The second option is obviously more memory efficient.
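To make that concrete, a small sketch (file path taken from the question): if you do need an intermediate later, keep your own copy before overwriting the working Mat in place:
Mat img = imread("E:/photo.jpg");
Mat process;
cvtColor(img, process, CV_BGR2GRAY);
Mat gray = process.clone();   // deep copy: survives the in-place edits below
equalizeHist(process, process);
threshold(process, process, 50, 255, THRESH_BINARY);
// 'gray' still holds the unequalized grayscale image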

Taking a screenshot of a particular area

I'm looking for a way to take a screenshot of a particular area of the screen in C++ (so not the whole screen). It should then save the result as .png, .jpg, or whatever, so I can use it with another function afterwards.
Also, I am going to use it, somehow, with OpenCV. I thought I'd mention that; maybe it's a helpful detail.
OpenCV cannot take screenshots from your computer directly. You will need a different framework/method to do this. @Ben is correct; this link would be worth investigating.
Once you have read the image in, you will need to store it in a cv::Mat so that you can perform OpenCV operations on it.
To crop an image in OpenCV, the following code snippet would help:
CvMat* imagesource = ...; // obtained from whatever capture framework you use
// Transform it into the C++ cv::Mat format
cv::Mat image(imagesource);
// Setup a rectangle to define your region of interest
cv::Rect myROI(10, 10, 100, 100);
// Crop the full image to that image contained by the rectangle myROI
// Note that this doesn't copy the data
cv::Mat croppedImage = image(myROI);
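Since OpenCV itself doesn't provide the capture, here is a minimal Windows-only sketch using the GDI API (the function name and region values are illustrative; other platforms need their own capture calls):
#include <windows.h>
#include <opencv2/opencv.hpp>

// grab a w x h region of the screen starting at (x, y) into a BGRA cv::Mat
cv::Mat captureRegion(int x, int y, int w, int h)
{
    HDC screenDC = GetDC(NULL);
    HDC memDC    = CreateCompatibleDC(screenDC);
    HBITMAP bmp  = CreateCompatibleBitmap(screenDC, w, h);
    HGDIOBJ old  = SelectObject(memDC, bmp);
    BitBlt(memDC, 0, 0, w, h, screenDC, x, y, SRCCOPY);
    SelectObject(memDC, old); // deselect before GetDIBits

    BITMAPINFOHEADER bi = {};
    bi.biSize        = sizeof(bi);
    bi.biWidth       = w;
    bi.biHeight      = -h;    // negative height = top-down row order
    bi.biPlanes      = 1;
    bi.biBitCount    = 32;    // BGRA
    bi.biCompression = BI_RGB;

    cv::Mat img(h, w, CV_8UC4);
    GetDIBits(memDC, bmp, 0, h, img.data, (BITMAPINFO*)&bi, DIB_RGB_COLORS);

    DeleteObject(bmp);
    DeleteDC(memDC);
    ReleaseDC(NULL, screenDC);
    return img;
}
// e.g. cv::imwrite("shot.png", captureRegion(100, 100, 400, 300));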

OpenCV: How to use HOGDescriptor::detect method?

I have succeeded in tracking moving objects in a video.
However, I want to decide whether an object is a person or not. I have tried the HOGDescriptor in OpenCV. HOGDescriptor has two methods for detecting people: HOGDescriptor::detect and HOGDescriptor::detectMultiScale. The OpenCV sample "sources\samples\cpp\peopledetect.cpp" demonstrates how to use HOGDescriptor::detectMultiScale, which searches the image at different scales and is very slow.
In my case, I have already tracked the objects in rectangles. I think using HOGDescriptor::detect on the inside of a rectangle will be much quicker. But the OpenCV documentation only has gpu::HOGDescriptor::detect (and I still can't guess how to use it) and omits HOGDescriptor::detect. I want to use HOGDescriptor::detect.
Could anyone provide me with a C++ code snippet demonstrating the usage of HOGDescriptor::detect?
Thanks.
Since you already have a list of objects, you can call the HOGDescriptor::detect method for each object and check the output foundLocations array. If it is not empty, the object was classified as a person. The only catch is that HOG works with 64x128 windows by default, so you need to rescale your objects:
std::vector<cv::Rect> movingObjects = ...;
cv::HOGDescriptor hog;
hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());
std::vector<cv::Point> foundLocations;
for (size_t i = 0; i < movingObjects.size(); ++i)
{
    cv::Mat roi = image(movingObjects[i]);
    cv::Mat window;
    cv::resize(roi, window, cv::Size(64, 128));
    hog.detect(window, foundLocations);
    if (!foundLocations.empty())
    {
        // movingObjects[i] is a person
    }
}
If you didn't build OpenCV with CUDA enabled (via CMake), calling gpu::HOGDescriptor::detect is equivalent to calling HOGDescriptor::detect; no GPU is used.
Also, for the code, note that detect is an instance method, so you need a gpu::HOGDescriptor object:
gpu::HOGDescriptor hog;
hog.setSVMDetector(gpu::HOGDescriptor::getDefaultPeopleDetector());
GpuMat img; // upload your image with img.upload(...)
vector<Point> found_locations;
hog.detect(img, found_locations);
if (!found_locations.empty())
{
    // img contains a person
}
Edit:
However I want to decide if an object is person or not.
I don't think you need this extra step. HOGDescriptor::detect itself is used to detect people, so you don't need to verify the detections; they are supposed to be people according to your setup. On the other hand, you can adjust its threshold to control detection quality.
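For example, a minimal sketch of tuning that threshold on the CPU version above (the value is purely illustrative; the default is 0):
double hitThreshold = 0.5; // higher = stricter SVM margin, fewer false positives
hog.detect(window, foundLocations, hitThreshold);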

OpenCV WarpPerspective issue

I am currently trying to implement basic image-stitching C++ (OpenCV) code in Eclipse. The feature detection part shows great results for SURF features. However, when I attempt to warp the two images together, I get only half the image as the output. I have tried to find a solution everywhere, but to no avail. I even tried to offset the homography matrix, as in this answer: OpenCV warpperspective. Nothing has helped so far.
I'll attach the output images in the comments since I don't have enough reputation points.
For feature detection and homography, I used the exact code from here
http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html
Then I added the following piece of code after the given code:
Mat result;
warpPerspective(img_object, result, H, Size(2*img_object.cols, img_object.rows));
Mat half(result, Rect(0, 0, img_scene.cols, img_scene.rows));
img_scene.copyTo(half);
imshow("Warped Image", result);
I'm quite new at this and am just trying to put the pieces together, so I apologize if there's some basic error.
If you're only trying to put the pieces together, you could try the built-in OpenCV image stitcher class: http://docs.opencv.org/modules/stitching/doc/high_level.html#stitcher
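A minimal sketch of how that class is typically used, assuming the OpenCV 2.4 API (the file names are placeholders; newer versions also offer Stitcher::create):
#include <opencv2/opencv.hpp>
#include <opencv2/stitching/stitcher.hpp>

std::vector<cv::Mat> imgs;
imgs.push_back(cv::imread("left.jpg"));   // placeholder file names
imgs.push_back(cv::imread("right.jpg"));

cv::Mat pano;
cv::Stitcher stitcher = cv::Stitcher::createDefault();
cv::Stitcher::Status status = stitcher.stitch(imgs, pano);
if (status == cv::Stitcher::OK)
    cv::imwrite("pano.jpg", pano);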
I found a related question here, Stitching 2 images in opencv, and implemented the additional code given there. It worked!
For reference, the edited code I wrote was:
Mat result;
warpPerspective(img_scene, result, H, Size(img_scene.cols*2, img_scene.rows*2), INTER_CUBIC);
Mat final(Size(img_scene.cols + img_object.cols, img_scene.rows*2), CV_8UC3);
// copy the warped scene first, then overlay the untouched object image in its ROI
Mat roi1(final, Rect(0, 0, img_object.cols, img_object.rows));
Mat roi2(final, Rect(0, 0, result.cols, result.rows));
result.copyTo(roi2);
img_object.copyTo(roi1);