UIImage to cv::Mat CV_8UC3 - C++

There are a ton of questions about converting a UIImage to a cv::Mat with CV_8UC4 encoding. There's even a tutorial on the OpenCV iOS site.
However, for the life of me I can't figure out how to convert a UIImage to 8UC3 correctly. Using the stock OpenCV example, trying to convert to 8UC1 also breaks, as the cv::Mat has a ton of null pointers after initialization.
Any insight into how I might be able to do this correctly? Thanks!

For me, when I want to convert a UIImage into a cv::Mat, I simply use the predefined function
UIImageToMat, like this:
UIImageToMat(extractedUIImage, matImage);
But first you need to include this header file from OpenCV:
#import <opencv2/highgui/ios.h>
The same goes the other way, to convert a cv::Mat into a UIImage:
UIImage *extractedImage = MatToUIImage(matImage);
Just give it a try; it works fine for me.
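As a rough Objective-C++ sketch of the round trip (the function name and the in-between processing step are placeholders; the header path is the same OpenCV 2.x iOS one used above):

#import <opencv2/highgui/ios.h>   // UIImageToMat / MatToUIImage

// Sketch: UIImage -> cv::Mat -> UIImage round trip.
UIImage *roundTrip(UIImage *extractedUIImage)
{
    cv::Mat matImage;
    UIImageToMat(extractedUIImage, matImage);   // usually gives a CV_8UC4 (RGBA) Mat

    // ... process matImage here ...

    return MatToUIImage(matImage);
}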

Try calling the following after converting the UIImage to a cv::Mat:
cvtColor(srcMat, destMat, CV_RGBA2BGR);
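For example, a minimal sketch that produces the CV_8UC3 Mat the question asks for (assuming srcMat came from UIImageToMat and is a 4-channel RGBA image; toBgr is just an illustrative name):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Sketch: drop the alpha channel and reorder RGBA -> BGR to get CV_8UC3.
cv::Mat toBgr(const cv::Mat& srcMat)
{
    cv::Mat destMat;
    cv::cvtColor(srcMat, destMat, CV_RGBA2BGR);
    CV_Assert(destMat.type() == CV_8UC3);   // now a 3-channel 8-bit Mat
    return destMat;
}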

Related

OpenCV C++ using cv::UMat instead of cv::Mat and pointers

I want to change some code in OpenCV to use cv::UMat instead of cv::Mat to get extra speed from OpenCL/GPU when working with images.
But the code I used to access image data directly by pointer doesn't work anymore, e.g.:
cv::Vec3b* imageP = image.ptr<cv::Vec3b>(y);
where image is a cv::Mat and y is the image line offset, because the ptr() method doesn't exist for UMat.
What is the correct syntax with a UMat? Is it even possible?
Thank you.
I searched a lot and I understand there is no simple solution, but copying back and forth between Mat and UMat is so slow :(
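No answer is recorded here, but for reference, one way to reach the pixels of a cv::UMat on the CPU is to map it back to a cv::Mat header with getMat(); whether that is cheap depends on where the data currently lives. A rough sketch (scanRow is a hypothetical name):

#include <opencv2/core.hpp>

// Sketch: map the UMat for CPU access, then use the familiar ptr<>() call.
void scanRow(cv::UMat& image, int y)
{
    cv::Mat view = image.getMat(cv::ACCESS_READ);      // maps the buffer for reading
    const cv::Vec3b* imageP = view.ptr<cv::Vec3b>(y);
    // ... read pixels through imageP ...
    // The mapping is released when 'view' goes out of scope;
    // don't touch the UMat while 'view' is alive.
}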

Dlib face detection doesn't work with grayscale images

Do you know why dlib face detection doesn't work with grayscale images (the Python version works pretty well with grayscale images)?
My code (with the dlib headers it needs):
#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/opencv.h>

mFaceDetector = dlib::get_frontal_face_detector();
// image is an OpenCV grayscale cv::Mat
dlib::array2d<unsigned char> img;
dlib::assign_image(img, dlib::cv_image<unsigned char>(image));
std::vector<dlib::rectangle> mRets = mFaceDetector(img);
How can I make it work?
I don't see anything wrong with your code; mine is the same. You should check:
that the image is loaded correctly, by displaying it with imshow()
whether it works with a non-grayscale image and with other images
whether you are setting any scan_fhog_pyramid value on the detector
the value of mRets.size() (see the sketch below)
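A minimal sketch of those checks, assuming image is the grayscale cv::Mat from the question (detectAndReport is just an illustrative name):

#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/opencv.h>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include <vector>

// Sketch: verify the input image and inspect the detector output.
void detectAndReport(const cv::Mat& image)
{
    // 1. Confirm the image loaded correctly.
    cv::imshow("input", image);
    cv::waitKey(0);

    // 2. Run the detector on the grayscale data.
    dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();
    dlib::array2d<unsigned char> img;
    dlib::assign_image(img, dlib::cv_image<unsigned char>(image));
    std::vector<dlib::rectangle> mRets = detector(img);

    // 3. Check how many faces were found.
    std::cout << "mRets.size() = " << mRets.size() << std::endl;
}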

Using OpenCV in iOS

I want to first choose a picture from the local library and show it in a UIImageView,
and then use the OpenCV function UIImageToMat() to convert the UIImage to a Mat, and then convert the Mat to an IplImage to run my image processing algorithms (why do I have to convert Mat to IplImage? Because most of my OpenCV image processing code is based on IplImage).
So the chosen image is shown in the UIImageView; when I want to do some image processing, I first convert the UIImage to an IplImage, process it, and then convert the IplImage back to a UIImage. That's the way I use OpenCV for image processing.
But the problem is that when I process the chosen picture from the local library, something strange happens:
the original picture is:
and then I want to use OpenCV to make some columns of this picture blue and some rows red, but the result is:
the implementation code is:
I don't know why this happens; the picture obviously lost many important features during the OpenCV processing. (I just want some columns to be blue and some rows to be red.) Why did this happen?
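The implementation code and the pictures are missing above, but a rough sketch of the round trip being described might look like this (assuming OpenCV 2.x, where a cv::Mat converts to an IplImage header without copying; processUIImageMat and processIplImage are hypothetical names standing in for the existing IplImage-based routines):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Sketch: UIImageToMat() usually yields RGBA; convert to 3-channel BGR first,
// otherwise IplImage code that expects BGR will misread the pixel layout.
void processUIImageMat(cv::Mat& matImage, void (*processIplImage)(IplImage*))
{
    cv::Mat bgr;
    cv::cvtColor(matImage, bgr, CV_RGBA2BGR);

    IplImage ipl = bgr;          // header only, shares data with 'bgr'
    processIplImage(&ipl);       // run the legacy IplImage algorithm

    // Convert back to RGBA before handing the result to MatToUIImage().
    cv::cvtColor(bgr, matImage, CV_BGR2RGBA);
}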

cv::Mat detect PixelFormat

I'm trying to use pictureBox->Image (Windows Forms) to display a cv::Mat image (OpenCV). I want to do that without saving the image to a file (because I want to refresh the image every 100 ms).
I just found that topic here: How to display a cv::Mat in a Windows Form application?
When I use this solution the image appears to be all white. I guess I picked the wrong PixelFormat.
So how do I figure out the PixelFormat I need? I haven't seen any method in cv::Mat to get info about that. Or does this depend on the image source I use to create this cv::Mat?
Thanks so far :)
Here I took a screenshot. It's not completely white, so I guess there is some colour info. But maybe I'm wrong.
Screen
cv::Mat::depth() returns the pixel depth, e.g. CV_8U for 8-bit unsigned, etc.,
and cv::Mat::channels() returns the number of channels, typically 3 for a colour image.
For 'regular' images OpenCV generally uses 8-bit BGR colour order, which should map directly onto a Windows bitmap or most Windows libraries.
So you probably want System.Drawing.Imaging.PixelFormat.Format24bppRgb. I don't know what order (RGB or BGR) it uses, but you can use OpenCV's cv::cvtColor() with CV_BGR2RGB to convert to RGB.
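As a small sketch of those checks on the OpenCV side (reportFormat is just an illustrative name; the PixelFormat choice itself happens on the Windows Forms side):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

// Sketch: inspect a cv::Mat before handing it to a Windows bitmap.
// A typical colour frame is CV_8U depth with 3 channels in BGR order,
// which corresponds to a 24-bits-per-pixel bitmap format.
void reportFormat(const cv::Mat& img)
{
    std::cout << "depth:    " << img.depth()    << std::endl    // CV_8U == 0
              << "channels: " << img.channels() << std::endl;   // 3 for colour

    // If the target expects RGB order, swap the channels first.
    cv::Mat rgb;
    cv::cvtColor(img, rgb, CV_BGR2RGB);
}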
Thanks for the help! The problem was something else.
I just did not allocate memory for the image object, so there was nothing to display.
Declaring cv::Mat img; in the header file and initialising it with img = cv::Mat(120, 160, CV_8UC1); in the constructor (of the ocvcamimg class) got it to work (ocvcamimg->captureCamMatImg() returns img).

How should one create a GpuMat in OpenCV

In OpenCV, to create an image Mat I'd usually do something along the lines of cv::Mat myImage; however, when I use the equivalent cv::gpu::GpuMat myImage to create a GpuMat, I get undefined reference errors. I have noticed that a lot of people simply declare GpuMat myImage, but using namespace cv::gpu leads to the same errors.
In short, how do I create a GpuMat in OpenCV properly?
Note: I'm still learning C/C++, so it's likely I'm missing something obvious (or not accessing the method correctly).
The problem occurred because I had not linked against the opencv_gpu231 library. Thanks for the help!
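For reference, a minimal sketch of creating and filling a GpuMat with the OpenCV 2.x GPU module (the image size and contents are just examples; the key point is linking against the opencv_gpu library):

#include <opencv2/core/core.hpp>
#include <opencv2/gpu/gpu.hpp>   // requires linking against the opencv_gpu library

// Sketch: create a GpuMat by uploading a host-side Mat.
int main()
{
    cv::Mat cpuImage(480, 640, CV_8UC3, cv::Scalar(0, 0, 0));

    cv::gpu::GpuMat gpuImage;
    gpuImage.upload(cpuImage);    // copy the host data to the device

    // ... run cv::gpu::* routines on gpuImage ...

    cv::Mat result;
    gpuImage.download(result);    // copy the result back to the host
    return 0;
}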