convert unsigned short mat to vector in opencv - c++

I load a depth image in opencv with
cv::Mat depth = cv::imread("blabla.png",CV_LOAD_IMAGE_UNCHANGED);
then get a subimage of it with
cv::Mat sub_image= depth(cv::Rect( roi_x,roi_y,roi_size,roi_size)).clone();
now I want to convert that sub_image into a vector
I try with
std::vector<uchar> array;
array.assign(sub_image.datastart,sub_image.dataend);
I found that in a similar StackOverflow question, but it doesn't seem to work properly.
The size of the array after assignment isn't roi_size*roi_size,
but instead roi_size*roi_size*2.
Is something wrong with the type of the vector? I also tried various other types like double, float, int, etc.
The type of the depth image is unsigned short, right?
Edit:
The array fills properly (correct size roi_size*roi_size) when I normalize the depth image:
cv::Mat depthView;
cv::normalize(depth, depthView, 0, 255, cv::NORM_MINMAX,CV_8UC1);
but that's not what I want to do.

Your observation of roi_size*roi_size*2 is due to the fact that the depth image is of short type, where each element is 2 bytes, while you are pushing the image data into an array of unsigned char.
Make the vector's element type short and you will find that the array size goes back to roi_size*roi_size as expected. The values will also be what you expected.

Have you tried checking which type of image you read, i.e. with depth.type() == CV_8UC1 for 8-bit uchar with one channel?
Maybe your image was interpreted differently.
Have you tried reading the image with the flag CV_LOAD_IMAGE_GRAYSCALE instead of CV_LOAD_IMAGE_UNCHANGED?

Thanks for all the help, guys.
I think I managed to solve the problem.
I changed the code for the array assignment to this:
std::vector<unsigned short> array(sub_image.begin<unsigned short>(),sub_image.end<unsigned short>());
and now the size is correct (roi_size*roi_size).
Thanks again for the help.

Related

OpenCV padded Mat, use with VideoWriter

I have an image represented using a 2D array of 3 byte color values (OpenCV type CV_8UC3). The array is not densely packed, but instead elements are aligned on a 4 byte boundary, i.e. there is 1 byte of padding.
So the array is of the format
RGB_RGB_RGB_RGB_RGB_RGB_RGB_RGB_RGB_RGB_RGB_RGB_RGB_RGB_RGB_...
I want to access this data using OpenCV, without making a new copy of it. But OpenCV Mat of type CV_8UC3 is packed by default, so I create the Mat with explicit strides/steps, using
cv::Mat mat(
2,
sizes, // = (1080, 1920)
CV_8UC3,
reinterpret_cast<void*>(data),
steps // = (7680, 4)
);
data is a pointer to a rgb_color array, defined by
struct alignas(4) rgb_color { std::uint8_t r, g, b; };
However using this mat with cv::VideoWriter still produces incorrect results, and it seems that VideoWriter ignores the strides of the Mat.
Is it possible to use VideoWriter and other OpenCV functionality with matrices of this type?
VideoWriter is a whimsical thing, so I wouldn't be surprised if it just copies the data as plain RGB.
Excerpt from the sources, the FFMPEG proxy function for writing a frame:
return icvWriteFrame_FFMPEG_p(ffmpegWriter, (const uchar*)image->imageData,
image->widthStep, image->width, image->height, image->nChannels, image->origin) !=0;
It uses line-step padding (widthStep) but ignores per-element padding. I think similar code exists for the other AVI-writing backends (vfw, dx, qt, etc.; I haven't checked them all).
Note that per-element padding is poorly documented (it is only mentioned in the description of steps), so I suspect support for it may be omitted in some functions.

What's the interpretation of `type` for a std::vector instead of a cv::Mat in OpenCV and how can I change it? (C++)

I want to use the adaptiveThreshold function from OpenCV which is defined in the documentation as follows:
void adaptiveThreshold(InputArray src, OutputArray dst, double maxValue, int adaptiveMethod, int thresholdType, int blockSize, double C)
Instead of using a Mat as an input, I want to use a vector<double>. This should be possible, as I read the following in the documentation:
When you see in the reference manual or in OpenCV source code a
function that takes InputArray, it means that you can actually pass
Mat, Matx, vector<T> etc. (see above the complete list).
I am using the following code:
vector<double> diffs; // <initialized with a number of double values>
double maxValue = 255.0; // values in diffs above the threshold will be set to 255.0
vector<double> out; // stores the output values (either 0 or 255.0)
adaptiveThreshold(diffs, out, maxValue, ADAPTIVE_THRESH_GAUSSIAN_C, THRESH_BINARY, 3, 0);
However, when running the code I get the following error:
OpenCV Error: Assertion failed (src.type() == CV_8UC1) in
adaptiveThreshold, file
/Users/nburk/Developer/SDKs/opencv-2.4.10/modules/imgproc/src/thresh.cpp,
line 796
libc++abi.dylib: terminating with uncaught exception of type
cv::Exception:
/Users/nburk/Developer/SDKs/opencv-2.4.10/modules/imgproc/src/thresh.cpp:796:
error: (-215) src.type() == CV_8UC1 in function adaptiveThreshold
Now, I understand the call fails because the type of the input is not actually CV_8UC1, but I don't know how to solve this. I thought the type property was only relevant to Mat objects; I don't know how to interpret it for a vector.
I am also not sure where to read about this issue in the docs. I found the following statement in the docs for Mat, but it doesn't help me much:
type – Array type. Use CV_8UC1, ..., CV_64FC4 to create 1-4 channel
matrices, or CV_8UC(n), ..., CV_64FC(n) to create multi-channel (up to
CV_CN_MAX channels) matrices.
Update: The above says that type actually is an array type, but what does that mean? How can I make my vector<double> have the type CV_8UC1 that adaptiveThreshold requires?
Another update:
After reading in the Learning OpenCV book from O'Reilly, I learned the following:
type can be any of a long list of predefined types of the form:
CV_<bit_depth>(S|U|F) C<number_of_channels>. Thus, the matrix could
consist of 32-bit floats (CV_32FC1), of unsigned integer 8-bit
triplets (CV_8UC3), or of countless other elements.
So it's obvious that in my case the vector<double> is not of type CV_8UC1, because double is clearly not an unsigned 8-bit integer. However, I can just normalize these values, which I did. The result is a vector<int> that only has values between 0 and 255. Thus, it should be of type CV_8UC1, right? However, I am still getting the same error...
The types do not match: int and unsigned char are different. Convert the vector<int> to a vector<unsigned char> and then it should work.

The best access pixel value method to an Mat binary image?

I want to ask a question: I have to check, from time to time, the value of a pixel (x,y) in a binary, thresholded OpenCV Mat image.
I have to check whether the pixel at the x,y I want to verify is black or white (0 or 255). What is the best method to do this?
I have searched and read about the direct method (Get2D) and about pointers, but it isn't so clear to me. The images are binary: thresholded and eroded/dilated beforehand.
Can someone post an example of the code I should use to do this?
If you check the same pixel all the time, do it as @Xale says:
mat.at<unsigned char>(x,y) // assuming your type is CV_8U
If you have to do it for several pixels, get the offset of the row first and access the column later:
unsigned char *row = mat.ptr<unsigned char>(y);
// access now row[x1], row[x2]...
There is one more option that is only valid if you want a single pixel and if all operations on the image are done on the same allocated memory (that depends on your code and the OpenCV functions you call). In that case, you can get the pointer to that pixel once and access it whenever you need it:
unsigned char *px = mat.ptr<unsigned char>(y) + x;
...
unsigned char current_value = *px;
Refer to this nice tutorial on accessing cv::Mat elements:
http://docs.opencv.org/doc/tutorials/core/mat_the_basic_image_container/mat_the_basic_image_container.html
There are different ways to achieve it. The main problem when accessing an element is understanding the data type of the matrix/image. In your case, if the Mat is binary black and white, it is probably of type CV_8U, and my advice is to always check the type to be sure. Besides, paying attention to types gives you more control over what you're dealing with.
One of the easiest methods for accessing pixels is cv::Mat::at, which is a template method and needs the type specified; if your Mat is CV_8U, that type is uchar.
the easy way:
int n = cv::countNonZero(binary_mat);
the hard way:
for ( int i = 0; i < mat.rows; i++ )
{
    for ( int j = 0; j < mat.cols; j++ )
    {
        uchar pix = mat.at<uchar>(i,j);
        ...
    }
}
Here is a link to another StackOverflow answer. Anyway, the short answer is
mat.at<Type>(x,y)
where Type is the type of data stored in the matrix elements, in your case unsigned char.

How to determine the type of data in a cv::Mat converted to grayscale

I'm not sure where to find this information.
I loaded in a .jpg and converted it to grayscale with cv::cvtColor(*input_image_grayscale, *input_image_grayscale, CV_BGR2GRAY);
I then try to reference a pixel with input_image_grayscale->at<float>(row, col) but get an assertion error. How do I determine the right data type (it's clearly not float) to use here? Thanks
For reference, I ran input_image_grayscale->type() and got 0.
The value returned by type() is just an integer that OpenCV declares with a preprocessor define. You can check it in a switch statement like this:
switch( matrixType )
{
case CV_8UC1:
.... check for other types
}
The 8U in that example refers to an unsigned char, and C1 refers to a single-channel image. CV_8UC1 is defined as 0, so that is your Mat's type, and you should use unsigned char as your reference type.
You can also use the function Mat::depth, which returns just the element-type part of the code; that is enough here, because you already know it is a single-channel image since it is grayscale.

How to build a QImage from known pixel values

I know the height and width, as well as each pixel value (from x,y location) that I want a QImage to be. How can I build a QImage knowing these values?
The second argument to setPixel() is a 24-bit RGB value in a single int; you can use the qRgb() helper to construct it, or just (red<<16) + (green << 8) + blue.
But unless it's a very small image, calling setPixel() for every pixel will take a long time.
If you have the data, I would call QImage::bits() to get an unsigned char pointer to the QImage data and set the R,G,B values directly for each pixel, or use memcpy().
You simply create the object (e.g. new QImage(640, 480);) and then use setPixel to change each pixel in the image to the value you want.