I have written a function that takes in a Mat image and transposes it onto the center of a blank image three times its size. It works, but I feel it can be improved in terms of efficiency.
void transposeFrame(cv::Mat &frame){
    // Canvas three times the size of the input, filled with red
    cv::Mat new_frame(frame.rows * 3, frame.cols * 3, CV_8UC3, cv::Scalar(0, 0, 255));
    // Center ROI: cv::Rect takes (x, y, width, height)
    cv::Rect dim(frame.cols, frame.rows, frame.cols, frame.rows);
    cv::Mat subview = new_frame(dim);
    frame.copyTo(subview);
    frame = new_frame;
}
Is there a better way to perform this operation?
I'd use something like this:
Mat frame_tpsed;
cv::transpose(frame, frame_tpsed);
frame_tpsed.copyTo(new_frame(Rect(frame.cols, frame.rows, frame_tpsed.cols, frame_tpsed.rows)));
Note that transpose is a free function, not a Mat method, and that assigning to a temporary submat header with = would not copy any pixels, which is why copyTo is used.
transpose in the OpenCV docs: http://docs.opencv.org/modules/core/doc/operations_on_arrays.html#transpose
I didn't try it out; forgot to mention that.
You can use the copyMakeBorder function. However, I don't think it will be considerably faster.
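For example, a minimal sketch of that approach (the red fill value is carried over from the question's code; the padding amounts are chosen to reproduce the 3x-size centered layout):

cv::Mat padded;
cv::copyMakeBorder(frame, padded,
                   frame.rows, frame.rows,   // top, bottom
                   frame.cols, frame.cols,   // left, right
                   cv::BORDER_CONSTANT, cv::Scalar(0, 0, 255));
// padded is (frame.rows * 3) x (frame.cols * 3) with frame at its center
frame = padded;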
I have some misunderstanding about OpenCV 4.1.0 and memcpy in C++. The question is: why does the resulting image look so zoomed in?
I read an image like this:
Mat img = imread("lena512.bmp", 1); // Black and White Image
namedWindow("Display window", WINDOW_AUTOSIZE);
imshow("Display window", img);
After this I have two byte arrays:
int width = img.cols, height = img.rows, channels = img.channels();
int inputSize = width * height * channels;
byte* pixels = new byte[inputSize];
byte* out = new byte[inputSize];
I copy img into the pixels array:
memcpy(pixels, img.data, inputSize * sizeof(byte));
And then I want to check whether the retrieved image is the same as the input:
Mat image = Mat(width, height , CV_8U);
memcpy(image.data, out, inputSize * sizeof(byte));
Mat img = imread("lena512.bmp", 1); // Black and White Image
That's the problem, the comment is a lie, and by using a magic number instead of a named constant, you can't easily tell that's the case. 1 in this context means IMREAD_COLOR -- i.e. the image is always read as a 3 channel BGR image.
However, after the shenanigans with memcpy and raw pointers, you create a new Mat in the following manner:
Mat image = Mat(width, height , CV_8U);
Note that CV_8U is equivalent to CV_8UC1. Hence, you create a single channel (grayscale) Mat, but give it 3-channel data.
Getting garbage as a result is the lesser issue. The much more serious issue is that you copy 3x as much data as the target pixel buffer can hold -- basically you clobber half a megabyte of memory that doesn't belong to the Mat. That can either end with a segfault, or some really hard to find bugs (in case you overwrite some memory used by other data structures).
Update: There's another issue that I've missed (thanks to @Micka for catching it). The order of parameters of the cv::Mat constructor is rows, columns, datatype. It appears you switched width and height, although since your input image appears to be square (i.e. width == height) it didn't matter.
The correct way to allocate the second Mat would be
Mat image = Mat(height, width, CV_8UC3);
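Putting the fixes together, a minimal sketch of a correct round trip might look like this (the buffer size is derived from the loaded Mat instead of hard-coded, and this assumes the Mat is continuous, which is the case for a freshly loaded image):

#include <opencv2/opencv.hpp>
#include <cstring>

cv::Mat img = cv::imread("lena512.bmp", cv::IMREAD_COLOR); // 3-channel BGR
int inputSize = img.rows * img.cols * img.channels();

unsigned char* pixels = new unsigned char[inputSize];
std::memcpy(pixels, img.data, inputSize); // Mat -> raw buffer

// Allocate the destination with matching dimensions and type: rows, then columns.
cv::Mat image(img.rows, img.cols, CV_8UC3);
std::memcpy(image.data, pixels, inputSize); // raw buffer -> Mat

delete[] pixels;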
I have a problem. I have an image that I have to split into two equal parts. I did it like this (the code compiles, everything is good):
Mat image_temp1 = image(Rect(0, 0, image.cols, image.rows/2)).clone();
Mat image_temp2 = image(Rect(0, image.rows/2, image.cols, image.rows/2)).clone();
Then I have to change each part independently and finally merge them into one. I have no idea how to do this correctly. How should I merge these two parts of the image into one image?
Example: http://i.stack.imgur.com/CLDK7.jpg
There are several ways to do this, but the best way I found is to use cv::hconcat(mat1, mat2, dst) for a horizontal merge or cv::vconcat(mat1, mat2, dst) for a vertical one.
Don't forget to take care of the empty-matrix case when merging!
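For the top/bottom split from the question, a minimal sketch might look like this (the empty checks reflect the caveat above):

cv::Mat merged;
if (image_temp1.empty()) {
    merged = image_temp2.clone();
} else if (image_temp2.empty()) {
    merged = image_temp1.clone();
} else {
    // Stack the modified top half on top of the modified bottom half.
    cv::vconcat(image_temp1, image_temp2, merged);
}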
It seems that cv::Mat::push_back is exactly what you are looking for:
C++: void Mat::push_back(const Mat& m) — adds elements to the bottom of the matrix.
Parameters: m – added line(s).
The methods add one or more elements to the bottom of the matrix. When elem is a Mat, its type and the number of columns must be the same as in the container matrix.
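Applied to the two halves from the question, a short sketch (this assumes both halves keep the same type and number of columns, as the docs require):

cv::Mat merged = image_temp1.clone(); // start from the modified top half
merged.push_back(image_temp2);        // append the bottom half's rows below it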
Optionally, you could create a new cv::Mat of the proper size and place the image parts directly into it:
Mat image_temp1 = image(Rect(0, 0, image.cols, image.rows/2)).clone();
Mat image_temp2 = image(Rect(0, image.rows/2, image.cols, image.rows/2)).clone();
...
cv::Mat result(image.rows, image.cols, image.type()); // the type argument was missing
image_temp1.copyTo(result(Rect(0, 0, image.cols, image.rows/2)));
image_temp2.copyTo(result(Rect(0, image.rows/2, image.cols, image.rows/2)));
How about this:
Mat newImage = image.clone();
Mat image_temp1 = newImage(Rect(0, 0, image.cols, image.rows/2));
Mat image_temp2 = newImage(Rect(0, image.rows/2, image.cols, image.rows/2));
By not using clone() to create the temp images, you're implicitly modifying newImage when you modify the temp images without the need to merge them again. After changing image_temp1 and image_temp2, newImage will be exactly the same as if you had split, modified, and then merged the subimages.
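For example, a minimal sketch (bitwise_not stands in here for whatever per-half change you actually need; it is not from the question):

cv::bitwise_not(image_temp1, image_temp1); // invert the top half in place
cv::bitwise_not(image_temp2, image_temp2); // invert the bottom half in place
// newImage already contains both edits; no merge step is needed.
cv::imshow("merged", newImage);
cv::waitKey(0);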
cvSetImageROI(dst, cvRect(0, 0,img1->width,img1->height) );
cvCopy(img1,dst,NULL);
cvResetImageROI(dst);
I was using these commands to set the image ROI, but now I am using a Mat object, and these functions take only an IplImage as a parameter. Is there any similar command for a Mat object?
Thanks for any help.
You can use the cv::Mat::operator() to get a reference to the selected image ROI.
Consider the following example where you want to perform Bitwise NOT operation on a specific image ROI. You would do something like this:
cv::Mat img = cv::imread("image.jpg", CV_LOAD_IMAGE_COLOR);
int x = 20, y = 20, width = 50, height = 50;
cv::Rect roi_rect(x,y,width,height);
cv::Mat roi = img(roi_rect);
/* ROI data pointer points to a location in the same memory as img. i.e.
No separate memory is created for roi data */
cv::Mat complement;
cv::bitwise_not(roi,complement);
complement.copyTo(roi);
cv::imshow("Image",img);
cv::waitKey();
The example you provided can be done as follows:
cv::Mat roi = dst(cv::Rect(0, 0,img1.cols,img1.rows));
img1.copyTo(roi);
Yes, you have a few options, see the docs.
The easiest way is usually to use a cv::Rect to specify the ROI:
cv::Mat img1(...);
cv::Mat dst(...);
...
cv::Rect roi(0, 0, img1.cols, img1.rows);
img1.copyTo(dst(roi));
I'm new to OpenCV and I was looking at the Canny tutorial for Edge Detection.
I was looking at how to resize a Mat I had just created. The code is this:
src = imread( impath );
...
dst.create( src.size(), src.type() );
Now I tried to resize the Mat with this:
resize(dst, dst, dst.size(), 50, 50, INTER_CUBIC);
But it does not seem to change anything.
I have two doubts:
1: Am I right to call resize() after create()?
2: How can I specify the dimensions of the Mat?
My goal is to resize the image, in case that was not clear.
You create the dst Mat with the same size as src. Also, when you call resize you pass both a destination size and fx/fy scale factors; you should pass only one of them:
Mat src = imread(...);
Mat dst;
resize(src, dst, Size(), 2, 2, INTER_CUBIC); // upscale 2x
// or
resize(src, dst, Size(1024, 768), 0, 0, INTER_CUBIC); // resize to 1024x768 resolution
UPDATE: from the OpenCV documentation:
Scaling is just resizing of the image. OpenCV comes with a function
cv2.resize() for this purpose. The size of the image can be specified
manually, or you can specify the scaling factor. Different
interpolation methods are used. Preferable interpolation methods are
cv2.INTER_AREA for shrinking and cv2.INTER_CUBIC (slow) &
cv2.INTER_LINEAR for zooming. By default, interpolation method used is
cv2.INTER_LINEAR for all resizing purposes. You can resize an input
image either of following methods:
import cv2
import numpy as np
img = cv2.imread('messi5.jpg')
res = cv2.resize(img,None,fx=2, fy=2, interpolation = cv2.INTER_CUBIC)
#OR
height, width = img.shape[:2]
res = cv2.resize(img,(2*width, 2*height), interpolation = cv2.INTER_CUBIC)
Also, in Visual C++, I tried both methods for shrinking and cv::INTER_AREA works significantly faster than cv::INTER_CUBIC (as mentioned by OpenCV documentation):
cv::Mat img_dst;
cv::resize(img, img_dst, cv::Size(640, 480), 0, 0, cv::INTER_AREA);
cv::namedWindow("Contours", CV_WINDOW_AUTOSIZE);
cv::imshow("Contours", img_dst);
I am using the 2.4.4 version of OpenCV. (I know it's a beta.)
There is an example for the cv::calcOpticalFlowSF method in the samples folder, called simpleflow_demo.cpp. But when I copy this demo and use it with my own input images, it starts processing and after a few seconds it crashes.
The documentation for the method is a little bit strange: it says the outputs are an x- and a y-flow, instead of the single cv::Mat& flow that the method actually wants.
Any ideas how to fix the problem and get the function working?
Try this simple demo that worked for me, then modify for your needs (display help from here):
Mat frame1 = imread("/home/radford/Desktop/1.png");
Mat frame2 = imread("/home/radford/Desktop/2.png");
namedWindow("flow");
Mat flow;
calcOpticalFlowSF(frame1, frame2, flow, 3, 2, 4);
Mat xy[2];
split(flow, xy);
//calculate angle and magnitude
Mat magnitude, angle;
cartToPolar(xy[0], xy[1], magnitude, angle, true);
//translate magnitude to range [0;1]
double mag_max;
minMaxLoc(magnitude, 0, &mag_max);
magnitude.convertTo(magnitude, -1, 1.0/mag_max);
//build hsv image
Mat _hsv[3], hsv;
_hsv[0] = angle;
_hsv[1] = Mat::ones(angle.size(), CV_32F);
_hsv[2] = magnitude;
merge(_hsv, 3, hsv);
//convert to BGR and show
Mat bgr;//CV_32FC3 matrix
cvtColor(hsv, bgr, COLOR_HSV2BGR);
imshow("flow", bgr);
waitKey(0);
In the example opencv/samples/cpp/simpleflow_demo.cpp there is a code block
if (frame1.type() != 16 || frame2.type() != 16) {
printf(APP_NAME "Images should be of equal type CV_8UC3\n");
exit(1);
}
So, grey images should be converted to CV_8UC3, for example using cvtColor(grey, grey3, CV_GRAY2RGB);
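A minimal sketch of that conversion before calling calcOpticalFlowSF (the file paths and the trailing parameters are carried over from the demo above):

Mat grey1 = imread("/home/radford/Desktop/1.png", CV_LOAD_IMAGE_GRAYSCALE);
Mat grey2 = imread("/home/radford/Desktop/2.png", CV_LOAD_IMAGE_GRAYSCALE);

Mat frame1, frame2;
cvtColor(grey1, frame1, CV_GRAY2RGB); // now type() == 16, i.e. CV_8UC3
cvtColor(grey2, frame2, CV_GRAY2RGB);

Mat flow;
calcOpticalFlowSF(frame1, frame2, flow, 3, 2, 4);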