I'm trying to partition a cv::Mat into smaller cv::Mats using OpenCV. I found this method online but I can't get it to work. I want to partition a cv::Mat of, say, 640 x 480 into blocks of, say, 32 x 32 and operate on each block individually as I go along.
Here is my code. curr_frame contains the total image as a cv::Mat. N_per_col and N_per_row contain the number of mb_sz x mb_sz blocks per column and row respectively.
void ClassName::partition( void )
{
    for( i = 0; i < N_per_col; i += mb_sz )
    {
        for( j = 0; j < N_per_row; j += mb_sz )
        {
            cv::Mat tmp_img( curr_frame, cv::Rect( i, j, mb_sz, mb_sz ) );
            // Do stuff with tmp_img here
        }
    }
}
This compiles fine but at runtime I get an image full of NULL pixels in tmp_img. curr_frame is definitely OK, as I can view it with imshow().
The documentation is not very clear on this, so any help would be greatly appreciated.
As I mentioned in the comments, the code is correct. To be sure, I tested it with OpenCV 2.4.1 and the result was as you would expect, so I guess the problem is with something else not shown here.
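For reference, here is a minimal standalone sketch of the block loop as I ran it (it steps by pixel offsets rather than block counts, and assumes the frame dimensions are exact multiples of mb_sz):

const int mb_sz = 32;                       // block size in pixels
for (int y = 0; y < curr_frame.rows; y += mb_sz)
{
    for (int x = 0; x < curr_frame.cols; x += mb_sz)
    {
        // Header-only view into curr_frame; no pixel data is copied.
        cv::Mat tmp_img(curr_frame, cv::Rect(x, y, mb_sz, mb_sz));
        // Do stuff with tmp_img here
    }
}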
My title may not be clear enough, so please look carefully at the following description. Thanks in advance.
I have an RGB image and a binary mask image:
Mat img = imread("test.jpg");
Mat mask = Mat::zeros(img.rows, img.cols, CV_8U);
Set some entries of the mask to one, and assume the number of ones is N. The nonzero coordinates are then known, and based on these coordinates we can obtain the corresponding RGB pixel values of the original image. I know this can be accomplished by the following code:
Mat colors = Mat::zeros(N, 3, CV_8U);
int counter = 0;
for (int i = 0; i < mask.rows; i++)
{
    for (int j = 0; j < mask.cols; j++)
    {
        if (mask.at<uchar>(i, j) == 1)
        {
            colors.at<uchar>(counter, 0) = img.at<Vec3b>(i, j)[0];
            colors.at<uchar>(counter, 1) = img.at<Vec3b>(i, j)[1];
            colors.at<uchar>(counter, 2) = img.at<Vec3b>(i, j)[2];
            counter++;
        }
    }
}
However, this double for loop costs too much time. I was wondering if there is a faster method to obtain colors; I hope you can understand what I am trying to convey.
PS: If I could use Python, this could be done in a single line:
colors = img[mask == 1]
The .at() method is the slowest way to access Mat values in C++. The fastest is to use pointers, but best practice is an iterator. See the OpenCV tutorial on scanning images.
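For illustration, a minimal iterator-based sketch for this particular task (my own example, assuming img is CV_8UC3 and mask is CV_8U of the same size):

std::vector<cv::Vec3b> colors;
auto itImg  = img.begin<cv::Vec3b>();
auto itMask = mask.begin<uchar>();
for (; itMask != mask.end<uchar>(); ++itMask, ++itImg)
{
    if (*itMask == 1)             // keep pixels where the mask is set
        colors.push_back(*itImg);
}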
Just a note: even though Python's syntax is nice for something like this, it still has to loop through all of the elements at the end of the day, and since it adds interpreter overhead on top of that, it is in practice slower than C++ loops with pointers. You need to loop through all the elements regardless of the library, since every element is compared against the mask.
If you are flexible about using another open-source C++ library, try Armadillo. You can do all linear algebra operations with it, and you can reduce the above code to one line (similar to your Python snippet).
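For instance, a single-channel sketch of the Armadillo route (my own example; it assumes the image and mask have already been copied into arma::mat objects, which is the part not shown here):

arma::uvec idx    = arma::find(maskA == 1);  // linear indices where the mask is 1
arma::vec  colorsA = imgA.elem(idx);         // the corresponding pixel values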
Alternatively, try the findNonZero() function to get the coordinates of all non-zero values in the image. Check this: https://stackoverflow.com/a/19244484/7514664
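A rough sketch of that approach (assuming img is CV_8UC3, mask is CV_8U, and the mask only contains 0 and 1):

std::vector<cv::Point> pts;
cv::findNonZero(mask, pts);                          // all (x, y) where mask != 0
cv::Mat colors((int)pts.size(), 3, CV_8U);
for (int k = 0; k < (int)pts.size(); ++k)
{
    const cv::Vec3b& px = img.at<cv::Vec3b>(pts[k]); // Point overload of at()
    colors.at<uchar>(k, 0) = px[0];
    colors.at<uchar>(k, 1) = px[1];
    colors.at<uchar>(k, 2) = px[2];
}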
Compile with optimization enabled, try profiling this version and tell us if it is faster:
vector<Vec3b> colors;
if (img.isContinuous() && mask.isContinuous()) {
    // Single pass over the raw data when both matrices are continuous.
    auto pimg = img.ptr<Vec3b>();
    for (auto pmask = mask.datastart; pmask < mask.dataend; ++pmask, ++pimg) {
        if (*pmask)
            colors.emplace_back(*pimg);
    }
}
else {
    // Fall back to row-wise pointers.
    for (int r = 0; r < img.rows; ++r) {
        auto prowimg = img.ptr<Vec3b>(r);
        auto prowmask = mask.ptr(r);   // note: mask, not img
        for (int c = 0; c < img.cols; ++c) {
            if (prowmask[c])
                colors.emplace_back(prowimg[c]);
        }
    }
}
If you know the size of colors, reserve the space for it beforehand.
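For example (a small sketch; it assumes the mask is single-channel so cv::countNonZero applies):

colors.reserve(static_cast<size_t>(cv::countNonZero(mask)));  // one allocation up front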
I'm trying to print an image using OpenCV defining a 400x400 Mat:
plot2 = cv::Mat(400,400, CV_8U, 255);
But when I try to print the points, something strange happens. The y coordinate only goes up to the first 100 values. That is, if I print the point (50, 100), it does not appear at the 100/400th part of the columns, but at the end. Somehow, 400 columns have turned into 100.
For example, when running this:
for (int j = 0; j < 95; ++j){
    plot2.at<int>(20, j) = 0;
}
cv::imshow("segunda pared", plot2);
It shows (in a screenshot, omitted here) a line that goes to 95 but occupies almost all of the 400 points, when it should only occupy 95/400ths of the screen.
What am I doing wrong?
When you defined your cv::Mat, you clearly stated that its type is CV_8U:
plot2 = cv::Mat(400,400, CV_8U, 255);
But when you access it, you are claiming that its type is int, which is usually a signed 32-bit type, not an unsigned 8-bit one. So the solution is:
for (int j = 0; j < 95; ++j){
    plot2.at<uchar>(20, j) = 0;
}
Important note: be aware that OpenCV uses the standard C++ types, not the fixed-width ones, so there is no need to use fixed-size types like uint16_t or similar: when compiling OpenCV and your code on another platform, both of them will change together.
BTW, a good way to iterate through your cv::Mat is:
for (int row = 0; row < my_mat.rows; ++row){
    auto row_ptr = my_mat.ptr<uchar>(row);
    for (int col = 0; col < my_mat.cols; ++col){
        // do whatever you want with row_ptr[col] (read/write)
    }
}
I'm coding in C++ with OpenCV on Linux. I've found this similar question, although I can't quite get it to work.
What I want to do is read in a video file and store a certain number of frames in an array. Over that number, I want to delete the first frame and add the most recent frame to the end of the array.
Here's my code so far.
VideoCapture cap("Video.mp4");
Mat cameraInput, finalOutputImage;
int width = 2;
int height = 2;
Rect roi = Rect(100, 100, width, height);
vector<Mat> matArray;
int numberFrames = 6;
int currentFrameNumber = 0;

for (;;){
    cap >> cameraInput;
    cameraInput(roi).copyTo(finalOutputImage);
    if(currentFrameNumber < numberFrames){
        matArray.push_back(finalOutputImage);
    }else if(currentFrameNumber <= numberFrames){
        for(int i = 0; i < matArray.size()-1; i++){
            swap(matArray[i], matArray[i+1]);
        }
        matArray.pop_back();
        matArray.push_back(finalOutputImage);
    }
    currentFrameNumber++;
}
My understanding of mats says this is probably a problem with pointers; I'm just not sure how to fix it. When I look at the array of mats, every element is the same frame. Thank you.
There's no need for all this complication if you were to make use of C++'s highly useful STL.
if( currentFrameNumber >= numberFrames )
    matArray.erase( matArray.begin() );           // std::vector has erase(), not remove()
matArray.push_back( finalOutputImage.clone() );   // check out #berak's comment
should do it.
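For completeness, a rough sketch of how that slots into the question's capture loop (same variable names as the question; the empty-frame check is my addition):

for (;;) {
    cap >> cameraInput;
    if (cameraInput.empty()) break;                  // end of the video file
    cameraInput(roi).copyTo(finalOutputImage);

    if (currentFrameNumber >= numberFrames)
        matArray.erase(matArray.begin());            // drop the oldest frame
    matArray.push_back(finalOutputImage.clone());    // clone so each entry owns its pixels
    currentFrameNumber++;
}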
I have a for loop that takes an OpenCV Mat object of n x n dimensions and returns a Mat object of n^2 x 1 dimensions. It works, but when I time the method it takes between 1 and 2 milliseconds. Since I am calling this method 3 or 4 million times, my program takes about an hour to run. A research paper I'm referencing suggests the author was able to produce a program with the same function that ran in only a few minutes, without running any threads in parallel. After timing each section of code, the only portion taking >1 ms is the following method.
static Mat mat2vec(Mat mat)
{
    Mat toReturn = Mat(mat.rows*mat.cols, 1, mat.type());
    float* matPt;
    float* retPt;
    for (int i = 0; i < mat.rows; i++) // rows
    {
        matPt = mat.ptr<float>(i);
        for (int j = 0; j < mat.row(i).cols; j++) // cols
        {
            retPt = toReturn.ptr<float>(i*mat.cols + j);
            retPt[0] = matPt[j];
        }
    }
    return toReturn;
}
Is there any way that I can increase the speed at which this method converts an n x n matrix into an n^2 x 1 matrix (or cv::Mat representing a vector)?
That solved most of the problem, @berak; it's running a lot faster now. However, in some cases like the one below, the Mat is not continuous. Any idea how I can get an ROI in a continuous Mat?
My method now looks like this:
static Mat mat2vec(Mat mat)
{
    if ( ! mat.isContinuous() )
    {
        mat = mat.clone();
    }
    return mat.reshape(1, 2500);
}
Problems occur at:
Mat patch = Mat(inputSource, Rect((inputPoint.x - (patchSize / 2)), (inputPoint.y - (patchSize / 2)), patchSize, patchSize));
Mat puVec = mat2vec(patch);
Assuming that the data in your Mat is continuous, Mat::reshape() for the win. And it's almost for free: only rows/cols get adjusted, no memory is moved. E.g., mat = mat.reshape(1,1) would make a 1D float array of it.
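Put together, a minimal sketch of the reshape-based conversion (assuming a single-channel float input; the clone only happens for non-continuous data such as an ROI):

static cv::Mat mat2vec(const cv::Mat& m)
{
    cv::Mat flat = m.isContinuous() ? m : m.clone(); // reshape needs contiguous data
    return flat.reshape(1, flat.rows * flat.cols);   // n x n  ->  n*n rows, 1 column
}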
Seeing this in OpenCV 3.2, but the function is now mat.reshape(1).
I'm trying to subdivide a gray frame into multiple small squares and then calculate the mean color value of each one, so I can build a result frame that displays those values. Here's what I've done:
int main (){
    cv::Mat frame = cv::imread("test2.jpg", 0), result, myROI;
    int key = 0;
    int roiSize = 10;
    cv::Scalar mean(0);
    cv::Mat meanS;
    meanS = cv::Mat::zeros(frame.rows/roiSize, frame.cols/roiSize, CV_32FC1);
    cv::Rect roi;

    if(frame.channels() != 1)
        cv::cvtColor(frame, frame, CV_BGR2GRAY);

    for (int i = 0; i < frame.cols/roiSize; i++){
        for (int j = 0; j < frame.rows/roiSize; j++){
            roi.x = i*roiSize;
            roi.y = j*roiSize;
            roi.height = roiSize;
            roi.width = roiSize;
            myROI = frame(roi);
            cv::imshow("myRoi", myROI);
            mean = cv::mean(myROI);
            std::cout << mean[0] << std::endl;
            meanS.at<float>(j,i) = mean[0];
        }
    }
    //meanS *=1/255; // I've tried this one also, it didn't help !
    cv::imshow("the result", meanS);
    cv::waitKey(0);
    return 0;
}
In the console the values are correct, but when I display the result with imshow I get only a white frame!
Any idea how I can solve this? Thanks in advance!
Your comment line is actually correct, but it's doing integer division and thus multiplying by zero. Just add a dot at the end, i.e. meanS *= 1/255.;
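An alternative I sometimes use (my own sketch, not part of the original answer) is to convert the float means to 8-bit before displaying, which sidesteps the scaling question entirely:

cv::Mat display;
meanS.convertTo(display, CV_8U);     // the means are already in 0..255, so no extra scaling
cv::imshow("mean blocks", display);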