I declared a Mat object D4, initialized to zero, with the given dimensions and datatype. Then, to display it at a smaller size, I wrote this:
Mat D4 = Mat::zeros(7168, 7424, CV_32FC1);
Mat res6;
for (int i = 0; i < 7168; i++)
    for (int j = 0; j < 7424; j++)
    {
        DC_.at<uchar>(i, j) = (unsigned char)D4.at<float>(i, j);
    }
resize(DC_, res6, Size(512, 512));
imshow("Test", res6);
I expect a completely black image, but I get a patch of gray values in the bottom-right corner (that patch resembles my input image at that exact location). Why does this happen? What is going wrong?
Can you please check whether the problem still occurs if you use this snippet instead?
Mat D4 = Mat::zeros(DC_.rows, DC_.cols, CV_32FC1);
Mat res6;
for (int i = 0; i < DC_.rows; i++)
    for (int j = 0; j < DC_.cols; j++)
    {
        DC_.at<uchar>(i, j) = (unsigned char)D4.at<float>(i, j);
    }
resize(DC_, res6, Size(512, 512));
imshow("Test", res6);
Related
I am trying to sum together the BGR values of an entire column of a ROI of a camera image. Right now, I loop through the matrix and assign each BGR value to a variable to put into my 2D vector. I am sure I am doing this wrong, and that there is a much better way.
Vec3b rgbVal;
double values = 0;
vector<vector<double>> j;
for (int r = 0; r < img1.rows; ++r) {
    vector<double> colVal;
    for (int c = 0; c < img1.cols; ++c) {
        rgbVal = img1.at<Vec3b>(r, c); // reads the pixel as Vec3b for BGR access
        // Accesses the BGR values of each pixel in the ROI and adds them to the running sum
        values += static_cast<double>(rgbVal[0]) + static_cast<double>(rgbVal[1]) + static_cast<double>(rgbVal[2]);
        colVal.push_back(values);
    }
    j.push_back(colVal);
    values = 0;
}
So in my ROI, I am trying to sum together ALL the BGR values of every pixel in each column. Can someone help me out with this and point me in the right direction?
Thanks!
I am trying to output an image from my mex file back to my MATLAB file, but when I open it in MATLAB it is not correct.
The output image within the mex file is correct.
I have tried switching the orientation of the mwSize, as well as swapping i and j in new_img.at<int>(j, i);
Mat image = imread(mxArrayToString(prhs[0]));
Mat new_img(H, W, image.type(), Scalar(0));
// some operations on new_img
imshow( "gmm image", image );   //shows the original image
imshow( "gmm1 image", new_img ); //shows the output image
waitKey( 200 );                  //both images are the same size as desired
mwSize nd = 2;
mwSize dims[] = {W, H};
plhs[0] = mxCreateNumericArray(nd, dims, mxUINT8_CLASS, mxREAL);
if (plhs[0] == NULL) {
    mexErrMsgTxt("Could not create mxArray.\n");
}
char* outMat = (char*) mxGetData(plhs[0]);
for (int i = 0; i < H; i++)
{
    for (int j = 0; j < W; j++)
    {
        outMat[i + j*image.rows] = new_img.at<int>(j, i);
    }
}
This is in the MATLAB file:
gmmMask = GmmMex2(imgName,rect);
imshow(gmmMask); % not the same as the output image. somewhat resembles it, but not correct.
Because you have alluded to this being a colour image, this means that you have three slices of the matrix to consider. Your code only considers one slice. First off you need to make sure that you declare the right size of the image. In MATLAB, the first dimension is always the number of rows while the second dimension is the number of columns. Now you have to add the number of channels too on top of this. I'm assuming this is an RGB image so there are three channels.
Therefore, change your dims to:
mwSize nd = 3;
mwSize dims[] = {H, W, nd};
Changing nd to 3 is important, as this allows you to create a 3D matrix; you currently only have a 2D one. Next, make sure that you are accessing the image pixels at the right location in the cv::Mat object. The way you access the pixels in the nested pair of for loops assumes a row-major layout (iterating over the columns first, then the rows). As such, you need to interchange i and j, since i accesses the rows and j accesses the columns. You will also need to access the channel of the colour image, so you'll need another for loop to compensate.

For the grayscale case, you have properly compensated for the column-major memory layout of the MATLAB MEX matrix: j accesses the columns, and you skip ahead by the number of rows to reach the next column. However, to accommodate a colour image, you must also skip ahead by image.rows*image.cols to reach the next layer of pixels.
Therefore your for loop should now be:
for (int k = 0; k < nd; k++) {
    for (int i = 0; i < H; i++) {
        for (int j = 0; j < W; j++) {
            // A 3-channel Mat is a 2D matrix of Vec3b elements,
            // so the channel is selected with [], not a third index
            outMat[k*image.rows*image.cols + i + j*image.rows] = new_img.at<cv::Vec3b>(i, j)[k];
        }
    }
}
Take note that the container of pixels is most likely 8-bit unsigned characters, so the element access must go through uchar (the element type of Vec3b), not int. This may also explain why your program was crashing.
cv::Mat in = cv::imread("SegmentedImage.png");

// vector with all non-white point positions
std::vector<cv::Point> nonWhiteList;
nonWhiteList.reserve(in.rows*in.cols);

// add all non-white points to the vector
for (int j = 0; j < in.rows; ++j)
{
    for (int i = 0; i < in.cols; ++i)
    {
        // if not white: add to the list
        if (in.at<cv::Vec3b>(j, i) != cv::Vec3b(255, 255, 255))
        {
            nonWhiteList.push_back(cv::Point(i, j));
        }
    }
}
cv::Mat BKGR = imread("photo_booth_Cars.png", CV_LOAD_IMAGE_COLOR); //1529x736
I need to write the vector<Point> nonWhiteList to the image BKGR. How do I do that?
Basically, I need to remove the white background from the image and put the non-white points on another background image. I have researched grabCut and findContours a lot.
I am completely new to OpenCV. Thanks so much for the help.
cv::Mat BKGR = imread("photo_booth_Cars.png", CV_LOAD_IMAGE_COLOR); //1529x736
for (int j = 0; j < in.rows; ++j)
{
    for (int i = 0; i < in.cols; ++i)
    {
        if (in.at<cv::Vec3b>(j, i) != cv::Vec3b(255, 255, 255))
        {
            BKGR.at<cv::Vec3b>(j, i) = in.at<cv::Vec3b>(j, i);
        }
    }
}
cv::imwrite("newFinalImage.png", BKGR);
If the images have the same dimensions, then this makes sense; otherwise it is difficult to copy directly unless you know the two cameras' parameters. However, you may use interpolation (e.g. cv::resize) to bring the two images to the same size.
If the images are the same size, why do you need to create a std::vector and then copy it into another cv::Mat? You can achieve this in the same loop without extra computational overhead, as simply as filling an array. However, your question is ambiguous.
I am trying to loop over a new zero matrix and change every pixel to white.
cv::Mat background = cv::Mat::zeros(frame.rows, frame.cols, frame.type());
for (int i = 0; i < frame.rows; i++)
{
    for (int j = 0; j < frame.cols; j++)
    {
        background.at<char>(i, j) = 255;
    }
}
At the end I should have a totally white matrix, but I don't understand why I instead get this picture:
Thanks
EDIT:
solution:
cv::Mat background = cv::Mat::zeros(frame.rows, frame.cols, frame.type());
for (int i = 0; i < frame.rows; i++)
{
    for (int j = 0; j < frame.cols; j++)
    {
        Vec3b bgrPixel = Vec3b(255, 255, 255);
        background.at<Vec3b>(i, j) = bgrPixel;
        // background.at<char>(i,j)=255;
    }
}
Thank you !
Your matrix is made up of i*j pixels - each pixel is made up of 3 (RGB) or 4 (RGBA) chars (bytes/channels). You are only looping over the first i*j bytes of the matrix, when you need to be looping over i*j pixels. I'm guessing whatever type you're passing in as the third argument is the 'pixel type'.
Look here for an example usage: OpenCV get pixel channel value from Mat image
This question is a continuation of my question in this link. After I get the mat matrix, the 3x1 matrix is multiplied by the 3x3 mat matrix.
for (int i = 0; i < im.rows; i++)
{
    for (int j = 0; j < im.cols; j++)
    {
        for (int k = 0; k < nChannels; k++)
        {
            zay(k) = im.at<Vec3b>(i, j)[k]; // get the pixel value and assign it to zay
        }
        // convert to mat, so I can easily multiply it
        mat.at<double>(0, 0) = zay[0];
        mat.at<double>(1, 0) = zay[1];
        mat.at<double>(2, 0) = zay[2];
We get a 3x1 mat matrix and multiply it with the filter:
multiply = Filter*mat;
And I get a 3x1 mat matrix. I want to assign the values into my new 3-channel mat matrix; how do I do that? I want to construct an image using this operation. I'm not using a convolution function, because I think the result would be different. I'm working in C++, and I want to change a coloured image to another colour using matrix multiplication. I got the algorithm from this paper. In that paper, we need to multiply several matrices to get the result.
OpenCV gives you a reshape function to change the number of channels/rows/columns implicitly:
http://docs.opencv.org/modules/core/doc/basic_structures.html#mat-reshape
This is very efficient since no data is copied, only the matrix header is changed.
try:
cv::Mat mat3Channels = mat.reshape(3,1);
I didn't test it, but it should work. It should give you a 1x1 matrix with one 3-channel element (Vec3d). If you want a Vec3b element instead, you have to convert it:
cv::Mat mat3ChannelsVec3b;
mat3Channels.convertTo(mat3ChannelsVec3b, CV_8UC3);
If you just want to write your mat back, it might be better to create a single Vec3b element instead:
cv::Vec3b element3Channels;
element3Channels[0] = cv::saturate_cast<uchar>(multiply.at<double>(0,0));
element3Channels[1] = cv::saturate_cast<uchar>(multiply.at<double>(1,0));
element3Channels[2] = cv::saturate_cast<uchar>(multiply.at<double>(2,0));
But take care in all cases that Vec3b elements can't store values below 0 or above 255; cv::saturate_cast clamps out-of-range results into that range instead of letting them wrap around.
Edit: After reading your question again, you ask how to assign...
I guess you have another matrix:
cv::Mat outputMatrix = cv::Mat(im.rows, im.cols, CV_8UC3, cv::Scalar(0,0,0));
Now, to assign multiply to the element in outputMatrix, you can do:
cv::Vec3b element3Channels;
element3Channels[0] = cv::saturate_cast<uchar>(multiply.at<double>(0,0));
element3Channels[1] = cv::saturate_cast<uchar>(multiply.at<double>(1,0));
element3Channels[2] = cv::saturate_cast<uchar>(multiply.at<double>(2,0));
outputMatrix.at<cv::Vec3b>(i, j) = element3Channels;
If you need alpha channel too, you can adapt that easily.