I've got a problem iterating through the coordinates of an OpenCV Mat:
cv::Mat picture = cv::Mat(depth.rows, depth.cols, CV_32F);
for (int y = 0; y < depth.rows; ++y)
{
for (int x = 0; x < depth.cols; ++x)
{
float depthValue = (float) depth.at<float>(y,x);
picture.at<float>(y, x) = depthValue;
}
}
cv::namedWindow("picture", cv::WINDOW_AUTOSIZE);
cv::imshow("picture", picture);
cv::waitKey(0);
Resulting pictures:
before (depth)
after (picture)
It looks like the result is
1. scaled, and
2. cut off at about a third of the width.
Any ideas?
Looks like your depth image has 3 channels.
All channel values are the same for a BW image (B=G=R), so you have BGRBGRBGR instead of GrayGrayGray, and you are accessing it as if it had 1 channel; that is why the image is stretched 3 times horizontally.
Try cv::cvtColor(depth, depth, cv::COLOR_BGR2GRAY) before running the loop.
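A minimal sketch of the full fix, assuming depth really arrives as 8-bit BGR (the loop from the question then works unchanged on depthF):
cv::Mat gray, depthF;
cv::cvtColor(depth, gray, cv::COLOR_BGR2GRAY); // collapse BGRBGR... to one channel
gray.convertTo(depthF, CV_32F);                // 8-bit -> float, so at<float>(y, x) is valid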
Your iteration code is right.
What's wrong is the assumption about the type of the depth matrix.
As suggested, it could be CV_8UC3 judging from the distortion.
To get the pixel value of such a CV_8UC3 matrix you can use:
cv::Vec3b depthValue = depth.at<cv::Vec3b>(y,x);
Then do whatever you want with this vector.
For example, if your depth is of type CV_8UC3 with the distance encoded in the first two bytes (MSB first), you can get the distance with:
float distance = depthValue[0] * 256 + depthValue[1];
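A hedged sketch that applies this over the whole map, assuming the two-byte encoding described above:
cv::Mat distances(depth.rows, depth.cols, CV_32F);
for (int y = 0; y < depth.rows; ++y)
{
    for (int x = 0; x < depth.cols; ++x)
    {
        cv::Vec3b v = depth.at<cv::Vec3b>(y, x);
        distances.at<float>(y, x) = v[0] * 256.0f + v[1]; // MSB first
    }
}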
I'm using an STMap to map a .jpg image using remap().
I loaded my STMap, split the channels and converted each channel matrix to CV_32FC1.
I checked them and it worked: each matrix displays correctly and all of their values are between 0.0 and 1.0.
However, when I try to use the remap() function:
Mat dst;
remap(image4, dst,map_x,map_y,INTER_LINEAR,BORDER_CONSTANT,Scalar(0,0,0));
imshow( "Result", dst );
It just displays a black image.
image4 = my .jpg image
map_x = grayscale CV_32FC1 (red channel of the original STMap)
map_y = grayscale CV_32FC1 (green channel of the original STMap)
What could be the problem?
Thanks!
A black image from cv::remap is usually caused by passing offsets instead of absolute locations in the map(s).
Optical flow algorithms usually export motion vectors, not absolute positions, whereas cv::remap expects the absolute (subpixel) coordinate to sample from. The same applies to normalized STMap values in [0, 1]: they must be scaled by the image width and height to get absolute pixel coordinates.
To convert between the two, starting with a CV_32FC2 flow matrix we can do something like this:
// Convert from offsets to absolute locations.
Mat mapx(flow.size(), CV_32FC1);
Mat mapy(flow.size(), CV_32FC1);
for (int row = 0; row < flow.rows; row++)
{
for (int col = 0; col < flow.cols; col++)
{
Point2f f = flow.at<Point2f>(row, col);
mapx.at<float>(row, col) = col + f.x;
mapy.at<float>(row, col) = row + f.y;
}
}
Then mapx and mapy can be used in remap.
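For completeness, a minimal sketch of the call with those maps, using the question's image4:
cv::Mat dst;
cv::remap(image4, dst, mapx, mapy, cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar(0, 0, 0));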
This question is a continuation of my question in this link. After I get the mat matrix, the 3x1 matrix is multiplied by a 3x3 mat matrix.
for (int i = 0; i < im.rows; i++)
{
for (int j = 0; j < im.cols; j++)
{
for (int k = 0; k < nChannels; k++)
{
zay(k) = im.at<Vec3b>(i, j)[k]; // get the pixel value and assign it to zay
}
// convert to a Mat so it can easily be multiplied
mat.at<double>(0, 0) = zay[0];
mat.at<double>(1, 0) = zay[1];
mat.at<double>(2, 0) = zay[2];
We get a 3x1 mat matrix and multiply it with the filter:
multiply = Filter*mat;
This gives a 3x1 mat matrix. I want to assign the values into my new 3-channel mat matrix; how do I do that? I want to construct an image using this operation. I'm not using a convolution function, because I think the result would be different. I'm working in C++, and I want to change a colored image to another color using matrix multiplication. I got the algorithm from this paper; in that paper, several matrices need to be multiplied to get the result.
OpenCV gives you a reshape function to change the number of channels/rows/columns implicitly:
http://docs.opencv.org/modules/core/doc/basic_structures.html#mat-reshape
This is very efficient since no data is copied, only the matrix header is changed.
try:
cv::Mat mat3Channels = mat.reshape(3,1);
I didn't test it, but it should work. It should give you a 1x1 matrix with one 3-channel element (Vec3d). If you want a Vec3b element instead, you have to convert it:
cv::Mat mat3ChannelsVec3b;
mat3Channels.convertTo(mat3ChannelsVec3b, CV_8UC3);
If you just want to write your mat back, it might be better to create a single Vec3b element instead:
cv::Vec3b element3Channels;
element3Channels[0] = cv::saturate_cast<uchar>(multiply.at<double>(0,0));
element3Channels[1] = cv::saturate_cast<uchar>(multiply.at<double>(1,0));
element3Channels[2] = cv::saturate_cast<uchar>(multiply.at<double>(2,0));
But take care in all cases: Vec3b elements can't store values < 0 or > 255, which is why cv::saturate_cast is used here to clamp them.
Edit: After reading your question again, you ask how to assign...
I guess you have another matrix:
cv::Mat outputMatrix = cv::Mat(im.rows, im.cols, CV_8UC3, cv::Scalar(0,0,0));
Now, to assign multiply to the element in outputMatrix, you can do:
cv::Vec3b element3Channels;
element3Channels[0] = cv::saturate_cast<uchar>(multiply.at<double>(0,0));
element3Channels[1] = cv::saturate_cast<uchar>(multiply.at<double>(1,0));
element3Channels[2] = cv::saturate_cast<uchar>(multiply.at<double>(2,0));
outputMatrix.at<Vec3b>(i, j) = element3Channels;
If you need alpha channel too, you can adapt that easily.
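As a side note, OpenCV's cv::transform performs exactly this per-element matrix multiplication in one call; a minimal sketch, assuming Filter is a 3x3 CV_64F matrix:
cv::Mat outputMatrix;
cv::transform(im, outputMatrix, Filter); // multiplies each Vec3b pixel by Filter, saturating to 8 bit
That avoids the per-pixel reshape/convert round trip entirely.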
I have a grayscale image, and I want to crop a rectangle of size w x h centered at pixel (x,y). The problem is, I don't want the crop to look boxy, so around the edge I want to Gaussian-blur the values so that they smoothly transition to zero. Any ideas on how to do this?
Currently I am doing:
int bb_min_x = center_x - width/2.0;
int bb_max_x = center_x + width/2.0;
int bb_min_y = center_y - height/2.0;
int bb_max_y = center_y + height/2.0;
for(int y = bb_min_y; y <= bb_max_y; y++){
for(int x = bb_min_x; x <= bb_max_x; x++){
final_img.at<uchar>(y,x) = original_img.at<uchar>(y,x);
}
}
Try this function: it computes the distance from your input rectangle and uses that as a fading factor.
cv::Mat cropFade(cv::Mat _img, cv::Rect _roi, int _maxFadeDistance)
{
cv::Mat fadeMask = cv::Mat::ones(_img.size(), CV_8UC1);
cv::rectangle(fadeMask, _roi, cv::Scalar(0),-1);
cv::imshow("mask",fadeMask>0);
cv::Mat dt;
cv::distanceTransform(fadeMask > 0, dt, CV_DIST_L2 ,CV_DIST_MASK_PRECISE);
// fade to a maximum distance:
double maxFadeDist;
if(_maxFadeDistance > 0)
maxFadeDist = _maxFadeDistance;
else
{
// find min/max vals
double min,max;
cv::minMaxLoc(dt,&min,&max);
maxFadeDist = max;
}
//dt = 1.0-(dt* 1.0/max); // values between 0 and 1 since min val should always be 0
dt = 1.0-(dt* 1.0/maxFadeDist); // values between 0 and 1 in fading region
cv::imshow("blending mask", dt);
cv::Mat imgF;
_img.convertTo(imgF,CV_32FC3);
std::vector<cv::Mat> channels;
cv::split(imgF,channels);
// multiply pixel value with the quality weights for image 1
for(unsigned int i=0; i<channels.size(); ++i)
channels[i] = channels[i].mul(dt);
cv::Mat outF;
cv::merge(channels,outF);
cv::Mat out;
outF.convertTo(out,CV_8UC3);
return out;
}
Calling that with cv::Mat out = cropFade(in, cv::Rect(in.cols/4, in.rows/4, in.cols/2, in.rows/2), in.cols/8); gives me these results for a Lena image with the specified rect:
This is the result for full-image fading from the same rect:
One simple approach:
// Create a weight image
int border=25;
cv::Mat_<float> rect=cv::Mat_<float>::zeros(height,width);
cv::rectangle(rect,cv::Rect(border/2,border/2,width-border,height-border),cv::Scalar(1),-1);
cv::Mat_<float> weights, kernel=cv::getStructuringElement(cv::MORPH_ELLIPSE,cv::Size(border,border));
int nnz = cv::countNonZero(kernel);
cv::filter2D(rect,weights,-1,kernel/nnz);
This creates a weight image like the following:
Then you use it to fade your image out:
for(int y = bb_min_y; y <= bb_max_y; y++){
for(int x = bb_min_x; x <= bb_max_x; x++){
float w = weights.at<float>(y-bb_min_y,x-bb_min_x);
uchar val = original_img.at<uchar>(y,x);
final_img.at<uchar>(y,x) = cv::saturate_cast<uchar>(w*val);
}
}
If you turn your bounding box into a contour, you can use pointPolygonTest to calculate each pixel's distance to the edge of the bounding box. If you then scale the color values toward zero depending on this distance, you get a fade effect.
See this page for an example.
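For illustration, a minimal sketch of that idea, reusing the variables from the question (the fade width fadeDist is my assumption):
std::vector<cv::Point2f> contour;
contour.push_back(cv::Point2f(bb_min_x, bb_min_y));
contour.push_back(cv::Point2f(bb_max_x, bb_min_y));
contour.push_back(cv::Point2f(bb_max_x, bb_max_y));
contour.push_back(cv::Point2f(bb_min_x, bb_max_y));
float fadeDist = 20.0f; // width of the fade band in pixels (assumed)
for(int y = bb_min_y; y <= bb_max_y; y++){
    for(int x = bb_min_x; x <= bb_max_x; x++){
        // signed distance: positive inside the rectangle, negative outside
        float d = (float)cv::pointPolygonTest(contour, cv::Point2f(x, y), true);
        float w = std::max(0.0f, std::min(1.0f, d / fadeDist));
        final_img.at<uchar>(y,x) = cv::saturate_cast<uchar>(w * original_img.at<uchar>(y,x));
    }
}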
I am trying to work with each pixel from a depth map (I am implementing image segmentation). I don't know how to work with pixels from an image whose depth is greater than 1.
This sample code copies a depth map to another cv::Mat pixel by pixel. It works fine if I normalize it (the depth of the normalized image is 1), but it doesn't work with depth = 3, because .at<uchar> is the wrong operation for that depth.
cv::Mat res;
cv::StereoBM bm(CV_STEREO_BM_NORMALIZED_RESPONSE);
bm(left, right, res);
std::cout<<"type "<<res.type()<<" depth "<<res.depth()<<" channels "<<res.channels()<<"\n";// type 3 depth 3 channels 1
cv::Mat tmp = cv::Mat::zeros(res.rows, res.cols, res.type());
for(int i = 0; i < res.rows; i++)
{
for(int j = 0; j < res.cols; j++)
{
tmp.at<uchar>(i, j) = res.at<uchar>(i, j);
//std::cout << (int)res.at<uchar>(i, j) << " ";
}
//std::cout << std::endl;
}
cv::imshow("tmp", normalize(tmp));
cv::imshow("res", normalize(res));
The normalize function:
cv::Mat normalize(cv::Mat const &depth_map)
{
double min;
double max;
cv::minMaxIdx(depth_map, &min, &max);
cv::Mat adjMap;
cv::convertScaleAbs(depth_map, adjMap, 255 / max);
return adjMap;
}
left image - tmp, right image - res
How can I get the pixels of an image with depth equal to 3?
Mat::depth() returns a constant symbolising the bit depth of the image. If the depth is, for example, CV_32F, you need to use float instead of uchar to access the pixels:
CV_8U -> uchar
CV_8S -> schar
CV_16U -> ushort
CV_16S -> short
CV_32S -> int
CV_32F -> float
CV_64F -> double
Mat::channels() tells you how many values of that type are stored per coordinate. Multiple values can be extracted as a cv::Vec. So if you have a two-channel Mat with depth CV_8U, instead of using Mat.at<uchar> you need Mat.at<Vec2b>, or Mat.at<Vec2f> for a CV_32F one.
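Applied to the matrix in the question, the printed type 3 is CV_16SC1 (StereoBM produces a fixed-point, single-channel disparity map), so the copy loop would use short; a sketch:
for (int i = 0; i < res.rows; i++)
{
    for (int j = 0; j < res.cols; j++)
    {
        tmp.at<short>(i, j) = res.at<short>(i, j);
    }
}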
When your image has depth 3 (CV_16S, single channel), do this for copying pixel by pixel:
tmp.at<short>(i,j) = res.at<short>(i,j);
However, if you are copying the whole image, I do not understand the point of copying each pixel individually, unless you want to do different processing on different pixels.
You can just copy the whole image res to tmp by this:
res.copyTo(tmp);
I'm trying to get a pixel from a Mat object. To test, I try to draw a diagonal line on a square image and expect to get a perfect line crossing from the top-left to the bottom-right vertex.
for (int i =0; i<500; i++){
//I just hard-coded the width (or height) to make the problem more obvious
(image2.at<int>(i, i)) = 0xffffff;
//Draw a white dot at pixels that have equal x and y position.
}
The result, however, is not as expected.
Here is a diagonal line drawn on a color picture.
Here is it on a grayscale picture.
Anyone sees the problem?
The problem is that you are trying to access each pixel as an int (32 bits per pixel), while your image is a 3-channel unsigned char (24 bits per pixel) for the color one, or a 1-channel unsigned char (8 bits per pixel) for the grayscale one.
You can try to access each pixel like this for the grayscale one (note that cv::Mat has no width member; use rows/cols):
for (int i = 0; i < image2.rows; i++){
    image2.at<unsigned char>(i, i) = 255;
}
or like this for the color one:
for (int i = 0; i < image2.rows; i++){
    image2.at<Vec3b>(i, i)[0] = 255;
    image2.at<Vec3b>(i, i)[1] = 255;
    image2.at<Vec3b>(i, i)[2] = 255;
}
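As an aside, if the goal is only to draw the diagonal, cv::line does it without a loop:
cv::line(image2, cv::Point(0, 0), cv::Point(image2.cols - 1, image2.rows - 1), cv::Scalar(255, 255, 255));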
(image2.at<int>(i, i)) = 0xffffff;
It looks like your color image is 24-bit (8 bits per channel, 3 channels), but you're addressing pixels as int, which is 32-bit.
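A one-line fix matching the element type (for a 3-channel 8-bit image) would be:
image2.at<cv::Vec3b>(i, i) = cv::Vec3b(255, 255, 255);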