I want to read an image in grayscale and rotate it 90 degrees, but I'm making a mistake somewhere. Can anyone help?
cv::Mat image = imread(".../jpg", IMREAD_GRAYSCALE);
cv::resize(image, image, cv::Size(400, 400));
cv::Mat imageson;
int rows = image.rows;
int cols = image.cols;
for (int i = 0; i < cols; i++)
{
    for (int j = 0; j < rows; j++)
    {
        imageson.at<uchar>(i, j) = image.at<uchar>(rows - 1 - j, i);
    }
}
cv::imshow("image", imageson);
cv::waitKey(0);
// I'm a beginner, by the way. Output: (exit value: -1)
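A likely cause, judging from the code above: imageson is never allocated, so at<uchar> writes into an empty Mat. A minimal sketch of a fix, assuming an 8-bit single-channel source (the destination dimensions are swapped to hold the rotated image):

// Allocate the destination before writing to it; dims swapped for the rotation.
cv::Mat imageson(cols, rows, CV_8UC1);
for (int i = 0; i < cols; i++)
{
    for (int j = 0; j < rows; j++)
    {
        imageson.at<uchar>(i, j) = image.at<uchar>(rows - 1 - j, i);
    }
}

In OpenCV 3.x and later, cv::rotate(image, imageson, cv::ROTATE_90_CLOCKWISE) achieves the same result in one call.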
I want to apply a negative transformation to an image, which should be a very simple program.
But when I run it, only 1/3 of the pixels are processed, even though I want to transform all of them. I'm not sure what is wrong; I followed the book for all of the code, but the result is different.
I think there is something wrong with the columns, but when I replace I.cols in the negativeImage function with the actual width of the image, the output stays the same: only 1/3 of the image is processed. If I multiply I.cols by 3, all of the pixels in the image get processed.
vector<uchar> getNegativeLUT() {
    vector<uchar> LUT(256, 0);
    for (int i = 0; i < 256; ++i)
        LUT[i] = (uchar)(255 - i);
    return LUT;
}

void negativeImage(Mat& I) {
    vector<uchar> LUT = getNegativeLUT();
    for (int i = 0; i < I.rows; ++i) {
        for (int j = 0; j < I.cols; ++j) {
            I.at<uchar>(i, j) = LUT[I.at<uchar>(i, j)];
        }
    }
}

int main() {
    Mat image = imread("1.png");
    Mat processed_image2 = image.clone();
    negativeImage(processed_image2);
    printf("%d", image.cols);
    imshow("Input Image", image);
    imshow("Negative Image", processed_image2);
    waitKey(0);
    return 0;
}
Output Image
You need to use the correct type with the at<> operator. Your PNG image has to be converted to 8UC1 before you can use uchar to access each pixel. I suspect your image has 3 channels, which is why you only iterate over 1/3 of the pixels. Also, I suggest you use the ptr<> method in the row loop and then access each pixel as if indexing an array.
Mat M;
cvtColor(I, M, CV_BGR2GRAY);
// M is CV_8UC1 type
for (int i = 0; i < M.rows; i++)
{
    uchar* p = M.ptr<uchar>(i);
    for (int j = 0; j < M.cols; j++)
    {
        p[j] = LUT[p[j]];
    }
}
EDIT: you should use cv::LUT instead of doing it yourself.
cv::Mat lut(1, 256, CV_8UC1);
for (int i = 0; i < 256; ++i)
{
    lut.at<uchar>(0, i) = uchar(255 - i);
}
cv::Mat result;
cv::LUT(M, lut, result);
I work on traffic sign detection. First, I applied a segmentation to the RGB image to obtain the red-channel image, as illustrated in image 1.
Second, I try to find homogeneous regions, to eliminate regions that are not of interest (not a traffic sign), by computing the variance of a sliding window over the image.
I use this code, but I always get an exception:
int main(int argc, char** argv)
{
    IplImage *image1;
    if ((image1 = cvLoadImage("segmenter1/00051.jpg", 0)) == 0)
        return NULL;
    int rows = image1->width;
    int cols = image1->height;
    Mat image = Mat::zeros(cols, rows, CV_32FC1);
    double x = 0;
    double temp = 0;
    for (int i = 0; i < rows; i++){
        for (int j = 0; j < cols; j++){
            temp = cvGet2D(image1, j, i).val[0];
            x = temp / 255;
            image.at<float>(j, i) = x;
            x = image.at<float>(j, i);
        }
    }
    int k = 16;
    double seuil = 0.0013;
    CvScalar blanc; // white pixel
    blanc.val[0] = 255;
    cv::Scalar mean, stddev; // 0: 1st channel, 1: 2nd channel, 2: 3rd channel
    for (int j = 0; j < rows - k; j++)
    {
        for (int i = 0; i < cols - k; i++)
        {
            double som = 0;
            double var = 0;
            double t = 0;
            for (int jj = j; jj < k + j; jj++)
            {
                for (int ii = i; ii < k + i; ii++)
                {
                    t = image.at<float>(jj, ii);
                    som = som + t;
                    t = t * t;
                    var = var + t;
                }
            }
            som = som / (k * k);
            if (som > 0.18){
                var = (var / (k * k)) - (som * som);
                if (var < seuil)
                    cvSet2D(image1, j, i, blanc);
            }
        }
    }
    char stsave[80];
    cvSaveImage("variance/00051.jpg", image1);
    cv::waitKey(0);
    return 0;
}
Without the specific exception, I can only guess that it is out_of_range. According to the OpenCV docs, the cvGet2D and cvSet2D parameters are image, y, x, which effectively translates to image, row, col. You have flipped the definitions of rows and cols and have conflicting usage between the two loops. Fix these and try again.
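As a rough sketch of that fix (my assumption, untested on your data): take rows from the image height and cols from the width, and use (row, col) order consistently in both loop nests:

int rows = image1->height; // rows = y = height
int cols = image1->width;  // cols = x = width
Mat image = Mat::zeros(rows, cols, CV_32FC1);
for (int i = 0; i < rows; i++) {
    for (int j = 0; j < cols; j++) {
        // cvGet2D(image, y, x) matches image.at<float>(row, col)
        image.at<float>(i, j) = (float)(cvGet2D(image1, i, j).val[0] / 255.0);
    }
}

The sliding-window loops would then iterate i over rows - k and j over cols - k, writing with cvSet2D(image1, i, j, blanc).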
I have copied a grayscale image into a cv::Mat1b, and I want to loop through each pixel and read and change its value. How can I do that?
My code looks like this:
cv::Mat1b newImg;
grayImg.copyTo(newImg);
for (int i = 0; i < grayImg.rows; i++) {
    for (int j = 0; i < grayImg.cols; j++) {
        int pixelValue = static_cast<int>(newImg.at<uchar>(i, j));
        if (pixelValue > thresh)
            newImg.at<int>(i, j) = 0;
        else
            newImg.at<int>(i, j) = 255;
    }
}
But in the assignments (inside the if and else), I get the error "Access violation writing location".
How do I read and write specific pixels correctly?
Thanks!
Edit
Thanks to @Miki and @Micka, this is how I solved it:
for (int i = 0; i < newImg.rows; i++) {
    for (int j = 0; j < newImg.cols; j++) {
        // read:
        cv::Scalar intensity1 = newImg.at<uchar>(i, j);
        int intensity = intensity1.val[0];
        // write:
        newImg(i, j) = 255;
    }
}
newImg.at<int>(i,j)
should be
newImg.at<uchar>(i,j)
because cv::Mat1b stores uchar elements, so writing an int goes past the end of the pixel buffer and triggers the access violation.
I suggest:
cv::Mat1b newImg;
newImg = grayImg > thresh;
or
cv::Mat1b newImg;
newImg = grayImg < thresh;
Also, look at the OpenCV tutorials to learn how to iterate over each and every pixel of an image.
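For reference, a minimal sketch of a row-pointer loop over a CV_8UC1 image, implementing the same thresholding logic as the question (assuming thresh is defined; the comparison operators above already produce a 0/255 mask, so an explicit loop is only needed for custom logic):

for (int i = 0; i < newImg.rows; i++) {
    uchar* row = newImg.ptr<uchar>(i); // pointer to the start of row i
    for (int j = 0; j < newImg.cols; j++) {
        row[j] = (row[j] > thresh) ? 0 : 255;
    }
}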
Below is a snippet from the OpenCV SVM tutorial at this link. In that snippet is this line of code: 'Mat sampleMat = (Mat_<float>(1,2) << j,i);'. Instead of using the Mat_ template, I need to use a regular Mat object. I was hoping someone could show me how to convert the Mat_ in that line to a Mat.
I tried Mat sampleMat = (Mat(1,2, CV_32FC1) << j,i); // but I get a long page of errors
I tried Mat sampleMat = Mat(1,2, CV_32FC1) << j,i; // same, a long page of errors
I just need the code at the link at the top of the page to run without Mat_, using only a Mat in its place. If someone can show me how to write that line, I'd appreciate it.
for (int i = 0; i < image.rows; ++i)
    for (int j = 0; j < image.cols; ++j)
    {
        Mat sampleMat = (Mat_<float>(1,2) << j, i);
        float response = SVM.predict(sampleMat);
        if (response == 1)
            image.at<Vec3b>(i, j) = green;
        else if (response == -1)
            image.at<Vec3b>(i, j) = blue;
    }
Edit: I'm trying to run it like below, but getting errors
Vec3b green(0,255,0), blue(255,0,0);
// Show the decision regions given by the SVM
for (int i = 0; i < image.rows; ++i)
    for (int j = 0; j < image.cols; ++j)
    {
        Mat sampleMat(1, 2, CV_32F);
        float * const pmat = sampleMat.ptr<float>();
        pmat[0] = i;
        pmat[1] = j;
        float response = SVM.predict(sampleMat);
        if (response == 1)
            pmat[0] = green;
            pmat[1] = green;
        else if (response == -1)
            pmat[0] = blue;
            pmat[1] = blue;
    }
I figured you'd know enough, so I didn't include the errors =)
Set the values directly:
Mat sampleMat(1, 2, CV_32F);
sampleMat.at<float>(0, 0) = j;
sampleMat.at<float>(0, 1) = i;
or
Mat sampleMat(1, 2, CV_32F);
float * const pmat = sampleMat.ptr<float>();
pmat[0] = j;
pmat[1] = i;
Addendum:
Seeing your loop, you could make it a bit more efficient, provided SVM.predict doesn't modify sampleMat: set the row value just once per row instead of on every iteration:
for (int i = 0; i < image.rows; ++i)
{
    Mat sampleMat(1, 2, CV_32F);
    sampleMat.at<float>(0, 1) = i;
    for (int j = 0; j < image.cols; ++j)
    {
        sampleMat.at<float>(0, 0) = j;
        ...
    }
}
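Putting it together, a minimal sketch of the full corrected loop; my assumption of the intent is that the prediction result should be written into the image as a Vec3b color, not back into sampleMat:

Vec3b green(0, 255, 0), blue(255, 0, 0);
for (int i = 0; i < image.rows; ++i)
{
    Mat sampleMat(1, 2, CV_32F);
    sampleMat.at<float>(0, 1) = i;     // y coordinate, set once per row
    for (int j = 0; j < image.cols; ++j)
    {
        sampleMat.at<float>(0, 0) = j; // x coordinate
        float response = SVM.predict(sampleMat);
        if (response == 1)
            image.at<Vec3b>(i, j) = green; // color goes into the image,
        else if (response == -1)           // not into sampleMat
            image.at<Vec3b>(i, j) = blue;
    }
}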
I'm working with a depth map acquired from two images (I got it from OpenCV's StereoBM), and now I need to find clusters in it.
I decided to use PCL region growing segmentation: http://www.pointclouds.org/documentation/tutorials/region_growing_segmentation.php. I converted the cv::Mat to a point cloud after reading this article: http://blog.martinperis.com/2012/01/3d-reconstruction-with-opencv-and-point.html, and now I have the cluster indices.
The two functions are here: https://gist.github.com/Daiver/5586252
Now I want to use these indices to display the clusters on the depth map from StereoBM (a cv::Mat).
I'm trying this, but I'm not satisfied with the results:
pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud; // cloud from depth map and RGB image
std::vector<pcl::PointIndices> clusters; // cluster indices, extracted before
for (int j = 0; j < clusters.size(); j++)
{
    // image has the same size as the RGB image and the depth map
    cv::Mat to_show = cv::Mat::zeros(288, 384, cv::DataType<uchar>::type);
    for (int i = 0; i < clusters[j].indices.size(); i++)
    {
        // regions in this Mat must match the regions from the depth map
        to_show.data[clusters[j].indices[i]] = 200;
    }
    cv::imshow("", to_show);
    cv::waitKey();
}
Result (images): some cluster, another cluster, visualized cloud.
How can I project the clusters onto a cv::Mat?
P.S. Sorry for my writing mistakes; English is not my native language.
UPD
I've tried to "restore" the depth map using loops like the ones in the mat_to_cloud function:
int counter = 0;
cv::Mat to_show = cv::Mat::zeros(288, 384, cv::DataType<uchar>::type);
for (int i = 0; i < cloud->height; i++)
{
    for (int j = 0; j < cloud->width; j++)
    {
        to_show.at<uchar>(i, j) = cloud->at(counter).z;
        counter++;
    }
}
And with the other order of loops:
int counter = 0;
cv::Mat to_show = cv::Mat::zeros(288, 384, cv::DataType<uchar>::type);
for (int j = 0; j < cloud->width; j++)
{
    for (int i = 0; i < cloud->height; i++)
    {
        to_show.at<uchar>(i, j) = cloud->at(counter).z;
        counter++;
    }
}
I don't know why these two images come out similar.
I have not worked with PCL before, but it looks like this line might be wrong:
// regions in this Mat must match the regions from the depth map
to_show.data[clusters[j].indices[i]] = 200;
to_show is an OpenCV matrix, but you use the indices from the point cloud. You need to convert the indices to pixel coordinates first.
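For an organized cloud with the same width as the image, that conversion could look like the sketch below (an assumption on my part; if the cloud is unorganized or filtered, you have to carry the original pixel coordinates with each point, as the workaround below does):

// Map a linear cloud index back to (row, col), assuming a row-major organized cloud
int idx = clusters[j].indices[i];
int row = idx / cloud->width;
int col = idx % cloud->width;
to_show.at<uchar>(row, col) = 200;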
I solved my problem, but my solution is a little dirty and silly.
I made my own simple reprojection function:
void straight_reproject_cloud(cv::Mat& img_rgb, cv::Mat& img_disparity, pcl::PointCloud<pcl::PointXYZRGB>::Ptr& point_cloud_ptr)
{
    uchar pr, pg, pb;
    for (int i = 0; i < img_rgb.rows; i++)
    {
        uchar* rgb_ptr = img_rgb.ptr<uchar>(i);
        uchar* disp_ptr = img_disparity.ptr<uchar>(i);
        for (int j = 0; j < img_rgb.cols; j++)
        {
            uchar d = disp_ptr[j];
            if (d == 0) continue; // discard bad pixels
            pb = rgb_ptr[3*j];
            pg = rgb_ptr[3*j+1];
            pr = rgb_ptr[3*j+2];
            // Insert info into point cloud structure
            pcl::PointXYZRGB point;
            point.x = j;
            point.y = i;
            point.z = d;
            uint32_t rgb = (static_cast<uint32_t>(pr) << 16 |
                            static_cast<uint32_t>(pg) << 8 |
                            static_cast<uint32_t>(pb));
            point.rgb = *reinterpret_cast<float*>(&rgb);
            point_cloud_ptr->push_back(point);
        }
    }
}
This function adds points to the cloud directly from the images, without changes.
The new cloud doesn't coincide with the old reprojection, but I can work with it.
Now the point coordinates in the cloud coincide with the coordinates in the images, so I can show all clusters from the cloud in an image:
for (int k = 0; k < clusters.size(); k++)
{
    cv::Mat res = cv::Mat::zeros(img_rgb.rows, img_rgb.cols, CV_8U);
    //for (int i = 0; i < point_cloud_ptr->points.size(); i++)
    for (int j = 0; j < clusters[k].indices.size(); j++)
    {
        int i = clusters[k].indices[j];
        int x = point_cloud_ptr->at(i).x;
        int y = point_cloud_ptr->at(i).y;
        res.at<uchar>(y, x) = (int)(point_cloud_ptr->at(i).z);
    }
    cv::imshow("rec2", res);
    cv::waitKey();
}
New cloud