I am trying to print an element of a matrix which stores an image, but for some reason I get a debug error: the function abort() keeps getting called. I have pasted the code below:
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace cv;

int main(){
    Mat img = imread("D:/OwnResearch/photo2.jpg");
    std::cout << img.at<int>(1, 1, 1) << std::endl; // this line triggers the abort()
    return 0;
}
I was wondering if there is any way to get the (i, j, k)-th element of the matrix img (type Mat)?
You cannot use any type you want with Mat::at(); you must stick to the one the Mat is bound to. If you imread() an image without any further flags, that type will be Vec3b (24-bit BGR), never int. Also, you have to check whether imread() actually succeeded before doing so:
Mat img = imread("D:/OwnResearch/photo2.jpg");
if ( ! img.empty() )
{
    std::cout << img.at<Vec3b>(1, 1) << std::endl;
}
You can access an individual channel of a pixel as shown below:
img.at<Vec3b>(row, col)[channel]
Each channel is a uchar when you read from a JPEG file.
More detail: http://www.developerstation.org/2012/01/access-mat-in-c-using-opencv.html
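Putting the two answers together, here is a small sketch (the path comes from the question; i, j and k are just example indices) that reads the image, checks that it loaded, and prints the k-th channel of the pixel at row i, column j:

#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace cv;

int main(){
    Mat img = imread("D:/OwnResearch/photo2.jpg"); // 8-bit, 3-channel BGR by default
    if (img.empty())
        return -1;
    int i = 1, j = 1, k = 1; // row, column, channel (0 = blue, 1 = green, 2 = red)
    // cast to int so the uchar prints as a number instead of a character
    std::cout << (int)img.at<Vec3b>(i, j)[k] << std::endl;
    return 0;
}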
I have a single-channel image of 64-bit floats that I am trying to transform into unsigned char using OpenCV. I can successfully visualize the image and resize it, as it is too big. However, when I try to transform the resized image into unsigned char, I don't see anything.
I am doing the transformation using the following approach, as advised here.
I initially tried const uchar* inBuffer = desc.data; to transform it, but according to the same source that seems to be unsafe, so I opted for a reinterpret_cast instead. That didn't work either, although as far as I understand it is the better choice. The code is below:
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
    String imageName( "/home/to/Desktop/Myexample.tif" );
    if( argc > 1)
    {
        imageName = argv[1];
    }
    Mat image;
    Mat outImage;
    Mat corrected;
    // Read the file
    image = cv::imread( imageName, IMREAD_UNCHANGED );
    // Check for invalid input
    if(image.empty())
    {
        cout << "Could not open or find the image" << std::endl;
        return -1;
    }
    cv::resize(image, outImage, Size(800,800));
    cv::namedWindow("Resized", WINDOW_AUTOSIZE);
    cv::imshow("Resized", outImage+220);
    // Transformation of the resized image into an unsigned char for better visualization
    cv::resize(outImage, corrected, Size(800,800));
    cv::namedWindow("Corrected", WINDOW_AUTOSIZE);
    // From here nothing is showing up
    unsigned char const* inBuffer = reinterpret_cast<unsigned char const*>(outImage.data);
    cv::imshow("Corrected", *inBuffer);
    cv::waitKey(0);
    return 0;
}
Another thing I thought could be useful is from the following source, where it was advised to use a double conversion. I understand that it is fast in terms of computation, but it didn't give me any useful result either.
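For reference, here is a minimal sketch of what I understand that conversion advice to mean, continuing from outImage in the code above (the explicit scale factor is only an assumption, since it depends on the value range of my 64-bit data):

cv::Mat corrected8u;
// Option 1: explicit scale/offset chosen by hand (the values here are assumptions).
outImage.convertTo(corrected8u, CV_8U, 255.0 / 4096.0, 0.0);
// Option 2: let OpenCV stretch the actual min/max of the data to the 0..255 range.
cv::normalize(outImage, corrected8u, 0, 255, cv::NORM_MINMAX, CV_8U);
cv::imshow("Corrected", corrected8u);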
Thank you in advance for shedding light on this matter.
I am new to OpenCV, and I would appreciate it if somebody could answer this question. I am trying to read an image and display it. Below is a copy of the code I copied from the documentation. However, a window just pops up without the actual image:
#include "opencv2/opencv.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main()
{
Mat img = imread("myimage.jpg", CV_LOAD_IMAGE_UNCHANGED);
if (img.empty())
{
cout << "Error : Image cannot be loaded..!!" << endl;
return -1;
}
else
{
namedWindow("MyWindow", CV_WINDOW_AUTOSIZE);
imshow("MyWindow", img);
waitKey(5000);
}
return 0;
}
I copied over your code, changed the image to a local one, and it displays correctly.
Looks like the program cannot read the image for some reason.
Why don't you try with the full path to the image?
The code is correct; make sure myimage.jpg is in the same folder as your binary.
Try the full path to the image, or provide the path to your image as argv[1].
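A minimal sketch of that suggestion (the fallback path below is just a placeholder): take the image path from argv[1] when one is given, otherwise fall back to a hard-coded full path.

#include "opencv2/opencv.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, char **argv)
{
    // Placeholder full path; replace with the actual location of the image.
    string path = (argc > 1) ? argv[1] : "C:/full/path/to/myimage.jpg";
    Mat img = imread(path, CV_LOAD_IMAGE_UNCHANGED);
    if (img.empty())
    {
        cout << "Error : Image cannot be loaded from " << path << endl;
        return -1;
    }
    namedWindow("MyWindow", CV_WINDOW_AUTOSIZE);
    imshow("MyWindow", img);
    waitKey(0);
    return 0;
}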
I'm trying to extract images from a GIF using giflib, in order to wrap them in OpenCV Mat objects.
I'm currently using OpenCV 2.4.5 and giflib 4.1.6-10.
My problem is that I can only extract the first image of the GIF.
The second and subsequent frames come out scratched; I think it is a matter of bit alignment.
Following the doc: http://giflib.sourceforge.net/gif_lib.html
SavedImage *SavedImages; /* Image sequence (high-level API) */
should provide a pointer to the bits of the images.
#include <gif_lib.h>
#include <iostream>
#include <assert.h>
#include <string.h>
#include <stdlib.h>
#include "opencv2/opencv.hpp"

using namespace std;
using namespace cv;

int main(int ac, char **av){
    GifFileType *f = DGifOpenFileName(av[1]);
    assert(f != NULL);
    int ret = DGifSlurp(f);
    assert(ret == GIF_OK);
    int width = f->SWidth;
    int height = f->SHeight;
    cout << f->ImageCount << endl;
    cout << width << " : " << height << endl;
    cout << f->SColorResolution << endl;
    // SavedImage *image = &f->SavedImages[0]; // this one actually works
    SavedImage *image = &f->SavedImages[1]; // this compiles, but the result is a scratched image
    Mat img = Mat(Size(width, height), CV_8UC1, image->RasterBits);
    imwrite("test.png", img);
    DGifCloseFile(f);
    return 0;
}
I don't want to use ImageMagick, in order to keep this piece of code small and "light".
Thanks for your help.
Did you check whether your GIF file is interlaced? If it is, you should take that into account before storing the raster bits into a bitmap format.
Also check the Top, Left, Width and Height of each SavedImage: a frame does not need to cover the whole canvas, so you should only overwrite the pixels that differ from the previous frame. A sketch of that compositing step follows below.
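A minimal sketch of that idea (giflib 4.x field names assumed; interlacing and transparency are ignored): paint frame k onto a canvas of palette indices at the offset given by its ImageDesc, instead of assuming every frame covers the whole canvas.

#include <gif_lib.h>
#include "opencv2/opencv.hpp"

// canvas is a CV_8UC1 Mat of size SWidth x SHeight that already holds the
// previously decoded frames (palette indices, not colors).
void blitFrame(const GifFileType *f, int k, cv::Mat &canvas)
{
    const SavedImage &frame = f->SavedImages[k];
    const GifImageDesc &d = frame.ImageDesc;
    for (int y = 0; y < d.Height; ++y)
        for (int x = 0; x < d.Width; ++x)
            canvas.at<uchar>(d.Top + y, d.Left + x) =
                frame.RasterBits[y * d.Width + x];
}

To get actual colors you would still have to look up each index in the frame's local ColorMap (or the file's global SColorMap).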
I am a beginner with OpenCV and I have read some tutorials and manuals but I couldn't quite make sense of some things.
Currently, I am trying to crop a binary image into two sections. I want to know which row has the most white pixels, then crop out that row and everything above it, and redraw the image with just the data below that row.
What I've done so far is find the coordinates of the white pixels using findNonZero and store them in a Mat. The next step is where I get confused. I am unsure how to access the elements in the Mat and how to figure out which row occurs most often in the array.
I have used a test image with my code below. It gave me the pixel locations [2,0; 1,1; 2,1; 3,1; 0,2; 1,2; 2,2; 3,2; 4,2; 1,3; 2,3; 3,3; 2,4]. Each element has the x and y coordinates of a white pixel. First of all, how do I access each element, and then read only the y coordinate of each element to determine which row occurs the most? I have tried using the at<>() method, but I don't think I've been using it right.
Is this a good way of doing it, or is there a better and/or faster way? I have read about a different method here using the L1 norm, but I couldn't make sense of it. Would that method be faster than mine?
Any help would be greatly appreciated.
Below is the code I have so far.
#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    int Number_Of_Elements;
    Mat Grayscale_Image, Binary_Image, NonZero_Locations;

    Grayscale_Image = imread("Test Image 6 (640x480px).png", 0);
    if(!Grayscale_Image.data)
    {
        cout << "Could not open or find the image" << endl;
        return -1;
    }
    Binary_Image = Grayscale_Image > 128;
    findNonZero(Binary_Image, NonZero_Locations);
    cout << "Non-Zero Locations = " << NonZero_Locations << endl << endl;
    Number_Of_Elements = NonZero_Locations.total();
    cout << "Total Number Of Array Elements = " << Number_Of_Elements << endl << endl;
    namedWindow("Test Image", CV_WINDOW_AUTOSIZE);
    moveWindow("Test Image", 100, 100);
    imshow("Test Image", Binary_Image);
    waitKey(0);
    return(0);
}
I expect the following to work:
Point loc_i = NonZero_Locations.at<Point>(i);
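Building on that, here is a small sketch of one way to finish the job (the file name and threshold come from the question; the per-row tally is an assumption about what "row with the most white pixels" should mean): count how many findNonZero points fall on each row, pick the row with the largest count, and keep only the data below it.

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

using namespace cv;

int main()
{
    Mat gray = imread("Test Image 6 (640x480px).png", 0);
    if (gray.empty())
        return -1;
    Mat binary = gray > 128;

    Mat locations;
    findNonZero(binary, locations);

    // Tally white pixels per row; Point::y is the row index.
    std::vector<int> rowCounts(binary.rows, 0);
    for (int i = 0; i < (int)locations.total(); i++)
        rowCounts[locations.at<Point>(i).y]++;

    // Row with the most white pixels.
    int bestRow = (int)(std::max_element(rowCounts.begin(), rowCounts.end()) - rowCounts.begin());

    // Keep everything strictly below that row (empty if bestRow is the last row).
    Mat cropped = binary(Rect(0, bestRow + 1, binary.cols, binary.rows - bestRow - 1));
    if (!cropped.empty())
        imwrite("cropped.png", cropped);
    return 0;
}

An alternative to the per-point tally is to sum each row directly with something like cv::reduce(binary / 255, rowSums, 1, CV_REDUCE_SUM, CV_32S), which avoids findNonZero altogether.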
I am working with images in C++ with OpenCV.
I wrote code with a two-dimensional uchar array where I can read the pixel values of an image loaded with imread in grayscale, using .at<uchar>(i,j).
However, I would like to do the same thing for color images. Since I know that to access the pixel values I now need .at<Vec3b>(i,j)[0], .at<Vec3b>(i,j)[1] and .at<Vec3b>(i,j)[2], I made a similar 2D array of Vec3b.
But I don't know how to fill this array with the pixel values. It has to be a 2D array.
I tried:
array[width][height].val[0] = img.at<Vec3b>(i,j)[0]
but that didn't work.
I didn't find an answer in the OpenCV docs or here either.
Does anybody have an idea?
I've included some of my code. I need an array because my whole algorithm already works, using an array, for grayscale images with only one channel.
The grayscale code looks like this:
for(int i=0;i<height;i++){
    for(int j=0;j<width;j++){
        image_data[i*width+j] = all_images[nb_image-1].at<uchar>(i,j);
    }
}
It reads each image (I have a long sequence) from:
std::vector<cv::Mat> all_images
retrieves the pixel values into the uchar array image_data, and processes them.
I now want to do the same for RGB images, but I can't manage to read the pixel data of each channel and put it into an array.
This time image_data is a Vec3b array, and the code I'm trying looks like this:
for(int i=0;i<height;i++){
    for(int j=0;j<width;j++){
        image_data[0][i*width+j] = all_images[nb_image-1].at<cv::Vec3b>(i,j)[2];
        image_data[1][i*width+j] = all_images[nb_image-1].at<cv::Vec3b>(i,j)[1];
        image_data[2][i*width+j] = all_images[nb_image-1].at<cv::Vec3b>(i,j)[0];
    }
}
But this doesn't work, so I am now at a loss: I don't know how to fill the image_data array with the values of all three channels without changing the code structure, as this array is then used by my image processing algorithm.
I don't understand exactly what you are trying to do.
You can directly read a color image with:
cv::Mat img = cv::imread("image.jpeg",1);
Your matrix (img) will be of type CV_8UC3, and you can then access each pixel as you said, using:
img.at<cv::Vec3b>(row,col)[channel].
If you have a 2D array of Vec3b, such as Vec3b myArray[n][m];
you can access the values like this:
myArray[i][j](k) where k = {0, 1, 2}, since Vec3b is a small fixed-size vector.
Here is the code I just tested, and it works.
#include <iostream>
#include <cstdlib>
#include <vector>
#include <opencv2/opencv.hpp>

int main(int argc, char**argv){
    cv::Mat img = cv::imread("image.jpg", 1);
    cv::imshow("image", img);
    cv::waitKey(0);
    std::vector<cv::Vec3b> firstline(img.cols);
    for(int i=0;i<img.cols;i++){
        // access to the matrix
        cv::Vec3b tmp = img.at<cv::Vec3b>(0,i);
        std::cout << (int)tmp(0) << " " << (int)tmp(1) << " " << (int)tmp(2) << std::endl;
        // access to my array
        firstline[i] = tmp;
        std::cout << (int)firstline[i](0) << " " << (int)firstline[i](1) << " " << (int)firstline[i](2) << std::endl;
    }
    return EXIT_SUCCESS;
}
In your edited first message, this line is strange:
image_data[0][i*width+j]=all_images[nb_image-1].at<cv::Vec3b>(i,j)[2];
If image_data is your colored image, then it should be written like this:
image_data[i][j] = all_images[nb_image-1].at<cv::Vec3b>(i,j);
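If the processing code really needs a separate array rather than the Mat itself, here is a minimal sketch of that idea (the names come from the question, and the flat i*width+j indexing mirrors the grayscale version): store whole Vec3b pixels instead of one uchar per pixel.

#include <opencv2/opencv.hpp>
#include <vector>

// Copy frame nb_image-1 from all_images into image_data, one Vec3b per pixel.
void copyFrame(const std::vector<cv::Mat> &all_images, int nb_image,
               std::vector<cv::Vec3b> &image_data, int width, int height)
{
    image_data.resize(width * height);
    for (int i = 0; i < height; i++)
        for (int j = 0; j < width; j++)
            image_data[i*width + j] = all_images[nb_image-1].at<cv::Vec3b>(i, j);
}

Individual channels are then available as image_data[i*width + j][c] with c in {0, 1, 2} (BGR order).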