I am a beginner with OpenCV and I have read some tutorials and manuals but I couldn't quite make sense of some things.
Currently, I am trying to split a binary image into two sections. I want to find the row with the most white pixels, crop away that row and everything above it, and then redraw the image using only the data below that row.
What I've done so far is find the coordinates of the white pixels using findNonZero and store them in a Mat. The next step is where I get confused: I am unsure how to access the elements of that Mat and how to work out which row occurs most often.
I have used a test image with my code below. It gave me the pixel locations [2,0; 1,1; 2,1; 3,1; 0,2; 1,2; 2,2; 3,2; 4,2; 1,3; 2,3; 3,3; 2,4]; each element holds the x and y coordinates of one white pixel. First of all, how do I access each element, and then read only its y-coordinate, to determine which row occurs the most? I have tried using the at<>() method, but I don't think I've been using it correctly.
Is this a good way of doing it, or is there a better and/or faster approach? I have read about a different method here using the L1-norm, but I couldn't make sense of it; would that method be faster than mine?
Any help would be greatly appreciated.
Below is the code I have so far.
#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main()
{
    int Number_Of_Elements;
    Mat Grayscale_Image, Binary_Image, NonZero_Locations;

    Grayscale_Image = imread("Test Image 6 (640x480px).png", 0);
    if(!Grayscale_Image.data)
    {
        cout << "Could not open or find the image" << endl;
        return -1;
    }

    Binary_Image = Grayscale_Image > 128;
    findNonZero(Binary_Image, NonZero_Locations);
    cout << "Non-Zero Locations = " << NonZero_Locations << endl << endl;

    Number_Of_Elements = NonZero_Locations.total();
    cout << "Total Number Of Array Elements = " << Number_Of_Elements << endl << endl;

    namedWindow("Test Image", CV_WINDOW_AUTOSIZE);
    moveWindow("Test Image", 100, 100);
    imshow("Test Image", Binary_Image);
    waitKey(0);
    return 0;
}
I expect the following to work:
Point loc_i = NonZero_Locations.at<Point>(i);
I have tried the following code to find the corners of the square boxes in the attached chessboard picture, but unfortunately it could not find them. Can you please tell me what I can do to detect the chessboard corners in this case? Many thanks. :)
int main() {
    cv::Mat imgOriginal; // input image
    Size boardSizeTopChessBoard;
    boardSizeTopChessBoard.width = 144;
    boardSizeTopChessBoard.height = 3;
    vector<Point2f> pointBufTopChessBoard;
    bool topChessBoardCornersFound = false;

    imgOriginal = cv::imread("topChessBoard.jpg");
    imshow("Original Image", imgOriginal);

    topChessBoardCornersFound = findChessboardCornersSB(imgOriginal, boardSizeTopChessBoard, pointBufTopChessBoard, 0);
    if (topChessBoardCornersFound)
    {
        cout << "Corners found in top chess board" << endl;
    }
    else
    {
        cout << "Corners not found in top chess board" << endl;
    }
    waitKey(0);
    return 0;
}
There are a number of reasons why it doesn't work.
First of all, the image's resolution appears too small for this many corners, so they are too difficult to detect.
Secondly, the contrast at the edges of the image is lower, which makes detection harder; a darker image is harder to process.
Finally, try to capture a sharper image. This one is a little blurry.
I am trying to write each frame from a camera into a video. Up to that point it works fine. However, I want the video to include the shape_predictor output on each frame, so that when it is played back the landmarks appear on the image as well. So far I have got this... Any ideas? Thank you.
cap >> frame;
cv::VideoWriter oVideoWriter;
// . . .
cv_image<bgr_pixel> cimg(frame); //Mat to something dlib can deal with
frontal_face_detector detector = get_frontal_face_detector();
std::vector<rectangle> faces = detector(cimg);
pose_model(cimg, faces[0]);
oVideoWriter.write(dlib::toMat(cimg)); //Turn it into an Opencv Mat
The shape predictor is not the face detector. You have to first call the face detector, then the shape predictor.
See this example program: http://dlib.net/face_landmark_detection_ex.cpp.html
You initialized the face detector properly. Then you have to initialize the shape predictor, something like this:
shape_predictor sp;
deserialize("shape_predictor_68_face_landmarks.dat") >> sp;
The model can be found here: http://sourceforge.net/projects/dclib/files/dlib/v18.10/shape_predictor_68_face_landmarks.dat.bz2
The rest of the way, you can just follow the example program I linked above. Here's the portion where the shape predictor is run. You have to pass it the output (bounding boxes) returned by the detector for it to work. The code below iterates through all the boxes returned by the detector.
// Now tell the face detector to give us a list of bounding boxes
// around all the faces in the image.
std::vector<rectangle> dets = detector(img);
cout << "Number of faces detected: " << dets.size() << endl;

// Now we will go ask the shape_predictor to tell us the pose of
// each face we detected.
std::vector<full_object_detection> shapes;
for (unsigned long j = 0; j < dets.size(); ++j)
{
    full_object_detection shape = sp(img, dets[j]);
    cout << "number of parts: " << shape.num_parts() << endl;
    cout << "pixel position of first part:  " << shape.part(0) << endl;
    cout << "pixel position of second part: " << shape.part(1) << endl;

    // You get the idea, you can get all the face part locations if
    // you want them. Here we just store them in shapes so we can
    // put them on the screen.
    shapes.push_back(shape);
}
I am trying to print an element of a matrix that stores an image, but for some reason I get a debug error: abort() keeps being called. I have pasted the code below:
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace cv;
int main(){
    Mat img = imread("D:/OwnResearch/photo2.jpg");
    std::cout << img.at<int>(1, 1, 1) << std::endl;
    return 0;
}
I was wondering whether there is any way to get the i-th, j-th, k-th element of the matrix img (type Mat)?
You cannot use any type you want with Mat::at(); you must stick to the type the Mat is actually bound to. If you imread() an image without any further flags, that type will be Vec3b (24-bit BGR), never int. Also, you have to check whether imread() actually succeeded before accessing pixels:
Mat img = imread("D:/OwnResearch/photo2.jpg");
if ( ! img.empty() )
{
    std::cout << img.at<Vec3b>(1, 1) << std::endl;
}
You can access any element as shown below. Note that at<>() takes (row, col), and for a 3-channel image the template type must be Vec3b, not uchar:
img.at<Vec3b>(row, col)[channel]
The individual channel values are uchar when you read from a JPEG file.
More details: http://www.developerstation.org/2012/01/access-mat-in-c-using-opencv.html
I'm trying to extract images from gif using giflib in order to bind them in Opencv Mat.
I'm currently using Opencv-2.4.5 and giflib-4.1.6-10.
My problem is that I can only extract the first image of the GIF.
The second and later frames come out scratched; I think it is a matter of bit alignment.
Following the doc: http://giflib.sourceforge.net/gif_lib.html
SavedImage *SavedImages; /* Image sequence (high-level API) */
This should provide a pointer to the image bits.
#include <gif_lib.h>
#include <iostream>
#include <assert.h>
#include <string.h>
#include <stdlib.h>
#include "opencv2/opencv.hpp"
using namespace std;
using namespace cv;
int main(int ac, char **av){
    GifFileType *f = DGifOpenFileName(av[1]);
    assert(f != NULL);
    int ret = DGifSlurp(f);
    assert(ret == GIF_OK);

    int width = f->SWidth;
    int height = f->SHeight;
    cout << f->ImageCount << endl;
    cout << width << " : " << height << endl;
    cout << f->SColorResolution << endl;

    // SavedImage *image = &f->SavedImages[0]; // this actually works
    SavedImage *image = &f->SavedImages[1]; // compiles, but the result is a scratched image

    Mat img = Mat(Size(width, height), CV_8UC1, image->RasterBits);
    imwrite("test.png", img);

    DGifCloseFile(f);
    return 0;
}
I don't want to use ImageMagick, in order to keep the code small and light.
Thanks for your Help.
Did you check whether your GIF file is interlaced? If it is, you need to take that into account before storing the raster bits into a bitmap format.
Also check the top, left, width and height of each SavedImage: a frame does not need to cover the whole canvas, so you should only overwrite the pixels that differ from the previous frame.
I am working with images in C++ with OpenCV.
I wrote code with a two-dimensional uchar array in which I can read the pixel values of an image, loaded with imread in grayscale, using .at<uchar>(i,j).
However, I would like to do the same thing for color images. Since I know that to access the pixel values I now need .at<Vec3b>(i,j)[0], .at<Vec3b>(i,j)[1] and .at<Vec3b>(i,j)[2], I made a similar 2D array of Vec3b.
But I don't know how to fill this array with the pixel values, and it has to be a 2D array.
I tried:
array[width][height].val[0] = img.at<Vec3b>(i,j)[0]
but that didn't work.
I didn't find an answer in the OpenCV docs or here either.
Anybody has an idea?
I've included some of my code. I need an array because I already have my whole algorithm working, using an array, for the images in grayscale with only one channel.
The grayscale code is like that:
for(int i = 0; i < height; i++){
    for(int j = 0; j < width; j++){
        image_data[i*width + j] = all_images[nb_image-1].at<uchar>(i,j);
    }
}
where I read each image (I have a long sequence) from
std::vector<cv::Mat> all_images
retrieve its pixel values into the uchar array image_data, and process them.
I now want to do the same for RGB images, but I can't manage to read the pixel data of each channel and put it into an array.
This time image_data is a Vec3b array, and the code I'm trying looks like this:
for(int i = 0; i < height; i++){
    for(int j = 0; j < width; j++){
        image_data[0][i*width + j] = all_images[nb_image-1].at<cv::Vec3b>(i,j)[2];
        image_data[1][i*width + j] = all_images[nb_image-1].at<cv::Vec3b>(i,j)[1];
        image_data[2][i*width + j] = all_images[nb_image-1].at<cv::Vec3b>(i,j)[0];
    }
}
But this doesn't work, so I am now at a loss: I don't know how to fill the image_data array with the values of all three channels without changing the code structure, as this array is then used by my image-processing algorithm.
I don't understand exactly what you are trying to do.
You can directly read a color image with:
cv::Mat img = cv::imread("image.jpeg",1);
Your matrix (img) type will be CV_8UC3; then you can access each pixel, as you said, using:
img.at<cv::Vec3b>(row,col)[channel]
If you have a 2D array of Vec3b such as Vec3b myArray[n][m];, you can access the values with:
myArray[i][j](k), where k = {0, 1, 2}, since Vec3b holds the three channel values.
Here is the code I just tested, and it works.
#include <iostream>
#include <cstdlib>
#include <vector>
#include <opencv/cv.h>
#include <opencv/highgui.h>

int main(int argc, char **argv){
    cv::Mat img = cv::imread("image.jpg", 1);
    cv::imshow("image", img);
    cv::waitKey(0);

    // use a std::vector instead of a variable-length array, which is not standard C++
    std::vector<cv::Vec3b> firstline(img.cols);

    for(int i = 0; i < img.cols; i++){
        // access to the matrix
        cv::Vec3b tmp = img.at<cv::Vec3b>(0, i);
        std::cout << (int)tmp(0) << " " << (int)tmp(1) << " " << (int)tmp(2) << std::endl;

        // access to my array
        firstline[i] = tmp;
        std::cout << (int)firstline[i](0) << " " << (int)firstline[i](1) << " " << (int)firstline[i](2) << std::endl;
    }
    return EXIT_SUCCESS;
}
In your edited first message, this line is strange:
image_data[0][i*width+j]=all_images[nb_image-1].at<cv::Vec3b>(i,j)[2];
If image_data is your colored image, then it should be written like this:
image_data[i][j] = all_images[nb_image-1].at<cv::Vec3b>(i,j);