How to split OpenCV matrix while retaining continuity? - c++

I am trying to make a visual odometry algorithm work in real time (using my stereo camera). The camera feed is returned as a single image (i420 pixel format), which I have to manually split into a left and a right frame. One of the problems I am running into occurs when I call cv::triangulatePoints: the function gives me an error saying that the input matrices (the left and right frames) are not continuous.
When I receive the input image from the camera, using:
// Read camera feed
IMAGE_FORMAT fmt = {IMAGE_ENCODING_I420, 50};
BUFFER *buffer = arducam_capture(camera_instance, &fmt, 3000);
if (!buffer)
return -1;
// Store feed in image
cv::Mat image = cv::Mat(cv::Size(width,(int)(height * 1.5)), CV_8UC1, buffer->data);
arducam_release_buffer(buffer);
// Change image to grayscale (grayscale increases FPS)
cv::cvtColor(image, image, cv::COLOR_YUV2GRAY_I420);
if (!image.isContinuous())
std::cout << "image is not continuous" << std::endl;
The image passes the continuity check fine (meaning the image is continuous).
However, after I resize and split the image into a left and right frame, using:
double scale_factor = 640.0 / width;
int custom_width = int(width * scale_factor);
int custom_height = int(height * scale_factor);
// OpenCV resize
cv::Mat frame = cv::Mat(cv::Size(custom_width, (int)(custom_height * 1.5)), CV_8UC1);
cv::resize(image, frame, frame.size(), 0, 0);
// Split image into left and right frame
cv::Mat frame_left = frame(cv::Rect(0, 0, custom_width / 2, (int)(custom_height * 1.5)));
cv::Mat frame_right = frame(cv::Rect(custom_width / 2, 0, custom_width / 2, (int)(custom_height * 1.5)));
if (!frame.isContinuous())
std::cout << "frame is not continuous" << std::endl;
if (!frame_right.isContinuous())
std::cout << "right frame is not continuous" << std::endl;
if (!frame_left.isContinuous())
std::cout << "left frame is not continuous" << std::endl;
The resized image (frame) is continuous, but the left and right frames fail the continuity check (meaning they are not continuous).
So I guess my question is how can I split the image into two different images, while keeping them continuous?

The solution to this problem is actually quite simple:
if (!frame_right.isContinuous()) {
    frame_right = frame_right.clone();
}
if (!frame_left.isContinuous()) {
    frame_left = frame_left.clone();
}
clone() copies the ROI's pixel data into a new, separately allocated matrix, so OpenCV treats it as an independent image and the left and right frames become continuous.
In short: taking a sub-rectangle of a matrix produces a view whose rows are still separated by the parent's row stride, so it is not continuous; cloning restores continuity by making a compact copy.
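To make the splitting step self-contained, here is a minimal sketch of a helper along those lines (the function name splitStereoFrame and the left/right parameter names are just illustrative):
#include <opencv2/opencv.hpp>

// Splits a side-by-side stereo frame into two continuous images.
// A sub-matrix created with operator() is only a view into the parent matrix,
// so its rows are still separated by the parent's row stride; clone() copies
// the data into a compact buffer, which makes isContinuous() true again.
void splitStereoFrame(const cv::Mat &frame, cv::Mat &left, cv::Mat &right)
{
    const int halfWidth = frame.cols / 2;
    left  = frame(cv::Rect(0, 0, halfWidth, frame.rows)).clone();
    right = frame(cv::Rect(halfWidth, 0, halfWidth, frame.rows)).clone();
    CV_Assert(left.isContinuous() && right.isContinuous());
}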

Related

Perform calibration on fisheye image - cancelling fisheye effect

I'm currently using the OpenCV library with C++, and my goal is to cancel the fisheye effect on an image ("make it plane").
I'm using the function "undistortImage" to cancel the effect, but before that I need to perform camera calibration in order to find the parameters K, Knew, and D, and I didn't exactly understand the documentation (link: http://docs.opencv.org/master/db/d58/group__calib3d__fisheye.html#gga37375a2741e88052ce346884dfc9c6a0a0899eaa2f96d6eed9927c4b4f4464e05).
From my understanding, I should give two lists of points and the function "calibrate" is supposed to return the arrays I need. So my question is the following: given a fisheye image, how am I supposed to pick the two lists of points to get the result? For the moment my code is very basic: it just takes the picture, displays it, performs the undistortion and displays the new image. The elements in the matrices are random, so currently the result is not as expected. Thanks for the answers.
#include "opencv2\core\core.hpp"
#include "opencv2\highgui\highgui.hpp"
#include "opencv2\calib3d\calib3d.hpp"
#include <stdio.h>
#include <iostream>
using namespace std;
using namespace cv;
int main(){
cout << " Usage: display_image ImageToLoadAndDisplay" << endl;
Mat image;
image = imread("C:/Users/Administrator/Downloads/eiffel.jpg", CV_LOAD_IMAGE_COLOR); // Read the file
if (!image.data) // Check for invalid input
{
cout << "Could not open or find the image" << endl;
return -1;
}
cout << "Input image depth: " << image.depth() << endl;
namedWindow("Display window", WINDOW_AUTOSIZE);// Create a window for display.
imshow("Display window", image); // Show our image inside it.
Mat Ka = Mat::eye(3, 3, CV_64F); // Creating distortion matrix
Mat Da = Mat::ones(1, 4, CV_64F);
Mat dstImage(image.rows, image.cols, CV_32F);
cout << "K matrix depth: " << Ka.depth() << endl;
cout << "D matrix depth: " << Da.depth() << endl;
Mat Knew = Mat::eye(3, 3, CV_64F);
std::vector<cv::Vec3d> rvec;
std::vector<cv::Vec3d> tvec;
int flag = 0;
std::vector<Point3d> objectPoints1 = { Point3d(0,0,0), Point3d(1,1,0), Point3d(2,2,0), Point3d(3,3,0), Point3d(4,4,0), Point3d(5,5,0),
Point3d(6,6,0), Point3d(7,7,0), Point3d(3,0,0), Point3d(4,1,0), Point3d(5,2,0), Point3d(6,3,0), Point3d(7,4,0), Point3d(8,5,0), Point3d(5,4,0), Point3d(0,7,0), Point3d(9,7,0), Point3d(9,0,0), Point3d(4,3,0), Point3d(7,2,0)};
std::vector<Point2d> imagePoints1 = { Point(107,84), Point(110,90), Point(116,96), Point(126,107), Point(142,123), Point(168,147),
Point(202,173), Point(232,192), Point(135,69), Point(148,73), Point(165,81), Point(189,93), Point(219,112), Point(248,133), Point(166,119), Point(96,183), Point(270,174), Point(226,56), Point(144,102), Point(206,75) };
std::vector<std::vector<cv::Point2d> > imagePoints(1);
imagePoints[0] = imagePoints1;
std::vector<std::vector<cv::Point3d> > objectPoints(1);
objectPoints[0] = objectPoints1;
fisheye::calibrate(objectPoints, imagePoints, image.size(), Ka, Da, rvec, tvec, flag); // Calibration
cout << Ka<< endl;
cout << Da << endl;
fisheye::undistortImage(image, dstImage, Ka, Da, Knew); // Performing undistortion
namedWindow("Display window 2", WINDOW_AUTOSIZE);// Create a window for display.
imshow("Display window 2", dstImage); // Show our image inside it.
waitKey(0); // Wait for a keystroke in the window
return 0;
}
For calibration with cv::fisheye::calibrate you must provide
objectPoints vector of vectors of calibration pattern points in the calibration pattern coordinate space.
This means you provide KNOWN real-world coordinates of the points (they must correspond to the points in imagePoints), but you can choose the coordinate system position arbitrarily (as long as it is Cartesian), so you must know your object, e.g. a planar test pattern.
imagePoints vector of vectors of the projections of calibration pattern points
These must be the same points as in objectPoints, but given in image coordinates, i.e. where the projection of each object point hits your image (read/extract the coordinates from your image).
For example, if your camera captured this image (taken from here):
you must know the dimensions of your test pattern (up to a scale). For example, you could choose the top-left corner of the top-left square to be position (0,0,0), the top-right corner of the top-left square to be (1,0,0), and the bottom-left corner of the top-left square to be (0,1,0), so your whole test pattern would be placed on the xy-plane.
Then you could extract these correspondences:
pixel        real-world
(144,103)    (4,3,0)
(206,75)     (7,2,0)
(109,151)    (2,5,0)
(253,159)    (8,6,0)
for these points (marked red):
The pixel position could be your imagePoints list while the real-world positions could be your objectPoints list.
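As a minimal sketch of how such correspondences could be passed to cv::fisheye::calibrate (using only the four illustrative points from the table above; a real calibration needs many more correspondences, ideally from several views of a known pattern, and the helper name calibrateFromCorrespondences is hypothetical):
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

void calibrateFromCorrespondences(const cv::Size &imageSize)
{
    // Known pattern coordinates (object space, z = 0 for a planar pattern)...
    std::vector<cv::Point3d> objectPts = { {4, 3, 0}, {7, 2, 0}, {2, 5, 0}, {8, 6, 0} };
    // ...and where those points appear in the fisheye image (pixel coordinates).
    std::vector<cv::Point2d> imagePts = { {144, 103}, {206, 75}, {109, 151}, {253, 159} };

    // fisheye::calibrate expects one inner vector per view of the pattern.
    std::vector<std::vector<cv::Point3d>> objectPoints = { objectPts };
    std::vector<std::vector<cv::Point2d>> imagePoints = { imagePts };

    cv::Mat K, D;                        // outputs: camera matrix and distortion coefficients
    std::vector<cv::Vec3d> rvecs, tvecs; // per-view rotation and translation
    cv::fisheye::calibrate(objectPoints, imagePoints, imageSize, K, D, rvecs, tvecs, 0);
}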
Does this answer your question?

Opencv setting a color pixel ends up blurring to neighboring pixels

I'm trying to set the pixel value of a CV_8UC3 type image in OpenCV. I know how to do this with a single channel image CV_8UC1, but when doing the same thing with a three channel image the pixel value ends up blurring to the neighboring pixels even though they were not changed.
This is how I do it with a single channel image:
Mat tmp(5, 5, CV_8UC1, Scalar(0));
uchar *tmp_p = tmp.ptr();
tmp_p[0] = (uchar)255;
imwrite("tmp.jpg", tmp);
The resulting image is as you would expect, just the very first pixel has been changed from black to white, while all of the other pixels were left alone.
The following is how I'd expect to do it with a three channel image:
Mat tmp(5, 5, CV_8UC3, Scalar(0));
uchar *tmp_p = tmp.ptr();
tmp_p[0] = (uchar)255;
imwrite("tmp.jpg", tmp);
I expected this to yield a single blue pixel in the top-left corner of the image. However, the three neighboring pixels seem to have "blurred" together with the pixel value I set.
If anyone knows why this blurring of pixels is happening I'd very much appreciate any help I can get.
It turns out the problem was the image file format. I was writing the image out as .jpg, whose lossy compression modified pixel values around the one I set. Changing the file type to .png (lossless) corrected the problem.
Here is my code now, with console prints of the original image before writing it to a file as well as after re-reading the file that was written.
// create a small black image and
// change the color of the first pixel to blue
Mat tmp(5, 5, CV_8UC3, Scalar(0));
uchar *tmp_p = tmp.ptr();
tmp_p[0] = (uchar)255;
// output values to the console
cout << tmp << endl << endl;
// write out image to a file then re-read back in
#define CORRECTMETHOD // comment out this line to see what was wrong before
#ifdef CORRECTMETHOD
imwrite("tmp.png", tmp);
tmp = imread("tmp.png");
#else
imwrite("tmp.jpg", tmp);
tmp = imread("tmp.jpg");
#endif
// print out values to console (these values
// should match the original image created above)
cout << tmp << endl;
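As a side note, a minimal sketch of addressing a single pixel of a CV_8UC3 image through cv::Vec3b instead of a raw uchar pointer (OpenCV stores channels in B, G, R order):
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat tmp(5, 5, CV_8UC3, cv::Scalar(0, 0, 0));
    // at<Vec3b> addresses a whole pixel, so no manual channel-offset arithmetic is needed;
    // this sets the top-left pixel to pure blue (B=255, G=0, R=0).
    tmp.at<cv::Vec3b>(0, 0) = cv::Vec3b(255, 0, 0);
    // PNG is lossless, so the pixel value survives a write/read round trip.
    cv::imwrite("tmp.png", tmp);
    return 0;
}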

Real Time Multiple object detection from multiple videos, while correcting image distortion

I'm working on a project where I'm trying to detect multiple objects from multiple videos at the same time. I am also correcting the distortion in the videos so I can get an accurate reading of the bearing of the detections relative to the camera.
I'm working with OpenCV, in C++ using Visual Studio 2010
The code works but it is very slow, not real time. I am hoping someone may have suggestions as to how it may be sped up, if it can be. I'm not much of a coder at present, am still learning about image processing, and don't know many tricks.
What the code does in a general sense is:
-Opens a video file
-Applies distortion correction to each image frame (for bearings)
-Crops the image (improves detections, time stamps give false positives)
-Applies a Haar type cascade to the cropped image to detect objects
-Draws a bounding box around the detections
-Displays the images
-Calculates the angle of each detection and prints it to the terminal
-Draws a radar-like image and displays the angle of each detection relative to the camera in each frame; the idea is to have a single display mapping out the surrounding area from the detections of every camera
I've included the main code that runs the video and detections for a single video; this is still pretty slow and takes approx. 18 seconds for each second of video. When I attempt to run 4 videos it takes about 3 times longer.
Video dimensions are 704x576.
Any help or advice would be much appreciated, or even just knowing that it can only be sped up with purpose-designed hardware.
Cheers,
Dave
int main(){
/////////////////////////////
//**Distortion Correction**//
/////////////////////////////
std::cout<< endl << "Reading:" << endl;
//stores a file
FileStorage fs;
//reads and stores the xml file with the camera calibration
fs.open("cal2.xml", FileStorage::READ);
if (fs.isOpened())
{
cout<<"File is opened\n";
}
//Mat objects to store the camera matrix and distortion coefficients
Mat CamMat, DistCoeff;
FileNode n = fs.root();
//takes the parameters from the xml file and stores them in the Mat objects
fs["Camera_Matrix"] >> CamMat;
fs["Distortion_Coefficients"] >> DistCoeff;
/////////////////////
//**Video Display**//
/////////////////////
//Mat objects to store the images
Mat Original, Vid1, Vid1Crop;
//Cropping Image to exclude time/camera stamps to improve detections
Rect roi(0, 35, 704, 490);
//for reading video or webcam
VideoCapture cap;
//for opening a video file, give the location and name of the file, separating folders with "\\" instead of a single "\"
cap.open("C:\\Users\\Desktop\\Run_01_005 two containers\\Video\\ch04_20140219124355.mp4");
//Windows for the images
namedWindow("New", CV_WINDOW_NORMAL);
namedWindow("Display", CV_WINDOW_NORMAL);
///////////////////////
//**Detection Setup**//
///////////////////////
// Cascade Classifier object
CascadeClassifier boat_cascade;
//loads the xml file for the classifier, put the address and name of the xml file in the brackets
boat_cascade.load( "boatcascadeAttp3.xml" );
/////////////////////////////////////
//**Single Source Display Image**////
/////////////////////////////////////
Mat Output;
//loop to continually capture/update images
while (1){
Output = Display();
cap>>Original;
//applies the distortion correction to input image and outputs to New image
undistort(Original, Vid1, CamMat, DistCoeff, noArray());
//Image excluding the time/camera stamps in video which caused a lot of false positives
Vid1Crop = Vid1(roi);
//Set.NewCrop(New, roi);
// Detect boats
std::vector<Rect> boats;
//Parameters may need some further adjustment, currently seems to work well
//Detection performed on Region of Interest that excludes the time stamp which caused a number of False Positives
boat_cascade.detectMultiScale( Vid1Crop, boats, 1.1, 15, 0|CV_HAAR_SCALE_IMAGE, Size(25, 25), Size(75,75) );
// Draw circles on the detected boats
for( int i = 0; i < boats.size(); i++ )
{
//Draws a box around the detected object
rectangle( Vid1Crop, Point(boats[i].x, boats[i].y), Point(boats[i].x+boats[i].width, boats[i].y+boats[i].height), Scalar( 0, 255, 0), 2, 8);
//finds the position of the detection along the X axis
int centreX = boats[i].x + boats[i].width*0.5;
int fromCent = Vid1Crop.cols - centreX;
float angle;
//calls Angle function
angle = Angle(centreX, fromCent, Vid1Crop);
//calls DisplayPoints function
Point XYpoints = DisplayPoints(angle);
//prints out the result, angle for cam ranges
cout << angle;
cout << " degrees" << endl;
//Draws red circles on the single source display corresponding to the detections
circle( Output, XYpoints, 5.0, Scalar( 0, 0, 255 ), 4, 8 );
}
//shows the New output image after correction
imshow("New", Vid1);
imshow("Display", Output);
//delay for 1ms between frames - Note 25 fps in video
waitKey(1);
}
fs.release();
return (0);
}
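The Angle() and DisplayPoints() helpers are not included in the post. Purely as an illustration (and not necessarily how the original code computes it), here is one common way a bearing could be derived from a detection's horizontal pixel position, assuming the camera's horizontal field of view is known; the function name and the default FOV value are hypothetical:
// Hypothetical helper: maps a detection's horizontal pixel position to an
// approximate bearing relative to the camera's optical axis.
float bearingFromColumn(int centreX, int imageWidth, float horizontalFovDeg = 60.0f)
{
    // Offset of the detection from the image centre, in the range [-0.5, 0.5].
    float offset = (static_cast<float>(centreX) / static_cast<float>(imageWidth)) - 0.5f;
    // Linear approximation: scale the offset by the horizontal field of view.
    // Negative = left of centre, positive = right of centre.
    return offset * horizontalFovDeg;
}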

openFrameworks and openCV image processing issue with analysing video and rendering manipulated images back to the user with a color palette

I am working on a project with OpenFrameworks using ofxCV, ofxOpencv and ofxColorQuantizer. Technically, the project analyzes live video captured via webcam in real time to gather and output the most prominent color in the current frame. When generating the most prominent color I use the pixel difference between the current frame and the previous frame to find which areas of the frame have changed, and I use those updated or moving areas to figure out the most prominent colors.
The reason for using the pixel differences to generate the color palette is that I want to handle the case where a user walks into the video frame: I want to gather the color palette of the person, for instance what they are wearing. For example, a red shirt and blue pants would be in the palette while the white background would be excluded.
I have a strong background in JavaScript and canvas but am fairly new to OpenFrameworks and C++, which is why I think I am running into a roadblock with the problem described above.
Along with OpenFrameworks I am using ofxCV, ofxOpencv and ofxColorQuantizer as tools for this installation. I take a webcam image, make it a cv::Mat, run pyrDown on it twice, follow that with an absdiff of the mat, and then try to pass the mat into the ofxColorQuantizer. This is where I think I am running into problems: I don't think the ofxColorQuantizer likes the mat format of the image I am trying to use. I've tried looking for a different image format to convert the image to in order to solve this issue, but I haven't been able to come to a solution.
For efficiency I am hoping to do the color difference and color prominence calculations on the smaller image (after I pyrDown the image) while displaying the full image on the screen, with the generated color palette displayed at the bottom left like in the ofxColorQuantizer example.
I think there may be other ways to speed up the code but at the moment I am trying to get this portion of the app working first.
I have my main.cpp set up as follows:
#include "ofMain.h"
#include "ofApp.h"
#include "ofAppGlutWindow.h"
//========================================================================
int main( ){
ofAppGlutWindow window;
ofSetupOpenGL(&window, 1024,768, OF_WINDOW); // <-------- setup the GL context
// ofSetupOpenGL(1024,768,OF_WINDOW); // <-------- setup the GL context
// this kicks off the running of my app
// can be OF_WINDOW or OF_FULLSCREEN
// pass in width and height too:
ofRunApp(new ofApp());
}
My ofApp.h file is as follows:
#pragma once
#include "ofMain.h"
#include "ofxOpenCv.h"
#include "ofxCv.h"
#include "ofxColorQuantizer.h"
class ofApp : public ofBaseApp{
public:
void setup();
void update();
void draw();
ofVideoGrabber cam;
ofPixels previous;
ofImage diff;
void kMeansTest();
ofImage image;
ofImage img;
cv::Mat matA, matB;
ofImage diffCopy;
ofImage outputImage;
ofxCv::RunningBackground background;
ofxColorQuantizer colorQuantizer;
// a scalar is like an ofVec4f but normally used for storing color information
cv::Scalar diffMean;
};
And finally my ofApp.cpp is below:
#include "ofApp.h"
using namespace ofxCv;
using namespace cv;
//--------------------------------------------------------------
void ofApp::setup(){
ofSetVerticalSync(true);
cam.initGrabber(320, 240);
// get our colors
colorQuantizer.setNumColors(3);
// resize the window to match the image
// ofSetWindowShape(image.getWidth(), image.getHeight());
ofSetWindowShape(800, 600);
// imitate() will set up previous and diff
// so they have the same size and type as cam
imitate(previous, cam);
imitate(diff, cam);
imitate(previous, outputImage);
imitate(diff, outputImage);
}
//--------------------------------------------------------------
void ofApp::update(){
cam.update();
if(cam.isFrameNew()) {
matA = ofxCv::toCv(cam.getPixelsRef());
ofxCv::pyrDown(matA, matB);
ofxCv::pyrDown(matB, matA);
ofxCv::medianBlur(matA, 3);
ofxCv::toOf(matA, outputImage);
// take the absolute difference of prev and cam and save it inside diff
absdiff(previous, outputImage, diff);
}
}
//--------------------------------------------------------------
void ofApp::draw(){
// If the image is ready to draw, then draw it
if(outputImage.isAllocated()) {
outputImage.update();
outputImage.draw(0, 0, ofGetWidth(), ofGetHeight());
}
ofBackground(100,100,100);
ofSetColor(255);
ofImage diffCopy;
diffCopy = diff;
diffCopy.resize(diffCopy.getWidth()/2, diffCopy.getHeight()/2);
// there is some sort of bug / issue going on here...
// prevent the app from compiling
// comment out to run and see blank page
colorQuantizer.quantize(diffCopy.getPixelsRef());
ofLog() << "the number is " << outputImage.getHeight();
ofLog() << "the number is " << diffCopy.getHeight();
ofSetColor(255);
img.update();
// cam.draw(0, 0, 800, 600);
outputImage.draw(0, 0, 800, 600);
// colorQuantizer.draw(ofPoint(0, cam.getHeight()-20));
colorQuantizer.draw(ofPoint(0, 600-20));
// use the [] operator to get elements from a Scalar
float diffRed = diffMean[0];
float diffGreen = diffMean[1];
float diffBlue = diffMean[2];
ofSetColor(255, 0, 0);
ofRect(0, 0, diffRed, 10);
ofSetColor(0, 255, 0);
ofRect(0, 15, diffGreen, 10);
ofSetColor(0, 0, 255);
ofRect(0, 30, diffBlue, 10);
}
//--------------------------------------------------------------
void ofApp::kMeansTest(){
cv::Mat samples = (cv::Mat_<float>(8, 1) << 31 , 2 , 10 , 11 , 25 , 27, 2, 1);
cv::Mat labels;
// double kmeans(const Mat& samples, int clusterCount, Mat& labels,
cv::TermCriteria termcrit;
int attempts, flags;
cv::Mat centers;
double compactness = cv::kmeans(samples, 3, labels, cv::TermCriteria(), 2, cv::KMEANS_PP_CENTERS, centers);
cout<<"labels:"<<endl;
for(int i = 0; i < labels.rows; ++i)
{
cout<<labels.at<int>(0, i)<<endl;
}
cout<<"\ncenters:"<<endl;
for(int i = 0; i < centers.rows; ++i)
{
cout<<centers.at<float>(0, i)<<endl;
}
cout<<"\ncompactness: "<<compactness<<endl;
}
Apologies in advance for the state of my code — it's getting late and I'm trying to get this done.
My questions are: what image format does openFrameworks use when grabbing the webcam image, what image format does OpenCV expect, what should I use to switch back from a cv::Mat to an ofImage, and is there a way to getPixelsRef from a cv::Mat?
The area of code where I think I have something wrong is the following logic.
I have this line of code, which gets the video frame from the webcam: matA = ofxCv::toCv(cam.getPixelsRef());
Then I run a couple of ofxCv procedures on the frame, such as ofxCv::pyrDown(matA, matB);, which I think changes the image format or pixel format of the frame.
Then I convert the frame back to OF with ofxCv::toOf(matA, outputImage);.
Next I get the difference in pixels between the current frame and the last frame and create a copy of that difference. Potentially the issue lies here, with the format of the diff output image.
Then I pass the diff copy to colorQuantizer.quantize(diffCopy.getPixelsRef()); to try to generate the color palette for the changed pixels.
It is the colorQuantizer class and function call that is giving me a thread error which reads [ error ] ofTexture: allocate(): ofTextureData has 0 width and/or height: 0x0
with an EXC_BAD_ACCESS
And lastly, could there be an alternative cause for the EXC_BAD_ACCESS error other than the image formatting? Being new to C++ I'm just guessing and going off instinct about what I think the root cause of my problem is.
Many thanks.
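For reference, a minimal sketch of the ofxCv conversions the posted code already relies on; as I understand it, ofxCv::toCv wraps the ofPixels data as a cv::Mat without copying, while ofxCv::toOf copies a cv::Mat back into an ofImage. The snippet reuses the cam member from the header above, and the other variable names are illustrative:
#include "ofMain.h"
#include "ofxCv.h"

// Somewhere inside ofApp::update(), once cam.isFrameNew() is true:
cv::Mat frameMat = ofxCv::toCv(cam.getPixelsRef()); // wraps the grabber's pixels as a Mat

cv::Mat small;
ofxCv::pyrDown(frameMat, small);                     // half-size copy, same pixel type

ofImage smallImage;
ofxCv::toOf(small, smallImage);                      // copy the Mat back into an ofImage
smallImage.update();                                 // refresh the texture before drawing
// smallImage.getPixelsRef() can then be handed to colorQuantizer.quantize()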

OpenCV2.1, map function? accessing each pixel?

I have a function that I would like to apply to each pixel in a YUN image (call it src). I would like the output to be saved to a separate image, call it dst.
I know I can achieve this through pointer arithmetic and accessing the underlying matrix of the image. I was wondering if there was an easier way, say a predefined "map" function that allows me to map a function over all the pixels?
Thanks,
Since I don't know what a YUN image is, I'll assume you know how to convert RGB to that format.
I'm not aware of an easy way to do the map function you mentioned. Anyway, OpenCV has a few predefined functions to do image conversion, including
cvCvtColor(color_frame, gray_frame, CV_BGR2GRAY);
which you might want to take a closer look at.
If you would like to do your own, you would need to access each pixel of the image individually, and this code shows you how to do it (the code below skips all kinds of error and return checks for the sake of simplicity):
// Loading src image
IplImage* src_img = cvLoadImage("input.png", CV_LOAD_IMAGE_UNCHANGED);
int width = src_img->width;
int height = src_img->height;
int bpp = src_img->nChannels;
// Temporary buffer to save the modified image
char* buff = new char[width * height * bpp];
// Loop to iterate over each pixel of the original img
for (int i=0; i < width*height*bpp; i+=bpp)
{
/* Perform pixel operation inside this loop */
if (!(i % (width*bpp))) // printing empty line for better readability
std::cout << std::endl;
std::cout << std::dec << "B:" << (int)(unsigned char) src_img->imageData[i] <<
" G:" << (int)(unsigned char) src_img->imageData[i+1] <<
" R:" << (int)(unsigned char) src_img->imageData[i+2] << " "; // pixels are stored as BGR
/* Let's say you wanted to do a lazy grayscale conversion */
// imageData is a (signed) char*, so cast each value to unsigned char first,
// otherwise values above 127 are treated as negative and break the average
char gray = (char)(((unsigned char) src_img->imageData[i] +
(unsigned char) src_img->imageData[i+1] +
(unsigned char) src_img->imageData[i+2]) / 3);
buff[i] = gray;
buff[i+1] = gray;
buff[i+2] = gray;
}
IplImage* dst_img = cvCreateImage(cvSize(width, height), src_img->depth, bpp);
dst_img->imageData = buff;
if (!cvSaveImage("output.png", dst_img))
{
std::cout << "ERROR: Failed cvSaveImage" << std::endl;
}
Basically, the code loads a color image from the hard disk (OpenCV stores it in BGR channel order) and performs a grayscale conversion on each pixel, saving the result to a temporary buffer. It then creates another IplImage from the grayscale data and saves it to a file on disk.
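For comparison, here is a minimal sketch of the same per-pixel idea using the C++ cv::Mat API. In OpenCV 3 and later, cv::Mat::forEach comes fairly close to the "map a function over all pixels" behaviour the question asks about (it did not exist yet in OpenCV 2.1):
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src = cv::imread("input.png", cv::IMREAD_COLOR); // loaded in BGR order
    if (src.empty())
        return -1;

    cv::Mat dst = src.clone();
    // forEach applies the lambda to every pixel, potentially in parallel.
    dst.forEach<cv::Vec3b>([](cv::Vec3b &px, const int * /*position*/) {
        // lazy grayscale: average the B, G and R channels
        uchar gray = static_cast<uchar>((px[0] + px[1] + px[2]) / 3);
        px = cv::Vec3b(gray, gray, gray);
    });

    cv::imwrite("output.png", dst);
    return 0;
}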