I'm doing a little project with ARToolKitPlus. I find it strange that the detected marker id is always -1, and the confidence of the marker is also always 0.0. I've loaded the patt.hiro file provided with the standard ARToolKitPlus zip. The code below shows what I'm doing:
A snippet from the 'DrawGLScene' function:
//Render the webcam background
IplImage* img = showWebcam();
// do the OpenGL camera setup
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(tracker->getTracker()->getProjectionMatrix());
//Detect the markers in the video frame
ARToolKitPlus::ARMarkerInfo* markerinfo=0;
int nummarkers = detectMarkers(img, &markerinfo);
The 'detectMarkers' function:
int detectMarkers(IplImage* image, ARToolKitPlus::ARMarkerInfo** markerinfo){
    //flip the frame vertically before handing it to the tracker
    cvFlip(image, image, 0);
    int nummarkers;
    //run marker detection; the results are written to markerinfo and nummarkers
    tracker->getTracker()->calc((uchar*)(image->imageData), -1, false, markerinfo, &nummarkers);
    return nummarkers;
}
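For reference, this is roughly how I read the results back afterwards (a minimal sketch; markerinfo and nummarkers are the values returned by the call above):

//Sketch: inspect what the tracker reported for each detected marker
for (int i = 0; i < nummarkers; i++) {
    printf("marker %d: id = %d, confidence = %f\n", i, markerinfo[i].id, markerinfo[i].cf);
}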
The program successfully detects markers in the scene, but doesn't give them any id or confidence value, even when the marker is the one loaded into memory. Any help is really appreciated!
I have images of humans from which I want to eliminate a certain region. Please take a look at the three images below:
GrabCut extracted the figure (second picture) from the first image without any problem.
Now I have a rectangle corresponding to the face (the circle on the second picture) and I want to use it as "background" (while the rest of the image would be a combination of foreground and background) to eliminate the skin, leaving only the clothes.
The approximately desired result is on the third picture:
Is there any way to make GrabCut do this? I cannot assign the areas/masks manually; all I have is the rectangle provided by the face detection.
UPD: In the code below I try to do it using a mask, but stage 2 doesn't seem to work (at the very least the face should be cut out). Maybe I just do not understand how it works, as I have only modified another example. The code runs:
static Mat my_segment(Mat _inImage, Rect assumption, Rect face){
// human segmentation opencv
// _inImage - input image
// assumption - human rectangle on _inImage
// face - face rectangle on _inImage
// iterations - is being set externally
/*
GrabCut segmentation
*/
Mat bgModel,fgModel; // the models (internally used)
Mat result; // segmentation result
//*********************** step1: GrabCut human figure segmentation
grabCut(_inImage, // input image
result, // segmentation result
assumption,// rectangle containing foreground
bgModel,fgModel, // models
iterations, // number of iterations
cv::GC_INIT_WITH_RECT); // use rectangle
// Get the pixels marked as likely foreground
cv::compare(result,cv::GC_PR_FGD,result,cv::CMP_EQ);
// copy the likely-foreground pixels inside the assumption rectangle onto a white image
cv::Mat separated(assumption.size(),CV_8UC3,cv::Scalar(255,255,255));
_inImage(assumption).copyTo(separated,result(assumption));
// (bg pixels not copied)
//return(separated); // return the innerings of assumption rectangle
//*********************** step2:
//cutting the skin with the mask based on the face rectangle
Rect adjusted_face = face;
adjusted_face.x = face.x - assumption.x;
adjusted_face.y = face.y - assumption.y;
//rectangle(separated,
// adjusted_face.tl(),
// adjusted_face.br(),
// cv::Scalar(166,94,91), 2);
//creating mask
Mat mymask(separated.size(),CV_8UC1);
// mark everything as probable foreground...
mymask.setTo(Scalar::all(GC_PR_FGD));
// ...and the face area as sure background
mymask(adjusted_face).setTo(Scalar::all(GC_BGD));
// performing grabcut
grabCut(separated,
mymask,
cv::Rect(0,0,assumption.width,assumption.height),
bgModel,fgModel,
1,
cv::GC_INIT_WITH_MASK);
// Just repeating everything from before
// Get the pixels marked as likely foreground
cv::compare(mymask,cv::GC_PR_FGD,mymask,cv::CMP_EQ);
//here was the error
//separated.copyTo(separated,mymask); // bg pixels not copied
cv::Mat res(separated.size(),CV_8UC3,cv::Scalar(255,255,255));
separated.copyTo(res,mymask); // bg pixels not copied
//*************************//
//return(separated); // return the innerings of assumption rectangle
return(res);
}
Okay, I found the mistake. Instead of
separated.copyTo(separated,mymask);
the last lines should be changed to:
cv::Mat res(separated.size(),CV_8UC3,cv::Scalar(255,255,255));
separated.copyTo(res,mymask); // bg pixels not copied
//*************************//
return(res); // return the innerings of assumption rectangle
Also, one needs many more iterations for the second grabCut call, something like 5-7.
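Concretely, the second call then ends up looking roughly like this (just a sketch of that one change; the rest of the function stays the same):

// second pass with the face masked as sure background, using more iterations
grabCut(separated,
        mymask,
        cv::Rect(0,0,assumption.width,assumption.height),
        bgModel,fgModel,
        5,                      // 5-7 iterations instead of 1
        cv::GC_INIT_WITH_MASK);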
The results are not so good, so I welcome the answers that can improve it.
I'm working on a project where I'm trying to detect multiple objects in multiple videos at the same time. I am also correcting the distortion in the videos so I can get an accurate reading of the bearing of each detection relative to the camera.
I'm working with OpenCV in C++, using Visual Studio 2010.
The code works but it is very slow, nowhere near real time. I am hoping someone may have suggestions as to how it could be sped up, if it can be. I'm not much of a coder at present, I'm still learning about image processing, and I don't know many tricks.
What the code does in a general sense is:
-Opens a video file
-Applies distortion correction to each image frame (for bearings)
-Crops the image (improves detections; the time stamps give false positives)
-Applies a Haar type cascade to the cropped image to detect objects
-Draws a bounding box around the detections
-Displays the images
-Calculates the angle of each detected object and prints it to the terminal
-Draws a radar-like image and displays the angle of each detection relative to the camera in each frame; the idea is to have a single display mapping out the surrounding area with the detections from every camera.
I've included the main code that runs the video and detections for a single video; this is still pretty slow and takes approx. 18 seconds for each second of video. When I have 4 videos attempting to run, it takes about 3 times longer.
Video dimensions are 704x576.
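One idea I have been wondering about, though I have not tried it, is whether the per-frame undistort() call could be replaced by computing the undistortion maps once with initUndistortRectifyMap() and calling remap() each frame. Roughly (this sketch is not part of the code below):

//precompute the maps once, before the frame loop
Mat map1, map2;
initUndistortRectifyMap(CamMat, DistCoeff, Mat(), CamMat, Size(704, 576), CV_16SC2, map1, map2);
//then inside the frame loop, instead of undistort():
remap(Original, Vid1, map1, map2, INTER_LINEAR);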
Any help or advice would be much appreciated, or even just knowing that it can only be sped up with purpose-designed hardware.
Cheers,
Dave
int main(){
/////////////////////////////
//**Distortion Correction**//
/////////////////////////////
std::cout<< endl << "Reading:" << endl;
//stores a file
FileStorage fs;
//reads and stores the xml file with the camera calibration
fs.open("cal2.xml", FileStorage::READ);
if (fs.isOpened())
{
cout<<"File is opened\n";
}
//Mat objects to store the camera matrix and distortion coefficients
Mat CamMat, DistCoeff;
FileNode n = fs.root();
//takes the parameters from the xml file and stores them in the Mat objects
fs["Camera_Matrix"] >> CamMat;
fs["Distortion_Coefficients"] >> DistCoeff;
/////////////////////
//**Video Display**//
/////////////////////
//Mat objects to store the images
Mat Original, Vid1, Vid1Crop;
//Cropping Image to exclude time/camera stamps to improve detections
Rect roi(0, 35, 704, 490);
//for reading video or webcam
VideoCapture cap;
//for opening a video file, give the location and name of the file, separating folders with "\\" instead of a single "\"
cap.open("C:\\Users\\Desktop\\Run_01_005 two containers\\Video\\ch04_20140219124355.mp4");
//Windows for the images
namedWindow("New", CV_WINDOW_NORMAL);
namedWindow("Display", CV_WINDOW_NORMAL);
///////////////////////
//**Detection Setup**//
///////////////////////
// Cascade Classifier object
CascadeClassifier boat_cascade;
//loads the xml file for the classifier, put the address and name of the xml file in the brackets
boat_cascade.load( "boatcascadeAttp3.xml" );
/////////////////////////////////////
//**Single Source Display Image**////
/////////////////////////////////////
Mat Output;
//loop to continually capture/update images
while (1){
Output = Display();
cap>>Original;
//applies the distortion correction to input image and outputs to New image
undistort(Original, Vid1, CamMat, DistCoeff, noArray());
//Image excluding the time/camera stamps in video which caused a lot of false positives
Vid1Crop = Vid1(roi);
//Set.NewCrop(New, roi);
// Detect boats
std::vector<Rect> boats;
//Parameters may need some further adjustment, currently seems to work well
//Detection performed on Region of Interest that excludes the time stamp which caused a number of False Positives
boat_cascade.detectMultiScale( Vid1Crop, boats, 1.1, 15, 0|CV_HAAR_SCALE_IMAGE, Size(25, 25), Size(75,75) );
// Draw boxes around the detected boats and mark them on the radar display
for( int i = 0; i < boats.size(); i++ )
{
//Draws a box around the detected object
rectangle( Vid1Crop, Point(boats[i].x, boats[i].y), Point(boats[i].x+boats[i].width, boats[i].y+boats[i].height), Scalar( 0, 255, 0), 2, 8);
//finds the position of the detection along the X axis
int centreX = boats[i].x + boats[i].width*0.5;
int fromCent = Vid1Crop.cols - centreX;
float angle;
//calls Angle function
angle = Angle(centreX, fromCent, Vid1Crop);
//calls DisplayPoints function
Point XYpoints = DisplayPoints(angle);
//prints out the result, angle for cam ranges
cout << angle;
cout << " degrees" << endl;
//Draws red circles on the single source display corresponding to the detections
circle( Output, XYpoints, 5.0, Scalar( 0, 0, 255 ), 4, 8 );
}
//shows the New output image after correction
imshow("New", Vid1);
imshow("Display", Output);
//delay for 1ms between frames - Note 25 fps in video
waitKey(1);
}
fs.release();
return (0);
}
I am writing a program to display two cameras next to each other. In Qt this is pretty simple with QCamera. But my cameras are turned by 90°, so I have to turn the image in the program too.
The QCamera class has no command to rotate it, so I want to display the output in a label instead of a viewfinder: I grab an image, rotate it and display it in a label.
QImage img;
QPixmap img_;
img = ui->viewfinder->grab().toImage();
img_ = QPixmap::fromImage(img);
img_ = img_.transformed(QTransform().rotate((90)%360));
QImage img2;
QPixmap img2_;
img2 = ui->viewfinder->grab().toImage();
img2_ = QPixmap::fromImage(img2);
img2_ = img2_.transformed(QTransform().rotate((90)%360));
ui->label->setPixmap(img_);
ui->label_2->setPixmap(img2_);
When I start the program there are just two black boxes next to each other.
(The code is missing the part where I declare everything, but the camera works fine in the viewfinder, so I think there is no problem there.)
Try this:
img_ = QPixmap::grabWindow(ui->viewfinder->winId(), 0, 0, -1, -1); (to take the snapshot as a QPixmap)
or
img = QPixmap::grabWindow(ui->viewfinder->winId(), 0, 0, -1, -1).toImage(); (to take the snapshot as a QImage)
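On Qt 5, where QPixmap::grabWindow is deprecated, the same snapshot can be taken through QScreen instead (a small sketch, assuming the primary screen):

// needs <QScreen> and <QGuiApplication>
QScreen *screen = QGuiApplication::primaryScreen();
img_ = screen->grabWindow(ui->viewfinder->winId()); // snapshot as QPixmap
img = img_.toImage();                               // or convert to QImage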
You can use the orientation of the camera to correct the image orientation in the viewfinder, as described in the Qt documentation. Here is the link:
http://doc.qt.io/qt-5/cameraoverview.html
and here is the code found in the documentation:
// Assuming a QImage has been created from the QVideoFrame that needs to be presented
QImage videoFrame;
QCameraInfo cameraInfo(camera); // needed to get the camera sensor position and orientation
// Get the current display orientation
const QScreen *screen = QGuiApplication::primaryScreen();
const int screenAngle = screen->angleBetween(screen->nativeOrientation(), screen->orientation());
int rotation;
if (cameraInfo.position() == QCamera::BackFace) {
rotation = (cameraInfo.orientation() - screenAngle) % 360;
} else {
// Front position, compensate the mirror
rotation = (360 - cameraInfo.orientation() + screenAngle) % 360;
}
// Rotate the frame so it always shows in the correct orientation
videoFrame = videoFrame.transformed(QTransform().rotate(rotation));
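If you keep the label-based approach from your question, the same rotation value can then presumably be reused on the grabbed pixmap instead of the hard-coded 90, for example:

// reuse the rotation computed above rather than a fixed 90 degrees
img_ = img_.transformed(QTransform().rotate(rotation));
ui->label->setPixmap(img_);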
It looks like you don't even understand what you are looking at...
What's the purpose of pasting stuff like that to the forum? Did you read ALL of the description about this? It's only part of the code, which I can see you don't understand, but you try to be smart :)
I am working on a project with openFrameworks using ofxCv, ofxOpencv and ofxColorQuantizer. The project analyzes live video captured via webcam in real time to gather and output the most prominent color in the current frame. When generating the most prominent color I use the pixel difference between the current frame and the previous frame to work out which colors have changed, and use those updated or moving areas of the video frame to figure out the most prominent colors.
The reason for using the pixel differences to generate the color palette is that I want to handle the case of a user walking into the video frame: I want to gather the color palette of the person, for instance what they are wearing. For example, a red shirt and blue pants will be in the palette while the white background will be excluded.
I have a strong background in JavaScript and canvas but am fairly new to openFrameworks and C++, which is why I think I am running into a roadblock with the problem described above.
Along with openFrameworks I am using ofxCv, ofxOpencv and ofxColorQuantizer as tools for this installation. I take a webcam image, convert it to a cv::Mat, run pyrDown on it twice, follow that with an absdiff of the Mats, and then try to pass the result into ofxColorQuantizer. This is where I think I am running into problems: I don't think ofxColorQuantizer likes the Mat format of the image I am trying to use. I've looked for a different image format to convert the image to in order to solve this, but I haven't been able to come to a solution.
For efficiency I am hoping to do the color difference and color prominence calculations on the smaller image (after I pyrDown the image), while displaying the full image on screen with the generated color palette at the bottom left, like in the ofxColorQuantizer example.
I think there may be other ways to speed up the code but at the moment I am trying to get this portion of the app working first.
I have my main.cpp set up as follows:
#include "ofMain.h"
#include "ofApp.h"
#include "ofAppGlutWindow.h"
//========================================================================
int main( ){
ofAppGlutWindow window;
ofSetupOpenGL(&window, 1024,768, OF_WINDOW); // <-------- setup the GL context
// ofSetupOpenGL(1024,768,OF_WINDOW); // <-------- setup the GL context
// this kicks off the running of my app
// can be OF_WINDOW or OF_FULLSCREEN
// pass in width and height too:
ofRunApp(new ofApp());
}
My ofApp.h file is as follows:
#pragma once
#include "ofMain.h"
#include "ofxOpenCv.h"
#include "ofxCv.h"
#include "ofxColorQuantizer.h"
class ofApp : public ofBaseApp{
public:
void setup();
void update();
void draw();
ofVideoGrabber cam;
ofPixels previous;
ofImage diff;
void kMeansTest();
ofImage image;
ofImage img;
cv::Mat matA, matB;
ofImage diffCopy;
ofImage outputImage;
ofxCv::RunningBackground background;
ofxColorQuantizer colorQuantizer;
// a scalar is like an ofVec4f but normally used for storing color information
cv::Scalar diffMean;
};
And finally my ofApp.cpp is below:
#include "ofApp.h"
using namespace ofxCv;
using namespace cv;
//--------------------------------------------------------------
void ofApp::setup(){
ofSetVerticalSync(true);
cam.initGrabber(320, 240);
// get our colors
colorQuantizer.setNumColors(3);
// resize the window to match the image
// ofSetWindowShape(image.getWidth(), image.getHeight());
ofSetWindowShape(800, 600);
// imitate() will set up previous and diff
// so they have the same size and type as cam
imitate(previous, cam);
imitate(diff, cam);
imitate(previous, outputImage);
imitate(diff, outputImage);
}
//--------------------------------------------------------------
void ofApp::update(){
cam.update();
if(cam.isFrameNew()) {
matA = ofxCv::toCv(cam.getPixelsRef());
ofxCv::pyrDown(matA, matB);
ofxCv::pyrDown(matB, matA);
ofxCv::medianBlur(matA, 3);
ofxCv::toOf(matA, outputImage);
// take the absolute difference of prev and cam and save it inside diff
absdiff(previous, outputImage, diff);
}
}
//--------------------------------------------------------------
void ofApp::draw(){
// If the image is ready to draw, then draw it
if(outputImage.isAllocated()) {
outputImage.update();
outputImage.draw(0, 0, ofGetWidth(), ofGetHeight());
}
ofBackground(100,100,100);
ofSetColor(255);
ofImage diffCopy;
diffCopy = diff;
diffCopy.resize(diffCopy.getWidth()/2, diffCopy.getHeight()/2);
// there is some sort of bug / issue going on here...
// prevent the app from compiling
// comment out to run and see blank page
colorQuantizer.quantize(diffCopy.getPixelsRef());
ofLog() << "the number is " << outputImage.getHeight();
ofLog() << "the number is " << diffCopy.getHeight();
ofSetColor(255);
img.update();
// cam.draw(0, 0, 800, 600);
outputImage.draw(0, 0, 800, 600);
// colorQuantizer.draw(ofPoint(0, cam.getHeight()-20));
colorQuantizer.draw(ofPoint(0, 600-20));
// use the [] operator to get elements from a Scalar
float diffRed = diffMean[0];
float diffGreen = diffMean[1];
float diffBlue = diffMean[2];
ofSetColor(255, 0, 0);
ofRect(0, 0, diffRed, 10);
ofSetColor(0, 255, 0);
ofRect(0, 15, diffGreen, 10);
ofSetColor(0, 0, 255);
ofRect(0, 30, diffBlue, 10);
}
//--------------------------------------------------------------
void ofApp::kMeansTest(){
cv::Mat samples = (cv::Mat_<float>(8, 1) << 31 , 2 , 10 , 11 , 25 , 27, 2, 1);
cv::Mat labels;
// double kmeans(const Mat& samples, int clusterCount, Mat& labels,
cv::TermCriteria termcrit;
int attempts, flags;
cv::Mat centers;
double compactness = cv::kmeans(samples, 3, labels, cv::TermCriteria(), 2, cv::KMEANS_PP_CENTERS, centers);
cout<<"labels:"<<endl;
for(int i = 0; i < labels.rows; ++i)
{
cout<<labels.at<int>(0, i)<<endl;
}
cout<<"\ncenters:"<<endl;
for(int i = 0; i < centers.rows; ++i)
{
cout<<centers.at<float>(0, i)<<endl;
}
cout<<"\ncompactness: "<<compactness<<endl;
}
Apologies in advance for the state of my code — it's getting late and I'm trying to get this done.
My questions are: what image format does openFrameworks use when grabbing the webcam image, what image format does OpenCV expect, what should I use to switch back from a Mat to an ofImage, and is there a way to getPixelsRef from a Mat?
The area of code where I think something is wrong is the following logic.
I have this line of code, which gets the video frame from the webcam: matA = ofxCv::toCv(cam.getPixelsRef());
Then I run a couple of ofxCv procedures on the frame, such as ofxCv::pyrDown(matA, matB);, which I think changes the image format or pixel format of the frame.
Then I convert the frame back to OF with ofxCv::toOf(matA, outputImage);.
Next I get the difference in pixels between the current frame and the last frame and create a copy of that difference. Potentially the issue lies here, with the format of the diff output image.
Finally I pass the diff copy to colorQuantizer.quantize(diffCopy.getPixelsRef()); to try to generate the color palette for the changed pixels (see the condensed sketch below).
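Condensed, the round trip I am attempting boils down to something like this (a minimal sketch, not copied verbatim from the app; the local names are just illustrative):

// grab the camera pixels and wrap them as a cv::Mat
cv::Mat frameMat = ofxCv::toCv(cam.getPixelsRef());
cv::Mat smallMat;
ofxCv::pyrDown(frameMat, smallMat);              // downsampled copy of the frame
ofImage smallImg;
ofxCv::toOf(smallMat, smallImg);                 // copy the Mat back into an ofImage
smallImg.update();                               // refresh the texture after changing the pixels
colorQuantizer.quantize(smallImg.getPixelsRef()); // the quantizer works on ofPixels, not on a Mat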
It is the colorQuantizer class and its function call that give me a thread error, which reads: [ error ] ofTexture: allocate(): ofTextureData has 0 width and/or height: 0x0
with an EXC_BAD_ACCESS
And lastly, could there be a cause for the EXC_BAD_ACCESS thread error other than image formatting? Being new to C++, I'm just guessing and going off instinct about what I think the root cause of my problem is.
Many thanks.
I have successfully detected a face in an image that has other things in the background, using OpenCV.
Now I need to extract just the detected part (i.e. the face) and save it in some image format like JPEG or GIF to build a face database for my neural net training.
How can I do this?
Once you detect the faces, you get the opposite corners of a rectangle, which is what is used to draw a rectangle around each face.
Now you can set the image ROI (Region of Interest), crop the ROI and save it as another image.
/* After detecting the rectangle points, Do as follows */
/* sets the Region of Interest
Note that the rectangle area has to be __INSIDE__ the image */
cvSetImageROI(img1, cvRect(10, 15, 150, 250));
/* create destination image
Note that cvGetSize will return the width and the height of ROI */
IplImage *img2 = cvCreateImage(cvGetSize(img1),
img1->depth,
img1->nChannels);
/* copy subimage */
cvCopy(img1, img2, NULL);
/* always reset the Region of Interest */
cvResetImageROI(img1);
The above code is taken from http://nashruddin.com/OpenCV_Region_of_Interest_(ROI)
Further, the cvSaveImage function can be used to save the image to a file.
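For example (the filename here is just a placeholder; the extension selects the encoding):

/* save the cropped face as a JPEG file */
cvSaveImage("face.jpg", img2);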
Try this:
for(i=0;i<(pFaceRectSeq?pFaceRectSeq->total:0);i++)
{
CvRect* r=(CvRect*)cvGetSeqElem(pFaceRectSeq,i);
int width=r->width;
int height=r->height;
cvSetImageROI(pInpImg,*r);
IplImage* pCropImg=cvCreateImage(cvSize(width,height),IPL_DEPTH_8U,3);
cvCopy(pInpImg,pCropImg,NULL);
cvShowImage("Cropped Window",pCropImg);
cvWaitKey(0);
cvResetImageROI(pInpImg);
cvReleaseImage(&pCropImg);
}