Issue with stitching images using OpenCV for iOS - C++

I'm trying to adapt the code from here:
https://github.com/foundry/OpenCVStitch
into my program. However, I've run up against a wall. That code stitches images that already exist on disk, whereas the program I'm trying to make will stitch images the user has just taken. The error I'm getting is that when I pass the images to the stitch function, they are reported as having an invalid size (0 x 0).
Here is the stitching function:
- (IBAction)stitchImages:(UIButton *)sender {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSArray* imageArray = [NSArray arrayWithObjects:chosenImage, chosenImage2, nil];
        UIImage* stitchedImage = [CVWrapper processWithArray:imageArray]; // error occurring within processWithArray function
        dispatch_async(dispatch_get_main_queue(), ^{
            NSLog(@"stitchedImage %@", stitchedImage);
            UIImageView *imageView = [[UIImageView alloc] initWithImage:stitchedImage];
            self.imageView = imageView;
            [self.scrollView addSubview:imageView];
            self.scrollView.backgroundColor = [UIColor blackColor];
            self.scrollView.contentSize = self.imageView.bounds.size;
            self.scrollView.maximumZoomScale = 4.0;
            self.scrollView.minimumZoomScale = 0.5;
            self.scrollView.contentOffset = CGPointMake(-(self.scrollView.bounds.size.width - self.imageView.bounds.size.width) / 2,
                                                        -(self.scrollView.bounds.size.height - self.imageView.bounds.size.height) / 2);
            [self.spinner stopAnimating];
        });
    });
}
chosenImage and chosenImage2 are images the user has taken using these two functions:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    savedImage = info[UIImagePickerControllerOriginalImage];
    // display photo in the correct UIImageView
    switch (image_location) {
        case 1:
            chosenImage = info[UIImagePickerControllerOriginalImage];
            self.imageView2.image = chosenImage;
            image_location++;
            break;
        case 2:
            chosenImage2 = info[UIImagePickerControllerOriginalImage];
            self.imageView3.image = chosenImage2;
            image_location--;
            break;
    }
    // if user clicked "take photo", it should save the photo
    // if user clicked "select photo", it should not save the photo
    /*if (should_save){
        UIImageWriteToSavedPhotosAlbum(chosenImage, nil, nil, nil);
    }*/
    [picker dismissViewControllerAnimated:YES completion:NULL];
}

- (IBAction)takePhoto:(UIButton *)sender {
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.delegate = self;
    picker.allowsEditing = NO;
    picker.sourceType = UIImagePickerControllerSourceTypeCamera;
    //last_pressed = 1;
    should_save = 1;
    [self presentViewController:picker animated:YES completion:NULL];
}
The stitchImages method passes an array of these two images to this function:
+ (UIImage*) processWithArray:(NSArray*)imageArray
{
    if ([imageArray count] == 0) {
        NSLog(@"imageArray is empty");
        return 0;
    }
    cv::vector<cv::Mat> matImages;
    for (id image in imageArray) {
        if ([image isKindOfClass:[UIImage class]]) {
            cv::Mat matImage = [image CVMat3];
            NSLog(@"matImage: %@", image);
            matImages.push_back(matImage);
        }
    }
    NSLog(@"stitching...");
    cv::Mat stitchedMat = stitch(matImages); // error occurring within stitch function
    UIImage* result = [UIImage imageWithCVMat:stitchedMat];
    return result;
}
This is where the program runs into a problem. When it is passed images saved locally in the application bundle, it works fine. However, when it is passed the images held in the variables (chosenImage and chosenImage2), it doesn't work.
Here is the stitch function that is being called in the processWithArray function and is causing the error:
cv::Mat stitch (vector<Mat>& images)
{
    imgs = images;
    Mat pano;
    Stitcher stitcher = Stitcher::createDefault(try_use_gpu);
    Stitcher::Status status = stitcher.stitch(imgs, pano);
    if (status != Stitcher::OK)
    {
        cout << "Can't stitch images, error code = " << int(status) << endl;
        //return 0;
    }
    return pano;
}
The error is "Can't stitch images, error code = 1".

You are hitting memory limits. The four demo images included with the project are 720 x 960 px, whereas you are using the full-resolution image from the device camera.
Here is an Allocations trace in Instruments leading up to the crash, stitching two images from the camera...
The point of this github sample is to illustrate a few things...
(1) how to integrate openCV with iOS;
(2) how to separate Objective-C and C++ code using a wrapper;
(3) how to implement the most basic stitching function in openCV.
It is best regarded as a 'hello world' project for iOS+openCV, and was not designed to work robustly with camera images. If you want to use my code as-is, I would suggest first reducing your camera images to a manageable size (e.g. max 1000 on the long side).
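For example, here is a minimal sketch of that resizing step (my own illustration, assuming it is applied to the cv::Mat inside the wrapper, just before each image is pushed into matImages):

#include <opencv2/imgproc/imgproc.hpp>
#include <algorithm>

// Sketch: cap the long side at ~1000 px to keep stitching memory in check.
static void resizeForStitching(cv::Mat& image, double maxLongSide = 1000.0)
{
    double longSide = std::max(image.cols, image.rows);
    if (longSide > maxLongSide) {
        double scale = maxLongSide / longSide;
        cv::resize(image, image, cv::Size(), scale, scale, cv::INTER_AREA);
    }
}

In processWithArray this could be called on matImage right before matImages.push_back(matImage).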
In any case, the openCV framework you are using is as old as the project. Thanks to your question, I have just updated it (now arm64-friendly), although the memory limitations still apply.
V2, OpenCVSwiftStitch, may be a more interesting starting point for your experiments - the interface is written in Swift, and it uses CocoaPods to keep up with openCV versions (albeit currently fixed to 2.4.9.1 as 2.4.10 breaks everything). So it still illustrates the three points, and also shows how to use Swift with C++ using an Objective-C wrapper as an intermediary.
I may be able to improve memory handling (by passing around pointers). If so I will push an update to both v1 and v2. If you can make any improvements, please send a pull request.
Update: I've had another look and I am fairly sure it won't be possible to improve the memory handling without getting deeper into the openCV stitching algorithms. The images are already allocated on the heap, so there are no improvements to be made there. I expect the best bet would be to tile and cache the intermediate images which openCV seems to be creating as part of the process. I will post an update if I get any further with this. Meanwhile, resizing the camera images is the way to go.
Update 2
Some while later, I found the underlying cause of the issue. When you use images from the iOS camera as your inputs, if those images are in portrait orientation they will have the incorrect input dimensions (and orientation) for openCV. This is because all iOS camera photos are taken natively as 'landscape left'. The pixel dimensions are landscape, with the home button on the right. To display portrait, the 'imageOrientation' flag is set to UIImageOrientationRight. This is only an indication to the OS to rotate the image 90 degrees to the right for display.
The image is stored unrotated, landscape left. The incorrect pixel orientation leads to higher memory requirements and unpredictable/broken results in openCV.
I have fixed this in the latest version of openCVSwiftStitch: when necessary images are rotated pixelwise before adding to the openCV pipeline.
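The pixel rotation itself is straightforward once the orientation is known. Here is a sketch of the idea (my own illustration, assuming the wrapper passes the UIImage's imageOrientation raw value alongside the Mat; transpose/flip is used so it also works with the 2.4.x framework, which has no cv::rotate):

// Rotate the pixels so the Mat matches the display orientation.
// The values correspond to UIImageOrientation raw values (Up = 0, Down = 1, Left = 2, Right = 3).
static cv::Mat rotateForOrientation(const cv::Mat& image, int orientation)
{
    cv::Mat rotated;
    switch (orientation) {
        case 3: // UIImageOrientationRight: rotate pixels 90 degrees clockwise
            cv::transpose(image, rotated);
            cv::flip(rotated, rotated, 1);
            break;
        case 2: // UIImageOrientationLeft: rotate pixels 90 degrees counter-clockwise
            cv::transpose(image, rotated);
            cv::flip(rotated, rotated, 0);
            break;
        case 1: // UIImageOrientationDown: rotate 180 degrees
            cv::flip(image, rotated, -1);
            break;
        default: // UIImageOrientationUp: nothing to do
            rotated = image;
            break;
    }
    return rotated;
}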

Related

Why "findChessboardCorners" function is returning false

I am using OpenCV's findChessboardCorners function to find the corners of a chessboard, but I am getting false as the returned value from findChessboardCorners.
Following is my code:
int main(int argc, char* argv[])
{
    vector<vector<Point2f>> imagePoints;
    Mat view;
    bool found;
    vector<Point2f> pointBuf;

    Size boardSize; // The size of the board -> Number of items by width and height
    boardSize.width = 75;
    boardSize.height = 49;

    view = cv::imread("FraunhoferChessBoard.jpeg");
    namedWindow("Original Image", WINDOW_NORMAL); // Create a window for display.
    imshow("Original Image", view);

    found = findChessboardCorners(view, boardSize, pointBuf,
                                  CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FAST_CHECK | CV_CALIB_CB_NORMALIZE_IMAGE);
    if (found)
    {
        cout << "Corners of chess board detected";
    }
    else
    {
        cout << "Corners of chess board not detected";
    }
    waitKey(0);
    return 0;
}
I expect the return value from findChessboardCorners to be true, whereas I am getting false.
Please explain where I have made a mistake.
Many thanks :)
The function didn't find the pattern in your image, and this is why it returns false. Maybe the exact same code works with a different image.
I cannot say directly why this function did not find the pattern in your image, but I would recommend a few approaches that are less sensitive to noise, so that the algorithm can detect your corners properly:
- Use findChessboardCornersSB instead of findChessboardCorners. According to the documentation it is more robust to noise and works faster for large images like yours. That's probably what you are looking for. I tried it in Python and it works properly with the image you posted. See the result below.
- Change the pattern shape as shown in the doc for findChessboardCornersSB.
- Use fewer and bigger squares in your pattern. Having so many squares does not help.
For the next step you will need to use a non-symmetrical pattern. If your top-left square is white then the bottom right has to be black.
If you have additional problems with the square pattern, you could also drop the corner-based approach and switch to the circle pattern; all the functions are available in OpenCV. In my case it worked better. See findCirclesGrid. If you use this method, you can run the "BlobDetector" to check how each circle is detected and tune some parameters to improve the accuracy.
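If you go that route, here is a minimal C++ sketch of the circle-grid variant (my own illustration, using a hypothetical 4x11 asymmetric grid and file name; adjust both to your setup):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Hypothetical input image and pattern size (points per row / number of rows).
    cv::Mat view = cv::imread("circle_pattern.jpg");
    cv::Size patternSize(4, 11);
    std::vector<cv::Point2f> centers;

    bool found = cv::findCirclesGrid(view, patternSize, centers,
                                     cv::CALIB_CB_ASYMMETRIC_GRID);
    std::cout << (found ? "Circle grid detected" : "Circle grid not detected") << std::endl;
    return 0;
}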
Hope this helps!
EDIT:
Here is the python code to make it work from the downloaded image.
import cv2
import matplotlib.pyplot as plt

img = cv2.imread('img.jpg')
# keep the image at its original size (resize here if you need it smaller)
img_small = cv2.resize(img, (img.shape[1], img.shape[0]))
# 75 x 49 inner corners, as in the question
found, corners = cv2.findChessboardCornersSB(img_small, (75, 49), flags=0)
plt.imshow(cv2.cvtColor(img_small, cv2.COLOR_BGR2RGB), cmap='gray')
plt.scatter(corners[:, 0, 0], corners[:, 0, 1])
plt.show()

Cannot detect Faces using Offline Affectiva SDK

I'm new to the Affectiva Emotion Recognition SDK. I have been following the example from the video at this link, but when I feed in some pictures (for example, this image) the face is not detected.
My code looks like this:
Listener
class Listener : public affdex::ImageListener {
    void onImageResults(std::map<affdex::FaceId, affdex::Face> faces, affdex::Frame image) {
        std::string pronoun = "they";
        std::string emotion = "neutral";
        for (auto pair : faces) {
            affdex::FaceId faceId = pair.first;
            affdex::Face face = pair.second;
            if (face.appearance.gender == affdex::Gender::Male) {
                pronoun = "Male";
            } else if (face.appearance.gender == affdex::Gender::Female) {
                pronoun = "Female";
            }
            if (face.emotions.joy > 25) {
                emotion = "Happy :)";
            } else if (face.emotions.sadness > 25) {
                emotion = "Sad :(";
            }
            cout << faceId << " : " << pronoun << " looks " << emotion << endl;
        }
    }
    void onImageCapture(affdex::Frame image) {
        cout << "Image captured" << endl;
    }
};
Main code
Mat img;
img=imread(argv[1],CV_LOAD_IMAGE_COLOR);
affdex::Frame frame(img.size().width, img.size().height, img.data, affdex::Frame::COLOR_FORMAT::BGR);
affdex::PhotoDetector detector(3);
detector.setClassifierPath("/xxx/xxx/affdex-sdk/data");
affdex::ImageListener * listener(new Listener());
detector.setImageListener(listener);
detector.setDetectAllEmotions(true);
detector.setDetectAllExpressions(true);
detector.start();
detector.process(frame);
detector.stop();
Where am I making a mistake? Or is it that the SDK cannot detect faces in some images? Can anybody help me?
Edit
I used the following images
Sometimes the SDK cannot detect faces in an image. There is no detector that can detect all faces all the time. Did you check with different images?
Edit:
Those two images are 250x250 and 260x194 and of really low quality. I recommend testing the app with higher-resolution images. Affectiva states on their webpage that the minimum recommended resolution is 320x240 and faces should be at least 30x30 pixels.
https://developer.affectiva.com/obtaining-optimal-results/
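As a minimal sketch (my own illustration, not part of the SDK or the original answer), the main code could warn about undersized inputs before building the affdex::Frame:

// Check the input against Affectiva's recommended minimum before processing.
cv::Mat img = cv::imread(argv[1], CV_LOAD_IMAGE_COLOR);
if (img.empty()) {
    std::cerr << "Could not read image " << argv[1] << std::endl;
    return 1;
}
if (img.cols < 320 || img.rows < 240) {
    std::cerr << "Image is only " << img.cols << "x" << img.rows
              << "; Affectiva recommends at least 320x240 (faces at least 30x30)." << std::endl;
}
affdex::Frame frame(img.size().width, img.size().height, img.data,
                    affdex::Frame::COLOR_FORMAT::BGR);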

How can I make the OpenCV background subtraction KNN algorithm last longer, tracking a foreground object which is not moving

I am trying to subtract this building brick.
For that I am using the KNN algorithm provided by OpenCV 3.0.
To initialize the background model I am using 40 frames without the brick.
All in all it works pretty well.
(Brick with Shadow)
The only problem is that the algorithm starts losing the brick around frame 58
(image shows frame 62)
After frame 64 I get only black images. I know this wouldn't happen if the brick were moving, but unfortunately there are long sequences where it doesn't.
Does somebody know a solution to this?
PS: I tried playing around with the history parameter of
cv::createBackgroundSubtractorKNN(int history, double dist2Threshold, bool detectShadows = true)
but there is no difference between history = 500 and history = 500000.
An easy but slow solution is to reinitialize the background model every five frames:
for (size_t i = 0; i < imageList.size(); i++) {
    // every fifth frame, replay the empty background frames through the
    // subtractor to rebuild the background model from scratch
    if (i % 5 == 0) {
        for (auto& it : backgroundList) {
            string nextFrameFilename(it.string());
            frame = cv::imread(nextFrameFilename);
            pMOG->apply(frame, fgMaskMOG2);
            imshow("Frame", frame);
            imshow("FG Mask MOG 2", fgMaskMOG2);
            keyboard = cv::waitKey(30);
        }
    }
}
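Another lever worth trying (a sketch of my own, not part of the answer above, assuming the same backgroundList/imageList file lists): apply() takes an optional learningRate, and passing 0 once the model has been initialized freezes the background, so a stationary brick is never absorbed into it.

// Learn the background from the empty frames, then freeze the model.
cv::Ptr<cv::BackgroundSubtractorKNN> pMOG = cv::createBackgroundSubtractorKNN(500, 400.0, true);
cv::Mat frame, fgMask;

for (const auto& it : backgroundList) {
    frame = cv::imread(it.string());
    pMOG->apply(frame, fgMask);          // default learningRate (-1): model is updated
}

for (size_t i = 0; i < imageList.size(); i++) {
    frame = cv::imread(imageList[i].string());
    pMOG->apply(frame, fgMask, 0.0);     // learningRate 0: model is frozen
    cv::imshow("Frame", frame);
    cv::imshow("FG Mask", fgMask);
    cv::waitKey(30);
}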

Display Image from QCamera in Label

I am writing a program to display two cameras next to each other. In Qt this is pretty simple with QCamera. But my cameras are rotated by 90°, so I have to rotate the image in the program too.
QCamera has no way to rotate its output, so I want to display it in a label rather than in a viewfinder: I take an image, rotate it, and display it in a label.
QImage img;
QPixmap img_;
img = ui->viewfinder->grab().toImage();
img_ = QPixmap::fromImage(img);
img_ = img_.transformed(QTransform().rotate((90)%360));
QImage img2;
QPixmap img2_;
img2 = ui->viewfinder->grab().toImage();
img2_ = QPixmap::fromImage(img2);
img2_ = img2_.transformed(QTransform().rotate((90)%360));
ui->label->setPixmap(img_);
ui->label_2->setPixmap(img2_);
When I start the program there are just two black boxes next to each other.
(The code is missing the part where I declare everything, but the camera works fine in the viewfinder, so I think that is not the problem.)
Try this:
img_ = QPixmap::grabWindow(ui->viewfinder->winId(), 0, 0, -1, -1); (to take a snapshot as a QPixmap)
or
img = QPixmap::grabWindow(ui->viewfinder->winId(), 0, 0, -1, -1).toImage(); (to take a snapshot as a QImage)
You can use the orientation of the camera to correct the image orientation in the viewfinder, as described in the Qt documentation. Here is the link:
http://doc.qt.io/qt-5/cameraoverview.html
and here is the code found in the documentation:
// Assuming a QImage has been created from the QVideoFrame that needs to be presented
QImage videoFrame;
QCameraInfo cameraInfo(camera); // needed to get the camera sensor position and orientation
// Get the current display orientation
const QScreen *screen = QGuiApplication::primaryScreen();
const int screenAngle = screen->angleBetween(screen->nativeOrientation(), screen->orientation());
int rotation;
if (cameraInfo.position() == QCamera::BackFace) {
    rotation = (cameraInfo.orientation() - screenAngle) % 360;
} else {
    // Front position, compensate the mirror
    rotation = (360 - cameraInfo.orientation() + screenAngle) % 360;
}
// Rotate the frame so it always shows in the correct orientation
videoFrame = videoFrame.transformed(QTransform().rotate(rotation));
It looks like you don't even understand what you are looking at...
What's the purpose of pasting stuff like that to a forum? Did you read ALL the description about this? It's only part of the code; I can see you don't understand it, but you try to be smart :)

How can I get a 100x100 square picture from the Facebook Graph API?

I need to get a 100x100 square picture from the Facebook Graph API. The picture must be cropped like the standard square 50x50 picture, but the size must be 100x100. Thank you!
The Graph API only* offers the following sizes (specify the picture size with the type argument):
square: 50x50 pixels
small: 50 pixels wide, variable height
normal: 100 pixels wide, variable height
large: about 200 pixels wide, variable height
If you want the image to be 100x100, you will have to retrieve the "normal" size and crop it yourself; e.g. if you are using PHP, check the imagecopyresampled function.
* UPDATE:
As pointed out in the comments below, this answer was correct in May 2012, but nowadays you also have the option to get different sizes using graph.facebook.com/UID/picture?width=N&height=N, as described in the more recent answer from Jeremy.
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    //UIGraphicsBeginImageContext(newSize);
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

- (UIImage *)crop:(CGRect)rect andIm:(UIImage *)im {
    CGFloat scale = [[UIScreen mainScreen] scale];
    if (scale > 1.0) {
        rect = CGRectMake(rect.origin.x * scale, rect.origin.y * scale, rect.size.width * scale, rect.size.height * scale);
    }
    CGImageRef imageRef = CGImageCreateWithImageInRect([im CGImage], rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return result;
}

- (UIImage *)squareLargeFacebookProfilePhoto:(UIImage *)image {
    float startPosX = 0.0;
    float startPosY = 0.0;
    float newsizeW;
    float newsizeH;
    if (image.size.height >= image.size.width) {
        newsizeW = 200;
        float diff = 200 - image.size.width;
        newsizeH = image.size.height + diff / image.size.width * image.size.height;
        if (newsizeH > 200) {
            startPosY = (newsizeH - 200.0) / 8.0;
        }
    } else {
        newsizeH = 200;
        float diff = 200 - image.size.height;
        newsizeW = image.size.width + diff / image.size.height * image.size.width;
        if (newsizeW > 200) {
            startPosX = (newsizeW - 200.0) / 2.0;
        }
    }
    UIImage *imresized = [self imageWithImage:image scaledToSize:CGSizeMake(newsizeW, newsizeH)];
    return [self crop:CGRectMake(startPosX, startPosY, 200, 200) andIm:imresized];
}
The URL https://graph.facebook.com/user_id/picture?type=square gives us a small profile picture (50x50).
1) First of all, you have to download the image from the URL
https://graph.facebook.com/user_id/picture?type=large
For example (Loading Images):
NSURL *imageURL = [NSURL URLWithString:@"https://graph.facebook.com/user_id/picture?type=large"];
NSData *imageData = [NSData dataWithContentsOfURL:imageURL];
UIImage *image = [UIImage imageWithData:imageData];
2) Then simply call the function, which will return the large square Facebook profile picture of size 200x200:
UIImage *LargeProfileImage = [self squareLargeFacebookProfilePhoto:image];
Note: the code was written assuming ARC memory management.
You can also try this:
https://graph.facebook.com/INSERTUIDHERE/picture?width=100&height=100
DON'T FORGET to replace 'INSERTUIDHERE' with the UID of the user you are trying to get an image for. Embedded Ruby works great here. Ex: ...ook.com/<%= @problem.user.uid %>/pic...
Note that you can change the dimensions to be anything you want (ex: 50x50 or 500x500). It should crop and resize from the center of the photo. It (for some reason) is sometimes a few pixels larger or smaller, but I think that has more to do with the dimensions of the original photo. Yay first answer on SO!
Here's my stupid mug at 100 x 100 using the link above. I'd provide more, but SO is preventing posting multiple links since I'm a n00b.
100 x 100: https://graph.facebook.com/8644397/picture?width=100&height=100