I need to get a 100x100 square picture from the Facebook Graph API. The picture must be cropped like the standard square 50x50 picture, but the size must be 100x100. Thank you!
The Graph API only* offers the following sizes (specify the picture size with the type argument):
square: 50x50 pixels
small: 50 pixels wide, variable height
normal: 100 pixels wide, variable height
large: about 200 pixels wide, variable height
If you want the image to be 100x100, you will have to retrieve the "normal" size and crop it yourself, e.g. if you are using php check the imagecopyresampled function
* UPDATE:
As pointed out in the comments below, this answer was correct in May 2012, but nowadays you also have the option to get different sizes using graph.facebook.com/UID/picture?width=N&height=N, as described in the more recent answer from Jeremy.
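For illustration only, here is a minimal OpenCV sketch of the "crop it yourself" idea (it assumes the "normal"-size picture has already been downloaded and decoded into a cv::Mat; the function name is made up, and the Objective-C code below does the same job on iOS):
#include <opencv2/opencv.hpp>
#include <algorithm>

// Scale the downloaded picture so its shorter side is 100 px,
// then cut a centered 100x100 square out of it.
cv::Mat squareCrop100(const cv::Mat& src)
{
    const int target = 100;
    double scale = static_cast<double>(target) / std::min(src.cols, src.rows);
    cv::Mat resized;
    cv::resize(src, resized, cv::Size(), scale, scale, cv::INTER_AREA);
    int x = (resized.cols - target) / 2;
    int y = (resized.rows - target) / 2;
    return resized(cv::Rect(x, y, target, target)).clone();
}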
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    //UIGraphicsBeginImageContext(newSize);
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

- (UIImage *)crop:(CGRect)rect andIm:(UIImage *)im {
    CGFloat scale = [[UIScreen mainScreen] scale];
    if (scale > 1.0) {
        rect = CGRectMake(rect.origin.x * scale, rect.origin.y * scale,
                          rect.size.width * scale, rect.size.height * scale);
    }
    CGImageRef imageRef = CGImageCreateWithImageInRect([im CGImage], rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return result;
}
- (UIImage *)squareLargeFacebookProfilePhoto:(UIImage *)image {
    float startPosX = 0.0;
    float startPosY = 0.0;
    float newsizeW;
    float newsizeH;
    if (image.size.height >= image.size.width) {
        newsizeW = 200;
        float diff = 200 - image.size.width;
        newsizeH = image.size.height + diff / image.size.width * image.size.height;
        if (newsizeH > 200) {
            startPosY = (newsizeH - 200.0) / 8.0;
        }
    } else {
        newsizeH = 200;
        float diff = 200 - image.size.height;
        newsizeW = image.size.width + diff / image.size.height * image.size.width;
        if (newsizeW > 200) {
            startPosX = (newsizeW - 200.0) / 2.0;
        }
    }
    UIImage *imresized = [self imageWithImage:image scaledToSize:CGSizeMake(newsizeW, newsizeH)];
    return [self crop:CGRectMake(startPosX, startPosY, 200, 200) andIm:imresized];
}
The URL https://graph.facebook.com/user_id/picture?type=square gives us a small profile picture (50x50).
1) First of all, you have to download the image from the URL https://graph.facebook.com/user_id/picture?type=large. For example:
NSURL *imageURL = [NSURL URLWithString:@"https://graph.facebook.com/user_id/picture?type=large"];
NSData *imageData = [NSData dataWithContentsOfURL:imageURL];
UIImage *image = [UIImage imageWithData:imageData];
2) Then simply call the function, which will return the large Facebook profile picture at a size of 200x200:
UIImage *largeProfileImage = [self squareLargeFacebookProfilePhoto:image];
Note: the code was written using ARC memory management.
You can also try this:
https://graph.facebook.com/INSERTUIDHERE/picture?width=100&height=100
DON'T FORGET to replace 'INSERTUIDHERE' with the UID of the user you are trying to get an image for. Embedded Ruby works great here. Ex: ...ook.com/<%= @problem.user.uid %>/pic...
Note that you can change the dimensions to be anything you want (ex: 50x50 or 500x500). It should crop and resize from the center of the photo. It (for some reason) is sometimes a few pixels larger or smaller, but I think that has more to do with the dimensions of the original photo. Yay first answer on SO!
Here's my stupid mug at 100 x 100 using the link above. I'd provide more, but SO is preventing posting multiple links since I'm a n00b.
100 x 100: https://graph.facebook.com/8644397/picture?width=100&height=100
I am trying to make my watermark transparent with low opacity, but it seems to just be setting the colors to white.
This is the code I'm using, which, by the way, I found on some website:
/////////////////// Blending Images (Making Alpha) ////////////////////////
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>
#include <vector>

using namespace cv;
using namespace std;

int main()
{
    Mat img, img_bgra;
    string img_path = "res/test.png";
    img = imread(img_path);
    if (img.data == NULL)
    {
        cout << "Image is not loaded!" << endl;
        return -1;
    }
    cvtColor(img, img_bgra, ColorConversionCodes::COLOR_BGR2BGRA);
    vector<Mat> channels(4);
    split(img_bgra, channels);
    channels[3] = channels[3] * 0.1;   // scale the alpha channel down to 10%
    merge(channels.data(), 4, img_bgra);
    imwrite("res/transparent.png", img_bgra);
    imshow("Image", img_bgra);
    waitKey(0);
    return 0;
}
I want the watermark to be displayed like this:
How can I achieve that?
I'm no good with C++, so I will try to explain with a Python example; hopefully this will be readable enough to help.
import cv2
import numpy as np

alpha = 0.1                                                    # maximum watermark opacity
imageSource = cv2.imread("res/test.png")                       # BGR, uint8
imageWatermark = cv2.imread("res/transparent.png",
                            cv2.IMREAD_UNCHANGED)              # BGRA, uint8 (keep the alpha channel)

maskWatermark = imageWatermark[:, :, 3]                        # the alpha (transparency) channel, uint8
maskWatermark = np.float32(maskWatermark) * (1 / 255) * alpha  # convert to float, normalize, apply opacity
maskWatermark = maskWatermark[:, :, None]                      # add a channel axis so the mask broadcasts over BGR
maskSource = 1 - maskWatermark                                 # float32, weight of the pixels we want to keep

imageWatermark = cv2.cvtColor(imageWatermark, cv2.COLOR_BGRA2BGR)      # same colorspace as source (3 channels)
imageResult = np.uint8(np.float32(imageSource) * maskSource
                       + np.float32(imageWatermark) * maskWatermark)   # blend, convert back to uint8
cv2.imshow('result', imageResult)
cv2.waitKey(0)
Key points here are:
- Some sort of mask is needed to tell which pixels of the watermark are going to affect the resulting image.
- Blending is like interpolation between two color vectors, where opacity acts like the t-coordinate; this is done for each corresponding pixel pair of the two images.
- Carefully watch data types to avoid overflow.
- The images must have the same dimensions; if they don't, you should shrink or extend them in some way. The watermark is most likely much smaller than the image. In that case you may want to copy the watermark-sized part of the image (the fragment matching the watermark dimensions), apply the watermark to it, and then copy the watermarked fragment back (see the sketch below).
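Since the question itself is in C++, here is a rough OpenCV C++ equivalent of the same idea (a sketch only: it assumes the watermark is a BGRA PNG loaded with IMREAD_UNCHANGED and that it fits inside the source image at the chosen position; the function name is made up):
#include <opencv2/opencv.hpp>
#include <vector>

// Blend a BGRA watermark into `image` (BGR, 8-bit) at position (x, y) with the
// given global opacity. A sketch of the fragment-copy approach described above.
void blendWatermark(cv::Mat& image, const cv::Mat& watermarkBGRA, int x, int y, float opacity)
{
    // Work only on the fragment of the image that matches the watermark size.
    cv::Rect roi(x, y, watermarkBGRA.cols, watermarkBGRA.rows);
    cv::Mat fragment = image(roi);

    // Split the watermark so its alpha channel can be used as a per-pixel weight.
    std::vector<cv::Mat> wmChannels;
    cv::split(watermarkBGRA, wmChannels);
    cv::Mat mask;
    wmChannels[3].convertTo(mask, CV_32F, opacity / 255.0);

    // Recombine the BGR part of the watermark.
    cv::Mat wmBGR;
    cv::merge(std::vector<cv::Mat>(wmChannels.begin(), wmChannels.begin() + 3), wmBGR);

    cv::Mat fragF, wmF;
    fragment.convertTo(fragF, CV_32FC3);
    wmBGR.convertTo(wmF, CV_32FC3);

    // Interpolate each pixel between the source and the watermark.
    for (int r = 0; r < fragF.rows; ++r)
        for (int c = 0; c < fragF.cols; ++c)
        {
            float w = mask.at<float>(r, c);
            fragF.at<cv::Vec3f>(r, c) = fragF.at<cv::Vec3f>(r, c) * (1.0f - w)
                                      + wmF.at<cv::Vec3f>(r, c) * w;
        }

    // Write the blended fragment back; `fragment` shares memory with `image`.
    fragF.convertTo(fragment, CV_8UC3);
}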
I have recently started working with OpenCV 3.0 and my goal is to capture a pair of stereo images from a set of stereo cameras, create a proper disparity map, convert the disparity map to a 3D point cloud and finally show the resulting point cloud in a point-cloud viewer using PCL.
I have already performed the camera calibration and the resulting calibration RMS is 0.4.
You can find my image pairs (Left Image)1 and (Right Image)2 in the links below. I am using StereoSGBM in order to create disparity image. I am also using track-bars to adjust StereoSGBM function parameters in order to obtain better disparity image. Unfortunately I can't post my disparity image since I am new to StackOverflow and don't have enough reputation to post more than two image links!
After getting the disparity image ("disp" in the code below), I use the reprojectImageTo3D() function to convert the disparity image information to XYZ 3D coordinate, and then I convert the results into an array of "pcl::PointXYZRGB" points so they can be shown in a PCL point cloud viewer. After performing the required conversion, what I get as a point cloud is a silly pyramid shape point-cloud which does not make any sense. I have already read and tried all of the suggested methods in the following links:
1- http://blog.martinperis.com/2012/01/3d-reconstruction-with-opencv-and-point.html
2- http://stackoverflow.com/questions/13463476/opencv-stereorectifyuncalibrated-to-3d-point-cloud
3- http://stackoverflow.com/questions/22418846/reprojectimageto3d-in-opencv
and none of them worked.
Below I have provided the conversion portion of my code; it would be greatly appreciated if you could tell me what I am missing:
pcl::PointCloud<pcl::PointXYZRGB>::Ptr pointcloud(new pcl::PointCloud<pcl::PointXYZRGB>());
Mat xyz;
reprojectImageTo3D(disp, xyz, Q, false, CV_32F);
pointcloud->width = static_cast<uint32_t>(disp.cols);
pointcloud->height = static_cast<uint32_t>(disp.rows);
pointcloud->is_dense = false;
pcl::PointXYZRGB point;
for (int i = 0; i < disp.rows; ++i)
{
    uchar* rgb_ptr = Frame_RGBRight.ptr<uchar>(i);
    uchar* disp_ptr = disp.ptr<uchar>(i);
    double* xyz_ptr = xyz.ptr<double>(i);
    for (int j = 0; j < disp.cols; ++j)
    {
        uchar d = disp_ptr[j];
        if (d == 0) continue;
        Point3f p = xyz.at<Point3f>(i, j);
        point.z = p.z; // I have also tried p.z/16
        point.x = p.x;
        point.y = p.y;
        point.b = rgb_ptr[3 * j];
        point.g = rgb_ptr[3 * j + 1];
        point.r = rgb_ptr[3 * j + 2];
        pointcloud->points.push_back(point);
    }
}
viewer.showCloud(pointcloud);
After doing some work and some research I found my answer, and I am sharing it here so other readers can use it.
Nothing was wrong with the conversion algorithm from the disparity image to 3D XYZ (and eventually to a point cloud). The problem was the distance of the objects (that I was taking pictures of) to the cameras, and the amount of information available to the StereoBM or StereoSGBM algorithms for detecting similarities between the two images (the image pair). To get a proper 3D point cloud you need a good disparity image, and to get a good disparity image (assuming you have performed a good calibration), make sure of the following:
1- There should be enough detectable and distinguishable common features available between the two frames (right and left frames). The reason is that the StereoBM or StereoSGBM algorithms look for common features between the two frames, and they can easily be fooled by similar things in the two frames which may not necessarily belong to the same objects. I personally think these two matching algorithms have lots of room for improvement, so beware of what you are looking at with your cameras.
2- Objects of interest (the ones whose 3D point cloud model you want) should be within a certain distance of your cameras. The bigger the baseline (the distance between the two cameras), the farther away your objects of interest (targets) can be.
A noisy and distorted disparity image never generates a good 3D point cloud. One thing you can do to improve your disparity images is to use track-bars in your application so you can adjust the StereoBM or StereoSGBM parameters until you see good results (a clear and smooth disparity image). The code below is a small and simple example of how to generate the track-bars (I wrote it as simply as possible). Use as required:
int PreFilterType = 0, PreFilterCap = 0, MinDisparity = 0, UniqnessRatio = 0, TextureThreshold = 0,
    SpeckleRange = 0, SADWindowSize = 5, SpackleWindowSize = 0, numDisparities = 0, numDisparities2 = 0, PreFilterSize = 5;

Ptr<StereoBM> sbm = StereoBM::create(numDisparities, SADWindowSize);

while (1)
{
    sbm->setPreFilterType(PreFilterType);
    sbm->setPreFilterSize(PreFilterSize);
    sbm->setPreFilterCap(PreFilterCap + 1);
    sbm->setMinDisparity(MinDisparity - 100);
    sbm->setTextureThreshold(TextureThreshold * 0.0001);
    sbm->setSpeckleRange(SpeckleRange);
    sbm->setSpeckleWindowSize(SpackleWindowSize);
    sbm->setUniquenessRatio(0.01 * UniqnessRatio);
    sbm->setSmallerBlockSize(15);
    sbm->setDisp12MaxDiff(32);

    namedWindow("Track Bar Window", CV_WINDOW_NORMAL);
    cvCreateTrackbar("Pre Filter Type", "Track Bar Window", &PreFilterType, 1, 0);
    cvCreateTrackbar("Pre Filter Size", "Track Bar Window", &PreFilterSize, 100);
    cvCreateTrackbar("Pre Filter Cap", "Track Bar Window", &PreFilterCap, 61);
    cvCreateTrackbar("Minimum Disparity", "Track Bar Window", &MinDisparity, 200);
    cvCreateTrackbar("Uniqueness Ratio", "Track Bar Window", &UniqnessRatio, 2500);
    cvCreateTrackbar("Texture Threshold", "Track Bar Window", &TextureThreshold, 10000);
    cvCreateTrackbar("Speckle Range", "Track Bar Window", &SpeckleRange, 500);
    cvCreateTrackbar("Block Size", "Track Bar Window", &SADWindowSize, 100);
    cvCreateTrackbar("Speckle Window Size", "Track Bar Window", &SpackleWindowSize, 200);
    cvCreateTrackbar("Number of Disparities", "Track Bar Window", &numDisparities, 500);

    // keep the parameters within the ranges StereoBM accepts
    if (PreFilterSize % 2 == 0)
    {
        PreFilterSize = PreFilterSize + 1;
    }
    if (PreFilterSize < 5)
    {
        PreFilterSize = 5;
    }
    if (SADWindowSize % 2 == 0)
    {
        SADWindowSize = SADWindowSize + 1;
    }
    if (SADWindowSize < 5)
    {
        SADWindowSize = 5;
    }
    if (numDisparities % 16 != 0)
    {
        numDisparities = numDisparities + (16 - numDisparities % 16);
    }
}
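For completeness, a minimal sketch of what goes inside such a loop to actually compute and display the disparity; leftGray and rightGray are assumed to be your rectified CV_8UC1 frames (those names are not part of the code above):
// Inside the loop, after the parameters have been set: compute and show the disparity.
Mat disp16, disp8;
sbm->compute(leftGray, rightGray, disp16);             // 16-bit fixed-point disparity (value * 16)
normalize(disp16, disp8, 0, 255, NORM_MINMAX, CV_8U);  // rescale to 8-bit just for display
imshow("Disparity", disp8);
if (waitKey(30) == 27) break;                          // press ESC to leave the tuning loop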
If you are not getting proper results and a smooth disparity image, don't get disappointed. Try using the OpenCV sample images (the ones with the orange desk lamp) with your algorithm to make sure you have the correct pipeline, and then try taking pictures from different distances and play with the StereoBM/StereoSGBM parameters until you get something useful. I used my own face for this purpose, and since I had a very small baseline I came very close to my cameras (here is a link to my 3D face point-cloud picture, and hey, don't you dare laugh!). I was very happy to see myself in 3D point-cloud form after a week of struggling. I have never been this happy to see myself before! ;)
I'm trying to adapt the code from here:
https://github.com/foundry/OpenCVStitch
into my program. However, I've run up against a wall. This code stitches together images that already exist. The program I'm trying to make will stitch together images that the user took. The error I'm getting is that when I pass the images to the stitch function, it says they are of invalid size (0 x 0).
Here is the stitching function:
- (IBAction)stitchImages:(UIButton *)sender {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSArray *imageArray = [NSArray arrayWithObjects:chosenImage, chosenImage2, nil];
        UIImage *stitchedImage = [CVWrapper processWithArray:imageArray]; // error occurring within processWithArray function
        dispatch_async(dispatch_get_main_queue(), ^{
            NSLog(@"stitchedImage %@", stitchedImage);
            UIImageView *imageView = [[UIImageView alloc] initWithImage:stitchedImage];
            self.imageView = imageView;
            [self.scrollView addSubview:imageView];
            self.scrollView.backgroundColor = [UIColor blackColor];
            self.scrollView.contentSize = self.imageView.bounds.size;
            self.scrollView.maximumZoomScale = 4.0;
            self.scrollView.minimumZoomScale = 0.5;
            self.scrollView.contentOffset = CGPointMake(-(self.scrollView.bounds.size.width - self.imageView.bounds.size.width) / 2,
                                                        -(self.scrollView.bounds.size.height - self.imageView.bounds.size.height) / 2);
            [self.spinner stopAnimating];
        });
    });
}
chosenImage and chosenImage2 are images the user has taken using these two functions:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    savedImage = info[UIImagePickerControllerOriginalImage];
    // display photo in the correct UIImageView
    switch (image_location) {
        case 1:
            chosenImage = info[UIImagePickerControllerOriginalImage];
            self.imageView2.image = chosenImage;
            image_location++;
            break;
        case 2:
            chosenImage2 = info[UIImagePickerControllerOriginalImage];
            self.imageView3.image = chosenImage2;
            image_location--;
            break;
    }
    // if user clicked "take photo", it should save photo
    // if user clicked "select photo", it should not save photo
    /*if (should_save){
        UIImageWriteToSavedPhotosAlbum(chosenImage, nil, nil, nil);
    }*/
    [picker dismissViewControllerAnimated:YES completion:NULL];
}

- (IBAction)takePhoto:(UIButton *)sender {
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.delegate = self;
    picker.allowsEditing = NO;
    picker.sourceType = UIImagePickerControllerSourceTypeCamera;
    //last_pressed = 1;
    should_save = 1;
    [self presentViewController:picker animated:YES completion:NULL];
}
The stitchImages function passes an array of these two images to this function:
+ (UIImage *)processWithArray:(NSArray *)imageArray
{
    if ([imageArray count] == 0) {
        NSLog(@"imageArray is empty");
        return 0;
    }
    cv::vector<cv::Mat> matImages;
    for (id image in imageArray) {
        if ([image isKindOfClass:[UIImage class]]) {
            cv::Mat matImage = [image CVMat3];
            NSLog(@"matImage: %@", image);
            matImages.push_back(matImage);
        }
    }
    NSLog(@"stitching...");
    cv::Mat stitchedMat = stitch(matImages); // error occurring within stitch function
    UIImage *result = [UIImage imageWithCVMat:stitchedMat];
    return result;
}
This is where the program is running into a problem. When it is passed images that are saved locally in the application file, it works fine. However, when it is passed images that are saved in variables (chosenImage and chosenImage2), it doesn't work.
Here is the stitch function that is being called in the processWithArray function and is causing the error:
cv::Mat stitch (vector<Mat>& images)
{
    imgs = images;
    Mat pano;
    Stitcher stitcher = Stitcher::createDefault(try_use_gpu);
    Stitcher::Status status = stitcher.stitch(imgs, pano);
    if (status != Stitcher::OK)
    {
        cout << "Can't stitch images, error code = " << int(status) << endl;
        //return 0;
    }
    return pano;
}
The error is "Can't stitch images, error code = 1".
You are hitting memory limits. The four demo images included are 720 x 960 px, whereas you are using the full-resolution images from the device camera.
Here is an Allocations trace in Instruments leading up to the crash, stitching two images from the camera...
The point of this github sample is to illustrate a few things...
(1) how to integrate openCV with iOS;
(2) how to separate Objective-C and C++ code using a wrapper;
(3) how to implement the most basic stitching function in openCV.
It is best regarded as a 'hello world' project for iOS+openCV, and was not designed to work robustly with camera images. If you want to use my code as-is, I would suggest first reducing your camera images to a manageable size (e.g. max 1000 on the long side).
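For example, a small sketch of one way to do that on the openCV side, before the mats are handed to the stitcher (the function name and the 1000 px default are just the suggestion above, not part of the project):
#include <opencv2/opencv.hpp>
#include <algorithm>

// Downscale a Mat so that its longer side is at most maxDim pixels before stitching.
cv::Mat limitSize(const cv::Mat& input, int maxDim = 1000)
{
    int longSide = std::max(input.cols, input.rows);
    if (longSide <= maxDim)
        return input;                        // already small enough
    double scale = static_cast<double>(maxDim) / longSide;
    cv::Mat resized;
    cv::resize(input, resized, cv::Size(), scale, scale, cv::INTER_AREA);
    return resized;
}
In the wrapper above you would then call matImages.push_back(limitSize(matImage)); instead of pushing the full-resolution mat.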
In any case the openCV framework you are using is as old as the project. Thanks to your question, I have just updated it (now arm64-friendly), although the memory limitations still apply.
V2, OpenCVSwiftStitch, may be a more interesting starting point for your experiments: the interface is written in Swift, and it uses CocoaPods to keep up with openCV versions (albeit currently pinned to 2.4.9.1, as 2.4.10 breaks everything). So it still illustrates the three points, and also shows how to use Swift with C++ via an Objective-C wrapper as an intermediary.
I may be able to improve memory handling (by passing around pointers). If so I will push an update to both v1 and v2. If you can make any improvements, please send a pull request.
Update: I've had another look, and I am fairly sure it won't be possible to improve the memory handling without getting deeper into the openCV stitching algorithms. The images are already allocated on the heap, so there are no improvements to be made there. I expect the best bet would be to tile and cache the intermediate images that openCV seems to be creating as part of the process. I will post an update if I get any further with this. Meanwhile, resizing the camera images is the way to go.
Update 2
Some while later, I found the underlying cause of the issue. When you use images from the iOS camera as your inputs, if those images are in portrait orientation they will have the incorrect input dimensions (and orientation) for openCV. This is because all iOS camera photos are taken natively as 'landscape left'. The pixel dimensions are landscape, with the home button on the right. To display portrait, the 'imageOrientation' flag is set to UIImageOrientationRight. This is only an indication to the OS to rotate the image 90 degrees to the right for display.
The image is stored unrotated, landscape left. The incorrect pixel orientation leads to higher memory requirements and unpredictable/broken results in openCV.
I have fixed this in the latest version of openCVSwiftStitch: when necessary, images are rotated pixel-wise before being added to the openCV pipeline.
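For reference, rotating the pixels of a cv::Mat by 90° clockwise (the landscape-left to portrait case) can be done with a transpose followed by a horizontal flip; this is a sketch of the idea, not the exact code in the repository:
// Rotate a Mat 90 degrees clockwise: transpose swaps the axes,
// then a horizontal flip puts the columns in the right order.
cv::Mat rotate90Clockwise(const cv::Mat& src)
{
    cv::Mat dst;
    cv::transpose(src, dst);
    cv::flip(dst, dst, 1);   // flip around the vertical axis
    return dst;
}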
I am writing a program to display two cameras next to each other. In Qt this is pretty simple with QCamera. But my cameras are turned by 90°, so I have to turn the camera image in the program too.
The QCamera class has no command to turn it, so I want to display it in a label rather than in a viewfinder. So I take an image, turn it, and display it in a label.
QImage img;
QPixmap img_;
img = ui->viewfinder->grab().toImage();
img_ = QPixmap::fromImage(img);
img_ = img_.transformed(QTransform().rotate((90)%360));
QImage img2;
QPixmap img2_;
img2 = ui->viewfinder->grab().toImage();
img2_ = QPixmap::fromImage(img2);
img2_ = img2_.transformed(QTransform().rotate((90)%360));
ui->label->setPixmap(img_);
ui->label_2->setPixmap(img2_);
When I start the program there are just two black boxes next to each other.
(The part where I declare everything is missing from the code, but the camera works fine in the viewfinder, so I think there is no problem there.)
Try this:
img_ = QPixmap::grabWindow(ui->viewfinder->winId(), 0, 0, -1, -1); (to take a snapshot as a QPixmap)
or
img = QPixmap::grabWindow(ui->viewfinder->winId(), 0, 0, -1, -1).toImage(); (to take a snapshot as a QImage)
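Building on that, a rough sketch of polling both viewfinders with a QTimer and feeding the rotated snapshots into the labels (it assumes a second viewfinder widget named viewfinder_2, which is a guess, since the question grabs the same viewfinder twice):
// Needs: #include <QTimer>, <QScreen>, <QGuiApplication>, <QTransform>
// In the window constructor: take a snapshot of each viewfinder a few times per
// second, rotate it and push it into the matching label.
QTimer *timer = new QTimer(this);
connect(timer, &QTimer::timeout, this, [this]() {
    QScreen *screen = QGuiApplication::primaryScreen();
    QPixmap left  = screen->grabWindow(ui->viewfinder->winId());
    QPixmap right = screen->grabWindow(ui->viewfinder_2->winId());   // assumed name of the second viewfinder
    ui->label->setPixmap(left.transformed(QTransform().rotate(90)));
    ui->label_2->setPixmap(right.transformed(QTransform().rotate(90)));
});
timer->start(100);   // roughly 10 updates per second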
You can use the orientation of the camera to correct the image orientation in the viewfinder, as described in the Qt documentation. Here is the link:
http://doc.qt.io/qt-5/cameraoverview.html
and here is the code found in the documentation:
// Assuming a QImage has been created from the QVideoFrame that needs to be presented
QImage videoFrame;
QCameraInfo cameraInfo(camera); // needed to get the camera sensor position and orientation

// Get the current display orientation
const QScreen *screen = QGuiApplication::primaryScreen();
const int screenAngle = screen->angleBetween(screen->nativeOrientation(), screen->orientation());

int rotation;
if (cameraInfo.position() == QCamera::BackFace) {
    rotation = (cameraInfo.orientation() - screenAngle) % 360;
} else {
    // Front position, compensate the mirror
    rotation = (360 - cameraInfo.orientation() + screenAngle) % 360;
}

// Rotate the frame so it always shows in the correct orientation
videoFrame = videoFrame.transformed(QTransform().rotate(rotation));
It looks like you don't even understand what you are looking at...
What is the purpose of pasting stuff like that to a forum? Did you read the whole description? It's only part of the code, which, as I can see, you don't understand, but you are trying to be clever. :)
Making the usual blob tracker with OpenCV and cvBlobsLib, I've come across this problem, and it seems no one else has had it, which makes me sad. I get the RGB/BGR frame, choose the color to isolate, threshold it into B/W, find the blobs and add the bounding rectangle on each blob, but when I display the final image the box is stretched on the x-axis: when the object is on the left the box is close to it (although around 2.5 times larger), and as it moves to the right the box moves faster (= farther and farther from the object) until it reaches the right end of the window when the object isn't even halfway there. This doesn't happen on the y-axis, where everything is fine. It's not a problem with rectangles; it happens with fillBlob as well: the blob shape comes out stretched and misaligned.
Also, it's not a problem related to image capturing, since I've tried with a Kinect (OpenNI), a webcam and even a single image (imread()), and I verified that every ImageGenerator, Mat and IplImage used was 640x480, 8-bit depth, for which I used AUTOSIZE for the namedWindow (enlarging to a fullscreen window doesn't help either). Showing the BGR frame and the thresholded image gives no problems, they both fit into the window, but the detected blobs seem to belong to a different resolution space when I merge them with the original image. Here's the code; not much has changed from the usual examples found everywhere online:
//[...]
namedWindow("Color Image", CV_WINDOW_AUTOSIZE);
namedWindow("Color Tracking", CV_WINDOW_AUTOSIZE);
//[...] I already got the two cv::Mat I need, imgBGR and imgTresh

CBlobResult blobs;
CBlob *currentBlob;
Point pt1, pt2;
Rect rect;

//had to do Mat to IplImage conversion, since cvBlobsLib doesn't like mats
IplImage iplTresh = imgTresh;
IplImage iplBGR = imgBGR;
blobs = CBlobResult(&iplTresh, NULL, 0);
blobs.Filter(blobs, B_EXCLUDE, CBlobGetArea(), B_LESS, 100);
int nBlobs = blobs.GetNumBlobs();
for (int i = 0; i < nBlobs; i++)
{
    currentBlob = blobs.GetBlob(i);
    rect = currentBlob->GetBoundingBox();
    pt1.x = rect.x;
    pt1.y = rect.y;
    pt2.x = rect.x + rect.width;
    pt2.y = rect.y + rect.height;
    cvRectangle(&iplBGR, pt1, pt2, cvScalar(255, 255, 255, 0), 3, 8, 0);
}
//[...]
imshow("Color Image", imgBGR);
imshow("Color Tracking", imgTresh);
The "[...]" is code that shouldn't have nothing to do with this issue, but if you need further info on how I handled the images, let me know and I'll post it.
Based on the fact that the way I capture the image doesn't change anything, that BGR frame and B/W image are well shown, and that after getting blobs any way of displaying them gives the same (wrong) result, the problem must be something between CBlobResult() and matrix2ipl conversion, but I don't really know how to find it out.
Oh god, I spent ages writing up the whole problem and the next day I found the answer almost by chance. When I created the B/W matrix for thresholding, I didn't make it single-channel; I copied the BGR matrix type, thus getting a threshold image with 3 channels, which resulted in a widthStep three times the frame width. Resolved by creating cv::Mat imgTresh with CV_8UC1 as its type.
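In other words, the threshold image has to be created as a single-channel matrix, for example (a sketch; the HSV bounds are placeholders for whatever color you isolate):
// Build the binary mask as a single-channel (CV_8UC1) image, so its widthStep
// matches the frame width that cvBlobsLib expects.
cv::Mat hsv;
cv::cvtColor(imgBGR, hsv, CV_BGR2HSV);
cv::Mat imgTresh(imgBGR.size(), CV_8UC1);
cv::inRange(hsv, cv::Scalar(100, 100, 100), cv::Scalar(120, 255, 255), imgTresh); // placeholder bounds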