Facebook API: accessing a large version of the user's cover photo? - facebook-graph-api

Using the Facebook JavaScript SDK, an API request for the user's cover photo looks like this:
FB.api('/me', { fields: 'cover' }, function(response) {
  console.log(response.cover.source); // URL for the cover photo
});
However, the resulting image is relatively small (480x480), whereas the point of a cover photo is to fit wide elements. I'm trying to fit the cover photo into a 780px-wide container in my web app, at which size the photo becomes pixelated.
Is there another method or field that can let me access the full image?

I've been using this code and it seems to work, hopefully you can manipulate it to work for you!
FB.api('/me?fields=cover', function(cover) {
  console.log(cover);
  document.getElementById('cover').innerHTML =
    '<img src="http://graph.facebook.com/' + cover.cover.id + '/picture?type=normal"/>';
});
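If type=normal is still too small for a wide container, the Graph API picture edge also accepts a type of large, as well as width/height query parameters. A minimal sketch of building such a URL (the photo ID would come from cover.cover.id as above; the helper name is mine):

```javascript
// Build a Graph API picture URL for a photo, requesting a larger rendition.
// "type" and "width" are standard query parameters on the /picture edge.
function coverPictureUrl(photoId, opts) {
  opts = opts || {};
  var url = "https://graph.facebook.com/" + photoId + "/picture";
  var params = [];
  if (opts.type)  params.push("type=" + encodeURIComponent(opts.type));
  if (opts.width) params.push("width=" + encodeURIComponent(opts.width));
  if (params.length) url += "?" + params.join("&");
  return url;
}

// e.g. request a rendition wide enough for a 780px container
var largeCover = coverPictureUrl("10151901949756749", { width: 780 });
```

You could drop that URL straight into the img src in place of the type=normal one above.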

Mapbox Geocoding language

The Mapbox API supports geocoding requests fine, but I always get the results in English. I'd like to be able to get results in a specific language.
For the Mapbox.js API, it's possible to display the map in a different language (by changing style), but I can't find a way to translate geocoding requests correctly.
For example, if I pass in the city 'Gent', I would expect to see that it's in province Oost-Vlaanderen and country België. However, I get 'Gent, Oost-Vlanderen, Belgium'.
This would be done using a request like: https://api.mapbox.com/geocoding/v5/mapbox.places/Gent.json?country=be&access_token=MYACCESSTOKEN
Is there a way to get the correctly translated result? Perhaps using a setting or extra parameter?
The localized names that I see in Streets-v8 (and likely in mapbox.places) are name_en, name_es, name_fr, name_ru, and name_zh. It looks like you'll need to file a feature request with Mapbox; at minimum you may be able to get support via name_fr.
I like to use the Mapbox Command Line Interface to inspect the responses to Mapbox queries. This particular query gives a response of "place_name": "Gent, Oost-Vlanderen, Belgium":
mapbox-cli> mapbox geocoding 'Gent' --country be
I also tried Ghent in the query, but still received English results.
The town shows as Ghent in the Mapbox language switch example.
Looks like a solution has been implemented!
Just pass in a language field on the initialization object like so:
var geocoder = new MapboxGeocoder({ language: 'es' }); //change lang to spanish
Got it from these docs: https://github.com/mapbox/mapbox-gl-geocoder/blob/master/API.md#mapboxgeocoder
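The raw Geocoding API (as opposed to the geocoder control) exposes the same option as a language query parameter on the request URL. A sketch of building such a request, assuming the endpoint shown in the question; the helper name is mine:

```javascript
// Build a Mapbox geocoding request URL with a language parameter.
// Passing language=nl should return Dutch place names
// (e.g. "Oost-Vlaanderen", "België" instead of the English forms).
function geocodeUrl(query, options) {
  var base = "https://api.mapbox.com/geocoding/v5/mapbox.places/";
  var url = base + encodeURIComponent(query) + ".json";
  var params = [];
  if (options.country)  params.push("country=" + encodeURIComponent(options.country));
  if (options.language) params.push("language=" + encodeURIComponent(options.language));
  params.push("access_token=" + encodeURIComponent(options.accessToken));
  return url + "?" + params.join("&");
}

var requestUrl = geocodeUrl("Gent", { country: "be", language: "nl", accessToken: "MYACCESSTOKEN" });
```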

How to filter a Backbone collection on the server

Is there a common pattern for using Backbone with a service that filters a collection on the server? I haven't been able to find anything in Google and Stack Overflow searches, which is surprising, given the number of Backbone apps in production.
Suppose I'm building a new front end for Stack Overflow using Backbone.
On the search screen, I need to pass the following information to the server and get back a page worth of results.
filter criteria
sort criteria
results per page
page number
Backbone doesn't seem to have much interest in offloading filtering to the server. It expects the server to return the entire list of questions and perform filtering on the client side.
I'm guessing that in order to make this work I need to subclass Collection and override the fetch method so that rather than always GETting data from the same RESTful URL, it passes the above parameters.
I don't want to reinvent the wheel. Am I missing a feature in Backbone that would make this process simpler or more compatible with existing components? Is there already a well-established pattern to solve this problem?
If you only want to pass GET parameters on a request, you should be able to specify them in the fetch call itself.
collection.fetch({
  data: {
    sortDir: "ASC",
    totalResults: 100
  }
});
The options passed into fetch translate directly into a jQuery.ajax call, so the data property is serialized into the request for you. Of course, overriding the fetch method is fine too, especially if you want to standardize portions of the logic.
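To see what that data option turns into on the wire: for a GET request, jQuery serializes it into the querystring, roughly like this flat-object sketch of what jQuery.param does:

```javascript
// Rough equivalent of jQuery.param for a flat object of scalar values:
// fetch's `data` option becomes this querystring on a GET request.
function toQueryString(data) {
  return Object.keys(data)
    .map(function(key) {
      return encodeURIComponent(key) + "=" + encodeURIComponent(data[key]);
    })
    .join("&");
}

var qs = toQueryString({ sortDir: "ASC", totalResults: 100 });
// appended to the collection's URL as "?sortDir=ASC&totalResults=100"
```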
You're right, creating your own Collection is the way to go, as there are no standards for server-side pagination apart from OData.
Instead of overriding fetch, what I usually do in these cases is define the collection's url property as a function and return the proper URL based on the collection's state.
In order to do pagination, however, the server must return the total number of items so you can calculate how many pages there are at X items per page. Nowadays some APIs use conventions such as HAL or HATEOAS to expose this, often through HTTP response headers. To get that information, I normally add a listener for the sync event, which is raised after any AJAX operation. If you need to notify external components (normally the view) of the number of available items/pages, use an event.
Simple example: your server returns X-ItemTotalCount in the response headers, and expects parameters page and items in the request querystring.
var PagedCollection = Backbone.Collection.extend({
  initialize: function(models, options) {
    this.listenTo(this, "sync", this._parseHeaders);
    this.currentPage = 0;
    this.pageSize = 10;
    this.itemCount = 0;
  },
  url: function() {
    return this.baseUrl + "?page=" + this.currentPage + "&items=" + this.pageSize;
  },
  _parseHeaders: function(collection, response, options) {
    // the sync event passes (collection, response, options); the jqXHR lives on options
    var totalItems = options.xhr.getResponseHeader("X-ItemTotalCount");
    if (totalItems) {
      this.itemCount = parseInt(totalItems, 10);
      // trigger an event with arguments (collection, totalItems)
      this.trigger("pages:itemcount", this, this.itemCount);
    }
  }
});
var PostCollection = PagedCollection.extend({
baseUrl: "/posts"
});
Notice we use another property of our own, baseUrl, to simplify extending PagedCollection. If you need to add your own initialize, call the parent's prototype method like this, or the headers won't be parsed:
PagedCollection.prototype.initialize.apply(this, arguments);
You can even add fetchNext and fetchPrevious methods to the collection, where you simply modify this.currentPage and fetch. Remember to pass {reset: true} in the fetch options if you want to replace one page with the other instead of appending.
Now, if your project's backend is consistent, any resource that supports pagination on the server can be represented by a PagedCollection-based collection on the client, provided the same parameters/responses are used.
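The fetchNext/fetchPrevious idea boils down to clamped page arithmetic around the state the collection already tracks. A framework-free sketch of just that logic (a hypothetical helper, not part of Backbone):

```javascript
// Minimal page-state helper mirroring the PagedCollection idea:
// track currentPage/pageSize/itemCount and clamp navigation to valid pages.
function makePager(pageSize) {
  return {
    currentPage: 0,
    pageSize: pageSize,
    itemCount: 0,
    totalPages: function() {
      return Math.ceil(this.itemCount / this.pageSize);
    },
    next: function() { // advance unless already on the last page
      if (this.currentPage < this.totalPages() - 1) this.currentPage++;
      return this.currentPage;
    },
    prev: function() { // go back unless already on the first page
      if (this.currentPage > 0) this.currentPage--;
      return this.currentPage;
    },
    url: function(baseUrl) { // same shape as PagedCollection's url()
      return baseUrl + "?page=" + this.currentPage + "&items=" + this.pageSize;
    }
  };
}
```

In the real collection, next()/prev() would be followed by this.fetch({reset: true}).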

RestFB: Get good resolution image from a post

I need good-resolution pictures from "photo"-type posts. The general "[user_id]/feed" API endpoint gives you a "picture" field at a low resolution. The good-resolution ones come in a field called "images" that doesn't seem to be included in that endpoint; I can only get them when calling with the [post_id] directly, e.g.: http://graph.facebook.com/10151901949756749
I'm noticing the Post class in com.restfb.types doesn't have an "images" attribute so it doesn't seem like "fetchObject([post_id], Post.class)" would work.
How can I get these images?
In the Facebook API, pictures come in different dimensions, as several sizes of the same image are saved. Some basic naming conventions help identify the resolution of an image:
_s.png or _s.jpg: this represents the small image.
_n.png or _n.jpg: this represents the normal image.
So, for example, when you call http://graph.facebook.com/10151901949756749, the response contains a section like this:
{
  "picture": "...._s.png",
  "source": ".._n.png"
}
Here, instead of fetching picture you can read source, and the image you get will be of better resolution.
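Once you have the raw JSON for the post (for instance via RestFB's fetchObject with JsonObject.class instead of Post.class, since Post lacks an images attribute), picking the best rendition from an images array is simple. A sketch, assuming each entry carries source, width, and height, which is the shape the Graph API uses for photo objects:

```javascript
// Pick the highest-resolution entry from a Graph API "images" array.
// Assumes entries look like { source: "...", width: N, height: N }.
function largestImage(images) {
  return images.reduce(function(best, img) {
    return (!best || img.width > best.width) ? img : best;
  }, null);
}

// sample data in the assumed shape (values are illustrative)
var images = [
  { source: "small.jpg",  width: 130,  height: 87   },
  { source: "large.jpg",  width: 2048, height: 1365 },
  { source: "normal.jpg", width: 720,  height: 480  }
];
var best = largestImage(images); // the 2048px-wide rendition
```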

REST service returning "useful" information for demos

I am looking for a REST service that I could use in demo code. I'd like the service:
To take at least one parameter (as a request parameter, or XML POSTed as the body of the HTTP request).
To return the result as XML (not JSON).
To be accessible anonymously (I'll call the service in sample code, so I don't want to put my key in the code, or request users to get a key).
When the Twitter API still supported XML (not just JSON), I typically used their search API. But really, anything mainstream and easy to understand will do (information about a zip code, weather for a city…).
If you are using .Net, why don't you just create a tiny MVC application that has a controller that exposes a method that returns some sort of formatted XML? That way you can run the whole thing locally.
EDIT:
You know, I think you can use the Google Maps API without a key. I created a test project a couple of days ago. Here is a .Net code snippet (included only so that you can see how I am calling the service):
private static string GetString(Uri requestUri)
{
    var output = string.Empty;
    var response = WebRequest.Create(requestUri).GetResponse();
    var stream = response.GetResponseStream();
    if (stream != null)
    {
        using (var reader = new StreamReader(stream))
        {
            output = reader.ReadToEnd();
            reader.Close();
        }
    }
    response.Close();
    return output;
}
I pass in a Uri built from a URL like:
https://maps.googleapis.com/maps/api/directions/xml?mode=walking&origin={0},{1}&destination={2},{3}&sensor=false
Where {0},{1} are the first lat/long pair, and {2},{3} are the second. I am not attaching a key to this, and it worked for testing. My method returns a string that I later handle like so:
var response = XDocument.Parse(GetString(request));
which gives me back XML. Again, I still recommend creating your own web app and deploying it somewhere publicly accessible (either on a LAN or on the web), but if you just need a web service that returns XML, you can use that.
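The {0},{1} placeholders above come from .NET's string.Format; for comparison, a sketch of building the same directions URL in JavaScript (the helper name is mine):

```javascript
// Fill the Google Directions URL template with two lat/long pairs,
// the same substitution string.Format performs for {0}..{3} in the .NET snippet.
function directionsUrl(lat1, lng1, lat2, lng2) {
  return "https://maps.googleapis.com/maps/api/directions/xml" +
         "?mode=walking" +
         "&origin=" + lat1 + "," + lng1 +
         "&destination=" + lat2 + "," + lng2 +
         "&sensor=false";
}

var walkingUrl = directionsUrl(52.5, 13.4, 48.8, 2.3);
```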
The Yahoo! Weather API can be used for this. It takes a location as a request parameter and returns the weather forecast for that location as XML. It also returns the weather information as HTML, which you could display as-is to the user. Also make sure that you respect the terms of use described at the bottom of the Weather API documentation page.

Community Page Graph Picture Requires User Access Token?

Starting last month, it appears that community pages either require a user access token to access the graph image, or will not allow applications to access the image at all.
As an example: the community page for Harold and Maude (105636526135846) returned a picture last month; now calls to the graph do not include the picture string.
{
  "id": "105636526135846",
  "name": "Harold and Maude",
  "link": "http://www.facebook.com/pages/Harold-and-Maude/105636526135846",
  "likes": 143886,
  "category": "Movie",
  "is_community_page": true,
  ...
At one point it appeared that using an access token would work; however, requesting '/105636526135846/picture' now returns no picture, and Facebook's embedded image is
http://external.ak.fbcdn.net/safe_image.php?d=AQBKNDbD3RCI0MXv&w=180&h=540&url=http%3A%2F%2Fupload.wikimedia.org%2Fwikipedia%2Fen%2Fc%2Fc4%2FHarold_and_maude.jpg&fallback=hub_movie
Alternatively, FQL appears to return the proper information:
[
  {
    "pic": "http://external.ak.fbcdn.net/safe_image.php?d=AQA4PX9DD7wlHZmC&w=100&h=300&url=http%3A%2F%2Fupload.wikimedia.org%2Fwikipedia%2Fen%2Fc%2Fc4%2FHarold_and_maude.jpg&fallback=hub_movie",
    "pic_large": "http://external.ak.fbcdn.net/safe_image.php?d=AQBKNDbD3RCI0MXv&w=180&h=540&url=http%3A%2F%2Fupload.wikimedia.org%2Fwikipedia%2Fen%2Fc%2Fc4%2FHarold_and_maude.jpg&fallback=hub_movie"
  }
]
Is there something I'm missing with the graph? I'm concerned that the FQL method may stop working.
Wikipedia has started blocking certain images based on their licensing, so Facebook runs them through a filter (safe_image.php) to check whether each one is allowed. If not, you get a default image. So FQL will 'sometimes' return a usable image, but the graph no longer will.
I have no idea if Facebook plans to continue offering the FQL call. Sorry!
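Since safe_image.php carries the original image location in its url query parameter, you can recover the underlying image client-side whenever the filter does return a usable link. A sketch, with no Facebook-specific API involved:

```javascript
// Extract the percent-encoded "url" parameter from a safe_image.php link.
function unwrapSafeImage(safeImageUrl) {
  var match = /[?&]url=([^&]+)/.exec(safeImageUrl);
  return match ? decodeURIComponent(match[1]) : null;
}

// the pic_large value from the FQL response above
var pic = "http://external.ak.fbcdn.net/safe_image.php?d=AQBKNDbD3RCI0MXv&w=180&h=540" +
          "&url=http%3A%2F%2Fupload.wikimedia.org%2Fwikipedia%2Fen%2Fc%2Fc4%2FHarold_and_maude.jpg" +
          "&fallback=hub_movie";
unwrapSafeImage(pic); // → "http://upload.wikimedia.org/wikipedia/en/c/c4/Harold_and_maude.jpg"
```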