I know this should be a question, but after such a headache I thought I should help someone with the same problem.
I'm using Allauth 0.42.0 and Django 3.0.8. Following the allauth documentation, I could not, for the love of my life, get the profile picture URL for the user.
But hours of searching led me to this solution:
import requests
from allauth.socialaccount.models import SocialToken

# access_token required by LinkedIn's API
access_token = SocialToken.objects.get(account__user=user, account__provider='linkedin_oauth2')
# The request
r = requests.get(f"https://api.linkedin.com/v2/me?projection=(profilePicture("
                 f"displayImage~:playableStreams))&oauth2_access_token={access_token}")
# The JSON under the profilePicture key
profile_pic_json = r.json().get('profilePicture')
# There are several picture sizes, so put them in a list for easier reading
elements = profile_pic_json['displayImage~']['elements']
# elements[-1] is the picture with the highest resolution;
# ['identifiers'][0]['identifier'] is the URL itself
url_picture = elements[-1]['identifiers'][0]['identifier']
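If you want this as a reusable helper, here is a minimal sketch (the function name, error handling, and the way the token is passed as a query parameter are my own choices, not from the allauth docs):

import requests
from allauth.socialaccount.models import SocialToken

LINKEDIN_ME_URL = ("https://api.linkedin.com/v2/me"
                   "?projection=(profilePicture(displayImage~:playableStreams))")

def linkedin_picture_url(user):
    """Return the highest-resolution LinkedIn profile picture URL, or None."""
    token = SocialToken.objects.get(account__user=user,
                                    account__provider='linkedin_oauth2')
    r = requests.get(LINKEDIN_ME_URL, params={'oauth2_access_token': token.token})
    profile_pic = r.json().get('profilePicture')
    if not profile_pic:
        return None
    elements = profile_pic['displayImage~']['elements']
    return elements[-1]['identifiers'][0]['identifier'] if elements else None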
I hope this helps someone.
How to get page fans' comments from the Facebook Graph API using the koala gem
@user_graph = Koala::Facebook::API.new('XXXXXXXXXXXXX')
lists = @user_graph.get_object("#{pageid}/insights/page_storytellers")
but I want to get all of the page fans' comments.
It's returning an empty array.
Can anyone help?
Storytellers is a count of the unique people who created a story about your page post; it does not give you the full comment or any info about the fan.
To get the comments on a page, you would have to first get a list of page posts, then query each post for comments.
You can get this info from any page, you do not need to have access to Insights.
For example:
page_info = @graph.get_object('nytimes')
pageid = page_info["id"]
fb_params = {
  :fields => 'admin_creator,from,id,link,message,object_id,source,' \
             'status_type,story,story_tags,to,type,created_time,updated_time,' \
             'shares,likes.summary(true),comments.summary(true)',
  :limit => 100,
  :until => DateTime.now.at_end_of_day.to_i,
  :since => DateTime.now.years_ago(5).to_i,
  :metadata => 1
}
posts = @graph.get_connections(pageid, 'feed', fb_params)
If you include "comments.summary(true)" in the fields you request, you will get the first 25 comments on each post along with paging information (cursors, next and previous URLs).
Loop through each post and each post comment (and if you're up for it, comments on those comments), and you will have your result set.
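For illustration, here is a rough sketch of that loop (in Python with plain HTTP calls rather than koala, just to show the paging pattern; the token and page name are placeholders):

import requests

GRAPH = "https://graph.facebook.com"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # assumption: a valid token with read access

def get_paged(url, params=None):
    """Yield items from a Graph API collection, following the 'paging.next' links."""
    while url:
        resp = requests.get(url, params=params).json()
        for item in resp.get("data", []):
            yield item
        url = resp.get("paging", {}).get("next")
        params = None  # the 'next' URL already carries the query string

all_comments = []
posts_url = "%s/nytimes/feed" % GRAPH
for post in get_paged(posts_url, {"access_token": ACCESS_TOKEN, "limit": 100}):
    comments_url = "%s/%s/comments" % (GRAPH, post["id"])
    for comment in get_paged(comments_url, {"access_token": ACCESS_TOKEN}):
        all_comments.append(comment)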
If you prefer to skip writing the code, you could use Analytics Canvas to accomplish this task with a few clicks.
Full disclosure: I work with nModal on Analytics Canvas.
You can do this with the koala gem.
access_token = '#{access_token}'
@graph = Koala::Facebook::API.new(access_token)
page_name = '#{page_name}'
node_type = "posts"
# get posts with standard content
posts_standard = @graph.get_connections(page_name, node_type, limit: 5)
# get posts with replies
posts = @graph.get_connections(page_name, node_type, limit: 5, fields: "message,id,created_time,updated_time,likes.summary(true),shares,comments.fields(comments.fields(from,message),message,from),from")
I'm trying to create a social dashboard, and therefore I want to retrieve the posts from my page. When I use this call to fetch my posts, it doesn't return all the information I need (e.g. 'picture', 'full_picture', 'attachments'):
$user_posts = $facebook->api('/me/posts', 'GET');
print_r($user_posts);
But when I try the next one, it still doesn't return the information I need:
$user_posts = $facebook->api('/me/posts?{created_time,id,message,full_picture,picture,attachments{url,subattachments},likes{name},comments{from,message,comment_count,user_likes,likes{name}}}', 'GET');
print_r($user_posts);
Any ideas?
I know this was asked a long time ago, but it may be useful for someone:
After me/posts? you need to put fields= followed by the list of required fields.
So this would be:
$user_posts = $facebook->api('/me/posts?fields={created_time,id,message,full_picture,picture,attachments{url,subattachments},likes{name},comments{from,message,comment_count,user_likes,likes{name}}}', 'GET');
print_r($user_posts);
I've put together a small Twitter tool to pull relevant tweets for later latent semantic analysis. Ironically, that bit (the more complicated bit) works fine - it's pulling the tweets that's the problem. I'm using the code below to set it up.
This technically works, but not as expected. I thought the .items(200) parameter would pull 200 tweets per request, but they come back in 15-tweet chunks (so the 200 items 'cost' me 13 requests). I understand this is the original/default RPP variable (now 'count' in the Twitter docs), but I've tried that in the Cursor settings (rpp=100, which is the maximum in the Twitter documentation), and it makes no difference.
Tweepy/Cursor docs
The nearest similar question isn't quite the same issue.
Grateful for any thoughts! I'm sure it's a minor tweak to the settings, but I've tried various settings on page and rpp, to no avail.
import tweepy
from tweepy import Cursor

from tools import read_user, read_tweet
from auth import basic

auth = tweepy.OAuthHandler(apikey, apisecret)
auth.set_access_token(access_token, access_token_secret_var)
api = tweepy.API(auth)

current_results = []
for tweet in Cursor(api.search,
                    q=search_string,
                    result_type="recent",
                    include_entities=True,
                    lang="en").items(200):
    current_user, created = read_user(tweet.author)
    current_tweet, created = read_tweet(tweet, current_user)
    current_results.append(tweet)
print current_results
I worked it out in the end, with a little assistance from colleagues. As far as I can tell, the rpp and items() settings are applied after the actual API call. The 'count' option from the Twitter documentation (formerly RPP, as mentioned above, and still noted as rpp in Tweepy 2.3.0) is the issue here.
What I ended up doing was modifying the Tweepy Code - in api.py, I added 'count' in to the search bind section (around L643 in my install, ymmv).
""" search """
search = bind_api(
path = '/search/tweets.json',
payload_type = 'search_results',
allowed_param = ['q', 'count', 'lang', 'locale', 'since_id', 'geocode', 'max_id', 'since', 'until', 'result_type', **'count**', 'include_entities', 'from', 'to', 'source']
)
This allowed me to tweak the code above to:
for tweet in Cursor(api.search,
                    q=search_string,
                    count=100,
                    result_type="recent",
                    include_entities=True,
                    lang="en").items(200):
Which results in two calls, not fifteen; I've double checked this with
print api.rate_limit_status()["resources"]
after each call, and it's only decrementing my remaining searches by 2 each time.
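For reference, a small sketch of how to read just the search bucket out of that response (the key names follow Twitter's 1.1 rate_limit_status payload):

# Inspect the remaining search-API budget before/after a Cursor run.
status = api.rate_limit_status()
search_bucket = status["resources"]["search"]["/search/tweets"]
print("remaining search calls: %d of %d" % (search_bucket["remaining"], search_bucket["limit"]))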
I am implementing social login in my application and I am already getting the normal fields, such as first name and last name, from Facebook and LinkedIn... Now I want to get the profile picture. I did it for LinkedIn and it is working perfectly; I can see the URL in the extra fields in the database.
I am trying to do the same with Facebook, as I saw here https://groups.google.com/forum/?fromgroups#!topic/django-social-auth/Mee8H8HhLQk
by defining:
FACEBOOK_PROFILE_EXTRA_PARAMS = {'fields': 'picture'}
FACEBOOK_EXTRA_DATA = [('profile', 'profile')]
But when I did this, something got messed up. I am no longer able to get the first name and last name into my user table, as I was before adding these two lines. Does anyone know why this is happening?
The picture is under http://graph.facebook.com/<user_id>/picture. The user_id is given by the FacebookBackend.
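For example, once the Facebook association exists, a minimal sketch for building that URL from the stored uid might look like this (assuming django-social-auth's UserSocialAuth association; the helper name is mine):

def facebook_picture_url(user):
    """Build the Graph API picture URL from the stored Facebook uid."""
    social = user.social_auth.get(provider='facebook')
    return "http://graph.facebook.com/%s/picture" % social.uid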
I've migrated an old Joomla installation over to Django. The password hashes are an issue, though. I had to modify get_hexdigest in contrib.auth.models to add an extra if statement that reverses the way the hash is generated.
# Custom for Joomla
if algorithm == 'joomla':
    return md5_constructor(raw_password + salt).hexdigest()
# Django's original md5
if algorithm == 'md5':
    return md5_constructor(salt + raw_password).hexdigest()
I also added the following to the User model to update the passwords after login if they have the old joomla style:
# Joomla backwards compat
algo, salt, hsh = self.password.split('$')
if algo == 'joomla':
    is_correct = (hsh == get_hexdigest(algo, salt, raw_password))
    if is_correct:
        # Convert the password to the new, more secure format.
        self.set_password(raw_password)
        self.save()
    return is_correct
Everything is working perfectly but I'd rather not edit this code directly in the django tree. Is there a cleaner way to do this in my own project?
Thanks
Your best bet would be to roll a custom auth backend and rewrite get_hexdigest in there. Never done it myself, but documentation on how to do so is available at http://docs.djangoproject.com/en/dev/topics/auth/#authentication-backends.
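For illustration, a minimal sketch of such a backend for the old md5-era Django, assuming the Joomla hashes are stored as algo$salt$hash (the class and module names here are illustrative, not the asker's actual code):

import hashlib

from django.contrib.auth.backends import ModelBackend
from django.contrib.auth.models import User


class JoomlaBackend(ModelBackend):
    """Authenticate against legacy Joomla md5(password + salt) hashes,
    then upgrade the stored hash to Django's native format."""

    def authenticate(self, username=None, password=None):
        try:
            user = User.objects.get(username=username)
            algo, salt, hsh = user.password.split('$')
        except (User.DoesNotExist, ValueError):
            return None
        if algo != 'joomla':
            return None  # let the default ModelBackend handle native hashes
        if hsh == hashlib.md5(password + salt).hexdigest():
            # Convert the password to the normal Django format on first login.
            user.set_password(password)
            user.save()
            return user
        return None

You would then list this backend in AUTHENTICATION_BACKENDS alongside the default one, so Joomla-style hashes get their own check and everything else falls through to Django's normal backend.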
Thanks for the guidance. For anyone who needs to go the other way (Django to Joomla) with Django passwords, the Django format is sha1$salt$hash.
The standard Joomla auth plugin and the Joomla core JUserHelper do not implement the same SHA1 algorithm, but it is fairly easy to patch joomla.php in that plugin, where it normally does an explode on ':'. Do a three-part explode on '$', use part [1] as the salt, compute $encrypted = sha1($salt . $plaintext), and match that against the hash in part [2].
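For anyone verifying that format outside PHP, the check itself is just this (a small sketch in Python; the PHP patch mirrors the same comparison):

import hashlib

def check_django_sha1(stored_password, plaintext):
    """Verify a Django-style 'sha1$salt$hash' value, i.e. sha1(salt + plaintext)."""
    algo, salt, hsh = stored_password.split('$')
    return algo == 'sha1' and hashlib.sha1((salt + plaintext).encode('utf-8')).hexdigest() == hsh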