Error while accessing Google Contacts Groups by API - python-2.7

I want to get a list of all groups in my contacts. I use this code:
import gdata.gauth
import gdata.contacts.client

token = gdata.gauth.OAuth2Token(client_id="***.apps.googleusercontent.com",
                                client_secret="***",
                                scope="https://www.google.com/m8/feeds/",
                                user_agent="GC")

gd_client = gdata.contacts.client.ContactsClient(source='GCv0.1')
gd_client = token.authorize(gd_client)
gd_client.GetGroups()
But I got this error:
Traceback (most recent call last):
  File "F:/Yandex/Sites/GoogleContacts/cli_contacts.py", line 27, in <module>
    gd_client.GetGroups()
  File "C:\Users\Ishayahu\27Gdata\lib\site-packages\gdata\contacts\client.py", line 218, in get_groups
    return self.get_feed(uri, desired_class=desired_class, auth_token=auth_token, **kwargs)
  File "C:\Users\Ishayahu\27Gdata\lib\site-packages\gdata\client.py", line 640, in get_feed
    **kwargs)
  File "C:\Users\Ishayahu\27Gdata\lib\site-packages\gdata\client.py", line 319, in request
    RequestError)
gdata.client.RequestError: Server responded with: 400,
I have no idea what the reason is, and I can't find any clue how to solve it.
UPD: It looks like I should somehow put an access_token or refresh_token into OAuth2Token, but I can't understand how, because it sends these headers:
{'GData-Version': '3', 'Authorization': 'Bearer None', 'User-Agent': 'gdata-py/2.0.17'}
UPD2: By the way, if I test it in OAuth2Playground, it shows me a page requesting access to my contacts. My script doesn't ask for that. Maybe that's the problem? How can I change it? I thought it was connected with url_redirect, but I can't understand how to use it.
UPD3: I was right: if I add an access_token, which I got manually from the Playground, everything works. But how should I get it in the script?!
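UPD4: Sketching what seems to be the missing step: the script has to run the consent flow itself instead of the Playground doing it. Roughly like this, using the token's generate_authorize_url/get_access_token helpers and the installed-application ("out of band") redirect; an outline rather than a verified drop-in fix:
import gdata.gauth
import gdata.contacts.client

token = gdata.gauth.OAuth2Token(client_id="***.apps.googleusercontent.com",
                                client_secret="***",
                                scope="https://www.google.com/m8/feeds/",
                                user_agent="GC")

# 1. Send the user to Google's consent page (what the Playground did for me).
print token.generate_authorize_url(redirect_uri='urn:ietf:wg:oauth:2.0:oob')

# 2. Paste back the code shown after clicking "Allow"; this fills in
#    token.access_token and token.refresh_token.
token.get_access_token(raw_input('Enter the authorization code: ').strip())

gd_client = gdata.contacts.client.ContactsClient(source='GCv0.1')
gd_client = token.authorize(gd_client)
print gd_client.GetGroups()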

Scrapy download files from FTP

I need to download a group of csv files using scrapy from an FTP server. But first I need to scrape a website (https://www.douglas.co.us/assessor/data-downloads/) in order to get the URLs of the csv files on the FTP. I read about how to download files in the documentation (Downloading and processing files and images).
settings
custom_settings = {
    'ITEM_PIPELINES': {
        'scrapy.pipelines.files.FilesPipeline': 1,
    },
    'FILES_STORE': os.path.dirname(os.path.abspath(__file__))
}
parse
def parse(self, response):
    self.logger.info("In parse method!!!")
    # Property Ownership
    property_ownership = response.xpath("//a[contains(., 'Property Ownership')]/@href").extract_first()
    # Property Location
    property_location = response.xpath("//a[contains(., 'Property Location')]/@href").extract_first()
    # Property Improvements
    property_improvements = response.xpath("//a[contains(., 'Property Improvements')]/@href").extract_first()
    # Property Value
    property_value = response.xpath("//a[contains(., 'Property Value')]/@href").extract_first()
    item = FiledownloadItem()
    self.insert_keyvalue(item, "file_urls",
                         [property_ownership, property_location,
                          property_improvements, property_value])
    yield item
But I got the following error:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 653, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/pipelines/media.py", line 79, in process_item
    requests = arg_to_iter(self.get_media_requests(item, info))
  File "/usr/local/lib/python2.7/dist-packages/scrapy/pipelines/files.py", line 382, in get_media_requests
    return [Request(x) for x in item.get(self.files_urls_field, [])]
  File "/usr/local/lib/python2.7/dist-packages/scrapy/http/request/__init__.py", line 25, in __init__
    self._set_url(url)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/http/request/__init__.py", line 58, in _set_url
    raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url: [
The best explanation of my problem is the answer to the question "scrapy error: exceptions.ValueError: Missing scheme in request url:", which explains that the URLs to download are missing the "http://" prefix.
What should I do in my case? Can I use FilesPipeline, or do I need to do something different?
Thanks in advance.
ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url: [
According to the traceback, Scrapy thinks your file URL is '['.
My best guess is that you have an error in the insert_keyvalue() method.
Also, why have a method for this? Simple assignment should work.
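For illustration, here is the simple-assignment version I mean, with a guard for the scheme problem from the linked answer (FiledownloadItem and its file_urls field are assumed from your question):
def parse(self, response):
    urls = [
        response.xpath("//a[contains(., 'Property Ownership')]/@href").extract_first(),
        response.xpath("//a[contains(., 'Property Location')]/@href").extract_first(),
        response.xpath("//a[contains(., 'Property Improvements')]/@href").extract_first(),
        response.xpath("//a[contains(., 'Property Value')]/@href").extract_first(),
    ]
    item = FiledownloadItem()
    # Plain assignment instead of insert_keyvalue(); skip missing hrefs
    # and prepend a scheme where the page links are scheme-less.
    item['file_urls'] = [u if '://' in u else 'http://' + u
                         for u in urls if u]
    yield item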

How to fix this python code that performs login to website

I am a novice in Python. I extracted the code below, which logs in to a website, from an online post, but I am getting an error.
Please help me fix it; an explanation would help me a lot.
import requests

with requests.Session() as c:
    EMAIL = 'noob.python@gmail.com'
    PASSWORD = 'Dabc#123'
    URL = 'https://www.linkedin.com/'
    c.get(URL)
    token = c.cookies['CsrfParam']
    # This is the form data that the page sends when logging in
    login_data = {'loginCsrfParam': token,
                  'session_key': EMAIL,
                  'session_password': PASSWORD}
    # Authenticate
    r = c.post(URL, data=login_data)
    # Try accessing a page that requires you to be logged in
    r = c.get('https://www.linkedin.com/feed/')
    print r.content
I am stuck with the error below:
C:\Python27>python website.py
Traceback (most recent call last):
  File "website.py", line 8, in <module>
    token = c.cookies['CsrfParam']
  File "C:\Python27\lib\site-packages\requests\cookies.py", line 329, in __getitem__
    return self._find_no_duplicates(name)
  File "C:\Python27\lib\site-packages\requests\cookies.py", line 400, in _find_no_duplicates
    raise KeyError('name=%r, domain=%r, path=%r' % (name, domain, path))
KeyError: "name='CsrfParam', domain=None, path=None"
The reason you're getting the error is that you're asking for a value from a list which is empty. To get the first item in a list you say list[0]; in this case the list is empty, so the first value doesn't exist, hence the error.
I've run your code and there is no #id value of 'recaptcha-token', which is why the code returns an empty list. The only place a recaptcha token is needed is for signing up, so I would suggest trying to log in without creating the authenticity_token.
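If it helps with debugging, here is a small sketch that avoids the KeyError and shows which cookies LinkedIn actually set (my addition, not tested against the live site):
import requests

with requests.Session() as c:
    c.get('https://www.linkedin.com/')
    # .get() returns None instead of raising KeyError when the cookie
    # is absent, so you can inspect what the server really sent.
    token = c.cookies.get('CsrfParam')
    if token is None:
        print sorted(c.cookies.keys())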

linkedin api - python - get_connections()

I am working on a simple Python scraping script. I am trying to get connections from LinkedIn using their API, without a redirect_uri. I have worked before with APIs that don't require a redirect URL, or accept just https://localhost. I got the consumer_key, consumer_secret, user_token and user_secret. Here's the code I am using, from https://github.com/ozgur/python-linkedin:
RETURN_URL = ''
url = 'https://api.linkedin.com/v1/people/~'

# Instantiate the developer authentication class
authentication = linkedin.LinkedInDeveloperAuthentication(
    CONSUMER_KEY, CONSUMER_SECRET,
    USER_TOKEN, USER_SECRET,
    RETURN_URL, linkedin.PERMISSIONS.enums.values())

# Pass it in to the app...
application = linkedin.LinkedInApplication(authentication)

print application.get_profile()  # works
print application.get_connections()
And here's the error I get:
Traceback (most recent call last):
  File "getContacts.py", line 20, in <module>
    print application.get_connections()
  File "/home/imane/Projects/prjL/env/local/lib/python2.7/site-packages/linkedin/linkedin.py", line 219, in get_connections
    raise_for_error(response)
  File "/home/imane/Projects/prjL/env/local/lib/python2.7/site-packages/linkedin/utils.py", line 63, in raise_for_error
    raise LinkedInError(message)
linkedin.exceptions.LinkedInError: 403 Client Error: Forbidden for url: https://api.linkedin.com/v1/people/~/connections: Unknown Error
This is my first question here, so excuse me if I didn't make it clear enough, and thank you for helping me out.
Here's what I tried with python_oauth2:
import oauth2 as oauth
import requests
url = 'https://api.linkedin.com/v1/people/~'
params = {}
token = oauth.Token(key=USER_TOKEN, secret=USER_SECRET)
consumer = oauth.Consumer(key=CONSUMER_KEY, secret=CONSUMER_SECRET)
# Set our token/key parameters
params['oauth_token'] = token.key
params['oauth_consumer_key'] = consumer.key
oauth_request = oauth.Request(method="GET", url=url, parameters=params)
oauth_request.sign_request(oauth.SignatureMethod_HMAC_SHA1(), consumer, token)
signed_url = oauth_request.to_url()
response = requests.get(signed_url)
Connections API calls have been a restricted endpoint since March 2015. It's possible you're using sample code or documentation that was written at a time when anyone could access those endpoints. You are receiving a 403 response because your application legitimately does not have the permission required to make that request.
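If you want the script to degrade gracefully rather than crash, one option is to catch the exception your traceback shows (a sketch, not an official workaround):
from linkedin.exceptions import LinkedInError

try:
    print application.get_connections()
except LinkedInError as e:
    # Connections is a partner-only endpoint since March 2015, so a
    # 403 here is expected for an ordinary application.
    print 'Connections endpoint unavailable: %s' % e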

Error using OAuth2 to connect to dropbox in Python

On my Raspberry Pi running Raspbian Jessie, I tried to go through the OAuth2 flow to connect a program to my Dropbox, using the Dropbox SDK for Python, which I installed via pip.
For a test, I copied the code from the documentation (and defined the app key and secret, of course):
from dropbox import DropboxOAuth2FlowNoRedirect

auth_flow = DropboxOAuth2FlowNoRedirect(APP_KEY, APP_SECRET)

authorize_url = auth_flow.start()
print "1. Go to: " + authorize_url
print "2. Click \"Allow\" (you might have to log in first)."
print "3. Copy the authorization code."
auth_code = raw_input("Enter the authorization code here: ").strip()

try:
    access_token, user_id = auth_flow.finish(auth_code)
except Exception, e:
    print('Error: %s' % (e,))
    return

dbx = Dropbox(access_token)
I was able to get the URL and to click "Allow". When I then entered the authorization code, however, it printed the following error:
Error: 'str' object has no attribute 'copy'
Using format_exc from the traceback module, I got the following information:
Traceback (most recent call last):
  File "test.py", line 18, in <module>
    access_token, user_id = auth_flow.finish(auth_code)
  File "/usr/local/lib/python2.7/dist-packages/dropbox/oauth.py", line 180, in finish
    return self._finish(code, None)
  File "/usr/local/lib/python2.7/dist-packages/dropbox/oauth.py", line 50, in _finish
    url = self.build_url(Dropbox.HOST_API, '/oauth2/token')
  File "/usr/local/lib/python2.7/dist-packages/dropbox/oauth.py", line 111, in build_url
    return "https://%s%s" % (self._host, self.build_path(target, params))
  File "/usr/local/lib/python2.7/dist-packages/dropbox/oauth.py", line 89, in build_path
    params = params.copy()
AttributeError: 'str' object has no attribute 'copy'
It seems the build_path method expects a dict for 'params' but receives a string instead. Any ideas?
Thanks to smarx for his comment. The error is a known issue and will be fixed in version 3.42 of the SDK (source).
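Once a release containing the fix is published, upgrading the SDK in place should pick it up; for example, with the same pip you used to install it:
pip install --upgrade dropbox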

Can't get the facebook-tornado connection

I have a problem. I am a beginner, and I am trying to make a simple asynchronous program that posts to Facebook. I use the tornado example and the tornado-facebook-sdk; here is the code:
class MainHandler(BaseHandler, tornado.auth.FacebookGraphMixin):
    @tornado.web.authenticated
    @tornado.web.asynchronous
    def get(self):
        self.facebook_request("/me/home", self.print_callback,
                              access_token=self.current_user["access_token"])
        a = self.current_user["access_token"]
        # print a

    def print_callback(data):
        print data
        ioloop.stop()

graph.get_object('/facebook', callback=print_callback)
and I get this error:
TypeError: print_callback() takes exactly 1 argument (2 given)
I want to understand this example in order to get the token, and then use this example:
def callback(response):
    # ...

graph.put_object('me', 'feed', message="Maoe!!", callback=callback)
to write something on my Facebook wall. I did it with the synchronous library, but sadly that one is blocking!
UPDATE: I'm still getting an error:
class MainHandler(BaseHandler, tornado.auth.FacebookGraphMixin):
    @tornado.web.authenticated
    @tornado.web.asynchronous
    def get(self):
        self.facebook_request("/me/home", self.print_callback,
                              access_token=self.current_user["access_token"])
        a = self.current_user["access_token"]
        print a

    def print_callback(self, data):
        graph.post_wall(self, "heloooooooo")
and got this error:
[E 121009 14:28:47 web:1108] Uncaught exception GET / (::1)
HTTPRequest(.....)
Traceback (most recent call last):
  File "C:\Python27\lib\site-packages\tornado-2.4.post1-py2.7.egg\tornado\web.py", line 1043, in _stack_context_handle_exception
    raise_exc_info((type, value, traceback))
  File "C:\Python27\lib\site-packages\tornado-2.4.post1-py2.7.egg\tornado\stack_context.py", line 237, in _nested
    yield vars
  File "C:\Python27\lib\site-packages\tornado-2.4.post1-py2.7.egg\tornado\stack_context.py", line 210, in wrapped
    callback(*args, **kwargs)
  File "C:\Python27\lib\site-packages\tornado-2.4.post1-py2.7.egg\tornado\gen.py", line 405, in inner
    self.set_result(key, result)
  File "C:\Python27\lib\site-packages\tornado-2.4.post1-py2.7.egg\tornado\gen.py", line 335, in set_result
    self.run()
  File "C:\Python27\lib\site-packages\tornado-2.4.post1-py2.7.egg\tornado\gen.py", line 365, in run
    yielded = self.gen.send(next)
  File "build\bdist.win-amd64\egg\facebook\graphapi.py", line 129, in _make_request
    raise GraphAPIError(data)
GraphAPIError: (#200) This API call requires a valid app_id.
When I go to Facebook, I see that the key I'm using is valid. I even take the generated token (the a variable here), paste it into the API Debug tool, and everything checks out fine:
Valid : True
Origin : Web
Scopes : create_note photo_upload publish_actions publish_stream read_stream share_item status_update video_upload
Add self to print_callback.
def print_callback(self, data):
    print data
    ioloop.stop()

graph.get_object('/facebook', callback=print_callback)
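Putting that together with the handler above, the callback could look roughly like this (my reconstruction; it assumes the question's BaseHandler setup and finishes the request explicitly, since @tornado.web.asynchronous disables auto-finish):
class MainHandler(BaseHandler, tornado.auth.FacebookGraphMixin):
    @tornado.web.authenticated
    @tornado.web.asynchronous
    def get(self):
        self.facebook_request("/me/home", self.print_callback,
                              access_token=self.current_user["access_token"])

    def print_callback(self, data):
        # self is required because facebook_request invokes the callback
        # as a bound method; that is where "2 given" came from.
        print data
        self.finish()  # close the request; @asynchronous disables auto-finish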