Connecting to SNS in Boto and Python - python-2.7

I am new to boto and Python and am trying to connect to SNS. Here is my sample code:
import boto
sns = boto.connect_sns(aws_access_key_id="my access", aws_secret_access_key="mysecret", region_name='us-east-1')
I am getting this error:
Traceback (most recent call last):
File "sns.py", line 5, in <module>
sns=boto.connect_sns(aws_access_key_id="XXXXXXXX",aws_secret_access_key="XXXXXXX",region_name='us-east-1')
AttributeError: 'module' object has no attribute 'connect_sns'
Any help in this regard is greatly appreciated.

These days, it is recommended that you use boto3 rather than boto (v2). Here's some sample code:
import boto3
client = boto3.client('sns', region_name='ap-southeast-2')
response = client.list_topics()
See: boto3 documentation
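If you then want to publish to a topic, the same client exposes publish(). Here is a minimal sketch; the topic ARN below is a placeholder you would replace with one returned by list_topics() or create_topic():
import boto3

# SNS client in the region where the topic lives
client = boto3.client('sns', region_name='us-east-1')

# Publish a message to an existing topic (placeholder ARN)
response = client.publish(
    TopicArn='arn:aws:sns:us-east-1:123456789012:my-topic',
    Message='Hello from boto3',
)
print(response['MessageId'])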

Related

Error when sending message from lambda to DLQ

I am following this article in order to send messages from a Lambda to a DLQ:
Using dead-letter queues in Amazon SQS — Boto3 documentation
The code is as follows:
from datetime import datetime
import json
import os
import boto3
from botocore.vendored import requests
QUEUE_NAME = os.environ['QUEUE_NAME']
MAX_QUEUE_MESSAGES = os.environ['MAX_QUEUE_MESSAGES']
dead_letter_queue_arn = os.environ['DEAD_LETTER_QUEUE_ARN']
sqs = boto3.resource('sqs')
queue_url = os.environ['SQS_QUEUE_URL']
redrive_policy = {
    'deadLetterTargetArn': dead_letter_queue_arn,
    'maxReceiveCount': '10'
}

def lambda_handler(event, context):
    # Receive messages from SQS queue
    queue = sqs.get_queue_by_name(QueueName=QUEUE_NAME)
    response = requests.post("http://httpbin.org/status/500", timeout=10)
    if response.status_code == 500:
        sqs.set_queue_attributes(QueueUrl=queue_url,
                                 Attributes={
                                     'RedrivePolicy': json.dumps(redrive_policy)
                                 })
I am doing it this way because I need to implement exponential backoff, but I cannot even send to the DLQ because of this error:
[ERROR] AttributeError: 'sqs.ServiceResource' object has no attribute 'set_queue_attributes'
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 24, in lambda_handler
    sqs.set_queue_attributes(QueueUrl=queue_url,
According to the set_queue_attributes() documentation, the object has the attribute set_queue_attributes.
Well, in case anyone has the same problem: there is a difference between a client and a resource. I suppose the error contains the necessary information, but with AWS it was difficult for me to spot. According to this:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sqs.html#SQS.Client.set_queue_attributes
response = client.set_queue_attributes(
    QueueUrl='string',
    Attributes={
        'string': 'string'
    }
)
You should be using the client:
import boto3
client = boto3.client('sqs')
My mistake was that I already had something from boto related to SQS:
sqs = boto3.resource('sqs')
That is the reason for the error
[ERROR] AttributeError: 'sqs.ServiceResource' object has no attribute 'set_queue_attributes'
It happens because I needed to use a client instead of a resource for SQS.
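Putting it together, here is a minimal sketch of the corrected handler, reusing the environment variable names from the question and swapping the resource for a client:
import json
import os

import boto3

QUEUE_URL = os.environ['SQS_QUEUE_URL']
DEAD_LETTER_QUEUE_ARN = os.environ['DEAD_LETTER_QUEUE_ARN']

# The low-level client exposes set_queue_attributes()
sqs_client = boto3.client('sqs')

redrive_policy = {
    'deadLetterTargetArn': DEAD_LETTER_QUEUE_ARN,
    'maxReceiveCount': '10'
}

def lambda_handler(event, context):
    # Attach the redrive policy to the source queue; messages that fail
    # more than maxReceiveCount times are moved to the DLQ by SQS
    sqs_client.set_queue_attributes(
        QueueUrl=QUEUE_URL,
        Attributes={'RedrivePolicy': json.dumps(redrive_policy)}
    )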

googleapiclient HttpError 403 require permission

I'm using the Google API client (googleapiclient.discovery) and compute.instances().list() to extract information about all instances in my GCP project, but I'm getting a permission error.
I can run compute.instances().list() fine locally using Application Default Credentials. But the default credentials don't work when I deploy the app to the server, and it raises this exception:
raise exceptions.DefaultCredentialsError(_HELP_MESSAGE)
2019-07-17T00:55:56.682326031Z google.auth.exceptions.DefaultCredentialsError: Could not automatically determine credentials.
So I need to set the credentials explicitly. I downloaded a service account key as "credential.json" and tried to set GOOGLE_APPLICATION_CREDENTIALS as follows:
import googleapiclient.discovery
import os
os.environ["GOOGLE_APPLICATION_CREDENTIALS"]= "credential.json"
compute = googleapiclient.discovery.build('compute', 'v1')
compute.instances().list(project="projectname", zone = "us-west1-a").execute()
But I'm getting this permission error:
Traceback (most recent call last):
File "lst_temp.py", line 6, in <module>
compute.instances().list(project="project-name", zone = "us-west1-a").execute()
File "/usr/local/lib/python3.7/site-packages/googleapiclient/_helpers.py", line 130, in positional_wrapper
return wrapped(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/googleapiclient/http.py", line 851, in execute
raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 403 when requesting https://www.googleapis.com/compute/v1/projects/project-name/zones/us-west1-a/instances?alt=json returned "Required 'compute.instances.list' permission for 'projects/projectname'">
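No answer is quoted here, but one common workaround is to load the service account key explicitly and pass the credentials to build() instead of relying on the environment variable. A minimal sketch, assuming the key file path, project, and zone are placeholders and that the service account has been granted a role containing compute.instances.list (for example Compute Viewer):
import googleapiclient.discovery
from google.oauth2 import service_account

# Load the downloaded service account key explicitly (placeholder path)
credentials = service_account.Credentials.from_service_account_file(
    'credential.json',
    scopes=['https://www.googleapis.com/auth/compute.readonly'],
)

# Pass the credentials to the discovery client instead of relying on
# GOOGLE_APPLICATION_CREDENTIALS
compute = googleapiclient.discovery.build('compute', 'v1', credentials=credentials)

# List instances in one zone of the project (placeholder names)
result = compute.instances().list(project='projectname', zone='us-west1-a').execute()
for instance in result.get('items', []):
    print(instance['name'])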

'module' object has no attribute 'DataReader'

import pandas as pd
import pandas.io.data as web # as we have to use only pandas function
#Second, retrieve the data from, say, Google itself:
stock = web.DataReader('IBM',data_source='yahoo',start='01/01/2011', end='01/01/2013')
# end of question 1
print type(stock) # Class Type is pandas.core.frame.DataFrame
IBM_dataframe = pd.DataFrame(stock)
Traceback (most recent call last):
File "", line 2, in
import pandas.io.data as web # as we have to use only pandas function
File "C:\Anaconda2\lib\site-packages\pandas\io\data.py", line 2, in
"The pandas.io.data module is moved to a separate package "
ImportError: The pandas.io.data module is moved to a separate package (pandas-datareader). After installing the pandas-datareader package (https://github.com/pydata/pandas-datareader), you can change the import from pandas.io import data, wb to from pandas_datareader import data, wb.
import pandas_datareader as web
stock = web.DataReader('IBM',data_source='yahoo',start='01/01/2011', end='01/01/2013')
Traceback (most recent call last):
File "", line 1, in
stock = web.DataReader('IBM',data_source='yahoo',start='01/01/2011', end='01/01/2013')
AttributeError: 'module' object has no attribute 'DataReader'
I changed the import from import pandas.io.data as web to import pandas_datareader as web, but now I am not able to get the data. Please suggest a fix; I am getting the error:
'module' object has no attribute 'DataReader'
Use the following:
from pandas_datareader import data, wb
DAX = data.DataReader(name='^GDAXI', data_source='yahoo',start='2000-1-1')
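Alternatively, to keep the web alias from the question, you can import the data submodule directly; judging by the AttributeError, the installed version does not expose DataReader at the package top level, but it does live in pandas_datareader.data:
import pandas_datareader.data as web

# Same call as in the question, now resolving DataReader from the data submodule
stock = web.DataReader('IBM', data_source='yahoo', start='01/01/2011', end='01/01/2013')
print(stock.head())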

AWS boto error message ImportError: cannot import name key

>>> import boto
>>> c = boto.connect_s3()
>>> b=c.get_bucket('beaubeaubeau')
>>> from boto.s3.key import key
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name key
I am new to boto. Any idea why I am getting this error?
I think you want:
from boto.s3.key import Key
boto.s3.key is a Python module and Key is a class contained in that module.
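For example, continuing the session from the question (bucket and key names are just placeholders), the capitalized import is used like this in classic boto (v2):
import boto
from boto.s3.key import Key

# Connect and fetch the bucket from the question
c = boto.connect_s3()
b = c.get_bucket('beaubeaubeau')

# Create a Key, name it, and upload a small string
k = Key(b)
k.key = 'hello.txt'
k.set_contents_from_string('Hello from boto')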

google admin sdk directory api 403 python

I want to use the Admin SDK Directory API to create email accounts for users.
I am using the google-api-python-client-1.2 library.
The sample in /samples/service_account/tasks.py works for me,
but when I change that file to list users from the Admin Directory API it doesn't work and throws errors.
Below is the code I am using.
import httplib2
import pprint
import sys
import inspect
from apiclient.discovery import build
from oauth2client.client import SignedJwtAssertionCredentials
def main(argv):
    f = file('my-privatekey.p12', 'rb')
    key = f.read()
    f.close()

    credentials = SignedJwtAssertionCredentials(
        'my@developer.gserviceaccount.com',
        key,
        scope=['https://www.googleapis.com/auth/admin.directory.user',
               'https://www.googleapis.com/auth/admin.directory.user.readonly'])

    http = httplib2.Http()
    http = credentials.authorize(http)

    service = build("admin", "directory_v1", http)

    list_of_apis = service.users().list(domain='mydomain.com').execute(http=http)
    pprint.pprint(list_of_apis)

if __name__ == '__main__':
    main(sys.argv)
When I run the above code I get the errors below.
$python tasks.py
No handlers could be found for logger "oauth2client.util"
Traceback (most recent call last):
File "tasks.py", line 77, in <module>
main(sys.argv)
File "tasks.py", line 66, in main
list_of_apis = service.users().list(domain='messycoders.com').execute(http=http)
File "/usr/local/lib/python2.7/dist-packages/oauth2client/util.py", line 132, in positional_wrapper
return wrapped(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/apiclient/http.py", line 723, in execute
raise HttpError(resp, content, uri=self.uri)
apiclient.errors.HttpError: <HttpError 403 when requesting https://www.googleapis.com/admin/directory/v1/users?domain=messycoders.com&alt=json returned "Not Authorized to access this resource/api">
Try:
credentials = SignedJwtAssertionCredentials(
    'my@developer.gserviceaccount.com',
    key,
    sub='superadmin@mydomain.com',
    scope=['https://www.googleapis.com/auth/admin.directory.user'])
You don't need both scopes: use the readonly scope if you're only doing read operations, and use the one above if you're doing reads and writes.
sub= defines which Google Apps account the service account should impersonate when performing the directory operations; it's necessary, and that account needs to have the right permissions.
Lastly, be sure that you've granted the service account's client_id access to the directory scopes you need in the Control Panel. The steps to do this are listed in the Drive documentation; just substitute the correct scope(s) for the Admin Directory.
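Putting the answer together with the question's code, a rough sketch of the adjusted flow (the service account email, key file, superadmin address, and domain are placeholders taken from the question):
import httplib2
import pprint

from apiclient.discovery import build
from oauth2client.client import SignedJwtAssertionCredentials

# Read the service account's private key
with open('my-privatekey.p12', 'rb') as f:
    key = f.read()

# Impersonate a super admin of the domain via sub=; one scope is enough here
credentials = SignedJwtAssertionCredentials(
    'my@developer.gserviceaccount.com',
    key,
    sub='superadmin@mydomain.com',
    scope=['https://www.googleapis.com/auth/admin.directory.user'])

http = credentials.authorize(httplib2.Http())
service = build('admin', 'directory_v1', http=http)

# List users in the domain
users = service.users().list(domain='mydomain.com').execute()
pprint.pprint(users)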