Where to look up what exceptions a boto3 function can throw? - amazon-web-services

I’m reading AWS Python docs such as SNS Client Publish() but can’t find the details of what exceptions a function can throw.
E.g., publish() can throw EndpointDisabledException but I can’t find this documented.
Where can I look up the list of exceptions a boto3 function can throw (for Python)?

This is how to handle such exceptions:
import json
import logging

import boto3
from botocore.exceptions import ClientError

try:
    response = platform_endpoint.publish(
        Message=json.dumps(message, ensure_ascii=False),
        MessageStructure='json')
    logging.info("r = %s" % response)
except ClientError as e:
    if e.response['Error']['Code'] == 'EndpointDisabled':
        logging.info('EndpointDisabledException thrown')

Almost all exceptions are subclassed from BotoCoreError. I am not able to find a method to list all exceptions. Look at the botocore exceptions file to get a list of possible exceptions. I can't find EndpointDisabledException there. Are you using the latest version?
See: Botocore Exceptions
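If you just want to see what botocore itself defines, one rough option is to introspect the module (a minimal sketch; service-specific errors such as EndpointDisabledException are generated at runtime and will not appear here):
import inspect

import botocore.exceptions

# Print every statically defined exception class in botocore.exceptions
for name, obj in inspect.getmembers(botocore.exceptions):
    if inspect.isclass(obj) and issubclass(obj, Exception):
        print(name)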

You can find the list of exceptions for publish(**kwargs) at the bottom of the publish(**kwargs) section of the SNS client documentation.
Every exception is linked to its documentation (a short catch example follows the list):
SNS.Client.exceptions.InvalidParameterException
SNS.Client.exceptions.InvalidParameterValueException
SNS.Client.exceptions.InternalErrorException
SNS.Client.exceptions.NotFoundException
SNS.Client.exceptions.EndpointDisabledException
SNS.Client.exceptions.PlatformApplicationDisabledException
SNS.Client.exceptions.AuthorizationErrorException
SNS.Client.exceptions.KMSDisabledException
SNS.Client.exceptions.KMSInvalidStateException
SNS.Client.exceptions.KMSNotFoundException
SNS.Client.exceptions.KMSOptInRequired
SNS.Client.exceptions.KMSThrottlingException
SNS.Client.exceptions.KMSAccessDeniedException
SNS.Client.exceptions.InvalidSecurityException
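For example, you can catch these through the client's exceptions attribute (a minimal sketch; the endpoint ARN is a placeholder):
import boto3

sns = boto3.client('sns')
try:
    sns.publish(TargetArn='arn:aws:sns:...', Message='hello')  # placeholder endpoint ARN
except sns.exceptions.EndpointDisabledException as e:
    print(e)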

Use the client and then find the exception on it.
For example, if we are dealing with Cognito:
client = boto3.client(
    'cognito-idp', ....)
try:
    ...  # some code
except client.exceptions.UsernameExistsException as ex:
    print(ex)

The following code generates an exhaustive list of the exceptions that each supported service's boto3 client can throw.
#!/usr/bin/env python3
import boto3

with open("README.md", "w") as f:
    f.write("# boto3 client exceptions\n\n")
    f.write(
        "This is a generated list of all exceptions for each client within the boto3 library\n"
    )
    f.write("""
```python
import boto3

s3 = boto3.client("s3")
try:
    s3.create_bucket(Bucket="example")
except s3.exceptions.BucketAlreadyOwnedByYou:
    raise
```\n\n""")
    services = boto3.session.Session().get_available_services()
    for service in services:
        f.write(f"- [{service}](#{service})\n")
    for service in services:
        f.write(f"### {service}\n")
        for _, value in boto3.client(
            service, region_name="us-east-1"
        ).exceptions.__dict__.items():
            for exception in value:
                f.write(f"- {exception}\n")
        f.write("\n")
Example output for S3 boto3 client:
BucketAlreadyExists
BucketAlreadyOwnedByYou
InvalidObjectState
NoSuchBucket
NoSuchKey
NoSuchUpload
ObjectAlreadyInActiveTierError
ObjectNotInActiveTierError
Reference: https://github.com/jbpratt/boto3-client-exceptions/blob/master/generate

Related

How can I combine methods from the EC2 resource and client API?

I'm trying to take in input for stopping and starting instances, but if I use client, it comes up with the error:
'EC2' has no attribute 'instance'
and if I use resource, it says
'ec2.Serviceresource' has no attribute 'Instance'
Is it possible to use both?
#!/usr/bin/env python3
import boto3
import botocore
import sys

print('Enter Instance id: ')
instanceIdIn = input()
ec2 = boto3.resource('ec2')
ec2.Instance(instanceIdIn).stop()
stopwait = ec2.get_waiter('instance_stopped')
try:
    stopwait.wait(instanceIdIn)
    print('Instance Stopped. Starting Instance again.')
except botocore.exceptions.waitError as wex:
    logger.error('instance not stopped')

ec2.Instance(instanceIdIn).start()
try:
    logger.info('waiting for running state...')
    print('Instance Running.')
except botocore.exceptions.waitError as wex2:
    logger.error('instance has not been stopped')
In boto3 there are two kinds of APIs for most services: a client API that maps directly to the low-level service calls, and a resource-based API that is an abstraction on top of those calls.
You can't directly mix and match calls between the two. Instead, create a separate instance for each, like this:
import boto3
ec2_resource = boto3.resource("ec2")
ec2_client = boto3.client("ec2")
# Now you can call the client methods on the client
# and resource classes from the resource:
my_instance = ec2_resource.Instance("instance-id")
my_waiter = ec2_client.get_waiter("instance_stopped")
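A quick sketch of using both together, roughly matching what the question is trying to do (the instance ID is a placeholder):
import boto3

ec2_resource = boto3.resource("ec2")
ec2_client = boto3.client("ec2")

instance_id = "i-0123456789abcdef0"  # placeholder

# Stop via the resource, then wait via the client's waiter
ec2_resource.Instance(instance_id).stop()
ec2_client.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2_resource.Instance(instance_id).start()
ec2_client.get_waiter("instance_running").wait(InstanceIds=[instance_id])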

How to access response from boto3 bucket.put_object?

Looking at the boto3 docs, I see that client.put_object has a response shown, but I don't see a way to get the response from bucket.put_object.
Sample snippet:
s3 = boto3.resource(
    's3',
    aws_access_key_id=redacted,
    aws_secret_access_key=redacted,
)
s3.Bucket(bucketName).put_object(Key="bucket-path/" + fileName, Body=blob, ContentMD5=md5Checksum)
logging.info("Uploaded to S3 successfully")
How is this accomplished?
put_object returns S3.Object, which in turn has the wait_until_exists method.
Therefore, something along these lines should be sufficient (my verification code is below):
import boto3

s3 = boto3.resource('s3')

with open('test.img', 'rb') as f:
    obj = s3.Bucket('test-ssss4444').put_object(
        Key='fileName',
        Body=f)
    obj.wait_until_exists()  # optional

print("Uploaded to S3 successfully")
put_object is a blocking operation, so it will block your program until your file is uploaded; wait_until_exists is therefore not strictly needed. But if you want to make sure that the upload actually went through and the object is in S3, you can use it.
You have to use boto3.client instead of boto3.resource to get response information such as the ETag. It has a slightly different syntax.
import boto3

s3 = boto3.client('s3')
response = s3.put_object(Bucket='bucket-name', Key='fileName', Body=body)
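The returned dict then carries the response fields you are after, for example (a minimal sketch):
print(response['ETag'])
print(response['ResponseMetadata']['HTTPStatusCode'])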

How to troubleshoot and solve lambda function issue?

import sys
import botocore
import boto3
from botocore.exceptions import ClientError

def lambda_handler(event, context):
    # TODO implement
    rds = boto3.client('rds')
    lambdaFunc = boto3.client('lambda')
    print 'Trying to get Environment variable'
    try:
        funcResponse = lambdaFunc.get_function_configuration(
            FunctionName='RDSInstanceStart'
        )
        #print (funcResponse)
        DBinstance = funcResponse['Environment']['Variables']['DBInstanceName']
        print 'Starting RDS service for DBInstance : ' + DBinstance
    except ClientError as e:
        print(e)
    try:
        response = rds.start_db_instance(
            DBInstanceIdentifier=DBinstance
        )
        print 'Success :: '
        return response
    except ClientError as e:
        print(e)
    return {
        'message': "Script execution completed. See Cloudwatch logs for complete output"
    }
I have a running RDS instance (db.t2.micro, MSSQL Server) that I start and stop every day using a Lambda function. It was working fine previously, but today the instance unexpectedly was not started by the Lambda function.
I checked the error log, but it looks like the usual daily log and shows no obvious issue. I am unable to troubleshoot and solve this. Can anyone tell me what is going on?
FYI, a shortened version would be:
import os

import boto3

def lambda_handler(event, context):
    rds_client = boto3.client('rds')
    response = rds_client.start_db_instance(DBInstanceIdentifier=os.environ['DBInstanceName'])
    print(response)
You can see the logs of each Lambda invocation in CloudWatch, or via AWS Lambda -> Monitoring -> View logs in CloudWatch. This opens a page with the logs of each Lambda call.
If there are no logs, it means the Lambda is not being invoked at all.
In that case, check that the roles and policies assigned to the Lambda are correct. (A small sketch for checking recent invocations programmatically follows.)
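A minimal sketch for checking whether the function produced any recent log events with boto3 (the log group name is an assumption based on the function name used above):
import time

import boto3

logs = boto3.client('logs')
# Lambda log groups are named /aws/lambda/<function-name>
events = logs.filter_log_events(
    logGroupName='/aws/lambda/RDSInstanceStart',      # assumed function name
    startTime=int((time.time() - 24 * 3600) * 1000),  # last 24 hours, in milliseconds
)
print(len(events['events']))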
You should print the response of the API you use to start the DB (e.g. start-db-instance). The response will then appear in the CloudWatch logs.
https://docs.aws.amazon.com/cli/latest/reference/rds/start-db-instance.html
For later automation you might want to create a metric filter on the Lambda's CloudWatch Logs for a certain keyword, such as:
"\"DBInstanceStatus\": \"starting\""
Create an Alarm on that metric as well, with, say, a threshold of < 1. If the keyword is not found in a log, the metric pushes no value (you can customize this under the Advanced options), the Alarm goes into INSUFFICIENT_DATA, and you can set up a notification for INSUFFICIENT_DATA using SNS.
You can also tweak the Alarm to treat missing data as bad, so that it transitions to the ALARM state when the metric filter does not match the incoming logs.
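A rough sketch of creating such a metric filter with boto3 (the log group, filter name, metric names, and pattern here are assumptions, not taken from the question):
import boto3

logs = boto3.client('logs')
logs.put_metric_filter(
    logGroupName='/aws/lambda/RDSInstanceStart',    # assumed log group
    filterName='rds-starting',                      # hypothetical name
    filterPattern='"DBInstanceStatus" "starting"',  # simple term match; adjust as needed
    metricTransformations=[{
        'metricName': 'RdsStarting',                # hypothetical metric
        'metricNamespace': 'Custom/RDS',
        'metricValue': '1',
    }],
)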

Python Lambda function to capture AWS cloud watch logs of AWS MQ and send to kinesis

I got a Python script from "put_records() only accepts keyword arguments in Kinesis boto3 Python API" which loads JSON files to a Kinesis stream.
My architecture is something like this:
In the AWS console, I have created a Lambda function with the above code added to it.
How can I tell my Lambda function to wake up every minute? Should I capture the messages via CloudWatch Events? If so, how?
I got this solution from the link below.
Python script:
import time

import boto3
import stomp

kinesis_client = boto3.client('kinesis')

class Listener(stomp.ConnectionListener):
    def on_error(self, headers, message):
        print('received an error "%s"' % message)

    def on_message(self, headers, message):
        print('received a message "%s"' % message)
        kinesis_client.put_record(
            StreamName='inter-lambda',
            Data=u'{}\r\n'.format(message).encode('utf-8'),
            PartitionKey='0'
        )

def handler(event, context):
    conn = stomp.Connection(host_and_ports=[('localhost', 61616)])
    conn.set_listener('', Listener(conn))
    conn.start()
    conn.connect(login='user', passcode='pass')
    conn.subscribe(destination='A.B.C.D', ack='auto')
    print('Waiting for messages...')
    time.sleep(10)
    conn.close()
    return ''
https://github.com/aws-samples/amazonmq-invoke-aws-lambda
You can schedule Lambda functions to run using CloudWatch Events.
Another alternative may be to subscribe to the log events and deliver them to Lambda.
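A minimal sketch of creating a one-minute schedule rule with boto3 and pointing it at the function (the rule name and function ARN are placeholders; the Lambda also needs a resource-based permission allowing events.amazonaws.com to invoke it):
import boto3

events = boto3.client('events')

# Create (or update) a rule that fires every minute
events.put_rule(
    Name='invoke-mq-poller-every-minute',  # hypothetical rule name
    ScheduleExpression='rate(1 minute)',
)

# Point the rule at the Lambda function
events.put_targets(
    Rule='invoke-mq-poller-every-minute',
    Targets=[{
        'Id': '1',
        'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:mq-poller',  # placeholder ARN
    }],
)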

aws boto3 client Stubber help stubbing unit tests

I'm trying to write some unit tests for aws RDS. Currently, the start stop rds api calls have not yet been implemented in moto. I tried just mocking out boto3 but ran into all sorts of weird issues. I did some googling and found http://botocore.readthedocs.io/en/latest/reference/stubber.html
So I have tried to implement the example for rds but the code appears to be behaving like the normal client, even though I have stubbed it. Not sure what's going on or if I am stubbing correctly?
import boto3
from botocore.stub import Stubber

from LambdaRdsStartStop.lambda_function import lambda_handler
from LambdaRdsStartStop.lambda_function import AWS_REGION

def tests_turn_db_on_when_cw_event_matches_tag_value(self, mock_boto):
    client = boto3.client('rds', AWS_REGION)
    stubber = Stubber(client)
    response = {u'DBInstances': [some copy pasted real data here], extra_info_about_call: extra_info}
    stubber.add_response('describe_db_instances', response, {})
    with stubber:
        r = client.describe_db_instances()
        lambda_handler({u'AutoStart': u'10:00:00+10:00/mon'}, 'context')
So the mocking WORKS for the first line inside the stubber, and the value of r is returned as my stubbed data. But when execution goes into my lambda_handler method inside lambda_function.py, the client behaves like a normal unstubbed client:
lambda_function.py
def lambda_handler(event, context):
    rds_client = boto3.client('rds', region_name=AWS_REGION)
    rds_instances = rds_client.describe_db_instances()
error output:
File "D:\dev\projects\virtual_envs\rds_sloth\lib\site-packages\botocore\auth.py", line 340, in add_auth
raise NoCredentialsError
NoCredentialsError: Unable to locate credentials
You will need to patch boto3 where it is called in the routine that you will be testing. Also Stubber responses appear to be consumed on each call and thus will require another add_response for each stubbed call as below:
from unittest import mock

import boto3
from botocore.stub import Stubber

from LambdaRdsStartStop.lambda_function import lambda_handler
from LambdaRdsStartStop.lambda_function import AWS_REGION

def tests_turn_db_on_when_cw_event_matches_tag_value(self, mock_boto):
    client = boto3.client('rds', AWS_REGION)
    stubber = Stubber(client)
    # response data below should match the aws documentation, otherwise botocore's error handling raises more errors
    response = {u'DBInstances': [{'DBInstanceIdentifier': 'rds_response1'}, {'DBInstanceIdentifier': 'rds_response2'}]}
    stubber.add_response('describe_db_instances', response, {})
    stubber.add_response('describe_db_instances', response, {})
    # patch boto3 in the module under test so lambda_handler gets the stubbed client
    with mock.patch('LambdaRdsStartStop.lambda_function.boto3') as mock_boto3:
        with stubber:
            r = client.describe_db_instances()  # first add_response consumed here
            mock_boto3.client.return_value = client
            response = lambda_handler({u'AutoStart': u'10:00:00+10:00/mon'}, 'context')  # second add_response consumed here
            # assert.equal(r, response)