How to create a start-instance Lambda function in AWS using Python? - amazon-web-services

I am trying to create a Lambda function in AWS to start an instance automatically. This is the function:
import boto3

region = 'us-east-1'
instances = ['i-xxx']
ec2 = boto3.client('ec2', region_name=region)

def lambda_handler(event, context):
    ec2.stop_instances(InstanceIds=instances)
    print('stopped your instances: ' + str(instances))
After Save and Test, I got this error:
Response:
{
  "errorMessage": "2019-09-15T09:54:06.364Z 372c2df4-1303-4326-b882-a04154007881 Task timed out after 3.00 seconds"
}
Request ID:
"372c2df4-1303-4326-b882-a04154007881"
Function Logs:
START RequestId: 372c2df4-1303-4326-b882-a04154007881 Version: $LATEST
END RequestId: 372c2df4-1303-4326-b882-a04154007881
REPORT RequestId: 372c2df4-1303-4326-b882-a04154007881 Duration: 3003.17 ms Billed Duration: 3000 ms Memory Size: 128 MB Max Memory Used: 81 MB Init Duration: 115.73 ms
XRAY TraceId: 1-5d7e0a3b-79a0391249fcda644105b8ba SegmentId: 0eefbaed756a35c4 Sampled: false
2019-09-15T09:54:06.364Z 372c2df4-1303-4326-b882-a04154007881 Task timed out after 3.00 seconds

Check whether you have set the AWS Lambda timeout to an appropriate value: it appears to be at the default, which is 3 seconds, and that is not going to be sufficient for your function.
Timeout – The amount of time that Lambda allows a function to run before stopping it. The default is 3 seconds. The maximum allowed value is 900 seconds. (AWS docs)
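If you prefer to raise the timeout programmatically rather than in the console, here is a minimal sketch using boto3 (the function name below is a placeholder):

import boto3

# Placeholder function name; raise the timeout from the 3-second default to 60 seconds.
lambda_client = boto3.client('lambda', region_name='us-east-1')
lambda_client.update_function_configuration(
    FunctionName='start-stop-instances',
    Timeout=60,
)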
To start and stop the instance (from the boto3 docs):
import sys
import boto3
from botocore.exceptions import ClientError

instance_id = sys.argv[2]
action = sys.argv[1].upper()

ec2 = boto3.client('ec2')

if action == 'ON':
    # Do a dryrun first to verify permissions
    try:
        ec2.start_instances(InstanceIds=[instance_id], DryRun=True)
    except ClientError as e:
        if 'DryRunOperation' not in str(e):
            raise

    # Dry run succeeded, run start_instances without dryrun
    try:
        response = ec2.start_instances(InstanceIds=[instance_id], DryRun=False)
        print(response)
    except ClientError as e:
        print(e)
else:
    # Do a dryrun first to verify permissions
    try:
        ec2.stop_instances(InstanceIds=[instance_id], DryRun=True)
    except ClientError as e:
        if 'DryRunOperation' not in str(e):
            raise

    # Dry run succeeded, call stop_instances without dryrun
    try:
        response = ec2.stop_instances(InstanceIds=[instance_id], DryRun=False)
        print(response)
    except ClientError as e:
        print(e)
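Note that the snippet above is a standalone script driven by sys.argv; inside Lambda you would supply the instance IDs yourself. A minimal sketch of the start-instances handler, adapted from the code in the question (region and instance IDs are placeholders):

import boto3

region = 'us-east-1'   # placeholder region
instances = ['i-xxx']  # placeholder instance IDs

ec2 = boto3.client('ec2', region_name=region)

def lambda_handler(event, context):
    # Same structure as the handler in the question, but starting instead of stopping.
    ec2.start_instances(InstanceIds=instances)
    print('started your instances: ' + str(instances))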

Related

AWS Lambda function Key Error with trigger on S3

I have a basic Lambda function trying to get the contents of the bucket, but I am getting errors:
import json
import urllib.parse
import boto3

print('Loading function')

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # print("Received event: " + json.dumps(event, indent=2))

    # Get the object from the event and show its content type
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        print("CONTENT TYPE: " + response['ContentType'])
        return response['ContentType']
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e
Here is the error message when I run the Lambda function.
Error message
{
  "errorMessage": "'Records'",
  "errorType": "KeyError",
  "requestId": "5c89bb8e-a70e-4c33-ba00-43174095544e",
  "stackTrace": [
    "  File \"/var/task/lambda_function.py\", line 13, in lambda_handler\n    bucket = event['Records'][0]['s3']['bucket']['name']\n"
  ]
}
Function Logs
START RequestId: 5c89bb8e-a70e-4c33-ba00-43174095544e Version: $LATEST
[ERROR] KeyError: 'Records'
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 13, in lambda_handler
    bucket = event['Records'][0]['s3']['bucket']['name']
END RequestId: 5c89bb8e-a70e-4c33-ba00-43174095544e
REPORT RequestId: 5c89bb8e-a70e-4c33-ba00-43174095544e Duration: 1.89 ms Billed Duration: 2 ms Memory Size: 128 MB Max Memory Used: 69 MB Init Duration: 356.28 ms
The problem is that
bucket = event['Records'][0]['s3']['bucket']['name']
doesn't exist in the event you are passing in. That key is only present when the function is actually triggered from S3. If you want to test in the console, you need to pass a similarly shaped object as the test event.
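For example, a minimal test event with the shape the handler expects might look like the following sketch (bucket name and key are placeholders); you can paste the dictionary as a JSON test event in the console or call the handler with it directly:

# Hypothetical minimal S3 put-style event; bucket and key are placeholders.
test_event = {
    "Records": [
        {
            "s3": {
                "bucket": {"name": "my-test-bucket"},
                "object": {"key": "path/to/object.txt"}
            }
        }
    ]
}

# Invoke the handler locally with the fake event (the context argument is unused here).
print(lambda_handler(test_event, None))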

Lambda call S3 get public access block using boto3

I'm trying to verify through a Lambda function whether the public access block of my bucket mypublicbucketname is checked or not. For testing, I created a bucket and unchecked the public access block. So, I wrote this Lambda:
import sys
from pip._internal import main

main(['install', '-I', '-q', 'boto3', '--target', '/tmp/', '--no-cache-dir', '--disable-pip-version-check'])
sys.path.insert(0, '/tmp/')

import json
import boto3
import botocore

def lambda_handler(event, context):
    # TODO implement
    print(boto3.__version__)
    print(botocore.__version__)
    client = boto3.client('s3')
    response = client.get_public_access_block(Bucket='mypublicbucketname')
    print("response:>>", response)
I updated to the latest versions of boto3 and botocore:
1.16.40 #for boto3
1.19.40 #for botocore
Even though I uploaded them and the function seems correct, I got this exception:
[ERROR] ClientError: An error occurred (NoSuchPublicAccessBlockConfiguration) when calling the GetPublicAccessBlock operation: The public access block configuration was not found
Can someone explain why I get this error?
For future users: if you run into the same problem with get_public_access_block(), note that the NoSuchPublicAccessBlockConfiguration error means that no public access block configuration exists on the bucket, so catch it like this:
try:
    response = client.get_public_access_block(Bucket='mypublicbucketname')
except botocore.exceptions.ClientError as e:
    if e.response['Error']['Code'] == 'NoSuchPublicAccessBlockConfiguration':
        print('No Public Access')
    else:
        print("unexpected error: %s" % (e.response))
For put_public_access_block, it works fine.
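For reference, a minimal sketch of setting the block explicitly, so that a later get_public_access_block call has a configuration to return (the bucket name is a placeholder and the four flags can be adjusted as needed):

# Sketch: explicitly create a public access block configuration on the bucket.
client.put_public_access_block(
    Bucket='mypublicbucketname',
    PublicAccessBlockConfiguration={
        'BlockPublicAcls': True,
        'IgnorePublicAcls': True,
        'BlockPublicPolicy': True,
        'RestrictPublicBuckets': True,
    },
)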

Lambda function fails with S3 KeyError in Amazon Web Services

I have a Lambda function that moves files from one S3 bucket to another:
import json
import boto3
from datetime import datetime, timedelta

def lambda_handler(event, context):
    # TODO implement
    SOURCE_BUCKET = 'source-bucket'
    DESTINATION_BUCKET = 'destination-bucket'

    s3_client = boto3.client('s3')

    # Create a reusable Paginator
    paginator = s3_client.get_paginator('list_objects_v2')

    # Create a PageIterator from the Paginator
    page_iterator = paginator.paginate(Bucket=SOURCE_BUCKET)

    # Loop through each object, looking for ones older than a given time period
    for page in page_iterator:
        for object in page['Contents']:
            if object['LastModified'] < datetime.now().astimezone() - timedelta(hours=1):  # <-- Change time period here
                print(f"Moving {object['Key']}")

                # Copy object
                s3_client.copy_object(
                    ACL='bucket-owner-full-control',
                    Bucket=DESTINATION_BUCKET,
                    Key=object['Key'],
                    CopySource={'Bucket': SOURCE_BUCKET, 'Key': object['Key']}
                )

                # Delete original object
                s3_client.delete_object(Bucket=SOURCE_BUCKET, Key=object['Key'])
I am getting this error:
Response:
{
  "errorMessage": "'Contents'",
  "errorType": "KeyError",
  "stackTrace": [
    "  File \"/var/task/lambda_function.py\", line 21, in lambda_handler\n    for object in page['Contents']:\n"
  ]
}
Request ID:
"518e0f39-63e4-43df-842d-b73d56f83cd8"
Function Logs:
START RequestId: 518e0f39-63e4-43df-842d-b73d56f83cd8 Version: $LATEST
[ERROR] KeyError: 'Contents'
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 21, in lambda_handler
    for object in page['Contents']:
END RequestId: 518e0f39-63e4-43df-842d-b73d56f83cd8
REPORT RequestId: 518e0f39-63e4-43df-842d-b73d56f83cd8 Duration: 1611.00 ms Billed Duration: 1700 ms Memory Size: 128 MB Max Memory Used: 76 MB Init Duration: 248.12 ms
Can someone help here? It has moved all the files but is still giving me an error.
Your code assumes that the key Contents is always returned. If there are no objects in the bucket, that key will not exist.
Add a simple if "Contents" in page check to handle the case where it is missing.
So your function code might look like
import json
import boto3
from datetime import datetime, timedelta

def lambda_handler(event, context):
    # TODO implement
    SOURCE_BUCKET = 'source-bucket'
    DESTINATION_BUCKET = 'destination-bucket'

    s3_client = boto3.client('s3')

    # Create a reusable Paginator
    paginator = s3_client.get_paginator('list_objects_v2')

    # Create a PageIterator from the Paginator
    page_iterator = paginator.paginate(Bucket=SOURCE_BUCKET)

    # Loop through each object, looking for ones older than a given time period
    for page in page_iterator:
        if "Contents" in page:
            for object in page['Contents']:
                if object['LastModified'] < datetime.now().astimezone() - timedelta(hours=1):  # <-- Change time period here
                    print(f"Moving {object['Key']}")

                    # Copy object
                    s3_client.copy_object(
                        ACL='bucket-owner-full-control',
                        Bucket=DESTINATION_BUCKET,
                        Key=object['Key'],
                        CopySource={'Bucket': SOURCE_BUCKET, 'Key': object['Key']}
                    )

                    # Delete original object
                    s3_client.delete_object(Bucket=SOURCE_BUCKET, Key=object['Key'])
        else:
            print("No Contents key for page!")

Concurrent Celery tasks execute and store results but .get() is not working

I have written a Celery Task class like this:
myapp.tasks.py
from __future__ import absolute_import, unicode_literals
from .services.celery import app
from .services.command_service import CommandService
from exceptions.exceptions import *
from .models import Command

class CustomTask(app.Task):

    def run(self, json_string, method_name, cmd_id: int):
        command_obj = Command.objects.get(id=cmd_id)  # type: Command
        try:
            val = eval('CommandService.{}(json_string={})'.format(method_name, json_string))
            status, error = 200, None
        except Exception as e:
            auto_retry = command_obj.auto_retry
            if auto_retry and isinstance(e, CustomError):
                command_obj.retry_count += 1
                command_obj.save()
                return self.retry(countdown=CustomTask._backoff(command_obj.retry_count), exc=e)
            elif auto_retry and isinstance(e, AnotherCustomError) and command_obj.retry_count == 0:
                command_obj.retry_count += 1
                command_obj.save()
                print("RETRYING NOW FOR DEVICE CONNECTION ERROR. TRANSACTION: {} || IP: {}".format(
                    command_obj.transaction_id, command_obj.device_ip))
                return self.retry(countdown=command_obj.retry_count * 2, exc=e)
            val = None
            status, error = self._find_status_code(e)
        return_dict = {"error": error, "status_code": status, "result": val}
        return return_dict

    @staticmethod
    def _backoff(attempts):
        return 2 ** attempts

    @staticmethod
    def _find_status_code(exception):
        if isinstance(exception, APIException):
            detail = exception.default_detail if exception.detail is None else exception.detail
            return exception.status_code, detail
        return 500, CustomTask._get_generic_exc_msg(exception)

    @staticmethod
    def _get_generic_exc_msg(exc: Exception):
        s = ""
        try:
            for msg in exc.args:
                s += msg + ". "
        except Exception:
            s = str(exc)
        return s

CustomTask = app.register_task(CustomTask())
The Celery App definition:
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery, Task
from django.conf import settings

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings')

_celery_broker = settings.CELERY_BROKER  # my broker is amqp://username:password@localhost:5672/myhost

app = Celery('myapp', broker=_celery_broker, backend='rpc://', include=['myapp.tasks', 'myapp.controllers'])
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks(['myapp'])
app.conf.update(
    result_expires=4800,
    task_acks_late=True
)
my __init__.py, as the tutorial recommended:
from .celery import app as celery_app
__all__ = ['celery_app']
The controller that is running the task:
from __future__ import absolute_import, unicode_literals
from .services.log_service import LogRunner
from myapp.services.command_service import CommandService
from exceptions.exceptions import *
from myapp.services.celery import app
from myapp.services.tasks import MyTask
from .models import Command

class MyController:

    def my_method(self, json_string):
        # <non-async set up stuff here>
        cmd_obj = Command.objects.create(<stuff>)  # type: Command
        task_exec = MyTask.delay(json_string, MyController._method_name, cmd_obj.id)
        cmd_obj.task_id = task_exec
        try:
            return_dict = task_exec.get()
        except Exception as e:
            self._logger.error("ERROR: IP: {} and transaction: {}. Error Type: {}, "
                               "Celery Error: {}".format(ip_addr, transaction_id, type(e), e))
            status_code, error = self._find_status_code(e)
            return_dict = {"error": error, "status_code": status_code, "result": None}
        return return_dict
So here is my issue:
When I run this Django controller by hitting the view with one request at a time, one after the other, it works perfectly fine.
However, the external service I am hitting will throw an error for 2 concurrent requests (and that is expected - that is OK). Upon getting the error, I retry my task automatically.
Here is the weird part:
Upon retry, the .get() I have in my controller stops working for all concurrent requests. My controller just hangs there! And I know that Celery is actually executing the task! Here are the logs from the Celery run:
[2018-09-25 19:10:24,932: INFO/MainProcess] Received task: myapp.tasks.MyTask[bafd62b6-7e29-4c39-86ff-fe903d864c4f]
[2018-09-25 19:10:25,710: INFO/MainProcess] Received task: myapp.tasks.MyTask[8d3b4279-0b7e-48cf-b45d-0f1f89e213d4] <-- THIS WILL FAIL BUT THAT IS OK
[2018-09-25 19:10:25,794: ERROR/ForkPoolWorker-1] Could not connect to device with IP <some ip> at all. Retry Later plase
[2018-09-25 19:10:25,798: WARNING/ForkPoolWorker-1] RETRYING NOW FOR DEVICE CONNECTION ERROR. TRANSACTION: b_txn || IP: <some ip>
[2018-09-25 19:10:25,821: INFO/MainProcess] Received task: myapp.tasks.MyTask[8d3b4279-0b7e-48cf-b45d-0f1f89e213d4] ETA:[2018-09-25 19:10:27.799473+00:00]
[2018-09-25 19:10:25,823: INFO/ForkPoolWorker-1] Task myapp.tasks.MyTask[8d3b4279-0b7e-48cf-b45d-0f1f89e213d4] retry: Retry in 2s: AnotherCustomError('Could not connect to IP <some ip> at all.',)
[2018-09-25 19:10:27,400: INFO/ForkPoolWorker-2] executed command some command at IP <some ip>
[2018-09-25 19:10:27,418: INFO/ForkPoolWorker-2] Task myapp.tasks.MyTask[bafd62b6-7e29-4c39-86ff-fe903d864c4f] succeeded in 2.4829552830196917s: {'error': None, 'status_code': 200, 'result': True}
<some command output here from a successful run>  <-- belongs to task bafd62b6-7e29-4c39-86ff-fe903d864c4f
[2018-09-25 19:10:31,058: INFO/ForkPoolWorker-2] executed some command at IP <some ip>
[2018-09-25 19:10:31,059: INFO/ForkPoolWorker-2] Task command_runner.tasks.MyTask[8d3b4279-0b7e-48cf-b45d-0f1f89e213d4] succeeded in 2.404364461021032s: {'error': None, 'status_code': 200, 'result': True}
<some command output here from a successful run>  <-- belongs to task 8d3b4279-0b7e-48cf-b45d-0f1f89e213d4, which errored and retried itself
So as you can see, the task does run on Celery! It's just that the .get() I have in my controller is unable to pick the results back up - regardless of whether the task succeeded or errored.
The error I often get when running concurrent requests is: "Received 0x50 while expecting 0xce". What is that? What does it mean? Again, weirdly enough, all of this works when doing one request after another without Django handling multiple incoming requests, although I haven't been able to retry for single requests.
The RPC backend (which is what .get() is waiting on) is designed to fail if it is used more than once or after a Celery restart. From the Celery documentation:
a result can only be retrieved once, and only by the client that initiated the task. Two different processes can’t wait for the same result.
The messages are transient (non-persistent) by default, so the results will disappear if the broker restarts. You can configure the result backend to send persistent messages using the result_persistent setting.
So what appears to be happening is that the exception causes Celery to stop and break its RPC connection with the calling controller. Given your use case, it may make more sense to use a persistent result backend such as Redis or a database.
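A minimal sketch of what that change could look like in the app definition above (the Redis URL is a placeholder, and the redis extra, e.g. celery[redis], would need to be installed):

# Sketch: swap the transient rpc:// backend for a persistent Redis backend
# so results survive broker hiccups and can be fetched reliably.
app = Celery(
    'myapp',
    broker=_celery_broker,
    backend='redis://localhost:6379/0',  # placeholder Redis URL
    include=['myapp.tasks', 'myapp.controllers'],
)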

Get values for parameters from remote API in cloudformation

We have a remote API (not AWS) from which we can read values for parameters.
Can we read those values in CloudFormation and use them as parameter values?
Or is the only possible way to fetch the values with the AWS CLI and pass them as parameter values in a deploy command?
You can use a CloudFormation custom resource to call a Lambda function, parse the API output, send it back to CloudFormation, and retrieve it via !GetAtt.
CloudFormation:
Resources:
  API:
    Type: Custom::API
    Version: '1.0'
    Properties:
      ServiceToken: arn:aws:lambda:us-east-1:acc:function:CALL_API
Outputs:
  Status:
    Value:
      Fn::GetAtt:
        - API
        - Data
Lambda Script:
import json
import cfnresponse
import boto3
import urllib.request

def handler(event, context):
    responseData = {}
    try:
        with urllib.request.urlopen("http://maps.googleapis.com/maps/api/geocode/json?address=google") as url:
            data = json.loads(url.read().decode())
            print(data)
        responseData['Data'] = data
        status = cfnresponse.SUCCESS
    except Exception as e:
        # urlopen raises urllib.error.URLError/HTTPError rather than botocore's
        # ClientError, so catch broadly to make sure a response is always sent
        # back to CloudFormation (otherwise the stack would hang until timeout).
        responseData['Data'] = "FAILED"
        status = cfnresponse.FAILED
        print("Unexpected error: %s" % e)
    cfnresponse.send(event, context, status, responseData, "CustomResourcePhysicalID")
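One caveat worth noting: the Data dictionary sent back by the Lambda is what Fn::GetAtt reads attributes from, and flat string values are the easiest to consume in a template. A minimal sketch, assuming you only need one field from the API response (the 'status' key is a hypothetical field of the remote API's JSON):

# Sketch: expose a single flat attribute instead of the whole nested payload.
# 'status' is a hypothetical field of the remote API's response; the template
# could then read it with: !GetAtt API.Status
responseData['Status'] = str(data.get('status', 'UNKNOWN'))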