Get values for parameters from a remote API in CloudFormation

We have a remote API (not AWS) from which we can read values for parameters.
Can we read those values in CloudFormation and use them as parameter values?
Or is the only option to fetch the values ourselves, e.g. with the AWS CLI, and pass them as parameter values in a deploy command?

You can use a CloudFormation custom resource to call a Lambda function, parse the API output there, and send it back to CloudFormation, where you can read it via !GetAtt.
CloudFormation template:
Resources:
  API:
    Type: Custom::API
    Version: '1.0'
    Properties:
      ServiceToken: arn:aws:lambda:us-east-1:acc:function:CALL_API
Outputs:
  Status:
    Value:
      Fn::GetAtt:
        - API
        - Data
Lambda script:
import json
import urllib.request

import cfnresponse

def handler(event, context):
    responseData = {}
    try:
        # Call the remote API and parse its JSON response
        with urllib.request.urlopen("http://maps.googleapis.com/maps/api/geocode/json?address=google") as url:
            data = json.loads(url.read().decode())
        print(data)
        responseData['Data'] = data
        status = cfnresponse.SUCCESS
    except Exception as e:
        # urllib raises URLError/HTTPError, not botocore's ClientError,
        # so catch broadly and report the failure back to CloudFormation
        responseData['Data'] = "FAILED"
        status = cfnresponse.FAILED
        print("Unexpected error: %s" % e)
    # Always signal CloudFormation, or the stack will hang until it times out
    cfnresponse.send(event, context, status, responseData, "CustomResourcePhysicalID")
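Alternatively, if you already deploy from a script, you can fetch the value yourself and pass it in as an ordinary stack parameter instead of using a custom resource. A minimal sketch using boto3 (the endpoint URL, stack name, template path, and parameter name are all hypothetical):
import json
import urllib.request

import boto3

# Hypothetical endpoint; adjust the parsing to your API's response shape
value = json.loads(
    urllib.request.urlopen("https://example.com/config").read().decode()
)["MyValue"]

cfn = boto3.client("cloudformation")
with open("template.yaml") as f:
    template_body = f.read()

# Pass the fetched value as a normal CloudFormation parameter
cfn.create_stack(
    StackName="my-stack",
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "MyParam", "ParameterValue": value}],
)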

Related

Describe listener rule count using Boto3

I need to get the count of listener rules, but I get a null output without any error. The larger goal of my project is to send an email notification when listener rules are created on an Elastic Load Balancer.
import json
import boto3

def lambda_handler(event, context):
    client = boto3.client('elbv2')

response = client.describe_listeners(
    ListenerArns=[
        'arn:aws:elasticloadbalancing:ap-south-1:my_alb_listener',
    ],
)
print('response')
Here is the output of my code:
Response:
null
Response is null because your indentation is incorrect and you are not returning anything from your handler. It should be:
import json
import boto3

def lambda_handler(event, context):
    client = boto3.client('elbv2')
    response = client.describe_listeners(
        ListenerArns=[
            'arn:aws:elasticloadbalancing:ap-south-1:my_alb_listener',
        ],
    )
    return response
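Note that describe_listeners returns listener metadata, not the rules themselves. If the goal is really a rule count, describe_rules is the call for that; a minimal sketch, reusing the placeholder ARN from the question:
import boto3

def lambda_handler(event, context):
    client = boto3.client('elbv2')
    # describe_rules lists the rules attached to a single listener
    response = client.describe_rules(
        ListenerArn='arn:aws:elasticloadbalancing:ap-south-1:my_alb_listener'
    )
    return len(response['Rules'])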

Can't upload to S3 using boto3 and Flask

This is a basic thing and it seems obvious, but I am stuck on it.
I am using boto3, and I have an access key and secret key.
The upload file route:
@app.route('/up')
def up():
    main = request.files['mainimg']
    bucket = <bucketname>
    if main:
        upload_to_aws(main)
The upload_to_aws function (from GitHub):
import os

import boto3
from werkzeug.utils import secure_filename

def upload_to_aws(file, acl="public-read"):
    # Note: the sanitized filename is computed here, but file.filename
    # is what actually gets passed to the upload below
    filename = secure_filename(file.filename)
    s3 = boto3.client(
        's3',
        aws_access_key_id=os.environ.get('FASO_S3_ACCESS_KEY'),
        aws_secret_access_key=os.environ.get('FASO_S3_SECRET_KEY')
    )
    try:
        s3.upload_fileobj(
            file,
            "fasofashion",
            file.filename,
            ExtraArgs={
                "ACL": acl,
                "ContentType": file.content_type
            }
        )
        print('uploaded')
    except Exception as e:
        # This is a catch-all exception; edit this part to fit your needs.
        print("Something Happened: ", e)
        return e
I keep getting these errors:
Access denied file must be a string
File must be a string
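One thing worth ruling out is an empty upload: if the form field is missing or no file was selected, file.filename is an empty string and the downstream calls fail with confusing errors. A defensive sketch of the route (it assumes the same upload_to_aws helper and form field name):
from flask import Flask, request

app = Flask(__name__)

@app.route('/up', methods=['POST'])
def up():
    # .get() avoids a KeyError when the field is absent entirely
    main = request.files.get('mainimg')
    if main and main.filename:
        upload_to_aws(main)
        return 'uploaded', 200
    return 'no file provided', 400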

Lambda call S3 get public access block using boto3

I'm trying to verify from a Lambda function whether the public access block of my bucket mypublicbucketname is checked or not. For testing, I created a bucket and unchecked the public access block. So I wrote this Lambda:
import sys
from pip._internal import main
# Install a recent boto3 into /tmp at runtime and make it importable
main(['install', '-I', '-q', 'boto3', '--target', '/tmp/', '--no-cache-dir', '--disable-pip-version-check'])
sys.path.insert(0, '/tmp/')
import json
import boto3
import botocore

def lambda_handler(event, context):
    # TODO implement
    print(boto3.__version__)
    print(botocore.__version__)
    client = boto3.client('s3')
    response = client.get_public_access_block(Bucket='mypublicbucketname')
    print("response:>>", response)
I updated boto3 and botocore to the latest versions:
1.16.40  # boto3
1.19.40  # botocore
Even though I uploaded them and the function seems correct, I got this exception:
[ERROR] ClientError: An error occurred (NoSuchPublicAccessBlockConfiguration) when calling the GetPublicAccessBlock operation: The public access block configuration was not found
Can someone explain why I get this error?
For future users: if you hit the same problem with get_public_access_block(), use this solution:
try:
    response = client.get_public_access_block(Bucket='mypublicbucketname')
except botocore.exceptions.ClientError as e:
    if e.response['Error']['Code'] == 'NoSuchPublicAccessBlockConfiguration':
        print('No Public Access')
    else:
        print("unexpected error: %s" % (e.response))
For put_public_access_block(), it works fine.
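The error itself just means that S3 has no public access block configuration stored for that bucket. If get_public_access_block() needs to succeed rather than be caught, you can store a configuration first with put_public_access_block(); a minimal sketch with all four flags disabled:
import boto3

client = boto3.client('s3')

# Store an explicit all-off configuration so the later get_ call has something to read
client.put_public_access_block(
    Bucket='mypublicbucketname',
    PublicAccessBlockConfiguration={
        'BlockPublicAcls': False,
        'IgnorePublicAcls': False,
        'BlockPublicPolicy': False,
        'RestrictPublicBuckets': False,
    },
)
response = client.get_public_access_block(Bucket='mypublicbucketname')
print(response['PublicAccessBlockConfiguration'])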

How to create a start-instance Lambda function in AWS using Python?

I am trying to create a Lambda function in AWS to start an instance automatically. This is the function:
import boto3

region = 'us-east-1'
instances = ['i-xxx']
ec2 = boto3.client('ec2', region_name=region)

def lambda_handler(event, context):
    ec2.stop_instances(InstanceIds=instances)
    print('stopped your instances: ' + str(instances))
and after Save and Test, I got this error:
Response:
{
"errorMessage": "2019-09-15T09:54:06.364Z 372c2df4-1303-4326-b882-a04154007881 Task timed out after 3.00 seconds"
}
Request ID:
"372c2df4-1303-4326-b882-a04154007881"
Function Logs:
START RequestId: 372c2df4-1303-4326-b882-a04154007881 Version: $LATEST
END RequestId: 372c2df4-1303-4326-b882-a04154007881
REPORT RequestId: 372c2df4-1303-4326-b882-a04154007881 Duration: 3003.17 ms Billed Duration: 3000 ms Memory Size: 128 MB Max Memory Used: 81 MB Init Duration: 115.73 ms
XRAY TraceId: 1-5d7e0a3b-79a0391249fcda644105b8ba SegmentId: 0eefbaed756a35c4 Sampled: false
2019-09-15T09:54:06.364Z 372c2df4-1303-4326-b882-a04154007881 Task timed out after 3.00 seconds
Check whether you have set the AWS Lambda timeout to an appropriate value; it seems to be at the default, which is 3 seconds, and that does not look sufficient for your case.
Timeout – The amount of time that Lambda allows a function to run before stopping it. The default is 3 seconds. The maximum allowed value is 900 seconds. (AWS docs)
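You can raise the timeout in the console under the function's configuration, or programmatically; a minimal sketch using boto3, with a hypothetical function name:
import boto3

lambda_client = boto3.client('lambda')

# Hypothetical function name; raises the timeout from the 3-second default
lambda_client.update_function_configuration(
    FunctionName='start-stop-instances',
    Timeout=60,  # seconds
)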
To start and stop the instance (example from the boto3 docs):
import sys

import boto3
from botocore.exceptions import ClientError

instance_id = sys.argv[2]
action = sys.argv[1].upper()

ec2 = boto3.client('ec2')

if action == 'ON':
    # Do a dryrun first to verify permissions
    try:
        ec2.start_instances(InstanceIds=[instance_id], DryRun=True)
    except ClientError as e:
        if 'DryRunOperation' not in str(e):
            raise
    # Dry run succeeded, run start_instances without dryrun
    try:
        response = ec2.start_instances(InstanceIds=[instance_id], DryRun=False)
        print(response)
    except ClientError as e:
        print(e)
else:
    # Do a dryrun first to verify permissions
    try:
        ec2.stop_instances(InstanceIds=[instance_id], DryRun=True)
    except ClientError as e:
        if 'DryRunOperation' not in str(e):
            raise
    # Dry run succeeded, call stop_instances without dryrun
    try:
        response = ec2.stop_instances(InstanceIds=[instance_id], DryRun=False)
        print(response)
    except ClientError as e:
        print(e)
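start_instances and stop_instances return as soon as the state transition begins. If the caller needs to know the instance actually reached the target state, a boto3 waiter can block until then; a short self-contained sketch:
import sys

import boto3

ec2 = boto3.client('ec2')
instance_id = sys.argv[1]

# Block until the instance reports 'running'; raises WaiterError if it times out
waiter = ec2.get_waiter('instance_running')
waiter.wait(InstanceIds=[instance_id])
print('instance %s is running' % instance_id)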

How to configure authorization mechanism inline with boto3

I am using boto3 in AWS Lambda to fetch an object in S3 located in the Frankfurt region.
Signature v4 is necessary; otherwise the following error is returned:
"errorMessage": "An error occurred (InvalidRequest) when calling the GetObject operation: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256."
I realized there are ways to configure signature_version (http://boto3.readthedocs.org/en/latest/guide/configuration.html), but since I am using AWS Lambda, I do not have access to the underlying configuration profiles.
The code of my AWS Lambda function:
from __future__ import print_function
import boto3

def lambda_handler(event, context):
    input_file_bucket = event["Records"][0]["s3"]["bucket"]["name"]
    input_file_key = event["Records"][0]["s3"]["object"]["key"]
    input_file_name = input_file_bucket + "/" + input_file_key
    s3 = boto3.resource("s3")
    obj = s3.Object(bucket_name=input_file_bucket, key=input_file_key)
    response = obj.get()
    return event  # echo first key values
Is it possible to configure signature_version within this code, using a Session for example? Or is there any workaround for this?
Instead of using the default session, try using a custom session and Config from boto3.session:
import boto3
import boto3.session
session = boto3.session.Session(region_name='eu-central-1')
s3client = session.client('s3', config= boto3.session.Config(signature_version='s3v4'))
s3client.get_object(Bucket='<Bkt-Name>', Key='S3-Object-Key')
I tried the session approach, but I had issues. This method worked better for me; your mileage may vary:
s3 = boto3.resource('s3', config=Config(signature_version='s3v4'))
You will need to import Config from botocore.client in order to make this work. See below for a functional method to test a bucket (list objects). This assumes you are running it from an environment where your authentication is managed, such as Amazon EC2 or Lambda with an IAM role:
import boto3
from botocore.client import Config
from botocore.exceptions import ClientError

def test_bucket(bucket):
    print('testing bucket: ' + bucket)
    try:
        s3 = boto3.resource('s3', config=Config(signature_version='s3v4'))
        b = s3.Bucket(bucket)
        objects = b.objects.all()
        for obj in objects:
            print(obj.key)
        print('bucket test SUCCESS')
    except ClientError as e:
        print('Client Error')
        print(e)
        print('bucket test FAIL')
To test it, simply call the method with a bucket name. Your role will have to grant proper permissions.
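For example, with a hypothetical bucket name:
# The role needs s3:ListBucket (and s3:GetObject for reads) on this bucket
test_bucket('my-example-bucket')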
Using a resource worked for me.
from botocore.client import Config
import boto3

# Wrapped in a helper here so the return is valid; AIRFLOW_BUCKET and
# expTime are defined elsewhere in the original code
def presigned_get_url(key):
    s3 = boto3.resource("s3", config=Config(signature_version="s3v4"))
    return s3.meta.client.generate_presigned_url(
        "get_object", Params={"Bucket": AIRFLOW_BUCKET, "Key": key}, ExpiresIn=expTime
    )