How to use AWS CLI within AWS Lambda?

I want to copy data from an S3 bucket in one account and Region to another account and Region. The idea is that an object upload to the source S3 bucket triggers a Lambda function, and that function then uses the AWS CLI to run aws s3 sync.
So I tried using the techniques given here: https://bezdelev.com/hacking/aws-cli-inside-lambda-layer-aws-s3-sync/
Basically:
1) Install the AWS CLI in a local virtual environment.
2) Package the AWS CLI and all its dependencies into a zip file.
3) Create a Lambda layer.
However, even after I add the layer, I still see the error ModuleNotFoundError: No module named 'awscli'.
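For reference, a minimal sketch of how such a handler might shell out to the CLI shipped in the layer, assuming the layer unpacks so that the aws entry point lands at /opt/aws (that path and the bucket names are assumptions for illustration, not taken from the linked post):

import subprocess

def handler(event, context):
    # Call the CLI executable shipped in the layer (path is an assumption).
    result = subprocess.run(
        ["/opt/aws", "s3", "sync", "s3://source-bucket", "s3://destination-bucket"],
        capture_output=True, text=True,
    )
    print(result.stdout)
    print(result.stderr)
    return result.returncode

If the handler imports awscli directly instead of shelling out, a ModuleNotFoundError usually means the awscli package did not end up under the layer's python/ directory, which is the location Lambda adds to the Python path for Python layers.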

Related

Question on using the AWS CLI to deploy code to an EC2 instance

I am looking at using Jenkins to deploy a WAR file to an EC2 instance. I have set up something similar before, creating an EC2 instance, an S3 bucket and a CodeDeploy application. The way that worked was:
1) Zip up and upload the WAR/JAR into an S3 bucket.
2) Use the AWS createDeployment step to deploy the zip file from the S3 bucket to the EC2 instance. This also involves creating an appspec.yml and scripts to set up the environment.
But I have been told there is another way that does not require setting up CodeDeploy.
I have created an EC2 instance and set up a Docker container inside it with all the environment settings.
What I would like to do is load my zip file onto the EC2 instance directly, so that I don't need an AWS CodeDeploy application.
Is this correct? Is there an AWS CLI command to simply load a zip file onto the EC2 instance?
Thank you for any help.
You can copy from an S3 bucket.
To copy files from an S3 bucket to an EC2 instance:
Create an IAM role with S3 read access (or admin access)
Attach the IAM role to the EC2 instance
Install the AWS CLI on the EC2 instance
Run the aws s3 cp command to copy the files from S3 to EC2
To copy the files from S3 to EC2, keep the source as the bucket URL and the destination as your local directory or filename:
aws s3 cp s3://<S3BucketName> <Fully Qualified Local filename/Directory>
In this command, the source is the S3 bucket URL and the destination is a local file name or directory name.
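If the instance role is attached, the same copy can also be done with the SDK instead of the CLI. A minimal boto3 sketch, where the bucket, key and local path are placeholders:

import boto3

# Credentials come from the IAM role attached to the EC2 instance.
s3 = boto3.client("s3")
s3.download_file("<S3BucketName>", "path/to/object.zip", "/home/ec2-user/object.zip")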

How to copy S3 bucket files into running Kubernetes pods?

I have multiple files in an S3 bucket which I need to copy to one of the running Kubernetes pods, under the /tmp path.
I need a reliable, tried-and-tested command or approach to do this.
Let's say my bucket name is "learning" and the pod name is "test-5c7cd9c-l6qng".
AWS CLI commands "aws s3api get-object" or "aws s3 cp" can be used to copy the data onto the Pod from S3. To make these calls AWS Access Keys are required. These keys provide the authentication to call the S3 service. "aws configure" command can be used to configure the Access Keys in the Pod.
Coming to K8S, an Init Container can be used to execute the above command before the actual application container starts. Instead of having the Access Keys directly written into the Pod which is not really safe, K8S Secrets feature can be used to pass/inject the Access Keys to the Pods.
FYI ... the download can be done programmatically by using the AWS SDK and the S3Client Interface for Java.
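For illustration, the same download with the Python SDK (boto3) looks like the sketch below; the Java S3Client call is analogous. It reuses the "learning" bucket from the question, while the object key and target file are placeholders. Credentials come from the usual chain, e.g. keys injected into the pod through a Kubernetes Secret.

import boto3

# Download one object from the "learning" bucket into /tmp inside the pod.
s3 = boto3.client("s3")
s3.download_file("learning", "path/to/file.csv", "/tmp/file.csv")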

Run the AWS CLI locally without storing credentials locally

How can I run the AWS CLI to download S3 bucket data without storing AWS credentials on my local machine?
Please note that the S3 bucket is not a public bucket.
Not sure what your goal is, but you can use environment variables which you only export for the current session/AWS CLI run.
To prevent the export from being written to your history in bash (assuming you are using Linux), you can put a space in front of the command.
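The same idea carries over to scripted calls: credentials can be supplied only through the process environment, so nothing is written to disk. A sketch, where the key values and bucket name are placeholders:

import os
import subprocess

# Pass credentials to a single CLI invocation via its environment only.
env = dict(os.environ,
           AWS_ACCESS_KEY_ID="AKIA...",      # placeholder
           AWS_SECRET_ACCESS_KEY="...")      # placeholder
subprocess.run(["aws", "s3", "sync", "s3://my-bucket", "./data"], env=env, check=True)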
You can start an EC2 instance and give that instance a role that allows it to read from your S3 bucket.
Once started, connect to the EC2 instance using SSH and initiate your S3 transfer using aws s3 cp ... or aws s3 sync ...

Amazon Web Services: NoCredentialsError: Unable to locate credentials

I am using the Amazon Web Services CLI. I use a makefile to build my Lambda project and upload it to AWS Lambda. I am on a Windows machine and use PowerShell to call make.
I try to delete my Lambda function with the following lines:
AWS_PATH = /cygdrive/c/Users/TestBox/AppData/Roaming/Python/Scripts/aws
AWS_WIN_PATH = $(shell cygpath -aw ${AWS_PATH})
AWS_REGION = eu-west-2
lambda_delete:
$(AWS_WIN_PATH) lambda delete-function --function-name LambdaTest --region $(AWS_REGION) --debug
I get this error:
NoCredentialsError: Unable to locate credentials
Unable to locate credentials. You can configure credentials by running "aws configure".
Running aws configure list prints out a valid default profile.
I think the problem is that I am using GNU make installed by Cygwin on a Windows machine, while using PowerShell to call make.
So when ~/.aws/credentials is evaluated by aws, the path to the credentials looks like "/cygdrive/c/users/testbox/.aws/credentials" instead of "c:\users\testbox\.aws\credentials". I think :)
I had the same problem with the path to aws itself and had to use $(shell cygpath -aw ${AWS_PATH}) to convert it to a path that Windows Python could use.
Is there any way to pass the credentials directly to lambda delete-function, or indirectly through a path to a file? I can't seem to think of a way, because the code that searches for the credentials is internal to botocore.
Is there a way around this that you know of?
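One possible direction, as a sketch rather than a confirmed fix: botocore (used by both the CLI and boto3) also honors the AWS_SHARED_CREDENTIALS_FILE environment variable, as well as the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY pair, so the same cygpath conversion could be applied to the credentials path. A quick check that botocore resolves credentials from an explicit Windows-style path (the path mirrors the one in the question and is only an example):

import os
import boto3

# Point botocore at an explicit Windows-style credentials file.
os.environ["AWS_SHARED_CREDENTIALS_FILE"] = r"C:\Users\TestBox\.aws\credentials"

creds = boto3.Session().get_credentials()
print("credentials found:", creds is not None)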
Alternative solution: consider using AWS SAM templates.
Use AWS SAM templates to deploy your Lambda functions and AWS resources using CloudFormation.
Edit your SAM template and define your AWS resources. For example, define Lambda functions/path to your code.
Run aws cloudformation package to package and upload your local code to S3.
Run aws cloudformation deploy to provision and update AWS resources with the updated code on S3.
This would work in CMD/PowerShell without the make hassle. You will also have the benefit of having your resources versioned as code, and you won't need to worry about tracking or adding new AWS APIs in your makefile.
More complex serverless frameworks for reference:
AWS Chalice https://github.com/aws/chalice
Django/Flask + Lambda https://github.com/Miserlou/Zappa
Cross cloud serverless solution https://github.com/serverless/serverless

How can I inject an artifact from AWS S3 into a Docker image?

I need to prepare a Docker image with an embedded JAR file and push it to ECR. The JAR file is stored in an S3 bucket. How can I inject the JAR into the image without explicitly storing AWS access keys in the image?
Maybe I can use the AWS CLI, or is there another way?
Also, it is not recommended to make my S3 bucket public or to set access keys via environment variables when executing docker run.
You can define an AWS IAM role and attach it to EC2 instances, so any instance that needs to run this docker build command can do so as long as it has the IAM role attached to it. You can do this from the AWS Console. This solves the problem of putting AWS credentials on the instance itself.
You will still need to install the AWS CLI in your Dockerfile. Once the IAM role is attached, you don't have to worry about credentials.
Recommended docs:
IAM Roles for Amazon EC2
Here's an official blog post tutorial on how to do this:
Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI
Just make sure you specify in the IAM Role which S3 Buckets you want these instances to have access to.
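A variant of the same idea, if you would rather keep the CLI (and any credential handling) out of the image entirely: fetch the JAR on the EC2 build host, where the attached role supplies credentials automatically, and COPY it into the image. A boto3 sketch, where the bucket, key, file and image names are placeholders:

import subprocess
import boto3

# Credentials come from the IAM role attached to the EC2 build host.
s3 = boto3.client("s3")
s3.download_file("my-artifact-bucket", "builds/app.jar", "app.jar")

# The Dockerfile then only needs: COPY app.jar /opt/app/app.jar
subprocess.run(["docker", "build", "-t", "my-image:latest", "."], check=True)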