How can I make Travis CI automatically trigger an AWS Lambda function after all tests have passed and the Travis CI build succeeds? Please note the GitHub repo is public.
Background
The bigger problem I'm solving is that I have Travis CI on a repo. Each time I push, after everything passes, I manually run a Lambda which sets off processes in AWS. I will be open sourcing the repo so anyone can contribute, so I want to avoid having to run the Lambda manually and instead have it automatically triggered whenever a pull request is merged successfully.
You could update your Travis CI build to invoke the Lambda with the AWS CLI, as long as you install it in your Travis build. Here is an example:
aws lambda invoke --function-name awesome-function --payload '{"some":"data", "targetState": true}' /dev/stdout
breakdown:
aws lambda invoke is the basic AWS CLI command we want to run
--function-name specifies which function to run
--payload specifies the event data to invoke the function with
/dev/stdout specifies that we want the output of the invocation to enter our terminal output
here's the documentation: https://docs.aws.amazon.com/cli/latest/reference/lambda/invoke.html
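For example, a minimal .travis.yml sketch of that idea (the function name, region, payload, and test command are placeholders; it assumes you store AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as hidden/encrypted environment variables in the Travis repository settings, which matters because the repo is public):

install:
  # install the AWS CLI into the build environment
  - pip install --user awscli
  - export PATH=$HOME/.local/bin:$PATH
script:
  - ./run-tests.sh   # placeholder for your existing test step
after_success:
  # only fire the Lambda on merges to master, not on pull request builds
  - if [ "$TRAVIS_BRANCH" = "master" ] && [ "$TRAVIS_PULL_REQUEST" = "false" ]; then aws lambda invoke --function-name awesome-function --region us-east-1 --payload '{"some":"data"}' /dev/stdout; fi

Note that Travis does not expose encrypted environment variables to pull request builds from forks, which fits the goal of only triggering the Lambda after a merge.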
Just putting down my idea here; I wanted to check whether this is possible or not. If your container has internet access,
why not use a curl command to do a POST with an appropriate payload
to the API Gateway endpoint?
The Lambda can sit behind the API Gateway.
If it's going to be a public repo, we don't want to store any credentials in any Docker image/container.
Create an IAM user for the container with a policy that allows only Lambda invoke (a minimal policy is sketched after the example below), and then use the AWS CLI option.
- curl -X POST -H "Content-Type: application/json" -d '{"xyz":"testing","abc":"random stuff"}' https://tst.nhsd.io/restapi/Xyzxyz/testing/
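A minimal sketch of such an invoke-only IAM policy (the region, account ID, and function name are placeholders you would replace with your own):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:awesome-function"
    }
  ]
}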
Related
I ran this command:
sls deploy function --function [myFunction] -s production
And I got an error saying that the function I want to update is not yet deployed.
What could be the problem here?
I was trying to deploy a Lambda function.
I was expecting the function to deploy
Unfortunately you have to create the entire CloudFormation stack first. Run sls deploy -s production to create the Lambda functions, IAM roles, HTTP/EventBridge events, etc.
Once the lambda function and all associated resources are deployed, you can then simply update the lambda function with the command you posted. In general I do like to use the sls deploy function command if I'm only making code changes, because the lambda is a lot quicker to update, but there are situations where you need to deploy the entire service.
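A minimal sketch of that workflow (the stage and function names follow the question and are assumptions):

# one-time (or whenever infrastructure changes): deploy the whole service/stack
sls deploy -s production

# afterwards, for code-only changes to a single function, the faster path works
sls deploy function --function myFunction -s production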
I am using the sam deploy command with the AWS SAM command line tool to deploy.
Now I made some changes with the web IDE in the AWS Console.
How can I pull the changes to the local machine, so that the next sam deploy command won't override them? (I am looking for something similar to a git pull I guess)
To do this you will need the AWS CLI; the first step is the get-function command.
This returns a presigned URL under Code > Location in the response; if you download that URL (using a CLI tool such as curl) you get a zip file containing the contents of the Lambda function.
The command would look similar to the below:
curl $(aws lambda get-function --function-name $FUNCTION_NAME --output text --query "Code.[Location]")
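A slightly expanded sketch that saves and extracts the package (the output paths are assumptions; quoting the URL matters because presigned URLs contain & characters):

# fetch the presigned URL and save the deployment package to a zip file
curl -o function.zip "$(aws lambda get-function --function-name $FUNCTION_NAME --output text --query 'Code.Location')"
# extract it into a local directory so the changes can be diffed and committed
unzip -o function.zip -d ./lambda-src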
You should have a single source of truth for your source code. And that should really be your source control repository (Git). If you make changes to your source code in the web IDE then you should copy those changes into your Git repo.
To your original question, to download a Lambda function's source code from the command line, you would use the aws lambda get-function command to download information about the function. Part of the information included in the response is a URL to download the function's deployment package, which is valid for 10 minutes. Then you could download the deployment package at that URL using something like curl.
I have a Bitbucket pipeline which creates AWS resources using CloudFormation and deploys a website to them. But the deployment fails even though CloudFormation creates the stack correctly. What I think the issue is: when the deployment happens, the CloudFormation S3 bucket creation may not have finished.
I have a Hugo website and I have created a Bitbucket pipeline to deploy it to a server. It creates an S3 bucket using CloudFormation to host the website and then uploads the Hugo website to it. When I ran the steps of the pipeline manually in a terminal with a delay between each step, it worked successfully. But when it runs in the Bitbucket pipeline it gives an error saying the S3 bucket that I'm trying to upload content to is not available. When I checked in AWS, that bucket is actually there, which means CloudFormation has worked correctly. But when the files start to copy, the bucket may not yet have been available for the upload. That's my assumption. Is there a workaround for this? When doing it locally I can wait between the two commands of CloudFormation creation and file copying, but how do I handle it in the Bitbucket pipeline environment? Following is my pipeline code.
pipelines:
  pull-requests:
    '**':
      - step:
          script:
            - aws cloudformation create-stack --stack-name demo-web --template-body file://cloudformation.json --parameters ParameterKey=S3BucketName,ParameterValue=demo-web
            - hugo
            - aws s3 cp public/ s3://demo-web/ --recursive
How do I handle this scenario in the correct way? Is there a workaround for this situation, or is the problem I have identified not the actual problem?
First, to wait in Bitbucket Pipelines you should be able to just use sleep x, where x is the number of seconds you want to sleep.
A different note: bear in mind that after the first run of this, deployment will potentially fail the next time, because you are using create-stack, which fails if the stack already exists...
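A minimal sketch of the crude sleep approach in the pipeline script (the 30-second delay is an arbitrary guess, which is exactly why the wait approach described below is preferable):

- aws cloudformation create-stack --stack-name demo-web --template-body file://cloudformation.json --parameters ParameterKey=S3BucketName,ParameterValue=demo-web
- sleep 30   # arbitrary fixed delay before uploading; not guaranteed to be long enough
- hugo
- aws s3 cp public/ s3://demo-web/ --recursive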
Using the AWS CloudFormation API for this is best practice.
Just as there is a create-stack command, there is a wait stack-create-complete command; it halts your processing until the stack is in CREATE_COMPLETE status before continuing.
Option 1:
Put your stack creation command into a shell script, and as a second step in that shell script invoke the wait stack-create-complete command.
Then invoke that shell script in the pipeline instead of the direct command.
Option 2:
In your Bitbucket pipeline, right after your create command, invoke aws cloudformation wait stack-create-complete before the hugo/upload commands.
I prefer option one because it allows you to manipulate and trace the creation as you wish.
See documentation: https://docs.aws.amazon.com/cli/latest/reference/cloudformation/wait/stack-create-complete.html
Using the CloudFormation wait API is great because you don't have to guess how long to wait; the stack, and therefore the bucket, will actually be ready before you continue.
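A minimal sketch of option 2 applied to the pipeline from the question (the stack and bucket names are taken from the question):

pipelines:
  pull-requests:
    '**':
      - step:
          script:
            - aws cloudformation create-stack --stack-name demo-web --template-body file://cloudformation.json --parameters ParameterKey=S3BucketName,ParameterValue=demo-web
            # block until the stack (and its S3 bucket) has finished creating
            - aws cloudformation wait stack-create-complete --stack-name demo-web
            - hugo
            - aws s3 cp public/ s3://demo-web/ --recursive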
We are using Jenkins and have AWS credentials stored in the Jenkins credentials store.
I am now configuring a build job to get the list of APIs from AWS so that users can select an API from a parameter dropdown. To do this, I am writing a Groovy script to get the list of AWS APIs, e.g.
aws apigateway get-rest-apis
But in order to run the above command, I need to first get the AWS credentials from the Jenkins credentials store. How can I do this?
(Correct me if I am wrong: the script that is going to be part of the Extended parameter runs on the Jenkins master node, not on a slave node, and I am not sure how to get the AWS credentials there.)
Add the following in the Groovy script:
env.AWS_CREDENTIAL_ID = "user_id"
where user_id is the ID that is set in the Jenkins credentials store for the AWS account.
For everyone's benefit, I used a command like
env AWS_ACCESS_KEY_ID=${YourIDVariable} AWS_SECRET_ACCESS_KEY=${YourKeyVariable} aws apigateway get-rest-apis
and it worked.
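For a pipeline step, a sketch of the same idea using credentials binding (this assumes the CloudBees AWS Credentials plugin is installed and that 'aws-account-creds' is the ID of the stored AWS credential; both are assumptions):

// bind the stored AWS credential to the standard environment variables for the duration of the block
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',
                  credentialsId: 'aws-account-creds',
                  accessKeyVariable: 'AWS_ACCESS_KEY_ID',
                  secretKeyVariable: 'AWS_SECRET_ACCESS_KEY']]) {
    // the AWS CLI picks the keys up from the environment automatically
    sh 'aws apigateway get-rest-apis --region us-east-1'
}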
I am using the Amazon Web Services CLI. I use a Makefile to build my Lambda project and upload it to AWS Lambda. I am on a Windows machine and use PowerShell to call make.
I try to delete my Lambda function with the following lines:
AWS_PATH = /cygdrive/c/Users/TestBox/AppData/Roaming/Python/Scripts/aws
AWS_WIN_PATH = $(shell cygpath -aw ${AWS_PATH})
AWS_REGION = eu-west-2
lambda_delete:
	$(AWS_WIN_PATH) lambda delete-function --function-name LambdaTest --region $(AWS_REGION) --debug
I get this error:
NoCredentialsError: Unable to locate credentials
Unable to locate credentials. You can configure credentials by running "aws configure".
Running aws configure list prints out a valid default profile.
I think the problem is that I am using GNU make installed by Cygwin on a Windows machine, and using PowerShell to call make.
So the path to the credentials looks like "/cygdrive/c/users/testbox/.aws/credentials" instead of "C:\Users\TestBox\.aws\credentials" when ~/.aws/credentials is evaluated by aws. I think :)
I had the same problem with the path to aws itself and had to use $(shell cygpath -aw ${AWS_PATH}) to convert it to a path that Windows Python could use.
Is there any way to pass the credentials directly to lambda delete-function, or indirectly through a path to a file? I can't seem to think of a way because the code that searches for the credentials is internal to botocore.
Is there a way around this that you know of?
Alternative solution: consider using AWS SAM templates
Use AWS SAM templates to deploy your Lambda functions and AWS resources using CloudFormation.
Edit your SAM template and define your AWS resources, for example your Lambda functions and the path to your code.
Run aws cloudformation package to package and upload your local code to S3.
Run aws cloudformation deploy to provision and update the AWS resources with the updated code on S3.
This would work in CMD/PowerShell without the make hassle. You will also have the benefit of having your resources versioned as code, and you won't need to worry about tracking or adding new AWS APIs in your Makefile.
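A minimal sketch of that flow (the template file, S3 bucket, and stack name are placeholders/assumptions):

# package local code, upload it to S3, and write out a transformed template
aws cloudformation package --template-file template.yaml --s3-bucket my-artifact-bucket --output-template-file packaged.yaml

# create or update the stack (Lambda functions, roles, etc.) from the packaged template
aws cloudformation deploy --template-file packaged.yaml --stack-name lambda-test --capabilities CAPABILITY_IAM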
More complex serverless frameworks for reference:
AWS Chalice https://github.com/aws/chalice
Django/Flask + Lambda https://github.com/Miserlou/Zappa
Cross cloud serverless solution https://github.com/serverless/serverless