I'm trying to run my lambda function with localstack. I installed awscli-local and localstack:
pip3 install awscli-local
pip3 install --user localstack --ignore-installed six
And then I started localstack
LAMBDA_EXECUTOR=docker localstack start --docker
When I now want to create my lambda function
aws lambda create-function --function-name Test --zip-file
fileb://myLambda.zip --handler index.handler --runtime
'nodejs6.10' --endpoint http://localhost:4574 --role admin
I get this error
An error occurred (ResourceConflictException) when calling the
CreateFunction operation: Function already exist: Test
Listing the functions returns nothing
aws lambda list-functions --endpoint http://localhost:4574
Does someone know why localstack thinks that the function is already there?
You can invoke lambdas directly in localstack from the Commandeer App. It installs localstack under the hood with docker.
There is a button on the lambda detail that allows you to specify the payload and then view the cloudwatch logs.
I'm also seeing this issue, though it does not happen each time I try to create a lambda in localstack. What I have noticed is that lambda creation seems to take a rather long time and causes a lot of CPU consumption on my Mac while it is creating the lambda. My initial guess is that, because of the time taken to create the lambda, something times out during creation, the creation is retried internally, and the retry finds that the lambda already exists. If I query for the lambda after receiving this error message with awslocal, I see it exists.
I am running this on a MacBook Pro with 32 GB of memory and upped the allocation of resources for the Docker engine to 16 GB and 8 processors in hopes of solving this with additional resources, but that has not seemed to help. Suggestions welcome.
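For reference, a quick way to confirm whether the function actually exists after the error (awslocal is the wrapper from awscli-local that points the aws CLI at LocalStack; the function name Test matches the question above):
awslocal lambda get-function --function-name Test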
Related
I ran this command:
sls deploy function --function [myFunction] -s production
And I got the error that the function I want to update is not yet deployed.
What could be the problem here?
I was trying to deploy a Lambda function.
I was expecting the function to deploy
Unfortunately you have to create the entire CloudFormation stack first. Run sls deploy -s production to create the lambda functions, IAM roles, http/eventbridge events, etc.
Once the lambda function and all associated resources are deployed, you can then simply update the lambda function with the command you posted. In general I like to use the sls deploy function command if I'm only making code changes, because the lambda is a lot quicker to update, but there are situations where you need to deploy the entire service.
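In practice the workflow looks like this (a sketch; myFunction is a placeholder name):
# first deploy: creates the CloudFormation stack, IAM roles, events, and all functions
sls deploy -s production
# later, code-only updates to a single function are much faster
sls deploy function --function myFunction -s production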
The command cdk deploy ... is used to deploy one or more CloudFormation stacks. When it executes, it displays messages resulting from the deployment of the various stacks and this can take some time.
The deploy command supports the --notification-arns parameter, which is an array of ARNs of SNS topics that CloudFormation will notify with stack related events.
Is it possible to execute cdk deploy and not have it report to the console its progress (i.e. the command exits immediately after uploading the new CloudFormation assets) and simply rely on the SNS topic as a means of getting feedback on the progress of a deployment?
A quick and dirty way (untested) would be to use nohup
$ nohup cdk ... --require-approval never 1>/dev/null &
The --require-approval never flag simply means it won't stop to ask for permission for security-sensitive changes, and nohup with the trailing & lets the command keep running in the background without being terminated.
It's the only quick solution I can think of.
A longer-term solution would be to use the CdkToolkit to create your own deployment script. Take a look at the cdk command to get an idea. This has been something I've wanted from aws-cdk for a while: custom deploy scripts rather than using the shell.
I found a solution to asynchronously deploy a cdk app via the --no-execute flag:
cdk deploy StackX --no-execute
change_set_name=$(aws cloudformation list-change-sets --stack-name StackX --query "Summaries[0].ChangeSetId" --output text)
aws cloudformation execute-change-set --change-set-name "$change_set_name"
For my case this works, because I use this method to deploy new stacks only, so there will only ever be exactly one change set for this stack and I can retrieve it with the query for the entry at index 0. If you wish to update existing stacks, you will have to select the correct change set from the list returned by the list-change-sets command.
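If you do need to update an existing stack, one way (an untested sketch, relying on the CreationTime field of the change set summaries) to pick the most recent change set instead of index 0:
change_set_name=$(aws cloudformation list-change-sets --stack-name StackX --query "sort_by(Summaries, &CreationTime)[-1].ChangeSetId" --output text)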
I had a similar issue: I didn't want to keep a console open while waiting for a long-running init script on an EC2 instance to finish. I hit Ctrl-C after publishing had completed and the change set had been created. The deployment kept running, and I checked its status in the CloudFormation view. Not perfect and not automation-friendly, but I didn't need to keep the process running on my machine.
I'm trying to create an event source mapping with the AWS cli, but I keep getting a combination of errors that don't add up. Here's what I've tried:
aws lambda create-event-source-mapping --function-name someFunctionName --batch-size 100 --starting-position LATEST --event-source arn:aws:sqs:eu-central-1:someARN:SomeQueue.fifo
This results in: An error occurred (InvalidParameterValueException) when calling the CreateEventSourceMapping operation: StartingPosition
is not valid for SQS event sources.
Then I tried without the starting position:
aws lambda create-event-source-mapping --function-name someFunctionName --batch-size 100 --event-source arn:aws:sqs:eu-central-1:someARN:SomeQueue.fifo
which results in: error: argument --starting-position is required
Am I missing something? How am I supposed to call this command?
aws --version tells me I'm running aws-cli/1.15.10 Python/2.7.9 Windows/2012Server botocore/1.10.10. Is this just an out-of-date version?
So, as I was writing this question I upgraded the CLI to 2.0.9, and option 2 (the command without --starting-position) works!
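For reference, the fully spelled-out parameter name is --event-source-arn, and SQS event sources don't take a starting position:
aws lambda create-event-source-mapping --function-name someFunctionName --batch-size 100 --event-source-arn arn:aws:sqs:eu-central-1:someARN:SomeQueue.fifo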
How can I make Travis CI automatically trigger an AWS Lambda function after all tests have passed and the Travis CI build succeeds? Please note the GitHub repo is public.
Background
The bigger problem I'm solving is that I have travis CI on a repo. Each time I push, after everything passes, I manually run a lambda which sets off processes in AWS. I will be open sourcing the repo so anyone can contribute, so I want to avoid having to run the lambda manually, but instead have it automatically triggered whenever a pull request is merged successfully.
You could update your travis-ci build to invoke the lambda with the aws-cli, as long as you install it in your travis build. Here is an example:
aws lambda invoke --function-name awesome-function --payload '{"some":"data", "targetState": true}' /dev/stdout
breakdown:
aws lambda invoke is the basic aws-cli command we want to run
--function-name specifies which function to run
--payload specifies the event data to invoke the function with
/dev/stdout specifies that the output of the invocation should be written to our terminal
here's the documentation: https://docs.aws.amazon.com/cli/latest/reference/lambda/invoke.html
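For completeness, a minimal .travis.yml sketch of this approach (untested; the Node version, branch name, function name, and payload are assumptions, and AWS credentials are expected to come from encrypted environment variables in the Travis settings):
language: node_js
node_js:
  - "12"
install:
  - pip install --user awscli
script:
  - npm test
after_success:
  # only invoke the lambda for builds of merged code, not for pull requests
  - if [ "$TRAVIS_BRANCH" = "master" ] && [ "$TRAVIS_PULL_REQUEST" = "false" ]; then
      aws lambda invoke --function-name awesome-function --payload '{"some":"data"}' /dev/stdout;
    fi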
Just putting down my idea here; I wanted to check if this is possible or not, assuming your container has internet access.
Why not use a curl command to POST an appropriate payload to the API Gateway endpoint? The lambda can sit behind the API Gateway.
If it's going to be a public repo, we don't want to store any credentials in any docker/container. In that case, create an IAM user for the container with a policy that allows AWS Lambda invoke only, and then use the aws cli option instead.
curl -X POST -H "Content-Type: application/json" -d '{"xyz":"testing","abc":"random stuff"}' https://tst.nhsd.io/restapi/Xyzxyz/testing/
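The invoke-only policy for that IAM user could look like this (a sketch; the function ARN is a placeholder):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:eu-central-1:123456789012:function:my-function"
    }
  ]
}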
I have a Bitbucket pipeline that creates AWS resources using CloudFormation and deploys a website to them, but the deployment fails even though CloudFormation creates the stack correctly. I think the issue is that when the deployment happens, the S3 bucket created by CloudFormation may not have finished being created.
I have a Hugo website and I have created a Bitbucket pipeline to deploy it to a server. The pipeline creates an S3 bucket using CloudFormation to host the website and then uploads the Hugo website to it. When I ran the steps of the pipeline manually in a terminal, with a delay between each step, everything succeeded. But when it runs on the Bitbucket pipeline, it gives an error saying the S3 bucket I'm trying to upload content to is not available. When I checked in AWS, that bucket is actually there, which means CloudFormation has worked correctly.
My assumption is that when the files started to copy, the bucket was not yet available for upload. Is there a workaround for this? When doing it locally I can wait between the two commands of CloudFormation creation and file copying, but how do I handle it in the Bitbucket pipeline environment? Following is my pipeline code.
pipelines:
  pull-requests:
    '**':
      - step:
          script:
            - aws cloudformation create-stack --stack-name demo-web --template-body file://cloudformation.json --parameters ParameterKey=S3BucketName,ParameterValue=demo-web
            - hugo
            - aws s3 cp public/ s3://demo-web/ --recursive
How do I handle this scenario in the correct way? Is there a workaround for this situation, or is the problem that I have identified not the actual problem?
First, to wait in bitbucket pipelines you should be able to just use sleep x where x is the number of seconds you want to sleep.
A different note: bear in mind that after the first run of this, deployment will potentially fail the next time, as you are using create-stack, which will fail if the stack already exists...
Using the AWS CloudFormation API for this is best practice.
Just as there is a create-stack command, there is a wait command; wait stack-create-complete and its options halt your processing until the stack is in CREATE_COMPLETE status before continuing.
Option 1:
Put your stack creation command into a shell script, and as a second step in that script invoke the wait stack-create-complete command.
Then invoke that shell script in the pipeline instead of the direct command.
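A minimal sketch of such a script, reusing the stack name and template from the question (untested):
#!/bin/bash
set -e
# create the stack that hosts the website
aws cloudformation create-stack --stack-name demo-web --template-body file://cloudformation.json --parameters ParameterKey=S3BucketName,ParameterValue=demo-web
# block until CloudFormation reports CREATE_COMPLETE (exits non-zero on rollback or timeout)
aws cloudformation wait stack-create-complete --stack-name demo-web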
Option 2:
In your Bitbucket pipeline, right after your create command, invoke aws cloudformation wait stack-create-complete before the hugo/upload commands.
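Applied to the script section of the pipeline from the question, that would look something like this:
- aws cloudformation create-stack --stack-name demo-web --template-body file://cloudformation.json --parameters ParameterKey=S3BucketName,ParameterValue=demo-web
- aws cloudformation wait stack-create-complete --stack-name demo-web
- hugo
- aws s3 cp public/ s3://demo-web/ --recursive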
I prefer option one because it allows you to manipulate and trace the creation as you wish.
See documentation: https://docs.aws.amazon.com/cli/latest/reference/cloudformation/wait/stack-create-complete.html
Using the CloudFormation wait API is great because you don't have to guess how long to wait; the stack is guaranteed to be available once the command returns.