Hope you're all good,
I keep getting this error when running sls deploy with the nodejs16.x runtime:
npx sls deploy --stage dev --alias iteration01
Deploying my-service to stage dev (us-east-1)
Packaging
Adding build information (0.18.0-63aef72)
Adding branch information (features/iteration01)
Excluding development dependencies for service package
Retrieving CloudFormation stack
Preparing alias ...
Processing custom resources
Removing resources:
Processing functions
Processing API
Configuring stage
Processing event source subscriptions
Processing SNS Lambda subscriptions
Adding deployment date information
Uploading
Uploading CloudFormation file to S3
Uploading State file to S3
Uploading service my-service.zip file to S3 (16.59 MB)
✖ Stack my-service-dev failed to deploy (62s)
Environment: darwin, node 16.17.0, framework 3.18.2 (local), plugin 6.2.2, SDK 4.3.2
Credentials: Local, environment variables
Docs: docs.serverless.com
Support: forum.serverless.com
Bugs: github.com/serverless/serverless/issues
Error:
The CloudFormation template is invalid: Template format error: Unresolved resource dependencies [HelloWorldLambdaPermissionApiGateway] in the Resources block of the template
This happens to every service whose runtime we try to upgrade to nodejs16.x.
Below are my package.json and serverless.yml:
serverless.yml
package.json
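A trimmed-down sketch of the relevant parts of the config looks something like this (service, handler, and plugin entries are illustrative placeholders, not our exact files):

service: my-service

plugins:
  - serverless-aws-alias   # alias plugin (whichever fork provides the --alias option)

provider:
  name: aws
  runtime: nodejs16.x
  region: us-east-1

functions:
  helloWorld:
    handler: src/helloWorld.handler
    events:
      - http:
          path: hello
          method: get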
If I check the generated CloudFormation template, the resource is there under Resources:
"HelloWorldLambdaPermissionApiGateway": {
"Type": "AWS::Lambda::Permission",
"Properties": {
"FunctionName": {
"Fn::GetAtt": [
"HelloWorldLambdaFunction",
"Arn"
]
},
"Action": "lambda:InvokeFunction",
"Principal": "apigateway.amazonaws.com",
"SourceArn": {
"Fn::Join": [
"",
[
"arn:",
{
"Ref": "AWS::Partition"
},
":execute-api:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":",
{
"Ref": "ApiGatewayRestApi"
},
"/*/*"
]
]
}
}
}
Has anyone seen this error before? If I delete the http handler it works fine.
PS: Upgrading to Serverless 3.22.0 didn't help.
Related
We have an AWS Amplify project that I am in the process of migrating the API from Transformer 1 to 2.
As part of this, we have a number of custom resolvers that previously had their own stack JSON template in the stacks/ folder as generated by the Amplify CLI.
As per the migration instructions, I have created new custom resources using amplify add custom, which lets me create either a CDK (Cloud Development Kit) resource or a CloudFormation template. I just want a lift-and-shift for now, so I've gone with the template option and moved the content from the stack JSON into the new custom resolver JSON template.
This seems like it should work, but the custom templates no longer have access to the parameters shared from the parent stack:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Metadata": {},
"Parameters": {
"AppSyncApiId": {
"Type": "String",
"Description": "The id of the AppSync API associated with this project."
},
"S3DeploymentBucket": {
"Type": "String",
"Description": "The S3 bucket containing all deployment assets for the project."
},
"S3DeploymentRootKey": {
"Type": "String",
"Description": "An S3 key relative to the S3DeploymentBucket that points to the root of the deployment directory."
}
},
...
}
So these are the standard parameters that were used previously, and my challenge now is accessing the deployment bucket and root key, as these values are generated at deployment time.
The exact use case is for the AppSync function configuration when I attempt to locate the request and response mapping template S3 locations:
"RequestMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/MyCustomResolver.req.vtl",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
}
}
]
},
The error message I am receiving is
AWS::CloudFormation::Stack Tue Feb 15 2022 18:20:42 GMT+0000 (Greenwich Mean Time) Parameters: [S3DeploymentBucket, AppSyncApiId, S3DeploymentRootKey] must have values
I feel like I am missing a step to plumb the output values through to the parameters in the JSON, but I can't find any documentation suggesting how to do this with the updated Amplify CLI options.
Let me know if you need any further information and fingers crossed it is something simple for you Amplify/CloudFormation ninjas out there!
Thank you in advance!
I am trying to set up a resource pipeline where I deploy all my resources using CloudFormation; I have a separate pipeline to deploy code.
I am using the CloudFormation template below:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Resources": {
"S3Bucket": {
"Type": "AWS::S3::Bucket",
"DeletionPolicy": "Retain",
"Properties": {
"BucketName": "devanimalhubstorage"
}
},
"HelloLambdaRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"RoleName": "HelloLambdaRole",
"AssumeRolePolicyDocument": {
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
}
},
"AnimalhubLambdaFunction": {
"Type": "AWS::Lambda::Function",
"Properties": {
"FunctionName": "AnimalhubLambdaFunction",
"Role": {
"Fn::GetAtt": ["HelloLambdaRole","Arn"]
},
"Code": {},
"Runtime": "dotnetcore2.1",
"Handler": "myfirstlambda::myfirstlambda.Function::FunctionHandler"
}
}
}
}
Problem: Resource handler returned message: "Please provide a source for function code. (Service: Lambda, Status Code: 400, Request ID: 377fff66-a06d-495f-823e-34aec70f0e22, Extended Request ID: null)" (RequestToken: 9c9beee7-2c71-4d5d-e4a8-69065e12c5fa, HandlerErrorCode: InvalidRequest)
I want to build a separate pipeline for code build and deployment. Can't we deploy a Lambda function without code?
What is the recommended solution in AWS? (I used to follow this approach in Azure; I am new to AWS.)
First, I would advise you to look into AWS SAM. It is very helpful when creating and deploying serverless applications and will have a lot of examples to help you with your use case.
Second, using separate pipelines for this purpose is not the recommended way in AWS. Using dummy code, as the other answer suggests, is also quite dangerous, since an update to your CloudFormation stack would override any code that you have deployed to the Lambda function through your other pipeline.
In a serverless application like this, you could split things into two or more CloudFormation stacks. For example, you could create your S3 buckets and other more "stable" infrastructure in one stack, and deploy it either manually or in a pipeline, then deploy your code in a separate pipeline using another CloudFormation stack. Any values (ARNs, etc.) needed from the more stable resources can be injected as parameters into the template, or referenced with CloudFormation's Fn::ImportValue function. I'd personally recommend the parameter, since it is more flexible for future changes.
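As a rough sketch of the Fn::ImportValue route (the export and key names below are made up for illustration), the infrastructure stack would export the bucket name:

"Outputs": {
  "DeploymentBucketName": {
    "Value": { "Ref": "S3Bucket" },
    "Export": { "Name": "animalhub-deployment-bucket" }
  }
}

and the stack that owns the function would import it where the code location is declared:

"Code": {
  "S3Bucket": { "Fn::ImportValue": "animalhub-deployment-bucket" },
  "S3Key": "builds/AnimalhubLambdaFunction.zip"
}

The parameter variant works the same way, except the code pipeline passes the bucket name explicitly (for example via --parameter-overrides on aws cloudformation deploy) when it deploys the second stack.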
I have been trying to use the update functionality of the AWS CLI to update codepipeline with the following command:
aws codepipeline update-pipeline --cli-input-json file://Pipelines/AWS/SomeName.json
And I keep getting the following error
Unknown parameter in pipeline.stages[0].actions[0]: "region", must be one of: name, actionTypeId, runOrder, configuration, outputArtifacts, inputArtifacts, roleArn
I have checked the AWS documentation and I don't think there's anything wrong with the way actions is set up; here is the snippet from the JSON:
"actions": [
{
"name": "Source",
"actionTypeId": {
"category": "Source",
"owner": "AWS",
"provider": "S3",
"version": "1"
},
"runOrder": 1,
"configuration": {
"PollForSourceChanges": "false",
"S3Bucket": "some-bucket-name",
"S3ObjectKey": "someEnvironment/someZip.zip"
},
"outputArtifacts": [
{
"name": "SourceArtifact"
}
],
"inputArtifacts": [],
"region": "eu-west-1"
},...
]
According to the documentation provided at https://docs.aws.amazon.com/codepipeline/latest/userguide/action-reference-S3.html everything seems to be correct. Removing the region parameter updates the pipeline correctly, but I am unsure what consequences that could have on the update itself.
Any help is appreciated.
Cheers
Sky
If you try to create the pipeline through the AWS console and choose S3 as a source, you will notice the region option is not available. I would say this is a current limitation of the service rather than a chosen design, and a gap in the documentation (however, happy to be proven wrong).
However, you could try including the full S3 bucket ARN, which would include the region. Or take comfort in the fact that any action deployed without a region specified defaults to the same region the pipeline is in, as per the AWS documentation.
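In practice that means the source action can simply drop the region key and keep everything else as-is (bucket and key below are copied from the question):

"actions": [
  {
    "name": "Source",
    "actionTypeId": {
      "category": "Source",
      "owner": "AWS",
      "provider": "S3",
      "version": "1"
    },
    "runOrder": 1,
    "configuration": {
      "PollForSourceChanges": "false",
      "S3Bucket": "some-bucket-name",
      "S3ObjectKey": "someEnvironment/someZip.zip"
    },
    "outputArtifacts": [
      { "name": "SourceArtifact" }
    ],
    "inputArtifacts": []
  }
]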
I need to run a docker container in AWS ECS. I do NOT have access to the source code for the image. This is a private image, from a private repo that I have uploaded to AWS ECR. I have created an AWS ECS Task Definition to run the container inside a service, inside a cluster. The image shows as being up and running but I cannot hit it via my browser. I know that all the network settings are correct because I can hit a simple hello world app that I also deployed to test.
There is also a command I need to run beforehand: docker run --env-file <environment_variables_file> <image>:<tag> rake db:reset && rake db:seed.
According to the instructions for this docker image, the run command for it is: docker run -d --name <my_image_name> --env-file <environment_variables_file> -p 8080:80 <image>:<tag>.
I can run this image locally on my laptop with no issues; deploying it to AWS is the problem.
My question is how do I provide the environment_variables_file to the image? Where do I upload the file and how do I pass it? How do I run the command to init the DB before the image runs?
Since Nov 2020, ECS does support env files (blog post), but they must be hosted on S3:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/taskdef-envfiles.html
Pasting the essentials for reference. Under container definition:
"environmentFiles": [
{
"value": "arn:aws:s3:::s3_bucket_name/envfile_object_name.env",
"type": "s3"
}
]
The task execution role also needs the following permission:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::s3_bucket_name/envfile_object_name.env"
]
},
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::s3_bucket_name"
]
}
]
}
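For reference, a container definition wired up this way might look roughly like the following (the image URI and bucket are placeholders; the container listens on port 80, matching the -p 8080:80 mapping in the question's run command):

{
  "name": "my-private-app",
  "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-private-repo:latest",
  "essential": true,
  "portMappings": [
    { "containerPort": 80, "protocol": "tcp" }
  ],
  "environmentFiles": [
    {
      "value": "arn:aws:s3:::s3_bucket_name/envfile_object_name.env",
      "type": "s3"
    }
  ]
}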
Amazon ECS doesn't support environment variable files. You can set environment variables inside the task definition. For example:
"environment" : [
{ "name" : "string", "value" : "string" },
{ "name" : "string", "value" : "string" }
]
Please read the following instructions for more details.
Update:
AWS now provides a way -
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/taskdef-envfiles.html
I have a series of tasks defined in ECS that run on a recurring schedule. I recently made a minor change to update my task definition in Terraform to change default environment variables for my container (from DEBUG to PRODUCTION):
"environment": [
{"name": "ENVIRONMENT", "value": "PRODUCTION"}
]
I had this task running using the Scheduled Tasks feature of Fargate, set to a rate of every 4 hours. However, after updating my task definition, I noticed that the tasks were no longer being triggered by CloudWatch: my last container log was from several days ago.
I dug deeper into the issue using CloudTrail, and noticed one particular part of the entry for a RunTask event:
"eventTime": "2018-12-10T17:26:46Z",
"eventSource": "ecs.amazonaws.com",
"eventName": "RunTask",
"awsRegion": "us-east-1",
"sourceIPAddress": "events.amazonaws.com",
"userAgent": "events.amazonaws.com",
"errorCode": "InvalidParameterException",
"errorMessage": "TaskDefinition is inactive",
Further down in the log, I noticed that the task definition ECS was attempting to run was
"taskDefinition": "arn:aws:ecs:us-east-1:XXXXX:task-
definition/important-task-name:2",
However, in my ECS task definitions, the latest version of important-task-name was 3. So it looks like the events are not triggering because I am using an "inactive" version of my task definition.
Is there any way for me to schedule tasks in AWS Fargate without having to manually go through the console and stop/restart/update each cluster's scheduled tasks? Isn't there any way to simply ask CloudWatch to pull the latest active task definition?
You can use CloudWatch Event Rules to control scheduled tasks and whenever you update a task definition you can also update your rule. Say you have two files:
myRule.json
{
"Name": "run-every-minute",
"ScheduleExpression": "cron(0/1 * * * ? *)",
"State": "ENABLED",
"Description": "a task that will run every minute",
"RoleArn": "arn:aws:iam::${IAM_NUMBER}:role/ecsEventsRole",
"EventBusName": "default"
}
myTargets.json
{
"Rule": "run-every-minute",
"Targets": [
{
"Id": "scheduled-task-example",
"Arn": "arn:aws:ecs:${REGION}:${IAM_NUMBER}:cluster/mycluster",
"RoleArn": "arn:aws:iam::${IAM_NUMBER}:role/ecsEventsRole",
"Input": "{\"containerOverrides\":[{\"name\":\"myTask\",\"environment\":[{\"name\":\"ENVIRONMENT\",\"value\":\"production\"},{\"name\":\"foo\",\"value\":\"bar\"}]}]}",
"EcsParameters": {
"TaskDefinitionArn": "arn:aws:ecs:${REGION}:${IAM_NUMBER}:task-definition/myTaskDefinition",
"TaskCount": 1,
"LaunchType": "FARGATE",
"NetworkConfiguration": {
"awsvpcConfiguration": {
"Subnets": [
"subnet-xyz1",
"subnet-xyz2",
],
"SecurityGroups": [
"sg-xyz"
],
"AssignPublicIp": "ENABLED"
}
},
"PlatformVersion": "LATEST"
}
}
]
}
Now, whenever there's a new revision of myTaskDefinition you may update your rule, e.g.:
aws events put-rule --cli-input-json file://myRule.json --region $REGION
aws events put-targets --cli-input-json file://myTargets.json --region $REGION
echo 'done'
But of course, replace IAM_NUMBER and REGION with your own account number and region.
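If you prefer to pin the exact revision in myTargets.json rather than rely on the unversioned ARN, a small extra step in the deploy script (this assumes jq is installed) can look up the latest active revision and patch the file before calling put-targets:

# Look up the latest ACTIVE revision of the task definition family
LATEST_ARN=$(aws ecs describe-task-definition \
  --task-definition myTaskDefinition \
  --region $REGION \
  --query 'taskDefinition.taskDefinitionArn' \
  --output text)

# Patch the targets file with that revision and push it
jq --arg arn "$LATEST_ARN" \
  '.Targets[0].EcsParameters.TaskDefinitionArn = $arn' \
  myTargets.json > myTargets.patched.json
aws events put-targets --cli-input-json file://myTargets.patched.json --region $REGION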
Cloud Map seems like a solution for these types of problems.
https://aws.amazon.com/about-aws/whats-new/2018/11/aws-fargate-and-amazon-ecs-now-integrate-with-aws-cloud-map/