Old task definitions becoming INACTIVE when creating new definitions with Serverless - amazon-web-services

I am using Serverless to create task definitions, which I use with Step Functions.
When I update a task definition in serverless.yml and deploy, the previous task definition revisions become INACTIVE and I can no longer use them.
How can I keep the previous task definition revisions active?
Below is my serverless.yml:
service: my-service
useDotenv: true

provider:
  name: aws
  runtime: python3.8
  stage: ${opt:stage, 'alpha'}
  region: ${opt:region, 'ap-south-1'}
  profile: ${opt:profile, 'default'}
  memorySize: 1280
  timeout: 600

plugins:
  - serverless-step-functions

package:
  individually: true
  patterns:
    - '!./**'

stepFunctions:
  stateMachines:
    ...
    ...
  validate: true # enable pre-deployment definition validation (disabled by default)

resources:
  Resources:
    fargateProjectCluster:
      Type: AWS::ECS::Cluster
      Properties:
        ClusterName: ${self:service}-${self:provider.stage}
    fargateTaskDefinition:
      Type: AWS::ECS::TaskDefinition
      Properties:
        ContainerDefinitions:
          - Name: new_definition
            Image: ...
            Privileged: false
            ReadonlyRootFilesystem: false
            Essential: true
            LogConfiguration:
              ...
              ...
        Cpu: ${env:CPU_UNITS}
        ExecutionRoleArn: !GetAtt ecsTaskExecutionRole.Arn
        Family: "${self:service}-${self:provider.stage}"
        Memory: ${env:MEMORY_UNITS}
        NetworkMode: "awsvpc"
        RuntimePlatform:
          OperatingSystemFamily: LINUX
          CpuArchitecture: X86_64
        RequiresCompatibilities:
          - FARGATE
        TaskRoleArn: !GetAtt fargateTaskRole.Arn
    fargateCloudWatchLogsGroup: ...
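For context: task definitions are immutable, so CloudFormation *replaces* the `AWS::ECS::TaskDefinition` resource on every change and then deregisters the replaced revision, which is what marks it INACTIVE. One documented way to keep replaced revisions registered is the `UpdateReplacePolicy` resource attribute. A minimal sketch (the property names shown are placeholders for the full definition above):

```yaml
resources:
  Resources:
    fargateTaskDefinition:
      Type: AWS::ECS::TaskDefinition
      # Retain tells CloudFormation not to deregister the replaced revision,
      # so previous revisions stay ACTIVE after each deploy.
      UpdateReplacePolicy: Retain
      # Optionally also keep revisions registered if the stack is deleted.
      DeletionPolicy: Retain
      Properties:
        Family: "${self:service}-${self:provider.stage}"
        # ... remaining properties as above ...
```

Note that retained revisions are no longer managed by the stack, so they must be deregistered manually if you ever want to clean them up.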

Related

Error: Unsupported apiGateway.restApiId object

In our project we are using Serverless for AWS. After adding the serverless-domain-manager plugin we are experiencing a strange issue: Error: Unsupported apiGateway.restApiId object
custom:
  customDomain:
    domainName: test.test.com
    stage: ${self:provider.stage}
    basePath: plugin
    certificateName: test.test.com
    createRoute53Record: true
    createRoute53IPv6Record: false
    endpointType: REGIONAL
    securityPolicy: tls_1_2
    apiType: rest
    autoDomain: true
    preserveExternalPathMappings: true

provider:
  name: aws
  region: eu-central-1
  stage: prod
  runtime: nodejs16.x
  logRetentionInDays: 7
  apiGateway:
    restApiId: !Ref EmployeeApiGateway
    restApiRootResourceId:
      Fn::GetAtt:
        - EmployeeApiGateway
        - RootResourceId
We are trying to add the domain-manager mapping.

unable to create OpenSearch resource using serverless template

I'm building a web app for which I need to utilize AWS resources. I am able to create the rest of the resources with my serverless.yml file using a CloudFormation template, but I am unable to create the OpenSearch resource. Where am I going wrong?
This is my serverless.yml file
service: localstack-lambda

plugins:
  # - serverless-plugin-warmup
  - serverless-localstack

custom:
  localstack:
    debug: true
    stages:
      - local
      - dev
    endpointFile: localstack_endpoints.json

# frameworkVersion: "2"

provider:
  name: aws
  runtime: nodejs12.x

# functions are pretty important
functions:
  uploadFiles:
    handler: s3handler.uploadFiles
    events:
      - http:
          path: uploadFiles
          method: any
  listFiles:
    handler: s3handler.listFiles
    events:
      - http:
          path: listFiles
          method: any
  osHandler:
    handler: OpenSearchHandler.osHandler
    events:
      - http:
          path: osHandler
          method: any

resources: # CloudFormation template syntax from here on
  Resources:
    S3Bucket:
      Type: "AWS::S3::Bucket"
      DeletionPolicy: Retain
      Properties:
        BucketName: testbucket
    OpenSearchServiceDomain:
      Type: AWS::OpenSearchService::Domain
      Properties:
        DomainName: "myopensearch_1"
        EngineVersion: "OpenSearch_1.0"
        ClusterConfig:
          DedicatedMasterEnabled: true
          InstanceCount: "2"
          ZoneAwarenessEnabled: true
          InstanceType: "m3.medium.search"
          DedicatedMasterType: "m3.medium.search"
          DedicatedMasterCount: "3"
        EBSOptions:
          EBSEnabled: true
          Iops: "0"
          VolumeSize: "20"
          VolumeType: "gp2"
Thank you for your help in advance :)
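One thing worth checking (an editorial observation, not from the thread): OpenSearch domain names must be 3-28 characters and may contain only lowercase letters, digits, and hyphens, so a name with an underscore such as "myopensearch_1" will be rejected. A hedged sketch of a conforming domain name:

```yaml
OpenSearchServiceDomain:
  Type: AWS::OpenSearchService::Domain
  Properties:
    # hyphen instead of underscore; lowercase letters, digits, hyphens only
    DomainName: "myopensearch-1"
    EngineVersion: "OpenSearch_1.0"
    # ... remaining properties as above ...
```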

How can I deploy a specific AWS Lambda function to a specific Stage

I have two AWS Lambda functions and 3 stages: dev, test, and prod.
I want to deploy a specific Lambda function to only dev and test, but not prod.
That is, the trial Lambda function should exist only in the test and dev stages, not in the prod stage.
How can I achieve that? Here is my serverless.yml:
service:
  name: demo-app

# Add the serverless-webpack plugin
plugins:
  - serverless-webpack
  - serverless-offline

provider:
  name: aws
  runtime: nodejs12.x
  timeout: 30
  stage: dev
  region: us-west-2
  profile: serverless-admin

custom:
  region: ${self:provider.region}
  stage: ${opt:stage, self:provider.stage}
  prefix: ${self:service}-${self:custom.stage}
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true

functions:
  toggle:
    handler: src/functions/unleash-toggle/handler.main
    timeout: 900
    events:
      - http:
          path: /toggle
          method: POST
  trial:
    handler: src/functions/city/handler.main
    timeout: 900
    events:
      - http:
          path: /trial
          method: POST

resources:
  Resources:
    taskTokenTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:service}-${self:custom.stage}-tokenTable
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
After doing some research I found out this can be done using the serverless-plugin-ifelse plugin, by defining your conditions under the custom block.
You can see the same in the Serverless docs; the plugin is available on npm.
custom:
  serverlessIfElse:
    - If: '"${self:custom.stage}" == "prod"'
      Exclude:
        - functions.<functionName>

Complete serverless.yml file:
service:
  name: demo-app

# Add the serverless-webpack plugin
plugins:
  - serverless-webpack
  - serverless-plugin-ifelse
  - serverless-offline

provider:
  name: aws
  runtime: nodejs12.x
  timeout: 30
  stage: dev
  region: us-west-2
  profile: serverless-admin

custom:
  region: ${self:provider.region}
  stage: ${opt:stage, self:provider.stage}
  prefix: ${self:service}-${self:custom.stage}
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true
  serverlessIfElse:
    - If: '"${self:custom.stage}" == "prod"'
      Exclude:
        - functions.trial

functions:
  toggle:
    handler: src/functions/unleash-toggle/handler.main
    timeout: 900
    events:
      - http:
          path: /toggle
          method: POST
  trial:
    handler: src/functions/city/handler.main
    timeout: 900
    events:
      - http:
          path: /trial
          method: POST
The same thing can be achieved with another plugin, serverless-plugin-conditional-functions.
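With that plugin the condition lives inline on each function instead of in the custom block. A sketch, assuming the plugin's `enabled` property syntax:

```yaml
plugins:
  - serverless-plugin-conditional-functions

functions:
  trial:
    handler: src/functions/city/handler.main
    # deploy this function only when the stage is not prod
    enabled: '"${self:custom.stage}" != "prod"'
    events:
      - http:
          path: /trial
          method: POST
```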

Make my DBCluster DependOn serverless vpc plugin

I want to create a VPC with a DBCluster in it using the serverless-vpc-plugin. If I do it in two steps, first the VPC and then the cluster, everything works. But if I do it simultaneously, serverless fails, complaining that the DBSubnetGroup has not been created yet.
I tried making the DBCluster DependsOn: VPC, but that did not help. Here are the relevant parts:
service: vpn
frameworkVersion: '2'

custom:
  stage: ${opt:stage, self:provider.stage}
  region: ${opt:region, self:provider.region}
  vpcConfig:
    createNatGateway: 1
    createNetworkAcl: true
    subnetGroups:
      - rds

provider:
  name: aws
  runtime: nodejs14.x
  lambdaHashingVersion: 20201221

resources:
  Resources:
    ClusterSecret:
      Type: AWS::SecretsManager::Secret
      Properties:
        [...]
    AuroraDBCluster:
      Type: AWS::RDS::DBCluster
      DependsOn: VPC
      Properties:
        DatabaseName: [...]
        DBClusterIdentifier: [...]
        DBSubnetGroupName: ${self:service}-${self:custom.stage}
        Engine: aurora-postgresql
        EngineMode: serverless
        EngineVersion: "10.14"
        MasterUsername: [...]
        MasterUserPassword: [...]

plugins:
  - serverless-vpc-plugin
  - serverless-offline
Using DependsOn: RDSSubnetGroup instead of DependsOn: VPC did the job.
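In other words, the cluster has to wait for the subnet group resource that serverless-vpc-plugin generates, not for the VPC itself. A sketch of the corrected resource (non-essential properties elided):

```yaml
AuroraDBCluster:
  Type: AWS::RDS::DBCluster
  # wait for the DBSubnetGroup generated by serverless-vpc-plugin
  # (logical ID RDSSubnetGroup when an rds subnet group is configured)
  DependsOn: RDSSubnetGroup
  Properties:
    DBSubnetGroupName: ${self:service}-${self:custom.stage}
    Engine: aurora-postgresql
    EngineMode: serverless
```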

CloudFormation template stuck at CREATE_IN_PROGRESS when creating ECS service

I'm creating an ECS service in CloudFormation.
I receive no error; it just sits at CREATE_IN_PROGRESS in the logical ID = Service phase.
Here's my CF template (the ECS cluster and some other resources above it were cut out for relevance).
TaskDefinition:
  Type: 'AWS::ECS::TaskDefinition'
  Properties:
    Family: flink
    Memory: 2048
    Cpu: 512
    NetworkMode: awsvpc
    RequiresCompatibilities:
      - FARGATE
    ContainerDefinitions:
      - Name: flink-jobmanager
        Image: ACCOUNT_ID.dkr.ecr.us-west-1.amazonaws.com/teststack-flink:latest
        Essential: true
        PortMappings:
          - ContainerPort: 8081
            HostPort: 8081
        LogConfiguration:
          LogDriver: awslogs
          Options:
            awslogs-group: ecs/flink-stream
            awslogs-region: !Ref AWS::Region
            awslogs-stream-prefix: ecs
        Command:
          - jobmanager
      - Name: flink-taskmanager
        Image: ACCOUNT_ID.dkr.ecr.us-west-1.amazonaws.com/teststack-flink:latest
        Essential: true
        Command:
          - taskmanager
    ExecutionRoleArn: !Sub arn:aws:iam::${AWS::AccountId}:role/ecsTaskExecutionRole
    Volumes: []
    TaskRoleArn: !Sub arn:aws:iam::${AWS::AccountId}:role/ecsTaskExecutionRole
    Tags:
      - Key: EnvironmentStage
        Value: !Ref EnvironmentStage

Service:
  Type: 'AWS::ECS::Service'
  Properties:
    ServiceName: !Join ['', [!Ref EnvironmentStage, '-', !Ref 'AWS::StackName']]
    Cluster: !Join ['', ['arn:aws:ecs:', !Ref 'AWS::Region', ':', !Ref 'AWS::AccountId', ':cluster/', !Ref ECSCluster]]
    LaunchType: FARGATE
    DeploymentConfiguration:
      MaximumPercent: 200
      MinimumHealthyPercent: 75
    TaskDefinition: !Join ['', ['arn:aws:ecs:', !Ref 'AWS::Region', ':', !Ref 'AWS::AccountId', ':task-definition/', !Ref TaskDefinition]]
    # TaskDefinition: !Ref TaskDefinition
    DesiredCount: 1
    DeploymentController:
      Type: ECS
    EnableECSManagedTags: true
    PropagateTags: TASK_DEFINITION
    SchedulingStrategy: REPLICA
    NetworkConfiguration:
      AwsvpcConfiguration:
        AssignPublicIp: ENABLED
        SecurityGroups:
          - !Ref FlinkSecurityGroup
        Subnets:
          - subnet-466da11c
          - subnet-6fe65509
    Tags:
      - Key: EnvironmentStage
        Value: !Ref EnvironmentStage
The containers both deploy to the cluster when I set it up manually.
After checking Clusters -> CLUSTER_NAME -> Tasks -> Stopped, I saw the following:

Status reason: CannotStartContainerError: Error response from daemon:
failed to initialize logging driver: failed to create Cloudwatch log stream:
ResourceNotFoundException: The specified log group does not exist.

The issue was simply that I had forgotten to add the creation of a log group to my CF template. So I added this:
LogGroup:
  Type: AWS::Logs::LogGroup
  Properties:
    LogGroupName: !Sub ${EnvironmentStage}-service-flink
Then I modified the LogConfiguration in the TaskDefinition to this:
LogConfiguration:
  LogDriver: awslogs
  Options:
    awslogs-group: !Ref LogGroup
    awslogs-region: !Ref 'AWS::Region'
    awslogs-stream-prefix: flink
Now the CF template works like a charm :)