What are the Outputs section and ImportValue function in CloudFormation? - amazon-web-services

I'm starting my AWS journey and today got a chance to create a CloudFormation stack for creating a file system on AWS. I was able to spin up the file system, however I have a few doubts about some values and functions/attributes, as those were given by someone on the team who is now on a long vacation, so I'm asking here for help.
Below is the CloudFormation stack, which works just fine.
CloudFormation stack:
---
Description: "Create FSxN filesystem"
Resources:
  MytestCluster:
    Type: "AWS::FSx::FileSystem"
    Properties:
      FileSystemType: "ONTAP"
      StorageCapacity: "1024"
      SubnetIds: ['subnet-0f349h6eee098b0pg']
      OntapConfiguration:
        DeploymentType: "SINGLE_AZ_1"
        PreferredSubnetId: "subnet-0f349h6eee098b0pg"
        ThroughputCapacity: "128"
        FsxAdminPassword: '{{resolve:secretsmanager:fsx_admin_password}}'
      SecurityGroupIds:
        - !ImportValue 'KPCL-FSxforONTAPsgID'
      Tags:
        - Key: "Backup"
          Value: "None"
  MytestSVM:
    Type: "AWS::FSx::StorageVirtualMachine"
    Metadata:
      cfn-lint:
        config:
          ignore_checks:
            - E3001
    Properties:
      FileSystemId: !Ref MytestCluster
      Name: svmdemo
      RootVolumeSecurityStyle: "UNIX"
      SvmAdminPassword: '{{resolve:secretsmanager:svm_admin_password}}'
      Tags:
        - Key: "Backup"
          Value: "None"
  fsxndemovolume:
    Type: "AWS::FSx::Volume"
    Metadata:
      cfn-lint:
        config:
          ignore_checks:
            - E3001
    Properties:
      Name: myTestVol001
      OntapConfiguration:
        JunctionPath: /myVolume001
        SizeInMegabytes: 1536000
        StorageEfficiencyEnabled: true
        StorageVirtualMachineId: !Ref MytestSVM
      VolumeType: "ONTAP"
      Tags:
        - Key: "Backup"
          Value: "None"
Outputs:
  FileSystemId:
    Value: !Ref "MytestCluster"
  SvmId:
    Value: !Ref "MytestSVM"
...
What I would like to understand:
I have a few doubts which I tried to clear up from the documentation but couldn't comprehend well, hence I'm asking for expert suggestions.
First one: under SecurityGroupIds below, what does - !ImportValue mean here?
SecurityGroupIds:
  - !ImportValue 'KPCL-FSxforONTAPsgID'
Second one: what does Outputs mean here?
Outputs:
  FileSystemId:
    Value: !Ref "MytestCluster"
  SvmId:
    Value: !Ref "MytestSVM"
Last one: what is ignore_checks: and its value - E3001 here?
ignore_checks:
  - E3001
Please help me to understand.

First one: under SecurityGroupIds below, what does - !ImportValue mean here?
The following:
SecurityGroupIds:
  - !ImportValue 'KPCL-FSxforONTAPsgID'
means that in the current stack you are importing a security group ID that was exported by some other stack.
This export/import functionality allows you to decouple and reuse your infrastructure. Instead of having everything in one stack, you can have one stack with network resources (it's a common setup), such as security groups, subnets and VPCs, and other stacks that actually use those resources.
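A minimal sketch of the two sides (the security group resource and the VPC ID below are illustrative, not taken from your environment): the network stack exports the ID under the name that your stack then imports.
# network stack (hypothetical exporting stack)
Resources:
  FSxSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: "Security group for FSx for ONTAP"
      VpcId: vpc-0123456789abcdef0   # placeholder VPC ID
Outputs:
  FSxSecurityGroupId:
    Value: !Ref FSxSecurityGroup
    Export:
      Name: KPCL-FSxforONTAPsgID     # this export name is what !ImportValue looks up
# your stack (importing side), as in your template:
#   SecurityGroupIds:
#     - !ImportValue 'KPCL-FSxforONTAPsgID'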
Second one: what does Outputs mean here?
Outputs allow you to return values from your stacks. You can think of them as a kind of return value, like return values from functions in common programming languages.
Output values have lots of use cases. For example, they can be exported and then imported in other stacks. They can also be queried programmatically, in case your stacks are part of a CI/CD pipeline or some other application. They can also be passed as input parameters to other stacks, again as part of a CI/CD pipeline; this is an alternative to the export/import functionality.
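For instance, if you also wanted the FileSystemId output of your stack to be importable by other stacks, you could add an Export name to it (the name below is illustrative):
Outputs:
  FileSystemId:
    Description: ID of the FSxN file system
    Value: !Ref MytestCluster
    Export:
      Name: KPCL-FSxN-FileSystemId   # hypothetical export name; other stacks could then use !ImportValue 'KPCL-FSxN-FileSystemId'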
Last one: what is ignore_checks: and its value - E3001 here?
This is extra metadata that CloudFormation itself does not act on. It is a hint to cfn-lint (the CloudFormation Linter, also available as the cfn-lint-visual-studio-code extension for Visual Studio Code) to skip rule E3001 for that resource when it checks the template.

Outputs in a stack (when given an Export name) create exports in CloudFormation, which can be listed in the AWS Console; the !ImportValue directive is used to reference an export from another stack.
The cfn-lint section in Metadata is used to silence errors in the CloudFormation Linter tool and has no impact on the resource itself.

Related

How can I conditionally deploy a ContainerDefinition inside a TaskDefinition with CloudFormation?

I'm using CloudFormation to deploy some resources into AWS. What I want to do is, based on a condition, deploy (or not) a ContainerDefinition inside a TaskDefinition.
Type: 'AWS::ECS::TaskDefinition'
Properties:
  RequiresCompatibilities:
    - FARGATE
  # ---Other unimportant properties---
  ContainerDefinitions:
    - Name: someServiceName
      # ---Unimportant---
    - Name: ServiceToDeployBasedOnCondition
      Condition: IsProduction # (this is defined in Conditions)
      # ---Unimportant---
This is what I tried and I get "Resource handler returned message: "Model validation failed (#: extraneous key [Condition] is not permitted".
How can I bypass this? Is it even possible?
Nice approach. Utilizing Conditions is surely the way to fulfil your requirement here. You are receiving a model validation error because a Condition has to be placed at the top level of a resource, not inside a property: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html
# Example obtained from the docs
Resources:
  EC2Instance:
    Type: 'AWS::EC2::Instance'
    Properties:
      ImageId: ami-0ff8a91507f77f867
  MountPoint:
    Type: 'AWS::EC2::VolumeAttachment'
    Condition: CreateProdResources
    Properties:
      InstanceId: !Ref EC2Instance
      VolumeId: !Ref NewVolume
      Device: /dev/sdh
One "quick and dirty" way to solve your issue would be to have two task definitions (one created with Condition: IsProduction and one without). However, if you want to solve your problem in a more elegant way (using a single task definition), have a look at the CloudFormation condition functions, in particular Fn::If, sketched below: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-conditions.html#intrinsic-function-reference-conditions-if
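A minimal sketch of that idea, assuming a condition named IsProduction and placeholder container names/images, is to let Fn::If choose between two ContainerDefinitions lists:
TaskDefinition:
  Type: 'AWS::ECS::TaskDefinition'
  Properties:
    RequiresCompatibilities:
      - FARGATE
    ContainerDefinitions: !If
      - IsProduction
      # production: base container plus the conditional one
      - - Name: someServiceName
          Image: example/some-service:latest          # placeholder image
        - Name: ServiceToDeployBasedOnCondition
          Image: example/conditional-service:latest   # placeholder image
      # non-production: only the base container
      - - Name: someServiceName
          Image: example/some-service:latest          # placeholder image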

Using a Resource from a Nested Stack in Another Nested Stack with DependsOn

I have been refactoring what has become a rather large stack because it is brushing up against size limits for CloudFormation scripts on AWS. In doing so I have had to resolve some dependencies (typically using Outputs) but I've run into a situation that I have never run into before...
How do I use a resource created in one nested stack (A) in another nested stack (B) when using DependsOn?
This question looks like a duplicate, but the answer there does not fit because it doesn't actually resolve the issue I have; it takes a different approach based on that particular user's needs.
Here is the resource in nested stack A:
EndpointARestApi:
  Type: AWS::ApiGateway::RestApi
  Properties:
    Body:
      Fn::Transform:
        Name: 'AWS::Include'
        Parameters:
          Location: !Join ['/', [ 's3:/', !Ref SharedBucketName, !Ref WorkspacePrefix, 'endpoint.yaml' ]]
And here is the DependsOn request in stack B:
EndpointUserPoolResourceServer:
  Type: Custom::CognitoUserPoolResourceServer
  DependsOn:
    - EndpointARestApi
    - CustomResource ## this resource is in the same stack and resolves properly
This occurs with one other resource I have in this stack so I am hoping that I can do this easily. If not, I believe I would have to refactor some more.
As suggested in the comments, I moved the DependsOn statement up to the primary CFN template, onto the resource requiring the dependency, and made sure the dependency was on the other nested-stack resource, not on the resource inside the nested stack, like this:
Primary
  ResourceA
  ResourceB
    DependsOn: ResourceA
Which ends up looking like this in the CloudFormation script:
EndpointUserPoolResourceServer:
  Type: "AWS::CloudFormation::Stack"
  DependsOn:
    - EndpointARestApiResource
  Properties:
    Parameters:
      AppName: !Ref AppName
      Environment: !Ref Environment
      DeveloperPrefix: !Ref DeveloperPrefix
      DeployPhase: !Ref DeployPhase
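If stack B also needs the actual RestApi ID (not just the ordering), a common pattern is to expose it as an Output of nested stack A and pass it into nested stack B as a parameter from the primary template; this creates the dependency implicitly as well. A minimal sketch, assuming nested stack A declares an Output with the hypothetical name EndpointARestApiId:
# in the primary template
EndpointUserPoolResourceServer:
  Type: "AWS::CloudFormation::Stack"
  Properties:
    Parameters:
      EndpointARestApiId: !GetAtt EndpointARestApiResource.Outputs.EndpointARestApiId  # hypothetical output name
      AppName: !Ref AppName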

How do I get the ARN of an AWS Lambda function for a CloudFormation specific resource property?

I can't seem to get Ref or Fn::GetAtt to return a valid value for use with setting up a resource.
serverless.yml
...etc...
functions:
  bearerTokenAuthentication:
    handler: app.bearerTokenAuthentication
    name: ${self:service}-auth-bearer

resources:
  - ${file(./serverless_resources.yml)}
serverless_resources.yml
Resources:
  ApiGateway:
    Type: AWS::ApiGateway::RestApi
    Properties:
      Name: restapi-${self:provider.stage}
      Description: Endpoints
      ApiKeySourceType: HEADER # (to read the API key from the X-API-Key header of a request)
  ApiGatewayBearerAuthorizer:
    Type: AWS::ApiGateway::Authorizer
    Properties:
      Type: token
      IdentitySource: method.request.header.Authorization
      Name: BearerAuthorization
      AuthorizerResultTtlInSeconds: 300
      AuthorizerUri: !Join #arn:aws:apigateway:${self:provider.region}:lambda:path/${self:functions.bearerTokenAuthentication.name}
        - ''
        - - 'arn:aws:apigateway:'
          - !Ref 'AWS::Region'
          - ':lambda:path/2015-03-31/functions/'
          - !GetAtt
            - bearerTokenAuthentication # also tried !Ref bearerTokenAuthentication and '${self:functions.bearerTokenAuthentication.name}'
            - Arn
          - /invocations
      RestApiId: !Ref ApiGateway
No matter what I do, GetAtt cannot find the ARN for the Lambda function declared in bearerTokenAuthentication. I just keep getting this error:
Error: The CloudFormation template is invalid: Template error: instance of Fn::GetAtt references undefined resource bearerTokenAuthentication
... or if trying Ref ...
Error: The CloudFormation template is invalid: Template format error: Unresolved resource dependencies [bearerTokenAuthentication] in the Resources block of the template
Is it possible to reference Lambda ARNs from the resources section? It seems, from the error messages, that it is looking for "resource" names. I always thought the lambda function declaration was also considered a resource (besides the obvious Resources: block, of course); perhaps I am misunderstanding something.
I figured it out. I had a NodeJS project and was using the "serverless" command line (sls) to deploy using serverless.yml. It turns out it creates a .serverless sub-directory with some files in it. One of them is a compiled template for AWS CloudFormation called cloudformation-template-update-stack.json. It appears that the utility likes to mangle the names by making the first character uppercase and appending "LambdaFunction" to all the function names (for whatever reason). In this case, bearerTokenAuthentication was renamed to BearerTokenAuthenticationLambdaFunction (the actual resource name). After looking into the compiled template it all became clear. The utility also seems to figure out the dependencies as well, which was good to know. This was the final result:
AuthorizerUri: !Join
  - ''
  - - 'arn:aws:apigateway:'
    - !Ref 'AWS::Region'
    - ':lambda:path/2015-03-31/functions/'
    - !GetAtt [ BearerTokenAuthenticationLambdaFunction, Arn ]
    - '/invocations'
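As a side note, the same URI can usually be written more compactly with Fn::Sub; a minimal sketch, assuming the same mangled logical ID:
AuthorizerUri: !Sub
  - 'arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${LambdaArn}/invocations'
  - LambdaArn: !GetAtt BearerTokenAuthenticationLambdaFunction.Arn  # same logical ID as in the !Join version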
Other "Gotchas":
DO NOT define the AWS::ApiGateway::RestApi resource (like I did in my question) if you are also using events mappings on the functions, otherwise you will get 2 APIs created. events entries automatically cause an API to be created called "ApiGatewayRestApi" - which is the resource name generated by the sls utility. The last line of the last file was changed to this:
RestApiId: !Ref ApiGatewayRestApi
And my ApiGateway: section was removed.
Credit goes to this post which helped make it more clear to me what was really going on: https://forum.serverless.com/t/fixed-how-do-i-get-reference-api-gateway-restapi-id-in-serverless-yml/3397/5
Previous Answer:
I found another way as well. This is what I resorted to doing until I found the proper (shorter) way. I was able to pull the lambda name and manually stitch together the required URI:
AuthorizerUri: !Join
  - ''
  - - 'arn:aws:apigateway:'
    - !Ref 'AWS::Region'
    - ':lambda:path/2015-03-31/functions/arn:aws:lambda:'
    - !Ref 'AWS::Region'
    - ':'
    - !Ref 'AWS::AccountId'
    - ':function:'
    - '${self:functions.bearerTokenAuthentication.name}'
    - '/invocations'
I hope that helps save someone some time trying to understand the complicated .yml files. I also cannot understand why it is so hard to make this simple to understand. All someone had to say (for me) was: "sls takes a 'serverless.yml' file, and optional include files (such as declarations specific to the cloud system itself, like AWS CloudFormation), and generates a template JSON file that is used by the target cloud services system to deploy your solution. Also, the names you give may get mangled, so check the template." I'm also surprised that no one has created an editor to make all this easier by now - perhaps something I'll look into myself one day. ;)
You can always go to the deployed Lambda and look for the aws:cloudformation:logical-id tag. That way you get the logical ID you should be using in your serverless.yml. (I don't like these behind-the-scenes name tricks either...)

ECS/ECR: is it common practice to have one repository per image (and associated versions)?

So I'm new to ECS/ECR, but it seems like I have to name (i.e. tag) the image after the repository name in order to push that image to the repository.
So my question is: is it intended that the user (me) would ONLY be pushing a single image, and any associated versions of that image, to a single repository in ECR, thus creating ANOTHER repository if I need to push a completely different image?
Basically, one repo for nginx, one repo for PostgreSQL, etc.
Yes. And also, possibly, no.
You push images to ECR. How you configure your image is up to you. Ideally, you'd have an image with a single responsibility, but this is your decision.
If you have multiple images, you push to multiple ECR repositories. If you have a single image doing many things, you can get away with a single repository.
You can also push multiple images to the same repository with creative use of tags (e.g. having the "image name or flavour" in the tag, using your own naming convention).
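For the one-repository-per-image approach, a minimal CloudFormation sketch (the repository names are illustrative) could look like this:
Resources:
  nginxRepository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: nginx        # images pushed as nginx:<version>
  postgresRepository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: postgresql   # images pushed as postgresql:<version>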
It is recommended to push images of the same type with a version number in the tag.
For example: your-repo:1.1, your-repo:1.2.
If you push an image with a tag that already exists in the ECR repository, then your old image will be replaced with the new image you are pushing.
It depends on how your application works. It is always advised to keep logically separate containers separate.
For example, the database image with a persistent volume: if the database container dies, it does not affect your data.
In our case, we wanted to have one repository for all our services, because otherwise we would have to create and maintain ECR infrastructure for each single service.
What we did was basically create one shared repository for all services (CloudFormation in this case):
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  registryName:
    Type: String
    Default: services
Resources:
  ecr:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: !Ref registryName
      ImageTagMutability: MUTABLE
… and then when building the services, we would use SERVICENAME_VERSION as the convention for the actual image/version:
#!/bin/bash
set -e
export AWS_ACCOUNT="123456789000"
export AWS_DEFAULT_REGION="eu-central-1"
export SERVICE_NAME="demo-service"
export SERVICE_VERSION="${SERVICE_VERSION:-latest}"
export IMAGE_NAME="$AWS_ACCOUNT.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/services:${SERVICE_NAME}_${SERVICE_VERSION}"
aws ecr get-login-password | docker login --username AWS --password-stdin "$AWS_ACCOUNT.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
docker build -t $IMAGE_NAME .
docker push $IMAGE_NAME
(Simplified, but works.)
UPDATE:
In a real-world example, when you want to pull images into an ECS cluster that is placed in a VPC, you will need to set up VPC endpoints for ECR. The CloudFormation code for this looks something like this:
privateLinkEcrApi:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    ServiceName: !Sub "com.amazonaws.${AWS::Region}.ecr.api"
    PrivateDnsEnabled: true
    VpcId: !ImportValue vpc
    SecurityGroupIds:
      - !ImportValue albSecurityGroup
    SubnetIds:
      - !ImportValue publicSubnetA
      - !ImportValue publicSubnetB
    VpcEndpointType: Interface
privateLinkEcrDkr:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    ServiceName: !Sub "com.amazonaws.${AWS::Region}.ecr.dkr"
    PrivateDnsEnabled: true
    VpcId: !ImportValue vpc
    SecurityGroupIds:
      - !ImportValue albSecurityGroup
    SubnetIds:
      - !ImportValue publicSubnetA
      - !ImportValue publicSubnetB
    VpcEndpointType: Interface
privateLinkEcrLogs:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    ServiceName: !Sub "com.amazonaws.${AWS::Region}.logs"
    PrivateDnsEnabled: true
    VpcId: !ImportValue vpc
    SecurityGroupIds:
      - !ImportValue albSecurityGroup
    SubnetIds:
      - !ImportValue publicSubnetA
      - !ImportValue publicSubnetB
    VpcEndpointType: Interface
privateLinkEcrS3:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    ServiceName: !Sub "com.amazonaws.${AWS::Region}.s3"
    VpcId: !ImportValue vpc
    SecurityGroupIds:
      - !ImportValue albSecurityGroup
    SubnetIds:
      - !ImportValue publicSubnetA
      - !ImportValue publicSubnetB
    VpcEndpointType: Interface
privateLinkEcrS3Gw:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    ServiceName: !Sub "com.amazonaws.${AWS::Region}.s3"
    VpcId: !ImportValue vpc
    RouteTableIds:
      - !ImportValue publicRouteTable
      - !ImportValue privateRouteTableA
      - !ImportValue privateRouteTableB
    VpcEndpointType: Gateway
(NB: You will have to adapt this code as the actual VPC, subnets etc. are set up in a different template, and the actual configuration depends very much on your own environment. But this should get you on the right track.)

AWS/CloudFormation: How to export/import a parameter value to another stack (YAML)

I have a simple question. I am testing export/import of values in CloudFormation.
The question is: how do I create resources based on conditions linked to another stack?
I think I should import the value from the other stack, but I don't know how.
This is my "export-test-stack"
AWSTemplateFormatVersion: '2010-09-09'
Description: Export
Parameters:
  EnvType:
    Description: How many Instances you want to deploy?
    Default: two
    Type: String
    AllowedValues:
      - two
      - three
    ConstraintDescription: must specify number of deployed Instances
Conditions:
  Deploy3EC2: !Equals [ !Ref EnvType, three ]
Resources:
  Ec2Instance1:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      SecurityGroupIds:
        - sg-5d011027
      ImageId: ami-0b33d91d
  Ec2Instance2:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      SecurityGroupIds:
        - sg-5d011027
      ImageId: ami-0b33d91d
  Ec2Instance3:
    Type: AWS::EC2::Instance
    Condition: Deploy3EC2
    Properties:
      InstanceType: t2.micro
      SecurityGroupIds:
        - sg-5d011027
      ImageId: ami-0b33d91d
Outputs:
  EC2Conditions:
    Description: Number of deployed instances
    Value: !Ref EnvType
    Export:
      Name: blablabla
This is my "import-test-stack"
AWSTemplateFormatVersion: '2010-09-09'
Description: Import
Resources:
  Ec2Instance1:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      SecurityGroupIds:
        - sg-7309dd0a
      ImageId: ami-70edb016
  Ec2Instance2:
    Type: AWS::EC2::Instance
    Condition: ?????? <<<<<<<<<
    Properties:
      InstanceType: t2.micro
      SecurityGroupIds:
        - sg-7309dd0a
      ImageId: ami-70edb016
It's about cross-stack references: I want to deploy Ec2Instance2 in "import-test-stack" only if I chose to deploy three instances in the previous "export-test-stack". How do I do this?
So if I choose to deploy three instances, I want to use a condition in the "import-test-stack" to deploy another two instances; if I choose to deploy two, it should deploy only one instance in the "import-test-stack".
I know how conditions work, but I'm still not able to find a way to use them across referenced stacks.
I know it's a contrived example, but I just wanted to test this on as simple a template as possible.
You have two choices: continue with separate stacks or combine them into a nested stack.
With nested stacks you can use outputs from one stack as inputs to another stack.
If you want to keep using separate stacks, use the Fn::ImportValue function to import output values exported from another stack.
Both approaches are covered on the Exporting Stack Output Values page. Also, the cross-stack reference walkthrough might help you if you choose to use Fn::ImportValue.
This will get you the exported value (note that Fn::ImportValue takes the export Name defined under Export in the exporting stack, which in your export-test-stack is blablabla, not the output's logical ID):
Fn::ImportValue: blablabla
You can also use the Rules section and make a rule based on the value of your output.
We cannot use ImportValue here, as CloudFormation does not allow intrinsic functions in parameter definitions. But there is the option of using SSM (AWS Systems Manager Parameter Store) parameters, which allows us to use in stack B a value that was created in stack A; see the sketch below.
Please check the article from the AWS Knowledge Center linked here:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-systems-manager-parameter/
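A minimal sketch of that idea (the parameter name /test/env-type is illustrative): stack A writes the chosen value to Parameter Store, and stack B reads it back through an SSM parameter type and bases its condition on it.
# Stack A: write the value to SSM Parameter Store
EnvTypeParameter:
  Type: AWS::SSM::Parameter
  Properties:
    Name: /test/env-type        # hypothetical parameter name
    Type: String
    Value: !Ref EnvType

# Stack B: read it back and build the condition on it
Parameters:
  EnvType:
    Type: AWS::SSM::Parameter::Value<String>
    Default: /test/env-type     # resolved to the stored value at deploy time
Conditions:
  Deploy3EC2: !Equals [ !Ref EnvType, three ]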