Is it possible to create accesspoints for an existing EFS through CloudFormation?
For example, in the CloudFormation template:
First reference the EFS by a parameter
FileSystemResource:
  Type: 'AWS::EFS::FileSystem'
  Properties:
    FileSystemId: !Ref some_param
    BackupPolicy:
      Status: ENABLED
    PerformanceMode: generalPurpose
    Encrypted: true
    ThroughputMode: bursting
Then reference the EFS to create the access point?
SomeAccessPoint:
  Type: 'AWS::EFS::AccessPoint'
  Properties:
    FileSystemId: !Ref FileSystemResource
You have to import your existing file system into CloudFormation first, before you can do this. Note that AWS::EFS::FileSystem has no FileSystemId property (the ID is a return value), so you cannot point a new FileSystem resource at an existing file system; use CloudFormation's resource import feature to bring it under the stack's management instead.
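That said, if the goal is only the access point (and the stack does not need to manage the file system itself), AWS::EFS::AccessPoint accepts the file system ID as a plain string, so passing it through a parameter works. A minimal sketch; the parameter name ExistingFileSystemId is an assumption:

```yaml
Parameters:
  ExistingFileSystemId:
    Type: String
    Description: ID of the existing EFS file system (e.g. fs-12345678)
Resources:
  SomeAccessPoint:
    Type: 'AWS::EFS::AccessPoint'
    Properties:
      FileSystemId: !Ref ExistingFileSystemId
```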
I'm trying to define some common resources (specifically, a couple of IAM roles) that will be shared between two environments, via a nested stack. The first environment to use the nested stack's resources creates ok, but the second one fails when trying to run the nested stack. Am I not understanding something about how nested stacks work, or am I just doing something wrong?
My nested stack is defined as:
AWSTemplateFormatVersion: '2010-09-09'
Description: Defines common resources shared between environments.
Parameters:
  ParentStage:
    Type: String
    Description: The Stage or environment name.
    Default: ""
  ParentVpcId:
    Type: "AWS::EC2::VPC::Id"
    Description: VpcId of your existing Virtual Private Cloud (VPC)
    Default: ""
Resources:
  LambdaFunctionSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Identifies Lambda functions to VPC resources
      GroupName: BBA-KTDO-SG-LambdaFunction
      VpcId: !Ref ParentVpcId
  RdsAccessSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allows Lambda functions access to RDS instances
      GroupName: BBA-KTDO-SG-RDS
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 3306
          ToPort: 3306
          SourceSecurityGroupId: !Ref LambdaFunctionSG
      VpcId: !Ref ParentVpcId
I've uploaded that YAML file to an S3 bucket, and I'm then trying to use it in two separate stack files (i.e. app_dev.yaml and app_prod.yaml) as:
Resources:
  CommonResources:
    Type: 'AWS::CloudFormation::Stack'
    Properties:
      TemplateURL: "https://my-buildbucket.s3-eu-west-1.amazonaws.com/common/common.yaml"
      Parameters:
        ParentStage: !Ref Stage
        ParentVpcId: !Ref VpcId
And referring to its outputs as (e.g.):
VpcConfig:
  SecurityGroupIds:
    - !GetAtt [ CommonResources, Outputs.LambdaFunctionSGId ]
The first environment creates fine, including the nested resources. When I try to run the second environment, it fails with error:
Embedded stack
arn:aws:cloudformation:eu-west-1:238165151424:stack/my-prod-stack-CommonResources-L94ZCIP0UD9W/f9d06dd0-994d-11eb-9802-02554f144c21
was not successfully created: The following resource(s) failed to
create: [LambdaExecuteRole, LambdaFunctionSG].
Is it not possible to share a single resource definition between two separate stacks like this, or have I just missed something in the implementation?
As @jasonwadsworth mentioned, that's correct: the names of nested stacks are always amended with a random string at the end (see the return values of AWS::CloudFormation::Stack). Use GetAtt to get the name of the stack and construct the output. See also: How do I pass values between nested stacks within the same parent stack in AWS CloudFormation?
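Also note that for the parent stack's !GetAtt [ CommonResources, Outputs.LambdaFunctionSGId ] reference to resolve, the nested template needs an Outputs section, which the template in the question does not yet have. A minimal sketch (for VPC security groups, !Ref returns the group ID):

```yaml
Outputs:
  LambdaFunctionSGId:
    Description: ID of the Lambda function security group
    Value: !Ref LambdaFunctionSG
  RdsAccessSGId:
    Description: ID of the RDS access security group
    Value: !Ref RdsAccessSG
```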
Plus, use the aws cloudformation package command for packaging the nested stacks; there is no need to upload them to an S3 bucket manually.
Something like:
aws cloudformation package \
--template-file /path_to_template/template.json \
--s3-bucket bucket-name \
--output-template-file packaged-template.json
Take a look at CloudFormation output exports as well, in case you are curious: Difference between an Output & an Export.
I have two CloudFormation files and I want to reference already created resources from one template in another template. For example: in the first one I create an ECS cluster. In the second one I want to reference this cluster and build a service in it. How can I do it?
To do this you have to export stack output values from the first template. Presumably this would be the ECS cluster name and/or its ARN:
Resources:
  MyCluster:
    Type: AWS::ECS::Cluster
    Properties:
      #....
Outputs:
  MyClusterName:
    Value: !Ref MyCluster
    Export:
      Name: ECSClusterName
Then in the second template you would use ImportValue to reference the exported output:
Resources:
  MyECSService:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !ImportValue ECSClusterName
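One caveat: export names must be unique within a region, so if the exporting template is deployed once per environment, a stack-scoped export name avoids collisions. A sketch, assuming a hypothetical ClusterStackName parameter in the importing template:

```yaml
# Exporting template: scope the export name to the stack.
Outputs:
  MyClusterName:
    Value: !Ref MyCluster
    Export:
      Name: !Sub '${AWS::StackName}-ECSClusterName'
```

```yaml
# Importing template: reconstruct the export name from the
# (hypothetical) ClusterStackName parameter.
      Cluster:
        Fn::ImportValue: !Sub '${ClusterStackName}-ECSClusterName'
```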
I need to do the following actions in sequence and wondering if I should use CloudFormation to achieve this:
1. Launch a new EC2 instance (currently I'm manually doing it by selecting "Launch more like these" on a specific instance).
2. Stop the new instance.
3. Detach the volume from the new instance.
4. Create a new volume from a previously created snapshot.
5. Attach that newly created volume to the new EC2 instance created in step 1.
6. Restart the EC2 instance.
If this can't be done via CloudFormation would it be possible to automate it somehow?
It sounds like you are wanting to launch an Amazon EC2 instance with the boot disk coming from an Amazon EBS Snapshot.
Might I suggest a simpler process?
Rather than creating a Snapshot of the Amazon EBS volume, instead create an Amazon Machine Image (AMI) of the original instance. Then, when launching the new Amazon EC2 instance, simply select the AMI. This will result in a new instance starting up with the desired boot disk.
Alternatively, you can create an AMI from an existing Amazon EBS Snapshot by selecting the Snapshot and choosing the Create Image command. (But I think this only works for Linux, not Windows.) Then, launch new EC2 instances from the AMI.
Behind-the-scenes, an AMI is actually just an Amazon EBS Snapshot with some additional information.
Take John's advice and use an AMI. This sample will get you started: it launches a single EC2 instance using an AMI (the latest patched one) in an Auto Scaling group of Min 1 / Max 1, so one EC2 instance will always be running regardless of a power failure, an AZ going down, etc.
Replace XYZ with your products name:
Parameters:
  KeyPairName:
    Description: >-
      Mandatory. Enter a public/private key pair. If you do not have one in this region,
      please create it before continuing.
    Type: 'AWS::EC2::KeyPair::KeyName'
  EnvType:
    Description: Environment Name
    Default: dev
    Type: String
    AllowedValues: [dev, test, prod]
  Subnet1ID:
    Description: 'ID of subnet 1 for the auto scaling group'
    Type: 'AWS::EC2::Subnet::Id'
  Subnet2ID:
    Description: 'ID of subnet 2 for the auto scaling group'
    Type: 'AWS::EC2::Subnet::Id'
  Subnet3ID:
    Description: 'ID of subnet 3 for the auto scaling group'
    Type: 'AWS::EC2::Subnet::Id'
Resources:
  XYZMainLogGroup:
    Type: 'AWS::Logs::LogGroup'
  SSHMetricFilter:
    Type: 'AWS::Logs::MetricFilter'
    Properties:
      LogGroupName: !Ref XYZMainLogGroup
      FilterPattern: ON FROM USER PWD
      MetricTransformations:
        - MetricName: SSHCommandCount
          MetricValue: 1
          MetricNamespace: !Join
            - /
            - - AWSQuickStart
              - !Ref 'AWS::StackName'
  XYZAutoScalingGroup:
    Type: 'AWS::AutoScaling::AutoScalingGroup'
    Properties:
      LaunchConfigurationName: !Ref XYZLaunchConfiguration
      AutoScalingGroupName: !Join
        - '.'
        - - !Ref 'AWS::StackName'
          - 'ASG'
      VPCZoneIdentifier:
        - !Ref Subnet1ID
        - !Ref Subnet2ID
        - !Ref Subnet3ID
      MinSize: 1
      MaxSize: 1
      Cooldown: '300'
      DesiredCapacity: 1
      Tags:
        - Key: Name
          Value: 'The Name'
          PropagateAtLaunch: 'true'
  XYZLaunchConfiguration:
    Type: 'AWS::AutoScaling::LaunchConfiguration'
    Properties:
      AssociatePublicIpAddress: 'false'
      PlacementTenancy: default
      KeyName: !Ref KeyPairName
      ImageId: ami-123432164a1b23da1
      IamInstanceProfile: "BaseInstanceProfile"
      InstanceType: t2.small
      SecurityGroups:
        - Fn::If: [CreateDevResources, !Ref DevSecurityGroup, !Ref "AWS::NoValue"]
Yes, you can automate all these tasks using SSM Automation.
Specifically, your SSM Automation can consist of the following documents/actions:
AWS-AttachEBSVolume
AWS-DetachEBSVolume
AWS-StopEC2Instance
AWS-StartEC2Instance
AWS-RestartEC2Instance
Your SSM Automation can be triggered by CloudWatch Events. Also the SSM Automation can be constructed using CloudFormation.
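A sketch of an Automation document chaining some of those managed documents together; the step structure follows the SSM Automation 0.3 schema, but the parameter names passed to each managed document are assumptions, so check each document's actual required inputs before using this:

```yaml
# Hypothetical SSM Automation document: stop an instance, detach a volume,
# then start the instance again, reusing the AWS-managed documents.
schemaVersion: '0.3'
parameters:
  InstanceId:
    type: String
  VolumeId:
    type: String
mainSteps:
  - name: StopInstance
    action: aws:executeAutomation
    inputs:
      DocumentName: AWS-StopEC2Instance
      RuntimeParameters:
        InstanceId: '{{ InstanceId }}'
  - name: DetachVolume
    action: aws:executeAutomation
    inputs:
      DocumentName: AWS-DetachEBSVolume
      RuntimeParameters:
        VolumeId: '{{ VolumeId }}'
  - name: StartInstance
    action: aws:executeAutomation
    inputs:
      DocumentName: AWS-StartEC2Instance
      RuntimeParameters:
        InstanceId: '{{ InstanceId }}'
```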
I am trying to create an EBS Volume and attach it to my EC2 instance. The instance has its own Auto Scaling Group and Launch Configuration. I want it such that if this instance becomes unhealthy and terminates, the EBS volume should automatically get attached to the new instance that is spun up by the Auto Scaling Group. The mount commands are in the Launch Configuration so that's not a problem.
Here is my code:
Influxdbdata1Asg:
  Type: 'AWS::AutoScaling::AutoScalingGroup'
  Properties:
    TargetGroupARNs:
      - !Ref xxxx
    VPCZoneIdentifier:
      - !GetAtt 'NetworkInfo.PrivateSubnet1Id'
    LaunchConfigurationName: !Ref yyyy
    MinSize: 1
    MaxSize: 1
    DesiredCapacity: 1
Data1:
  Type: AWS::EC2::Volume
  DeletionPolicy: Retain
  Properties:
    Size: !Ref 'DataEbsVolumeSize'
    AvailabilityZone: !GetAtt 'NetworkInfo.PrivateSubnet1Id'
    Tags:
      - Key: Name
        Value: !Join
          - '-'
          - - !Ref 'AWS::StackName'
            - data1
Attachdata1:
  Type: AWS::EC2::VolumeAttachment
  Properties:
    InstanceId: !Ref ????
    VolumeId: !Ref Data1
    Device: /dev/xvdb
Unfortunately you can't do this using:
Attachdata1:
  Type: AWS::EC2::VolumeAttachment
  Properties:
    InstanceId: !Ref ????
    VolumeId: !Ref Data1
    Device: /dev/xvdb
The reason is that the instances are launched by the ASG, so you will not have their instance IDs. Attaching must be done outside of CloudFormation, as you can't know upfront what the instance ID will be. As the other answer mentions, use Lifecycle Hooks.
Or, even better, use storage independent of the ASG, such as EFS, which automatically persists across instance launches and terminations and can be mounted by multiple instances.
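A minimal sketch of the EFS alternative; the security group reference is hypothetical, and it must allow NFS (port 2049) from the instances:

```yaml
SharedData:
  Type: AWS::EFS::FileSystem
  Properties:
    Encrypted: true
MountTarget1:
  Type: AWS::EFS::MountTarget
  Properties:
    FileSystemId: !Ref SharedData
    SubnetId: !GetAtt 'NetworkInfo.PrivateSubnet1Id'
    SecurityGroups:
      - !Ref EfsAccessSG  # hypothetical security group allowing NFS
```

The launch configuration's user data would then mount the file system instead of waiting for an EBS volume attachment.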
For this problem you would specifically want to make use of Lifecycle Hooks, which trigger whenever an instance is launched or terminates.
To do this, your lifecycle hook would publish to an SNS topic, which would then invoke a Lambda function. This Lambda function would perform the change before acknowledging that the lifecycle action is complete.
There is a blog post written about this here.
Your question mentions CloudFormation; however, this would still involve lifecycle hooks to trigger the action. You would need a CloudFormation stack with an AWS::EC2::VolumeAttachment resource, and the Lambda would need to update the InstanceId property in the stack to perform this change.
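A sketch of the lifecycle hook wiring; the SNS topic and IAM role are hypothetical resources you would define alongside it:

```yaml
LaunchHook:
  Type: AWS::AutoScaling::LifecycleHook
  Properties:
    AutoScalingGroupName: !Ref Influxdbdata1Asg
    LifecycleTransition: 'autoscaling:EC2_INSTANCE_LAUNCHING'
    NotificationTargetARN: !Ref AttachVolumeTopic  # hypothetical SNS topic
    RoleARN: !GetAtt NotificationRole.Arn          # hypothetical publish role
    HeartbeatTimeout: 300
    DefaultResult: ABANDON
```

The Lambda function subscribed to the topic would attach the volume and then call CompleteLifecycleAction with a result of CONTINUE so the instance proceeds into service.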
I want to launch more than one EC2 instance using an AWS CloudFormation template, without using Auto Scaling.
Please let me know how I can do this.
There are several ways to launch multiple instances using CloudFormation without having an Auto Scaling group in place.
The first option is to create the required number of resources in the same CloudFormation template. E.g. if you want to launch 3 instances, then you write the code to launch 3 EC2 instances in your CloudFormation template.
The following template has 2 resources, which will launch 2 EC2 instances. You can add more resources as required.
server1:
  Type: AWS::EC2::Instance
  Properties:
    InstanceType: !Ref Server1InstanceType
    KeyName: !Ref ServerKeypair
    ImageId: !Ref ServerImageId
    SecurityGroupIds:
      - !Ref ServerSG
    SubnetId: !Ref PrivateWeb1b
    Tags:
      - Key: Name
        Value: server1
server2:
  Type: AWS::EC2::Instance
  Properties:
    InstanceType: !Ref Server2InstanceType
    KeyName: !Ref ServerKeypair
    ImageId: !Ref ServerImageId
    SecurityGroupIds:
      - !Ref ServerSG
    SubnetId: !Ref PrivateWeb1b
    Tags:
      - Key: Name
        Value: server2
The second option is to create multiple CloudFormation stacks from the same CloudFormation template. E.g. you create 2 CloudFormation stacks from the same template, which has a resource to launch 1 EC2 instance each.
The following template has 1 resource, which will launch 1 EC2 instance. With this method, you can create multiple CloudFormation stacks from the same template to get multiple EC2 instances.
server1:
  Type: AWS::EC2::Instance
  Properties:
    InstanceType: !Ref Server1InstanceType
    KeyName: !Ref ServerKeypair
    ImageId: !Ref WebserverImageId
    SecurityGroupIds:
      - !Ref WebserverSG
    SubnetId: !Ref PrivateWeb1b
    Tags:
      - Key: Name
        Value: server1
Try using Type: AWS::EC2::EC2Fleet.
It lets you specify the configuration information to launch a fleet, or group, of instances. An EC2 Fleet can launch multiple instance types across multiple Availability Zones, using the On-Demand Instance, Reserved Instance, and Spot Instance purchasing models together. Using EC2 Fleet, you can define separate On-Demand and Spot capacity targets, specify the instance types that work best for your applications, and specify how Amazon EC2 should distribute your fleet capacity within each purchasing model.
YAML
Type: AWS::EC2::EC2Fleet
Properties:
  ExcessCapacityTerminationPolicy: String
  LaunchTemplateConfigs:
    - FleetLaunchTemplateConfigRequest
  OnDemandOptions:
    OnDemandOptionsRequest
  ReplaceUnhealthyInstances: Boolean
  SpotOptions:
    SpotOptionsRequest
  TagSpecifications:
    - TagSpecification
  TargetCapacitySpecification:
    TargetCapacitySpecificationRequest
  TerminateInstancesWithExpiration: Boolean
  Type: String
  ValidFrom: String
  ValidUntil: String
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-ec2fleet.html