I'm looking for a way to refactor repeated value imports in my CloudFormation template.
I have the following template which configures a simple app:
Parameters:
KeyName:
Description: Name of an existing EC2 KeyPair to enable SSH access into the server
Type: AWS::EC2::KeyPair::KeyName
S3StackName:
Description: Name of S3 Stack
Type: String
Resources:
EC2Instance:
Type: AWS::EC2::Instance
Metadata:
AWS::CloudFormation::Init:
config:
packages:
yum:
httpd: []
php: []
files:
/var/www/html/index.html:
source:
Fn::Sub:
- https://s3.amazonaws.com/${bucketName}/index.html
- bucketName:
Fn::ImportValue:
!Sub "${S3StackName}-s3Bucket"
/var/www/html/styles.css:
source:
Fn::Sub:
- https://s3.amazonaws.com/${bucketName}/styles.css
- bucketName:
Fn::ImportValue:
!Sub "${S3StackName}-s3Bucket"
/var/www/html/script.js:
source:
Fn::Sub:
- https://s3.amazonaws.com/${bucketName}/script.js
- bucketName:
Fn::ImportValue:
!Sub "${S3StackName}-s3Bucket"
services:
sysvinit:
httpd:
enabled: true
ensureRunning: true
AWS::CloudFormation::Authentication:
S3AccessCreds:
type: S3
roleName: !Ref EC2InstanceRole
buckets:
-
Fn::ImportValue:
!Sub "${S3StackName}-s3Bucket"
Properties:
IamInstanceProfile: !Ref EC2InstanceProfile
InstanceType: t2.micro
ImageId: ami-1853ac65
SecurityGroupIds:
- !Ref MySecurityGroup
KeyName: !Ref KeyName
UserData:
'Fn::Base64':
!Sub |
#!/bin/bash -xe
# Ensure AWS CFN Bootstrap is the latest
yum install -y aws-cfn-bootstrap
# Install the files and packages from the metadata
/opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource EC2Instance --region ${AWS::Region}
MySecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Open Ports 22 and 80
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: '22'
ToPort: '22'
CidrIp: 0.0.0.0/0
- IpProtocol: tcp
FromPort: '80'
ToPort: '80'
CidrIp: 0.0.0.0/0
Outputs:
Website:
Description: The Public DNS for the EC2 Instance
Value: !Sub 'http://${EC2Instance.PublicDnsName}'
You'll notice that there's quite a bit of repetition, particularly importing a value that has been exported from an already existing stack, e.g.:
Fn::Sub:
- https://s3.amazonaws.com/${bucketName}/index.html
- bucketName:
Fn::ImportValue:
!Sub "${S3StackName}-s3Bucket"
This pattern is used a grand total of 4 times in the template that I posted above. I want to simplify this so I'm not repeating the same block of YAML over and over again.
My first thought was adding a value to the Metadata section of the template, but that didn't work, as the Resources section cannot !Ref values from the Metadata section.
How can I reduce the amount of repeated YAML in this template?
You should be able to achieve this with a CloudFormation Macro.
This blog post gives a good overview of macros. You can define a macro that calls a simple lambda function and transforms the template, so you can do lots of interesting things with macros. Here are some examples on GitHub.
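As a rough sketch of the wiring (the macro name S3ImportShorthand and the Lambda resource MacroFunction are placeholders, and the Lambda code that actually rewrites the template fragment is omitted), you register the macro once and then reference it from the template that needs the expansion:

# In the stack that owns the Lambda:
Resources:
  S3ImportMacro:
    Type: AWS::CloudFormation::Macro
    Properties:
      Name: S3ImportShorthand
      FunctionName: !GetAtt MacroFunction.Arn  # hypothetical Lambda that expands a custom shorthand
                                               # into the full Fn::Sub / Fn::ImportValue block

# In the consuming template:
Transform: S3ImportShorthand

The Lambda behind the macro receives the whole template fragment and returns a rewritten one, so it could expand a short custom key into the repeated import block wherever it appears.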
Another option to investigate is cfndsl, a domain-specific language that makes some things, like parameters and templates, a bit easier.
You can use Parameters:
Example:
Parameters:
FunctionRepeat:
Fn::Sub:
- https://s3.amazonaws.com/${bucketName}/index.html
- bucketName:
Fn::ImportValue:
!Sub "${S3StackName}-s3Bucket"
Then you can reuse this block wherever you like.
Example:
files:
/var/www/html/index.html:
source:
Ref: FunctionRepeat
/var/www/html/styles.css:
source:
Ref: FunctionRepeat
/var/www/html/script.js:
source:
Ref: FunctionRepeat
For more information you can go to:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/gettingstarted.templatebasics.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html
I have a CloudFormation template to create an EC2 instance. The template also starts httpd along with some content that is served.
I'm using the Parameters section to allow a key to be specified or selected - see the snippet below:
Parameters:
paramKeyPair:
Description: KeyPairName
Type: AWS::EC2::KeyPair::KeyName
I'm launching the EC2 instance through the AWS CLI like this:
aws cloudformation create-stack --stack-name stack-ec2instance --template-body file://demo-ec2instance --parameters ParameterKey=paramKeyPair,ParameterValue=peterKeyPair
So the instance can be created and the keypair can be passed through as an argument - BUT - frankly I don't actually care that much whether the instance can be accessed. It's just a web server that can be spun up or down. SSH access is nice but no big deal.
In fact, if I remove the keypair Parameter from the CloudFormation template - and remove the associated reference in the AWS CLI call - CloudFormation will happily spin up the instance without a keypair. Great!
What I would really like is for CloudFormation to deal with the keypair being present or not.
I thought the best way to do this would be to update the template so that the parameter has a default value of "None" (for example); then the EC2 instance could be launched from the AWS CLI and, if the keypair parameter is not specified, AWS would know not to bother with the keypair at all.
The problem is that by specifying the Type as AWS::EC2::KeyPair::KeyName, the AWS CLI expects an actual value.
I'm out of ideas - if anyone else has figured this out - I would really appreciate it.
Thank you,
Peter.
If I understand you correctly, you want to keep the parameter in your CloudFormation template but only "allocate" a key pair to the instance if you specify a value; otherwise, no key pair should be allocated to the EC2 instance resource. You can do this with the AWS::NoValue pseudo parameter.
Here is a sample template:
Description: My EC2 instance
Parameters:
SSHKeyName:
Type: String
Conditions:
Has-EC2-Key:
!Not [ !Equals [ !Ref SSHKeyName, '' ] ]
Resources:
Instance:
Type: AWS::EC2::Instance
Properties:
ImageId: <InstanceImageID>
InstanceType: t2.micro
KeyName:
Fn::If:
- Has-EC2-Key
- Ref: SSHKeyName
- Ref: AWS::NoValue
      # <other properties as required>
What this does: the condition checks whether the SSHKeyName value is blank. If it is blank, the KeyName property is dropped entirely; if it isn't, the value of SSHKeyName is used.
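As a side note, the same condition can be reused elsewhere in the template. A minimal sketch (the UsedKeyPair output name is just an example) of an output that only appears when a key pair was supplied:

Outputs:
  UsedKeyPair:
    Condition: Has-EC2-Key
    Description: The key pair attached to the instance, only emitted when one was supplied
    Value: !Ref SSHKeyName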
Thank you WarrenG, your solution worked with one small change, which was to switch the parameter type from AWS::EC2::KeyPair::KeyName to String.
Without your help I am certain I would have burned many more hours on this.
So in conclusion, the fix was
1: Change the Parameter type to String.
Parameters:
SSHKeyName:
Type: String
2: Add a condition that determines whether the key is present.
Conditions:
Has-EC2-Key:
!Not [ !Equals [ !Ref SSHKeyName, '' ] ]
3: Use the condition within the Resources section.
KeyName:
Fn::If:
- Has-EC2-Key
- Ref: SSHKeyName
- Ref: AWS::NoValue
Within my question I kept the code snippets to a minimum for readability, but now that I have marked this as solved I'm adding two blocks of code for documentation and in case this helps anyone else.
One example of calling the template through the AWS CLI.
aws cloudformation create-stack --stack-name stack-ec2instance --template-body file://demo-ec2instance --parameters ParameterKey=paramSubnetId,ParameterValue=$SubnetId ParameterKey=paramKeyPair,ParameterValue=peterKeyPair ParameterKey=paramSecurityGroupIds,ParameterValue=$SecurityGroupId
The template to create an EC2 instance.
AWSTemplateFormatVersion: 2010-09-09
Parameters:
SSHKeyName:
Description: EC2 KeyPair for SSH access.
Type: String
Conditions:
Has-EC2-Key:
!Not [ !Equals [ !Ref SSHKeyName, '' ] ]
Mappings:
RegionMap:
eu-west-1:
AMI: ami-3bfab942
eu-west-2:
AMI: ami-098828924dc89ea4a
Resources:
EC2Instance:
Type: AWS::EC2::Instance
Metadata:
AWS::CloudFormation::Init:
config:
packages:
yum:
httpd: []
php: []
files:
/var/www/html/index.php:
content: !Sub |
<?php print "Hello Peter !"; ?>
services:
sysvinit:
httpd:
enabled: true
ensureRunning: true
Properties:
InstanceType: t2.micro
ImageId:
Fn::FindInMap:
- RegionMap
- !Ref AWS::Region
- AMI
SecurityGroupIds:
- !Ref MySecurityGroup
KeyName:
Fn::If:
- Has-EC2-Key
- Ref: SSHKeyName
- Ref: AWS::NoValue
UserData:
'Fn::Base64':
!Sub |
#!/bin/bash -xe
# Ensure AWS CFN Bootstrap is the latest
yum install -y aws-cfn-bootstrap
# Install the files and packages from the metadata
/opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource EC2Instance --region ${AWS::Region}
MySecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Open Ports 22 and 80
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: '22'
ToPort: '22'
CidrIp: 0.0.0.0/0
- IpProtocol: tcp
FromPort: '80'
ToPort: '80'
CidrIp: 0.0.0.0/0
Outputs:
Website:
Description: The Public DNS for the EC2 Instance
Value: !Sub 'http://${EC2Instance.PublicDnsName}'
UserData:
'Fn::Base64': |
#!/bin/bash
yum -y install docker
dockerd
docker pull apache/superset
In the CloudFormation UserData shown above:
Everything works up until dockerd; the docker pull command doesn't execute.
The template doesn't generate any error.
But when I SSH into the EC2 instance created by my CloudFormation template, I don't see the Docker image.
I am able to manually run docker pull <image> on the EC2 instance and it works.
Is there any specific setting required to pull an image from Docker Hub (not ECR) on EC2 from a CloudFormation template?
My entire CF template for reference:
Parameters:
InstanceType:
Type: String
Default: t2.micro
    Description: Enter instance size. Default is t2.micro.
AllowedValues: # dropdown options
- t1.nano
- t1.micro
- t2.micro
Key:
Type: AWS::EC2::KeyPair::KeyName
Default: aseem-ec2-eu-west-1
Description: The key used to access the instance.
Mappings:
AmiIdForRegion:
us-east-1:
AMI: ami-04ad2567c9e3d7893
eu-west-1:
AMI: ami-09d4a659cdd8677be
Resources:
VPC:
Type: AWS::EC2::VPC
Properties:
CidrBlock: 172.34.0.0/16
EnableDnsSupport: true
EnableDnsHostnames: true
InstanceTenancy: default
Tags:
- Key: Name
Value: Linux VPC
InternetGateway:
Type: AWS::EC2::InternetGateway
VPCGatewayAttachment:
Type: AWS::EC2::VPCGatewayAttachment
Properties:
VpcId: !Ref VPC
InternetGatewayId: !Ref InternetGateway
SubnetA:
Type: AWS::EC2::Subnet
Properties:
AvailabilityZone: eu-west-1a
VpcId: !Ref VPC
CidrBlock: 172.34.1.0/24
MapPublicIpOnLaunch: true
RouteTable:
Type: AWS::EC2::RouteTable
Properties:
VpcId: !Ref VPC
InternetRoute:
Type: AWS::EC2::Route
DependsOn:
- InternetGateway
- VPCGatewayAttachment
Properties:
DestinationCidrBlock: 0.0.0.0/0
GatewayId: !Ref InternetGateway
RouteTableId: !Ref RouteTable
SubnetARouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
RouteTableId: !Ref RouteTable
SubnetId: !Ref SubnetA
SecurityGroup:
Type: 'AWS::EC2::SecurityGroup'
Properties:
GroupDescription: Enable HTTP access via port 80
GroupName: superset-ec2-security-group-3
VpcId: !Ref VPC
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: 8080 # HTTP- port 80
ToPort: 8080
CidrIp: 0.0.0.0/0
- IpProtocol: tcp
FromPort: 22 # ssh
ToPort: 22
CidrIp: 0.0.0.0/0
- IpProtocol: tcp
FromPort: 443
ToPort: 443
CidrIp: 0.0.0.0/0
SecurityGroupEgress: # all external traffic
- IpProtocol: -1
CidrIp: 0.0.0.0/0
ElasticIP:
Type: AWS::EC2::EIP
Properties:
Domain: vpc
InstanceId: !Ref LinuxEc2
LinuxEc2:
Type: AWS::EC2::Instance
Properties:
SubnetId: !Ref SubnetA
SecurityGroupIds:
- !Ref SecurityGroup
ImageId: !FindInMap [ AmiIdForRegion,!Ref AWS::Region,AMI ]
KeyName: !Ref Key
InstanceType: !Ref InstanceType
BlockDeviceMappings:
- DeviceName: /dev/xvda
Ebs:
VolumeSize: 100
Tags:
- Key: Name # naming your instance
Value: superset-6
UserData:
'Fn::Base64': |
#!/bin/bash
yum -y install docker
dockerd
docker pull apache/superset
Outputs:
PublicDnsName:
Value: !GetAtt LinuxEc2.PublicDnsName
PublicIp:
Value: !GetAtt LinuxEc2.PublicIp
You shouldn't execute dockerd in your user data. It starts the Docker daemon in the foreground and blocks any further execution of the script. Instead it should be:
UserData:
'Fn::Base64': |
#!/bin/bash
yum -y install docker
systemctl enable docker
systemctl start docker
docker pull apache/superset
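For completeness, this is how the corrected script sits on the LinuxEc2 resource from the template above. The commented docker run line is an assumption, not part of the question: the apache/superset image listens on 8088 by default, which could be mapped to the 8080 port the security group already opens.

LinuxEc2:
  Type: AWS::EC2::Instance
  Properties:
    # ...SubnetId, SecurityGroupIds, ImageId and the rest unchanged...
    UserData:
      'Fn::Base64': |
        #!/bin/bash
        yum -y install docker
        systemctl enable docker
        systemctl start docker
        docker pull apache/superset
        # assumption: also start the container, mapping host port 8080 to Superset's 8088
        # docker run -d --name superset -p 8080:8088 apache/superset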
I want to test the deployment of an ECS stack using the AWS CLI from a GitLab pipeline.
My test project's core is a variation of the Docker Compose Flask app.
The file app.py:
import time
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello():
return 'Hello World!'
with its requirements.txt:
flask
The Dockerfile is:
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run"]
and the Docker Compose file is:
version: "3.9"
services:
web:
image: registry.gitlab.com/<MYNAME>/<MYPROJECT>
x-aws-pull_credentials: "<CREDENTIALS>"
ports:
- "5000:5000"
I generate a CloudFormationTemplate.yml file using an ECS Docker context (myecscontext, not the default one) and the command:
docker compose convert > CloudFormationTemplate.yml
When I try to deploy on AWS from my local workstation (Win10):
aws cloudformation deploy --template-file CloudFormationTemplate.yml --stack-name test-stack
I get the error
unacceptable character #x0000: special characters are not allowed
in "<unicode string>", position 3
What's wrong? Thanks.
ADDED: Here is the CloudFormationTemplate.yml:
AWSTemplateFormatVersion: 2010-09-09
Resources:
CloudMap:
Properties:
Description: Service Map for Docker Compose project cloudformation
Name: cloudformation.local
Vpc: vpc-XXXXXXXX
Type: AWS::ServiceDiscovery::PrivateDnsNamespace
Cluster:
Properties:
ClusterName: cloudformation
Tags:
- Key: com.docker.compose.project
Value: cloudformation
Type: AWS::ECS::Cluster
Default5000Ingress:
Properties:
CidrIp: 0.0.0.0/0
Description: web:5000/tcp on default network
FromPort: 5000
GroupId:
Ref: DefaultNetwork
IpProtocol: TCP
ToPort: 5000
Type: AWS::EC2::SecurityGroupIngress
DefaultNetwork:
Properties:
GroupDescription: cloudformation Security Group for default network
Tags:
- Key: com.docker.compose.project
Value: cloudformation
- Key: com.docker.compose.network
Value: default
VpcId: vpc-XXXXXXXX
Type: AWS::EC2::SecurityGroup
DefaultNetworkIngress:
Properties:
Description: Allow communication within network default
GroupId:
Ref: DefaultNetwork
IpProtocol: "-1"
SourceSecurityGroupId:
Ref: DefaultNetwork
Type: AWS::EC2::SecurityGroupIngress
LoadBalancer:
Properties:
LoadBalancerAttributes:
- Key: load_balancing.cross_zone.enabled
Value: "true"
Scheme: internet-facing
Subnets:
- subnet-XXXXXXXX
- subnet-XXXXXXXX
- subnet-XXXXXXXX
Tags:
- Key: com.docker.compose.project
Value: cloudformation
Type: network
Type: AWS::ElasticLoadBalancingV2::LoadBalancer
LogGroup:
Properties:
LogGroupName: /docker-compose/cloudformation
Type: AWS::Logs::LogGroup
WebService:
DependsOn:
- WebTCP5000Listener
Properties:
Cluster:
Fn::GetAtt:
- Cluster
- Arn
DeploymentConfiguration:
MaximumPercent: 200
MinimumHealthyPercent: 100
DeploymentController:
Type: ECS
DesiredCount: 1
LaunchType: FARGATE
LoadBalancers:
- ContainerName: web
ContainerPort: 5000
TargetGroupArn:
Ref: WebTCP5000TargetGroup
NetworkConfiguration:
AwsvpcConfiguration:
AssignPublicIp: ENABLED
SecurityGroups:
- Ref: DefaultNetwork
Subnets:
- subnet-XXXXXXXX
- subnet-XXXXXXXX
- subnet-XXXXXXXX
PlatformVersion: 1.4.0
PropagateTags: SERVICE
SchedulingStrategy: REPLICA
ServiceRegistries:
- RegistryArn:
Fn::GetAtt:
- WebServiceDiscoveryEntry
- Arn
Tags:
- Key: com.docker.compose.project
Value: cloudformation
- Key: com.docker.compose.service
Value: web
TaskDefinition:
Ref: WebTaskDefinition
Type: AWS::ECS::Service
WebServiceDiscoveryEntry:
Properties:
Description: '"web" service discovery entry in Cloud Map'
DnsConfig:
DnsRecords:
- TTL: 60
Type: A
RoutingPolicy: MULTIVALUE
HealthCheckCustomConfig:
FailureThreshold: 1
Name: web
NamespaceId:
Ref: CloudMap
Type: AWS::ServiceDiscovery::Service
WebTCP5000Listener:
Properties:
DefaultActions:
- ForwardConfig:
TargetGroups:
- TargetGroupArn:
Ref: WebTCP5000TargetGroup
Type: forward
LoadBalancerArn:
Ref: LoadBalancer
Port: 5000
Protocol: TCP
Type: AWS::ElasticLoadBalancingV2::Listener
WebTCP5000TargetGroup:
Properties:
Port: 5000
Protocol: TCP
Tags:
- Key: com.docker.compose.project
Value: cloudformation
TargetType: ip
VpcId: vpc-XXXXXXXX
Type: AWS::ElasticLoadBalancingV2::TargetGroup
WebTaskDefinition:
Properties:
ContainerDefinitions:
- Command:
- XXXXXXXX.compute.internal
- cloudformation.local
Essential: false
Image: docker/ecs-searchdomain-sidecar:1.0
LogConfiguration:
LogDriver: awslogs
Options:
awslogs-group:
Ref: LogGroup
awslogs-region:
Ref: AWS::Region
awslogs-stream-prefix: cloudformation
Name: Web_ResolvConf_InitContainer
- DependsOn:
- Condition: SUCCESS
ContainerName: Web_ResolvConf_InitContainer
Essential: true
Image: registry.gitlab.com/MYUSER/cloudformation
LinuxParameters: {}
LogConfiguration:
LogDriver: awslogs
Options:
awslogs-group:
Ref: LogGroup
awslogs-region:
Ref: AWS::Region
awslogs-stream-prefix: cloudformation
Name: web
PortMappings:
- ContainerPort: 5000
HostPort: 5000
Protocol: tcp
RepositoryCredentials:
CredentialsParameter: arn:aws:secretsmanager:XXXXXXXXXXXXXXXXXXXXXXXX
Cpu: "256"
ExecutionRoleArn:
Ref: WebTaskExecutionRole
Family: cloudformation-web
Memory: "512"
NetworkMode: awsvpc
RequiresCompatibilities:
- FARGATE
Type: AWS::ECS::TaskDefinition
WebTaskExecutionRole:
Properties:
AssumeRolePolicyDocument:
Statement:
- Action:
- sts:AssumeRole
Condition: {}
Effect: Allow
Principal:
Service: ecs-tasks.amazonaws.com
Version: 2012-10-17
ManagedPolicyArns:
- arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
- arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
Policies:
- PolicyDocument:
Statement:
- Action:
- secretsmanager:GetSecretValue
- ssm:GetParameters
- kms:Decrypt
Condition: {}
Effect: Allow
Principal: {}
Resource:
- arn:aws:secretsmanager:XXXXXXXXXXXXXXXXXXXXXXXX
PolicyName: webGrantAccessToSecrets
Tags:
- Key: com.docker.compose.project
Value: cloudformation
- Key: com.docker.compose.service
Value: web
Type: AWS::IAM::Role
docker compose convert does not create a valid CloudFormation (CFN) template in the default context. Before you attempt to generate it, you have to create an ECS context:
docker context create ecs myecscontext
Then you have to switch from the default context to myecscontext:
docker context use myecscontext
Use docker context ls to confirm that you are in the correct context (i.e., myecscontext). Then you can use your convert command
docker compose convert
to generate the actual CFN template.
I am trying to launch a JupyterLab instance using CloudFormation (it's something I do a lot and SageMaker does not have a one-year free tier). The beginning looks like this and does not work; specifically, the Password parameter:
# AWSTemplateFormatVersion: "2010-09-09"
Description: Creates a Jupyter Lab Instance with an Elastic Load Balancer
Parameters:
KeyName:
Description: >-
Name of an existing EC2 KeyPair to enable SSH access to the instance
Type: AWS::EC2::KeyPair::KeyName
ConstraintDescription: Must be the name of an existing EC2 KeyPair.
Default: eduinstance
VPC:
Description: VPC ID of the VPC in which to deploy this stack.
Type: AWS::EC2::VPC::Id
ConstraintDescription: Must be the name of a valid VPC
Default: vpc-10a7ac6a
Subnets:
Type: List<AWS::EC2::Subnet::Id>
Default: subnet-8cde25d3,subnet-531fda72,subnet-4bbe3006
Description: >-
Subnets for the Elastic Load Balancer.
Please include at least two subnets
Password:
Type: String
NoEcho: false
MinLength: 4
Default: '{{resolve:ssm:JLabPassword:1}}'
Description: Password to set for Jupyter Lab
EBSVolumeSize:
Type: Number
Description: EBS Volume Size (in GiB) for the instance
Default: 8
MinValue: 8
MaxValue: 64000
ConstraintDescription: Please enter a value between 8 GB and 64 TB
EC2InstanceType:
Type: String
Default: t2.micro
AllowedValues:
- t2.micro
- c5.large
- m5.large
Description: Enter t2.micro, c5.large or m5.large. Default is t2.micro.
Conditions:
JupyterPasswordDefault: !Equals
- !Ref Password
- DEFAULT
Resources:
ALB:
Type: AWS::ElasticLoadBalancingV2::LoadBalancer
Properties:
IpAddressType: ipv4
Scheme: internet-facing
SecurityGroups:
- !GetAtt [ALBSG, GroupId]
Subnets: !Ref Subnets
Type: application
ALBListener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
DefaultActions:
- Type: forward
TargetGroupArn: !Ref ALBTargetGroup
LoadBalancerArn: !Ref ALB
Port: 80
Protocol: HTTP
ALBTargetGroup:
Type: AWS::ElasticLoadBalancingV2::TargetGroup
Properties:
Port: 8888
Protocol: HTTP
Targets:
- Id: !Ref ComputeInstance
TargetType: instance
VpcId: !Ref VPC
ComputeInstanceProfile:
Type: AWS::IAM::InstanceProfile
Properties:
Roles:
- !Ref ComputeIAMRole
ComputeInstance:
Type: AWS::EC2::Instance
Properties:
InstanceType: t2.micro
SubnetId: !Select [0, !Ref Subnets]
KeyName: !Ref KeyName
ImageId: '{{resolve:ssm:/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2:33}}'
SecurityGroupIds:
- !GetAtt [ComputeSG, GroupId]
IamInstanceProfile: !Ref ComputeInstanceProfile
BlockDeviceMappings:
- DeviceName: /dev/xvda
Ebs:
VolumeType: gp2
VolumeSize: !Ref EBSVolumeSize
DeleteOnTermination: true
UserData:
Fn::Base64: !Sub
- |
#!/bin/bash
yum update -y
yum install python3-pip -y
yum install java-1.8.0-openjdk -y
cd /home/ec2-user/
wget https://repo.anaconda.com/archive/Anaconda3-2020.11-Linux-x86_64.sh
sudo -u ec2-user bash Anaconda3-2020.11-Linux-x86_64.sh -b -p /home/ec2-user/anaconda
echo "PATH=/home/ec2-user/anaconda/bin:$PATH" >> /etc/environment
source /etc/environment
jupyter notebook --generate-config
mkdir .jupyter
cp /root/.jupyter/jupyter_notebook_config.py /home/ec2-user/.jupyter/
echo "c = get_config()" >> .jupyter/jupyter_notebook_config.py
echo "c.NotebookApp.ip = '*'" >> .jupyter/jupyter_notebook_config.py
NB_PASSWORD=$(python3 -c "from notebook.auth import passwd; print(passwd('${password}'))")
echo "c.NotebookApp.password = u'$NB_PASSWORD'" >> .jupyter/jupyter_notebook_config.py
rm Anaconda3-2020.11-Linux-x86_64.sh
mkdir Notebooks
chmod 777 -R Notebooks .jupyter
su -c "jupyter lab" -s /bin/sh ec2-user
- password: !Ref Password #!If [JupyterPasswordDefault, '{{resolve:ssm:JupyterLabPassword:1}}', !Ref Password]
ALBSG:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Security Group for JupyterLab ALB. Created Automatically.
SecurityGroupIngress:
- CidrIp: 0.0.0.0/0
Description: Allows HTTP Traffic from anywhere
FromPort: 80
ToPort: 80
IpProtocol: tcp
ComputeSG:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Security Group for JupyterLab EC2 Instance. Created Automatically.
SecurityGroupIngress:
- Description: Allows JupyterLab Server Traffic from ALB.
FromPort: 8888
IpProtocol: tcp
SourceSecurityGroupId: !GetAtt [ALBSG, GroupId]
ToPort: 8890
- CidrIp: 0.0.0.0/0
Description: Allows SSH Access from Anywhere
FromPort: 22
ToPort: 22
IpProtocol: tcp
ComputeIAMRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal:
Service:
- ec2.amazonaws.com
Action:
- 'sts:AssumeRole'
Description: Allows EC2 Access to S3. Created Automatically.
ManagedPolicyArns:
- arn:aws:iam::aws:policy/AmazonS3FullAccess
- arn:aws:iam::aws:policy/AmazonSageMakerFullAccess
Outputs:
URL:
Description: URL of the ALB
Value: !Join
- ''
- - 'http://'
- !GetAtt
- ALB
- DNSName
ConnectionString:
Description: Connection String For SSH On EC2
Value: !Join
- ''
- - 'ssh -i "'
- !Ref KeyName
        - '.pem" ec2-user@'
- !GetAtt
- ComputeInstance
- PublicDnsName
However, it interprets the string literally, so I don't actually get my password but the literal {{resolve:...}} text itself.
Based on the comments and the new, updated template from the OP, and to expand on @DennisTraub's answer:
SSM dynamic references resolve in almost all places in the template, with the exception of UserData (by the way, they will not work in AWS::CloudFormation::Init either). This means that the dynamic reference will not resolve when used in the context of UserData. This is for security reasons.
UserData can be read in plain text by anyone who can view the basic attributes of the instance. This means that your JLabPassword would be available in plain text in UserData for everyone to see, if such resolution were possible.
To rectify the issue, the SSM parameters should be used in UserData as follows:
Attach the IAM permission ssm:GetParameter to the instance role/profile, which allows the instance to access the SSM Parameter Store.
Instead of {{resolve:ssm:JLabPassword:1}} in your Parameter, you can just pass JLabPassword, so that the name of the SSM parameter gets passed into the UserData rather than its actual value.
In the UserData, use the AWS CLI's get-parameter to read the actual value of JLabPassword (sketched below).
The above ensures that the value of JLabPassword is kept private and not visible in plain text in UserData.
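A rough sketch of both pieces, assuming the SSM parameter is literally named JLabPassword, that the AMI ships with the AWS CLI (the Amazon Linux 2 AMI used in the question does), and that the parameter can be decrypted with the default key; the policy name is made up:

# Added to ComputeIAMRole
Policies:
  - PolicyName: ReadJLabPassword
    PolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Action: ssm:GetParameter
          Resource: !Sub 'arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/JLabPassword'

# Inside the existing UserData script (still wrapped in Fn::Base64 and !Sub)
JLAB_PASSWORD=$(aws ssm get-parameter --name JLabPassword --with-decryption \
  --query Parameter.Value --output text --region ${AWS::Region})
NB_PASSWORD=$(python3 -c "from notebook.auth import passwd; print(passwd('$JLAB_PASSWORD'))")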
Your password parameter's default value is missing the service name (ssm) as well as the single quotes.
# What you have:
Password:
Default: {{resolve:JupyterPassword:1}}
...
# What it should be:
Password:
Default: '{{resolve:ssm:JupyterPassword:1}}'
...
Update: You've fixed the code in your question. Did my answer and the comments below solve your question? If not, I'm not sure what else you need.
I use CloudFormation to launch an EC2 instance. Below is the CloudFormation template:
Parameters:
KeyName:
Description: The EC2 Key Pair to allow SSH access to the instance
Type: 'AWS::EC2::KeyPair::KeyName'
Resources:
Ec2Instance:
Type: 'AWS::EC2::SpotFleet'
Properties:
SecurityGroups:
- !Ref InstanceSecurityGroup
- MyExistingSecurityGroup
KeyName: !Ref KeyName
ImageId: ami-07d0cf3af28718ef8
InstanceType: p2.8xlarge
AllocationStrategy: lowestPrice
SpotPrice: 1
InstanceSecurityGroup:
Type: 'AWS::EC2::SecurityGroup'
Properties:
GroupDescription: Enable SSH access via port 22
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: '22'
ToPort: '22'
CidrIp: 0.0.0.0/0
I created a stack in CloudFormation and specified the key name from a drop-down list of key pairs. After that the stack rolled back and I saw this error message: Encountered unsupported property KeyName. I wonder what's wrong with my configuration?
Check the documentation for AWS::EC2::SpotFleet. It only supports SpotFleetRequestConfigData as a property.
You will probably need to specify something like:
Ec2Instance:
Type: 'AWS::EC2::SpotFleet'
Properties:
SpotFleetRequestConfigData:
SpotPrice: 1
AllocationStrategy: lowestPrice
LaunchSpecifications:
- InstanceType: p2.8xlarge
SecurityGroups:
- !Ref InstanceSecurityGroup
- MyExistingSecurityGroup
KeyName: !Ref KeyName
ImageId: ami-07d0cf3af28718ef8
Check the AWS::EC2::SpotFleet documentation; it has quite an elaborate example.
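For reference, per that documentation SpotFleetRequestConfigData also requires IamFleetRole and TargetCapacity, which the snippet above leaves out, and the launch specification's security groups are objects with a GroupId. A fuller sketch under those assumptions (the fleet role ARN is a placeholder for a role with the spot fleet service permissions):

Ec2Instance:
  Type: 'AWS::EC2::SpotFleet'
  Properties:
    SpotFleetRequestConfigData:
      IamFleetRole: arn:aws:iam::123456789012:role/my-spot-fleet-role  # placeholder
      TargetCapacity: 1
      AllocationStrategy: lowestPrice
      SpotPrice: '1'
      LaunchSpecifications:
        - ImageId: ami-07d0cf3af28718ef8
          InstanceType: p2.8xlarge
          KeyName: !Ref KeyName
          SecurityGroups:
            - GroupId: !GetAtt InstanceSecurityGroup.GroupId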