I am trying to create a CloudFormation stack which has a UserData script to install Java, Tomcat, httpd, and a Java application on launch of an EC2 instance.
The stack gets created successfully with all the resources, but when I connect to the EC2 instance to check the configuration of the above applications, I don't find any of them. My use case is to spin up an instance with all of the above applications/software installed through automation.
UserData:
  Fn::Base64:
    Fn::Join:
      - ' '
      - - '#!/bin/bash -xe\n'
        - 'sudo yum update && install pip && pip install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz\n'
        - 'date > /home/ec2-user/starttime\n'
        - 'sudo yum update -y aws-cfn-bootstrap\n'
        # Initialize CloudFormation bits\n
        - ' '
        - '/opt/aws/bin/cfn-init -v\n'
        - ' --stack\n'
        - '!Ref AWS::StackName\n'
        - ' --resource LaunchConfig\n'
        - 'ACCESS_KEY=${HostKeys}&SECRET_KEY=${HostKeys.SecretAccessKey}\n'
        # Start servers\n
        - 'service tomcat8 start\n'
        - '/etc/init.d/httpd start\n'
        - 'date > /home/ec2-user/stoptime\n'
Metadata:
  AWS::CloudFormation::Init:
    config:
      packages:
        yum:
          - java-1.8.0-openjdk.x86_64: []
          - tomcat8: []
          - httpd: []
      services:
        sysvinit:
          httpd:
            enabled: 'true'
            ensureRunning: 'true'
      files:
        - /usr/share/tomcat8/webapps/sample.war:
          - source: https://s3-eu-west-1.amazonaws.com/testbucket/sample.war
          - mode: 000500
          - owner: tomcat
          - group: tomcat
CfnUser:
  Type: AWS::IAM::User
  Properties:
    Path: '/'
    Policies:
      - PolicyName: Admin
        PolicyDocument:
          Statement:
            - Effect: Allow
              Action: '*'
              Resource: '*'
HostKeys:
  Type: AWS::IAM::AccessKey
  Properties:
    UserName: !Ref CfnUser
The problem is in the way you have formatted your UserData. I would suggest that you launch the EC2 instance and manually test the script first. It has a number of problems in it.
Try formatting your UserData like this:
UserData:
  Fn::Base64:
    !Sub |
      #!/bin/bash -xe
      # FIXME. This won't work either.
      # sudo yum update && install pip && pip install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz
      date > /home/ec2-user/starttime
      sudo yum update -y aws-cfn-bootstrap
      # Initialize CloudFormation bits
      /opt/aws/bin/cfn-init -v \
        --stack ${AWS::StackName} \
        --resource LaunchConfig
      # FIXME. Not sure why these are here.
      # ACCESS_KEY=${HostKeys}
      # SECRET_KEY=${HostKeys.SecretAccessKey}
      # Start servers
      service tomcat8 start
      /etc/init.d/httpd start
      date > /home/ec2-user/stoptime
Things to note:
You can't interpolate here using !Ref notation. Notice I changed it to ${AWS::StackName} and notice the whole block is inside !Sub.
As my comments indicate, the yum update line has invalid commands in it.
As noted in the comments, it is bad practice to inject access keys, and the keys don't seem to be required for anything in this script anyway. The usual alternative is an IAM instance profile; see the sketch after the corrected files section below.
Note also that the files section is specified incorrectly in the Metadata, as arrays instead of hash keys.
It should be:
files:
  /usr/share/tomcat8/webapps/sample.war:
    source: https://s3-eu-west-1.amazonaws.com/testbucket/sample.war
    mode: '000500'
    owner: tomcat
    group: tomcat
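If the instance does need AWS credentials (for example, to fetch the .war from S3), a minimal sketch of the instance-profile approach that could replace the CfnUser/HostKeys pair; the policy name and bucket ARN are placeholders:

InstanceRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Principal:
            Service: ec2.amazonaws.com
          Action: sts:AssumeRole
    Policies:
      - PolicyName: fetch-war              # placeholder name
        PolicyDocument:
          Statement:
            - Effect: Allow
              Action: s3:GetObject
              Resource: arn:aws:s3:::testbucket/*   # placeholder bucket
InstanceProfile:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Roles:
      - !Ref InstanceRole

The launch configuration would then reference it with IamInstanceProfile: !Ref InstanceProfile, and cfn-init and the AWS CLI pick up credentials automatically, with no keys baked into UserData.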
I am getting the following error in my Batch job.
I am running into an issue where I get a java.lang.RuntimeException: java.io.IOException: No space left on device in my Batch jobs. I thought that the EBS volume used as a mount dir had EBS auto-scaling.
My Batch job runs bbnorm.sh from BBTools on two paired fq.gz files; each file is approximately 22 GB in size.
The base project that this template came from can be found here: Genomics Secondary Analysis Using AWS Step Functions and AWS Batch.
Here is my template:
Resources:
  LaunchTemplate:
    Type: "AWS::EC2::LaunchTemplate"
    Properties:
      LaunchTemplateData:
        BlockDeviceMappings:
          - Ebs:
              # root volume
              Encrypted: True
              DeleteOnTermination: True
              VolumeSize: 50
              VolumeType: gp2
            DeviceName: /dev/xvda
          - Ebs:
              # ecs optimized ami docker storage volume, kept for compatibility
              Encrypted: True
              DeleteOnTermination: True
              VolumeSize: 22
              VolumeType: gp2
            DeviceName: /dev/xvdcz
          - Ebs:
              # docker storage volume (amazon-ebs-autoscale managed)
              Encrypted: True
              DeleteOnTermination: True
              VolumeSize: 100
              VolumeType: gp2
            DeviceName: /dev/sdc
        TagSpecifications:
          - ResourceType: volume
            Tags:
              - Key: Project
                Value: !Ref Project
              - Key: SolutionId
                Value: !FindInMap ['solution', 'metadata', 'id']
        UserData:
          Fn::Base64: |
            MIME-Version: 1.0
            Content-Type: multipart/mixed; boundary="==BOUNDARY=="

            --==BOUNDARY==
            Content-Type: text/cloud-config; charset="us-ascii"

            packages:
              - jq
              - btrfs-progs
              - wget
              - git
              - bzip2

            runcmd:
              - pip3 install -U awscli boto3
              - systemctl stop ecs
              - systemctl stop docker
              # install amazon-ebs-autoscale
              - cp -au /var/lib/docker /var/lib/docker.bk
              - rm -rf /var/lib/docker/*
              - EBS_AUTOSCALE_VERSION=$(curl --silent "https://api.github.com/repos/awslabs/amazon-ebs-autoscale/releases/latest" | jq -r .tag_name)
              - cd /opt && git clone https://github.com/awslabs/amazon-ebs-autoscale.git
              - cd /opt/amazon-ebs-autoscale && git checkout $EBS_AUTOSCALE_VERSION
              - sh /opt/amazon-ebs-autoscale/install.sh /var/lib/docker /dev/sdc 2>&1 > /var/log/ebs-autoscale-install.log
              - sed -i 's+OPTIONS=.*+OPTIONS="--storage-driver btrfs"+g' /etc/sysconfig/docker-storage
              - cp -au /var/lib/docker.bk/* /var/lib/docker
              # install miniconda/awscli
              - wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
              - bash Miniconda3-latest-Linux-x86_64.sh -b -f -p /opt/miniconda
              - /opt/miniconda/bin/conda install -c conda-forge -y awscli
              - chown -R ec2-user:ec2-user /opt/miniconda
              - rm Miniconda3-latest-Linux-x86_64.sh
              - trap "systemctl start docker;systemctl enable --now --no-block ecs" INT ERR EXIT

            --==BOUNDARY==--
After researching, I have found that it was much easier to update the launch spec with an increased volume size.
AWS Batch doesn't support updating a compute environment with a new launch template version. If you update your launch template, you must create a new compute environment with the new template for the changes to take effect.
https://docs.aws.amazon.com/batch/latest/userguide/launch-templates.html
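In practice that means creating a new launch template version and then pointing a brand-new compute environment at it; a minimal CLI sketch, assuming a larger /dev/sdc volume is what's needed (the template ID, environment name, and compute-resources.json are placeholders):

# create a new template version with a larger docker storage volume
# (lt-0123456789abcdef0 is a placeholder ID)
aws ec2 create-launch-template-version \
  --launch-template-id lt-0123456789abcdef0 \
  --source-version 1 \
  --launch-template-data '{"BlockDeviceMappings":[{"DeviceName":"/dev/sdc","Ebs":{"VolumeSize":200,"VolumeType":"gp2","Encrypted":true}}]}'

# Batch only reads the template when a compute environment is created,
# so create a fresh environment whose compute-resources.json references
# the launchTemplate with the new version number
aws batch create-compute-environment \
  --compute-environment-name genomics-ce-v2 \
  --type MANAGED \
  --service-role AWSBatchServiceRole \
  --compute-resources file://compute-resources.json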
I'm trying to use the CloudFormation cfn-init to bootstrap the creation of on-demand compute nodes in a cluster built on Ubuntu 18.04. For some reason, cfn-init enters an infinite loop. This is the CloudFormation template that I am trying to use:
Resources:
  InstanceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: root
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - ec2:DescribeTags
                  - cloudformation:DescribeStackResource
                Resource: '*'
  InstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref InstanceRole
  LaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Metadata:
      AWS::CloudFormation::Init:
        configSets:
          default:
            - "basic"
        basic:
          files:
            /home/ubuntu/.emacs:
              content: !Sub |
                ;; ========== Configuring basic Emacs behavior ==========
                ;; Try to auto split vertically all the time
                (setq split-height-threshold nil)
                ;; ========== Enable Line and Column Numbering ==========
                ;; Show line-number in the mode line
                (line-number-mode 1)
                ;; Show column-number in the mode line
                (column-number-mode 1)
                ;; Display size in human format in Dired mode
                (setq dired-listing-switches "-alh")
              mode: "000644"
              owner: "ubuntu"
              group: "ubuntu"
          packages:
            apt:
              build-essential: []
              emacs-nox: []
    Properties:
      LaunchTemplateData:
        ImageId: ami-07a29e5e945228fa1
        IamInstanceProfile:
          Arn: !GetAtt [ InstanceProfile, Arn ]
        UserData:
          Fn::Base64:
            !Sub |
              #!/bin/bash -x
              # Install the aws CloudFormation Helper Scripts
              apt-get update -y && apt-get upgrade -y
              apt-get install -y python2.7
              update-alternatives --install /usr/bin/python python /usr/bin/python2.7 1
              curl https://bootstrap.pypa.io/get-pip.py --output get-pip.py
              python get-pip.py
              rm get-pip.py
              pip install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz
              ## Running init stack
              cfn-init -v --stack ${AWS::StackName} --resource LaunchTemplate --region ${AWS::Region}
      LaunchTemplateName: MyLaunchTemplate
Looking at /var/log/cfn-init.log is not really helpful:
2020-11-11 17:17:59,172 [DEBUG] CloudFormation client initialized with endpoint https://cloudformation.us-west-2.amazonaws.com
2020-11-11 17:17:59,172 [DEBUG] Describing resource LaunchTemplate in stack LaunchTemplate
2020-11-11 17:17:59,237 [ERROR] Throttling: Rate exceeded
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/cfnbootstrap/util.py", line 162, in _retry
return f(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/cfnbootstrap/util.py", line 234, in _timeout
raise exc[0]
HTTPError: 400 Client Error: Bad Request
2020-11-11 17:17:59,237 [DEBUG] Sleeping for 0.143176 seconds before retrying
2020-11-11 17:17:59,381 [DEBUG] Describing resource LaunchTemplate in stack LaunchTemplate
2020-11-11 17:17:59,445 [ERROR] Throttling: Rate exceeded
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/cfnbootstrap/util.py", line 162, in _retry
return f(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/cfnbootstrap/util.py", line 234, in _timeout
raise exc[0]
HTTPError: 400 Client Error: Bad Request
2020-11-11 17:17:59,445 [DEBUG] Sleeping for 1.874780 seconds before retrying
2020-11-11 17:18:01,323 [DEBUG] Describing resource LaunchTemplate in stack LaunchTemplate
Investigating /var/log/cloud-init.log, I can see where it breaks first:
(...)
2020-11-11 17:16:57,175 - util.py[DEBUG]: Running command ['/var/lib/cloud/instance/scripts/part-001'] with allowed return codes [0] (shell=False, capture=False)
2020-11-11 17:21:17,126 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-001 [1]
2020-11-11 17:21:17,129 - util.py[DEBUG]: Failed running /var/lib/cloud/instance/scripts/part-001 [1]
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 878, in runparts
File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 2164, in subp
break
cloudinit.util.ProcessExecutionError: Unexpected error while running command.
Command: ['/var/lib/cloud/instance/scripts/part-001']
Exit code: 1
Reason: -
Stdout: -
Stderr: -
(...)
which is the content of the UserData of the template:
$ cat /var/lib/cloud/instance/scripts/part-001
#!/bin/bash -x
# Install the aws CloudFormation Helper Scripts
apt-get update -y && apt-get upgrade -y
apt-get install -y python2.7
update-alternatives --install /usr/bin/python python /usr/bin/python2.7 1
curl https://bootstrap.pypa.io/get-pip.py --output get-pip.py
python get-pip.py
rm get-pip.py
pip install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz
## Running init stack
cfn-init -v --stack LaunchTemplate --resource LaunchTemplate --region us-west-2
Even though I set cloudformation:DescribeStackResource in the InstanceRole, running the script as root returns the following error:
(...)
Successfully built aws-cfn-bootstrap
+ cfn-init -v --stack LaunchTemplate --resource LaunchTemplate --region us-west-2
AccessDenied: Instance i-0bcf477579987a0e8 is not allowed to call DescribeStackResource for LaunchTemplate
This is really strange, as doing the same within an AWS::EC2::Instance using the same AMI works just fine. Any idea what's going on here? What am I missing?
Thanks
This could be because --resource LaunchTemplate is incorrect. It should be the ASG or instance resource that uses the launch template, not the LaunchTemplate itself.
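For example, if the instances come up through an Auto Scaling group, a minimal sketch of the shape this could take; ComputeGroup is a hypothetical resource name, and the AWS::CloudFormation::Init metadata would sit on it rather than on the template:

ComputeGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Metadata:
    AWS::CloudFormation::Init:
      # move the configSets/basic config from the LaunchTemplate here
  Properties:
    MinSize: '0'
    MaxSize: '10'
    AvailabilityZones: !GetAZs ''
    LaunchTemplate:
      LaunchTemplateId: !Ref LaunchTemplate
      Version: !GetAtt LaunchTemplate.LatestVersionNumber

with the cfn-init call in UserData changed to match:

cfn-init -v --stack ${AWS::StackName} --resource ComputeGroup --region ${AWS::Region}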
I have an experiment I'd like to run 100 different times, each with a command line flag set to a different integer value. Each experiment will output the result to a text file. Experiments take about 2 hours each and are independent of each other.
I currently have a Docker image that can run the experiment when provided the command line flag.
I am curious if there is a way to write a script that can launch 100 AWS instances (one for each possible flag value), run the Docker image, and then output the result to a shared text file somewhere. Is this possible? I am very inexperienced with AWS so I'm not sure if this is the proper tool or what steps would be required (besides building the Docker image).
Thanks.
You could do this using Vagrant with the vagrant-aws plugin to spin up the instances, plus either the Docker Provisioner to pull your images / run your containers, or the Ansible Provisioner. For example:
.
├── playbook.yml
└── Vagrantfile
The Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  N = 100
  (1..N).each do |server_id|
    config.vm.box = "dummy"
    config.ssh.forward_agent = true
    config.vm.define "server#{server_id}" do |server|
      server.vm.provider :aws do |aws, override|
        aws.access_key_id = ENV["AWS_ACCESS_KEY_ID"]
        aws.secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
        aws.instance_type = "t2.micro"
        aws.block_device_mapping = [
          {
            "DeviceName" => "/dev/sda1",
            "Ebs.VolumeSize" => 30
          }
        ]
        aws.tags = {
          "Name" => "node#{server_id}.example.com",
          "Environment" => "stage"
        }
        aws.subnet_id = "subnet-d65893b0"
        aws.security_groups = [
          "sg-deadbeef"
        ]
        aws.region = "eu-west-1"
        aws.region_config "eu-west-1" do |region|
          region.ami = "ami-0635ad49b5839867c"
          region.keypair_name = "ubuntu"
        end
        aws.monitoring = true
        aws.associate_public_ip = false
        aws.ssh_host_attribute = :private_ip_address
        override.ssh.username = "ubuntu"
        override.ssh.private_key_path = ENV["HOME"] + "/.ssh/id_rsa"
        override.ssh.forward_agent = true
      end
      if server_id == N
        server.vm.provision :ansible do |ansible|
          ansible.limit = "all"
          ansible.playbook = "playbook.yml"
          ansible.compatibility_mode = "2.0"
          ansible.raw_ssh_args = "-o ForwardAgent=yes"
          ansible.extra_vars = {
            "ansible_python_interpreter": "/usr/bin/python3"
          }
        end
      end
    end
  end
end
Note: this example does Ansible parallel execution, as described in the Vagrant Tips & Tricks.
The ansible playbook.yml:
- hosts: all
  pre_tasks:
    - name: get instance facts
      local_action:
        module: ec2_instance_facts
        filters:
          private-dns-name: '{{ ansible_fqdn }}'
          "tag:Environment": stage
      register: _ec2_instance_facts
    - name: add route53 entry
      local_action:
        module: route53
        state: present
        private_zone: yes
        zone: 'example.com'
        record: '{{ _ec2_instance_facts.instances[0].tags["Name"] }}'
        type: A
        ttl: 7200
        value: '{{ _ec2_instance_facts.instances[0].private_ip_address }}'
        wait: yes
        overwrite: yes
  tasks:
    - name: install build requirements
      apt:
        name: ['python3-pip', 'python3-socks', 'git']
        state: present
        update_cache: yes
      become: true
    - name: apt install docker requirements
      apt:
        name: ['apt-transport-https', 'ca-certificates', 'curl', 'gnupg-agent', 'software-properties-common']
        state: present
      become: true
    - name: add docker apt key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present
      become: true
    - name: add docker apt repository
      apt_repository:
        repo: 'deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable'
        state: present
      become: true
    - name: apt install docker-ce
      apt:
        name: ['docker-ce', 'docker-ce-cli', 'containerd.io']
        state: present
        update_cache: yes
      become: true
    - name: get docker-compose
      get_url:
        url: 'https://github.com/docker/compose/releases/download/1.24.1/docker-compose-{{ ansible_system }}-{{ ansible_userspace_architecture }}'
        dest: /usr/local/bin/docker-compose
        mode: '0755'
      become: true
    - name: pip install docker and boto3
      pip:
        name: ['boto3', 'docker', 'docker-compose']
        executable: pip3
    - name: create docker config directory
      file:
        path: /etc/docker
        state: directory
      become: true
    - name: copy docker daemon.json
      copy:
        content: |
          {
            "group": "docker",
            "log-driver": "journald",
            "live-restore": true,
            "experimental": true,
            "insecure-registries" : [],
            "features": { "buildkit": true }
          }
        dest: /etc/docker/daemon.json
      become: true
    - name: enable docker service
      service:
        name: docker
        enabled: yes
      become: true
    - name: add ubuntu user to docker group
      user:
        name: ubuntu
        groups: docker
        append: yes
      become: true
    - name: restart docker daemon
      systemd:
        state: restarted
        daemon_reload: yes
        name: docker
        no_block: yes
      become: true
    # pull your images then run your containers
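To finish it off, a minimal sketch of a task that would pull and run the experiment container using the (pre-collections) docker_container module; the image name, flag variable, and results path are placeholders:

    - name: run the experiment container
      docker_container:
        name: 'experiment-{{ flag_value | default(1) }}'
        image: 'your-registry/experiment:latest'     # placeholder image
        command: '--flag {{ flag_value | default(1) }}'
        detach: yes
        volumes:
          - /home/ubuntu/results:/results            # collect outputs here
      become: true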
The only approach that I can think of is using AWS SSM to run multiple commands, but you might still need to spin up 100s of instances, and that would not be a good approach.
Below is the set of commands you can use:
Spin up instances using the CloudFormation template below, running it in a loop to create multiple instances (a loop sketch follows the template):
---
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      AvailabilityZone: <region>
      ImageId: <amiID>
      InstanceType: t2.micro
      KeyName: <KeyName>
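A minimal sketch of such a loop with the AWS CLI; the stack-name prefix and template file name are placeholders:

# create 100 stacks from the same template (instance.yml is a placeholder file name)
for i in $(seq 1 100); do
  aws cloudformation create-stack \
    --stack-name "experiment-$i" \
    --template-body file://instance.yml
done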
Use the command below to get the instance ID:
aws ec2 describe-instances --filters 'Name=tag:Name,Values=EC2' --query 'Reservations[*].Instances[*].InstanceId' --output text
Using that instance ID, run the command below:
aws ssm send-command --instance-ids "<instanceID>" --document-name "AWS-RunShellScript" --comment "<COMMENT>" --parameters commands='sudo yum update -y' --output text
I don't think Docker will be of any help here, as it would complicate things for you due to the SSM agent installation. So your best bet would be running the commands one by one and finally storing your output in S3.
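send-command can also write its output to S3 directly; a minimal sketch, with the bucket name and the actual experiment command as placeholders:

aws ssm send-command \
  --instance-ids "<instanceID>" \
  --document-name "AWS-RunShellScript" \
  --parameters commands='/opt/experiment --flag 1 > /tmp/result.txt' \
  --output-s3-bucket-name "my-results-bucket" \
  --output-s3-key-prefix "experiment-1/"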
I created two Amazon EC2 instances in AWS CloudFormation using a YAML template. I want to pass the private IP address of one EC2 instance to the other EC2 instance, which has a public IP address. As per the AWS documentation, we can do that using !GetAtt JMeterServer1Instance.PrivateIp.
I want to know under which section of the public EC2 instance I should add that in the template. (Please consider this is a YAML template.)
How do I check that we have received it?
It appears that your requirement is:
Create two instances in a CloudFormation template
In the User Data for Instance-A, refer to Instance-B
This is quite simple. First, define that Instance-B DependsOn Instance-A to ensure the creation of Instance-A before Instance-B.
Then, in the User Data for Instance-B, refer to Instance-A:
UserData:
  "Fn::Base64":
    !Sub |
      #!/bin/bash
      echo "${InstanceA.PrivateIp}" >foo
A 'better' method would be to use DNS names with a Hosted Zone for the VPC in Route 53. This would create a DNS zone for the VPC, then define a DNS name that can be resolved locally. Link it to Instance-B and then Instance-A could refer to Instance-B by DNS name rather than IP address. This allows the DNS name to point to a different instance in future if desired, and creates fewer dependencies between Instance-A and Instance-B. (But, admittedly, more setup.)
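A minimal sketch of that Route 53 setup; the zone name, record name, and VPC reference are placeholders:

PrivateZone:
  Type: AWS::Route53::HostedZone
  Properties:
    Name: example.internal           # placeholder zone
    VPCs:
      - VPCId: !Ref MyVpc            # placeholder VPC
        VPCRegion: !Ref AWS::Region
InstanceBRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneId: !Ref PrivateZone
    Name: instance-b.example.internal
    Type: A
    TTL: '300'
    ResourceRecords:
      - !GetAtt InstanceB.PrivateIp

Instance-A can then resolve instance-b.example.internal from inside the VPC, with no GetAtt dependency in its UserData.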
As per the AWS documentation, Fn::GetAtt will do the trick here.
My case is:
EC2Instance001 needs to be created first.
EC2Instance002 needs to use the IP of EC2Instance001.
The EC2Instance002 instance is created with two specific settings:
"DependsOn": [ "EC2Instance001" ], as I want EC2Instance001 to be created first.
Under UserData (or Metadata), use { "Fn::GetAtt" : [ "EC2Instance001", "PrivateIp" ] } to get the IP of the 1st instance (EC2Instance001).
Here is how I achieved it (EC2Instance002):
---
EC2Instance002:
  Type: AWS::EC2::Instance
  Metadata:
    AWS::CloudFormation::Init:
      configSets:
        InstallAndRun:
          - Install
          - Configure
      Install:
        packages:
          yum:
            git: []
        files:
          "/tmp/bootstrap.sh":
            content:
              Fn::Join:
                - ''
                - - "#!/bin/bash\n"
                  - "set -x\n"
                  - "echo \"============================\"\n"
                  - "sudo hostname >> /tmp/EC2Instance.txt\n"
                  - MASTERIP=
                  - Fn::GetAtt:
                      - EC2Instance001
                      - PrivateIp
                  - "\n"
                  - "echo $MASTERIP > masterIP.txt \n"
            mode: '755'
            owner: ec2-user
            group: ec2-user
      Configure:
        commands:
          runBootstrapScript:
            command: "./bootstrap.sh"
            cwd: "/tmp"
  DependsOn:
    - EC2Instance001
  Properties:
    InstanceType:
      Ref: InstanceType
    SecurityGroups:
      - Ref: InstanceSecurityGroup
    KeyName:
      Ref: KeyName
    UserData:
      Fn::Base64:
        Fn::Join:
          - ''
          - - "#!/bin/bash -xe\n"
            - "yum install -y aws-cfn-bootstrap\n"
            - "# Install the files and packages from the metadata\n"
            - "/opt/aws/bin/cfn-init -v"
            - " --stack "
            - Ref: AWS::StackName
            - " --resource EC2Instance002 "
            - " --configsets InstallAndRun "
            - " --region "
            - Ref: AWS::Region
            - "\n"
    ImageId:
      Fn::FindInMap:
        - AWSRegionArch2AMI
        - Ref: AWS::Region
        - Fn::FindInMap:
            - AWSInstanceType2Arch
            - Ref: InstanceType
            - Arch
You can see that under Metadata I am capturing the IP of the EC2Instance001 instance in the variable $MASTERIP.
NOTE: The same line in JSON would be written as:
"MASTERIP=",{ "Fn::GetAtt" : [ "EC2Instance001", "PrivateIp" ] }, "\n",
It depends on what you'd like to do with the private IP on the other machine.
If you'd like to use it in a script on the other VM, pass it down in the user data script like in this example: UserData script with Resource Attribute CloudFormation
The example on the link is showing the attribute of a NetworkInterface instead of an instance attribute, but it's the same with !GetAtt JMeterServer1Instance.PrivateIp
Cloudformation appears to have an "Outputs" section where you can have a value referenced for other stacks, or to display back to the user, etc.
The limited doc is here.
Is it possible to use this to make the contents of a file available?
e.g. I've got a Jenkins install where the initial admin password is stored within:
/var/lib/jenkins/secrets/initialAdminPassword
I'd love to have that value available after deploying our Jenkins Cloudformation stack without having to then SSH into the server.
Is this possible with the outputs section, or any other way with cloudformation templates?
The Outputs section of a CloudFormation template is meant to help you find your resources easily.
For any resource you create, you can output the properties defined in the Fn::GetAtt documentation.
For example, to get the connection string for the RDS Instance which was created using Cloud formation template, you can use the following
"Outputs" : {
"JDBCConnectionString": {
"Description" : "JDBC connection string for the master database",
"Value" : { "Fn::Join": [ "",
[ "jdbc:mysql://",
{ "Fn::GetAtt": [ "MyDatabase", "Endpoint.Address" ] },
":",
{ "Fn::GetAtt": [ "MyDatabase", "Endpoint.Port" ] },
"/",
{ "Ref": "MyDBName" }]
]}
}
}
It is not possible to output the contents of a file. Moreover, outputs are visible to all users with access to your AWS account, so having a password as an output is not recommended.
I would suggest uploading your secrets to a private S3 bucket after the CloudFormation create-stack operation is successful and downloading the secrets whenever required.
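For example, a minimal sketch of what that upload could look like at the end of the instance's UserData; the bucket name is a placeholder, and the instance role would need s3:PutObject on it:

# copy the initial admin password to a private bucket (placeholder name)
aws s3 cp /var/lib/jenkins/secrets/initialAdminPassword \
  s3://my-private-secrets-bucket/jenkins/initialAdminPassword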
Hope this helps.
I know this question has been answered but I wanted to offer another solution.
I found myself wanting to do exactly what you (the OP) were trying to do: use CloudFormation to install Jenkins on an EC2 instance and then print the initial admin password to the CloudFormation outputs.
I ended up working around trying to read the file with the password and instead used the Jenkins CLI from the UserData section to update the admin user with a password that I specified.
Here’s what I did (showing snippets from the template in YAML):
Added a parameter to the template inputs to get the password:
Parameters:
  KeyName:
    ConstraintDescription: Must be the name of an existing EC2 KeyPair.
    Description: Name of an existing EC2 KeyPair for SSH access
    Type: AWS::EC2::KeyPair::KeyName
  PassWord:
    AllowedPattern: '[-_a-zA-Z0-9]*'
    ConstraintDescription: A complex password at least eight chars long with alphanumeric characters, dashes and underscores.
    Description: Password for the admin account
    MaxLength: 64
    MinLength: 8
    NoEcho: true
    Type: String
In the UserData section, I used the PassWord parameter in a call to the jenkins-cli to update the admin account:
UserData: !Base64
  Fn::Join:
    - ''
    - - "#!/bin/bash -x\n"
      - "exec > /tmp/user-data.log 2>&1\nunset UCF_FORCE_CONFFOLD\n"
      - "export UCF_FORCE_CONFFNEW=YES\n"
      - "ucf --purge /boot/grub/menu.lst\n"
      - "export DEBIAN_FRONTEND=noninteractive\n"
      - "echo \"deb http://pkg.jenkins-ci.org/debian binary/\" > /etc/apt/sources.list.d/jenkins.list\n"
      - "wget -q -O jenkins-ci.org.key http://pkg.jenkins-ci.org/debian-stable/jenkins-ci.org.key\napt-key add jenkins-ci.org.key\n"
      - "apt-get update\n"
      - "apt-get -o Dpkg::Options::=\"--force-confnew\" --force-yes -fuy upgrade\n"
      - "apt-get install -y python-pip\n"
      - "pip install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz\n"
      - "apt-get install -y nginx\n"
      - "apt-get install -y openjdk-8-jdk\n"
      - "apt-get install -y jenkins\n"
      - "# Wait for Jenkins to Set Up\n"
      - "until [ $(curl -o /dev/null --silent --head --write-out '%{http_code}\n' http://localhost:8080) -eq 403 ]; do sleep 1; done\nsleep 10\n"
      - "# Change the password for the admin account\n"
      - "echo 'jenkins.model.Jenkins.instance.securityRealm.createAccount(\"admin\", \""
      - !Ref 'PassWord'
      - "\")' | java -jar /var/cache/jenkins/war/WEB-INF/jenkins-cli.jar -s \"http://localhost:8080/\" -auth \"admin:$(cat /var/lib/jenkins/secrets/initialAdminPassword)\" groovy =\n"
      - "/usr/local/bin/cfn-init --resource=Instance --region="
      - !Ref 'AWS::Region'
      - ' --stack='
      - !Ref 'AWS::StackName'
      - "\n"
      - "unlink /etc/nginx/sites-enabled/default\nsystemctl reload nginx\n"
      - "/usr/local/bin/cfn-signal -e $? --resource=Instance --region="
      - !Ref 'AWS::Region'
      - ' --stack='
      - !Ref 'AWS::StackName'
      - "\n"
Using this method, when Jenkins starts up I don't get the "enter the initial admin password" screen; instead I get a screen where I can just log in as admin with the password used in the parameters.
In terms of adding something to the outputs from a file on the system, I think there is a way to do it using a WaitCondition and passing data back with a cfn-signal command (sketched below). But once I figured out that all I needed to do was set the password, I didn't pursue the WaitCondition method.
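For anyone who does want to try it, a minimal sketch of the WaitCondition idea; the resource names are hypothetical, and bear in mind that outputs are visible to anyone with access to the account:

PasswordWaitHandle:
  Type: AWS::CloudFormation::WaitConditionHandle
PasswordWaitCondition:
  Type: AWS::CloudFormation::WaitCondition
  DependsOn: Instance
  Properties:
    Handle: !Ref PasswordWaitHandle
    Timeout: '1800'
Outputs:
  InitialAdminPassword:
    Description: Data signalled back from the instance (JSON-wrapped)
    Value: !GetAtt PasswordWaitCondition.Data

The UserData would then send the file contents once Jenkins is up (with the handle's presigned URL substituted in via !Sub or joined in via !Ref), e.g.:

/usr/local/bin/cfn-signal -e 0 --data "$(cat /var/lib/jenkins/secrets/initialAdminPassword)" "${PasswordWaitHandle}"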
Again, I know you have your answer, but I wanted to share in case anyone else happens to be searching for a way to do this. This way worked for me! :D