I'm trying to use the CloudFormation cfn-init helper to bootstrap creation of on-demand compute nodes in a cluster built on Ubuntu 18.04. For some reason, cfn-init enters an infinite loop. This is the CloudFormation template I am trying to use:
Resources:
InstanceRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal:
Service: ec2.amazonaws.com
Action: sts:AssumeRole
Policies:
- PolicyName: root
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Action:
- ec2:DescribeTags
- cloudformation:DescribeStackResource
Resource: '*'
InstanceProfile:
Type: AWS::IAM::InstanceProfile
Properties:
Roles:
- !Ref InstanceRole
LaunchTemplate:
Type: AWS::EC2::LaunchTemplate
Metadata:
AWS::CloudFormation::Init:
configSets:
default:
- "basic"
basic:
files:
/home/ubuntu/.emacs:
content: !Sub |
;; ========== Configuring basic Emacs behavior ==========
;; Try to auto split vertically all the time
(setq split-height-threshold nil)
;; ========== Enable Line and Column Numbering ==========
;; Show line-number in the mode line
(line-number-mode 1)
;; Show column-number in the mode line
(column-number-mode 1)
;; Display size in human format in Dired mode
(setq dired-listing-switches "-alh")
mode: "000644"
owner: "ubuntu"
group: "ubuntu"
packages:
apt:
build-essential: []
emacs-nox: []
Properties:
LaunchTemplateData:
ImageId: ami-07a29e5e945228fa1
IamInstanceProfile:
Arn: !GetAtt [ InstanceProfile, Arn ]
UserData:
Fn::Base64:
!Sub |
#!/bin/bash -x
# Install the aws CloudFormation Helper Scripts
apt-get update -y && apt-get upgrade -y
apt-get install -y python2.7
update-alternatives --install /usr/bin/python python /usr/bin/python2.7 1
curl https://bootstrap.pypa.io/get-pip.py --output get-pip.py
python get-pip.py
rm get-pip.py
pip install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz
## Running init stack
cfn-init -v --stack ${AWS::StackName} --resource LaunchTemplate --region ${AWS::Region}
LaunchTemplateName: MyLaunchTemplate
Looking at /var/log/cfn-init.log is not really helpful:
2020-11-11 17:17:59,172 [DEBUG] CloudFormation client initialized with endpoint https://cloudformation.us-west-2.amazonaws.com
2020-11-11 17:17:59,172 [DEBUG] Describing resource LaunchTemplate in stack LaunchTemplate
2020-11-11 17:17:59,237 [ERROR] Throttling: Rate exceeded
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/cfnbootstrap/util.py", line 162, in _retry
return f(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/cfnbootstrap/util.py", line 234, in _timeout
raise exc[0]
HTTPError: 400 Client Error: Bad Request
2020-11-11 17:17:59,237 [DEBUG] Sleeping for 0.143176 seconds before retrying
2020-11-11 17:17:59,381 [DEBUG] Describing resource LaunchTemplate in stack LaunchTemplate
2020-11-11 17:17:59,445 [ERROR] Throttling: Rate exceeded
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/cfnbootstrap/util.py", line 162, in _retry
return f(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/cfnbootstrap/util.py", line 234, in _timeout
raise exc[0]
HTTPError: 400 Client Error: Bad Request
2020-11-11 17:17:59,445 [DEBUG] Sleeping for 1.874780 seconds before retrying
2020-11-11 17:18:01,323 [DEBUG] Describing resource LaunchTemplate in stack LaunchTemplate
Investigating /var/log/cloud-init.log, I can see where it breaks first:
(...)
2020-11-11 17:16:57,175 - util.py[DEBUG]: Running command ['/var/lib/cloud/instance/scripts/part-001'] with allowed return codes [0] (shell=False, capture=False)
2020-11-11 17:21:17,126 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-001 [1]
2020-11-11 17:21:17,129 - util.py[DEBUG]: Failed running /var/lib/cloud/instance/scripts/part-001 [1]
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 878, in runparts
File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 2164, in subp
break
cloudinit.util.ProcessExecutionError: Unexpected error while running command.
Command: ['/var/lib/cloud/instance/scripts/part-001']
Exit code: 1
Reason: -
Stdout: -
Stderr: -
(...)
which is the content of the UserData of the template:
$ cat /var/lib/cloud/instance/scripts/part-001
#!/bin/bash -x
# Install the aws CloudFormation Helper Scripts
apt-get update -y && apt-get upgrade -y
apt-get install -y python2.7
update-alternatives --install /usr/bin/python python /usr/bin/python2.7 1
curl https://bootstrap.pypa.io/get-pip.py --output get-pip.py
python get-pip.py
rm get-pip.py
pip install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz
## Running init stack
cfn-init -v --stack LaunchTemplate --resource LaunchTemplate --region us-west-2
Even though I set cloudformation:DescribeStackResource in the InstanceRole, running the script as root returns the following error:
(...)
Successfully built aws-cfn-bootstrap
+ cfn-init -v --stack LaunchTemplate --resource LaunchTemplate --region us-west-2
AccessDenied: Instance i-0bcf477579987a0e8 is not allowed to call DescribeStackResource for LaunchTemplate
This is really strange, as doing the same thing within an AWS::EC2::Instance using the same AMI works just fine. Any idea what's going on here? What am I missing?
Thanks
This could be because --resource LaunchTemplate is incorrect. It should be the ASG or instance resource that uses the launch template, not the LaunchTemplate itself: cfn-init fetches the AWS::CloudFormation::Init metadata of whichever resource you name.
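For example, if the stack launched instances through an Auto Scaling group, the metadata and the cfn-init call would both reference that resource instead. A sketch (the ASG resource name and subnet ID below are hypothetical, and the "basic" config is the one from the question):

```yaml
ASG:
  Type: AWS::AutoScaling::AutoScalingGroup
  Metadata:
    AWS::CloudFormation::Init:
      configSets:
        default: ["basic"]
      # ... same "basic" files/packages config as in the question ...
  Properties:
    MinSize: "0"
    MaxSize: "4"
    VPCZoneIdentifier: [subnet-00000000]   # placeholder subnet
    LaunchTemplate:
      LaunchTemplateId: !Ref LaunchTemplate
      Version: !GetAtt LaunchTemplate.LatestVersionNumber
```

and in the UserData, point cfn-init at that resource: cfn-init -v --stack ${AWS::StackName} --resource ASG --region ${AWS::Region}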
I am getting this error after creating a Systems Manager document:
---
schemaVersion: "2.2"
description: "Command Document Example JSON Template"
parameters:
Message:
type: "String"
description: "Example"
default: "Hello World"
mainSteps:
- action: "aws:runPowerShellScript"
name: "example"
inputs:
runCommand:
- 'sudo apt update -y && sudo apt upgrade -y'
- 'sudo apt install -y httpd'
- 'sudo systemctl start httpd'
- 'sudo systemctl enanle httpd'
- 'echo "{{Message}} from $(hostname -f)" > /var/www/html/index.html'
Please let me know what I have to do to fix this error.
I need to install httpd on all 3 servers I have.
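A few things in the document above stand out: aws:runPowerShellScript is the Windows plugin (Linux targets need aws:runShellScript), the Apache package on Ubuntu is apache2 rather than httpd, and "enanle" is a typo for "enable". A corrected sketch (untested, assuming Ubuntu targets) might look like:

```yaml
---
schemaVersion: "2.2"
description: "Command Document Example Template"
parameters:
  Message:
    type: "String"
    description: "Example"
    default: "Hello World"
mainSteps:
  - action: "aws:runShellScript"   # Linux targets use runShellScript, not runPowerShellScript
    name: "example"
    inputs:
      runCommand:
        - 'sudo apt update -y && sudo apt upgrade -y'
        - 'sudo apt install -y apache2'              # "httpd" is the RHEL/Amazon Linux package name
        - 'sudo systemctl enable --now apache2'      # enable and start in one step
        - 'echo "{{Message}} from $(hostname -f)" > /var/www/html/index.html'
```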
I am trying to deploy my serverless project locally with LocalStack and the serverless-localstack plugin. When I try to deploy it with serverless deploy, it throws an error and fails to create the CloudFormation stack. However, I manage to create the same stack when I deploy the project to the real AWS environment. What could the issue be here? I checked the answers to all the previous questions asked on similar issues; nothing seems to work.
docker-compose.yml
version: "3.8"
services:
localstack:
container_name: "serverless-localstack_main"
image: localstack/localstack
ports:
- "4566-4597:4566-4597"
environment:
- AWS_DEFAULT_REGION=eu-west-1
- EDGE_PORT=4566
- SERVICES=lambda,cloudformation,s3,sts,iam,apigateway,cloudwatch
volumes:
- "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
serverless.yml
service: serverless-localstack-test
frameworkVersion: '2'
plugins:
- serverless-localstack
custom:
localstack:
debug: true
host: http://localhost
edgePort: 4566
autostart: true
lambda:
mountCode: True
stages:
- local
endpointFile: config.json
provider:
name: aws
runtime: nodejs12.x
lambdaHashingVersion: 20201221
stage: local
region: eu-west-1
deploymentBucket:
name: deployment
functions:
hello:
handler: handler.hello
config.json (which has the endpoints):
{
"CloudFormation": "http://localhost:4566",
"CloudWatch": "http://localhost:4566",
"Lambda": "http://localhost:4566",
"S3": "http://localhost:4566"
}
Error in Localstack container
serverless-localstack_main | 2021-06-04T17:41:49:WARNING:localstack.utils.cloudformation.template_deployer: Error calling
<bound method ClientCreator._create_api_method.<locals>._api_call of
<botocore.client.Lambda object at 0x7f31f359a4c0>> with params: {'FunctionName':
'serverless-localstack-test-local-hello', 'Runtime': 'nodejs12.x', 'Role':
'arn:aws:iam::000000000000:role/serverless-localstack-test-local-eu-west-1-lambdaRole',
'Handler': 'handler.hello', 'Code': {'S3Bucket': '__local__', 'S3Key':
'/Users/charles/Documents/Practice/serverless-localstack-test'}, 'Timeout': 6,
'MemorySize': 1024} for resource: {'Type': 'AWS::Lambda::Function', 'Properties':
{'Code': {'S3Bucket': '__local__', 'S3Key':
'/Users/charles/Documents/Practice/serverless-localstack-test'}, 'Handler':
'handler.hello', 'Runtime': 'nodejs12.x', 'FunctionName': 'serverless-localstack-test-
local-hello', 'MemorySize': 1024, 'Timeout': 6, 'Role':
'arn:aws:iam::000000000000:role/serverless-localstack-test-local-eu-west-1-lambdaRole'},
'DependsOn': ['HelloLogGroup'], 'LogicalResourceId': 'HelloLambdaFunction',
'PhysicalResourceId': None, '_state_': {}}
I fixed that problem using this plugin: https://www.serverless.com/plugins/serverless-deployment-bucket
You need to make some adjustments in your files.
Update your docker-compose.yml: use the reference Docker Compose file from LocalStack; you can check it here.
Use a template that works correctly; the AWS docs page has several examples, which you can check here.
Run it with the following command: aws cloudformation create-stack --endpoint-url http://localhost:4566 --stack-name samplestack --template-body file://lambda.yml --profile dev
You can also run LocalStack using Python with the following commands:
pip install localstack
localstack start
File "/var/task/django/db/backends/postgresql/base.py", line 29, in
raise ImproperlyConfigured("Error loading psycopg2 module: %s" % e)
django.core.exceptions.ImproperlyConfigured: Error loading psycopg2 module: libpq.so.5: cannot open shared object file: No such file or directory
I'm receiving the above error after deploying a Django application via Serverless through our Bitbucket Pipeline.
Here's the Pipeline:
- step: &Deploy-serverless
caches:
- node
image: node:11.13.0-alpine
name: Deploy Serverless
script:
# Initial Setup
- apk add curl postgresql-dev gcc python3-dev musl-dev linux-headers libc-dev
# Load our environment.
...
- apk add python3
- npm install -g serverless
# Set Pipeline Variables
...
# Configure Serverless
- cp requirements.txt src/requirements.txt
- printenv > src/.env
- serverless config credentials --provider aws --key ${AWS_KEY} --secret ${AWS_SECRET}
- cd src
- sls plugin install -n serverless-python-requirements
- sls plugin install -n serverless-wsgi
- sls plugin install -n serverless-dotenv-plugin
Here's the Serverless File:
service: serverless-django
plugins:
- serverless-python-requirements
- serverless-wsgi
- serverless-dotenv-plugin
custom:
wsgi:
app: arc.wsgi.application
packRequirements: false
pythonRequirements:
dockerFile: ./serverless-dockerfile
dockerizePip: non-linux
pythonBin: python3
useDownloadCache: false
useStaticCache: false
provider:
name: aws
runtime: python3.6
stage: dev
region: us-east-1
iamRoleStatements:
- Effect: "Allow"
Action:
- s3:GetObject
- s3:PutObject
Resource: "arn:aws:s3:::*"
functions:
app:
handler: wsgi.handler
events:
- http: ANY /
- http: "ANY {proxy+}"
timeout: 60
Here's the Dockerfile:
FROM lambci/lambda:build-python3.7
RUN yum install -y postgresql-devel python-psycopg2 postgresql-libs
And here's the requirements:
amqp==2.6.1
asgiref==3.3.1
attrs==20.3.0
beautifulsoup4==4.9.3
billiard==3.6.3.0
boto3==1.17.29
botocore==1.20.29
celery==4.4.7
certifi==2020.12.5
chardet==4.0.0
click==7.1.2
coverage==5.5
Django==3.1.7
django-cachalot==2.3.3
django-celery-beat==2.2.0
django-celery-results==2.0.1
django-filter==2.4.0
django-google-analytics-app==5.0.2
django-redis==4.12.1
django-timezone-field==4.1.1
djangorestframework==3.12.2
Djaq==0.2.0
drf-spectacular==0.14.0
future==0.18.2
idna==2.10
inflection==0.5.1
Jinja2==2.11.3
joblib==1.0.1
jsonschema==3.2.0
kombu==4.6.11
livereload==2.6.3
lunr==0.5.8
Markdown==3.3.4
MarkupSafe==1.1.1
mkdocs==1.1.2
nltk==3.5
psycopg2-binary==2.8.6
pyrsistent==0.17.3
python-crontab==2.5.1
python-dateutil==2.8.1
python-dotenv==0.15.0
pytz==2021.1
PyYAML==5.4.1
redis==3.5.3
regex==2020.11.13
requests==2.25.1
sentry-sdk==1.0.0
six==1.15.0
soupsieve==2.2
sqlparse==0.4.1
structlog==21.1.0
tornado==6.1
tqdm==4.59.0
uritemplate==3.0.1
urllib3==1.26.3
uWSGI==2.0.19.1
vine==1.3.0
Werkzeug==1.0.1
And here's the database settings:
# Database Defintions
DATABASES = {
"default": {
"ENGINE": "django.db.backends.postgresql_psycopg2",
"HOST": load_env("PSQL_HOST", "127.0.0.1"),
"NAME": load_env("PSQL_DATABASE", ""),
"PASSWORD": load_env("PSQL_PASSWORD", ""),
"USER": load_env("PSQL_USERNAME", ""),
"PORT": load_env("PSQL_PORT", "5432"),
"TEST": {
"NAME": "arc_unittest",
},
},
}
I am at a loss as to what exactly the issue is. Thoughts?
File "/var/task/django/db/backends/postgresql/base.py", line 29, in
raise ImproperlyConfigured("Error loading psycopg2 module: %s" % e)
django.core.exceptions.ImproperlyConfigured: Error loading psycopg2 module: No module named 'psycopg2._psycopg'
I receive this similar error when deploying locally.
In my case, I needed to replace the psycopg2-binary with aws-psycopg2 to be Lambda friendly.
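The change amounts to swapping one line in requirements.txt (the aws-psycopg2 version shown is illustrative; pin whichever release is current):

```diff
-psycopg2-binary==2.8.6
+aws-psycopg2==1.2.1
```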
Apparently it is a dependency error; on Debian-based distributions you solve it by installing the libpq-dev package.
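The Amazon Linux build image in the question's Dockerfile already pulls in the equivalent (postgresql-devel); on a Debian/Ubuntu build image the line would instead be something like this (sketch, base image hypothetical):

```dockerfile
FROM python:3.6-slim
# Pull in the libpq headers and a compiler so psycopg2 can build against libpq.so.5
RUN apt-get update && apt-get install -y libpq-dev gcc && rm -rf /var/lib/apt/lists/*
```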
You can also use a Lambda layer; there is one already created in this repo.
At the bottom of your Lambda console, there is a "Layers" section where you can click "Add Layer" and specify the corresponding ARN based on your Python version and AWS region.
It will also have the benefit of reducing your package size.
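With Serverless, the layer can also be attached directly in serverless.yml instead of through the console (the ARN below is a placeholder; substitute the psycopg2 layer matching your region and runtime from the repo):

```yaml
functions:
  app:
    handler: wsgi.handler
    layers:
      # Placeholder ARN -- pick the one for your region/Python version
      - arn:aws:lambda:us-east-1:123456789012:layer:psycopg2-py36:1
```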
I am trying to deploy a dockerized React web app to EC2, but I keep getting an error when configuring the instance. I have already searched but did not find anything.
Deploying using command:
ansible-playbook -vvvvv ansible/ec2_deploy.yml --user ubuntu
The Dockerfile for the container I am running Ansible in:
FROM node:10.23.0-alpine3.9
COPY . .
ENV PYTHONUNBUFFERED=1
RUN apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python
RUN python3 -m ensurepip
RUN pip3 install --no-cache --upgrade pip setuptools
RUN apk add --update ansible
RUN apk add
RUN pip install boto
RUN chmod 777 get_vault_pass.sh
ENTRYPOINT [ "/bin/sh" ]
Ansible deployment:
- name: Deploy to EC2
hosts: localhost
connection: local
tasks:
- name: Launch EC2 instance
ec2:
instance_type: t2.micro
image: ami-0885b1f6bd170450c
region: us-east-1
key_name: eshop-key-pair
vpc_subnet_id: subnet-cafc34fb
assign_public_ip: yes
wait: yes
count: 1
group: eshop
aws_access_key: 'key'
aws_secret_key: 'key2'
security_token: 'token'
register: ec2
- name: Add instance host to group
add_host: hostname={{ item.public_dns_name }} groupname=launched
with_items: '{{ec2.instances}}'
- name: Wait for SSH connection
wait_for: host={{ item.public_dns_name }} port=22 delay=60 timeout=600 state=started
with_items: '{{ec2.instances}}'
- name: Configure EC2
hosts: launched
connection: ssh
tasks:
- name: Install docker
apt:
name: docker.io
state: present
update_cache: yes
become: yes
- service:
name: docker
state: started
enabled: yes
become: yes
- name: Get project files from GIT
git:
repo: 'https://github.com/romanzdk/4IT572_ZS_2020_circleci.git'
dest: ./app
- name: Build docker with eshop
shell: cd app && docker build -t myeshop:latest .
become: yes
- name: Run docker with eshop
shell: docker run -p 80:3000 myeshop
async: 90
poll: 15
become: yes
- wait_for: delay=60 timeout=600
port: 80
Stack trace:
PLAY [Configure EC2] ***********************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************
task path: /ansible/ec2_deploy.yml:30
<ec2-100-25-28-7.compute-1.amazonaws.com> ESTABLISH SSH CONNECTION FOR USER: None
<ec2-100-25-28-7.compute-1.amazonaws.com> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<ec2-100-25-28-7.compute-1.amazonaws.com> SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<ec2-100-25-28-7.compute-1.amazonaws.com> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<ec2-100-25-28-7.compute-1.amazonaws.com> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/root/.ansible/cp/aaee2dc684)
<ec2-100-25-28-7.compute-1.amazonaws.com> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/aaee2dc684 ec2-100-25-28-7.compute-1.amazonaws.com '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
The full traceback is:
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/ansible/executor/task_executor.py", line 140, in run
res = self._execute()
File "/usr/lib/python3.6/site-packages/ansible/executor/task_executor.py", line 612, in _execute
result = self._handler.run(task_vars=variables)
File "/usr/lib/python3.6/site-packages/ansible/plugins/action/normal.py", line 46, in run
result = merge_hash(result, self._execute_module(task_vars=task_vars, wrap_async=wrap_async))
File "/usr/lib/python3.6/site-packages/ansible/plugins/action/__init__.py", line 745, in _execute_module
self._make_tmp_path()
File "/usr/lib/python3.6/site-packages/ansible/plugins/action/__init__.py", line 294, in _make_tmp_path
tmpdir = self._remote_expand_user(tmpdir, sudoable=False)
File "/usr/lib/python3.6/site-packages/ansible/plugins/action/__init__.py", line 613, in _remote_expand_user
data = self._low_level_execute_command(cmd, sudoable=False)
File "/usr/lib/python3.6/site-packages/ansible/plugins/action/__init__.py", line 980, in _low_level_execute_command
rc, stdout, stderr = self._connection.exec_command(cmd, in_data=in_data, sudoable=sudoable)
File "/usr/lib/python3.6/site-packages/ansible/plugins/connection/ssh.py", line 1145, in exec_command
(returncode, stdout, stderr) = self._run(cmd, in_data, sudoable=sudoable)
File "/usr/lib/python3.6/site-packages/ansible/plugins/connection/ssh.py", line 392, in wrapped
return_tuple = func(self, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/ansible/plugins/connection/ssh.py", line 1035, in _run
return self._bare_run(cmd, in_data, sudoable=sudoable, checkrc=checkrc)
File "/usr/lib/python3.6/site-packages/ansible/plugins/connection/ssh.py", line 790, in _bare_run
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
File "/usr/lib/python3.6/subprocess.py", line 729, in __init__
restore_signals, start_new_session)
File "/usr/lib/python3.6/subprocess.py", line 1364, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: b'ssh': b'ssh'
fatal: [ec2-52-73-248-179.compute-1.amazonaws.com]: FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
Any idea what is wrong? I have already spent ages on this...
chepner's comment is spot on - your docker image doesn't have ssh installed. Try
apk add openssh-client
and the error should be solved.
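In the Dockerfile from the question, that means adding one line before the ENTRYPOINT:

```dockerfile
# Ansible's ssh connection plugin shells out to the ssh binary,
# which the node:alpine base image does not include by default
RUN apk add --no-cache openssh-client
```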
I am trying to create a CloudFormation stack which has a UserData script to install Java, Tomcat, httpd, and a Java application on launch of an EC2 instance.
However, the stack gets created successfully with all the resources, but when I connect to the EC2 instance to check the configuration of the above applications I don't find any of them. My use case is to spin up an instance with all of the above applications/software installed automatically.
UserData:
Fn::Base64:
Fn::Join:
- ' '
- - '#!/bin/bash -xe\n'
- 'sudo yum update && install pip && pip install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz\n'
- 'date > /home/ec2-user/starttime\n'
- 'sudo yum update -y aws-cfn-bootstrap\n'
# Initialize CloudFormation bits\n
- ' '
- '/opt/aws/bin/cfn-init -v\n'
- ' --stack\n'
- '!Ref AWS::StackName\n'
- ' --resource LaunchConfig\n'
- 'ACCESS_KEY=${HostKeys}&SECRET_KEY=${HostKeys.SecretAccessKey}\n'
# Start servers\n
- 'service tomcat8 start\n'
- '/etc/init.d/httpd start\n'
- 'date > /home/ec2-user/stoptime\n'
Metadata:
AWS::CloudFormation::Init:
config:
packages:
yum:
- java-1.8.0-openjdk.x86_64: []
- tomcat8: []
- httpd: []
services:
sysvinit:
httpd:
enabled: 'true'
ensureRunning: 'true'
files:
- /usr/share/tomcat8/webapps/sample.war:
- source: https://s3-eu-west-1.amazonaws.com/testbucket/sample.war
- mode: 000500
- owner: tomcat
- group: tomcat
CfnUser:
Type: AWS::IAM::User
Properties:
Path: '/'
Policies:
- PolicyName: Admin
PolicyDocument:
Statement:
- Effect: Allow
Action: '*'
Resource: '*'
HostKeys:
Type: AWS::IAM::AccessKey
Properties:
UserName: !Ref CfnUser
The problem is in the way you have formatted your UserData. I would suggest that you launch the EC2 instance and manually test the script first. It has a number of problems in it.
Try formatting your UserData like this:
UserData:
Fn::Base64:
!Sub |
#!/bin/bash -xe
# FIXME. This won't work either.
# sudo yum update && install pip && pip install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz
date > /home/ec2-user/starttime
sudo yum update -y aws-cfn-bootstrap
# Initialize CloudFormation bits
/opt/aws/bin/cfn-init -v \
--stack ${AWS::StackName} \
--resource LaunchConfig
# FIXME. Not sure why these are here.
# ACCESS_KEY=${HostKeys}
# SECRET_KEY=${HostKeys.SecretAccessKey}
# Start servers
service tomcat8 start
/etc/init.d/httpd start
date > /home/ec2-user/stoptime
Things to note:
You can't interpolate here using !Ref notation. Notice I changed it to ${AWS::StackName} and notice the whole block is inside !Sub.
As my comments indicate, the yum update line has invalid commands in it.
As noted in the comments, it is a bad practice to inject access keys. Also, the keys don't seem to be required for anything in this script.
Note also that the files section is specified incorrectly in the Metadata, as arrays instead of hash keys.
It should be:
files:
/usr/share/tomcat8/webapps/sample.war:
source: https://s3-eu-west-1.amazonaws.com/testbucket/sample.war
mode: '000500'
owner: tomcat
group: tomcat