AWS: IAM permission discrepancies

I am provisioning an ECS cluster using this template as provided by AWS.
I also want to add a file from an S3 bucket, but when I add the following
files:
  "/home/ec2-user/.ssh/authorized_keys":
    mode: "000600"
    owner: ec2-user
    group: ec2-user
    source: "https://s3-eu-west-1.amazonaws.com/mybucket/myfile"
the provisioning fails with this error in /var/log/cfn-init.log:
[root@ip-10-17-19-56 ~]# tail -f /var/log/cfn-init.log
File "/usr/lib/python2.7/dist-packages/cfnbootstrap/construction.py", line 251, in build
changes['files'] = FileTool().apply(self._config.files, self._auth_config)
File "/usr/lib/python2.7/dist-packages/cfnbootstrap/file_tool.py", line 138, in apply
self._write_file(f, attribs, auth_config)
File "/usr/lib/python2.7/dist-packages/cfnbootstrap/file_tool.py", line 225, in _write_file
raise ToolError("Failed to retrieve %s: %s" % (source, e.strerror))
ToolError: Failed to retrieve https://s3-eu-west-1.amazonaws.com/mybucket/myfile: HTTP Error 403 : <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>C6CDAC18E57345BF</RequestId><HostId>VFCrqxtbAsTeFrGxp/nzgBqJdwC7IsS3phjvPq/YzhUk8zuRhemquovq3Plc8aqFC73ki78tK+U=</HostId></Error>
However, from within the instance (without the files section above), the following command succeeds:
aws s3 cp s3://mybucket/myfile .

You need to use the AWS::CloudFormation::Authentication resource to specify authentication credentials for files or sources that you specify with the AWS::CloudFormation::Init resource.
Example:
Metadata:
  AWS::CloudFormation::Init:
    ...
  AWS::CloudFormation::Authentication:
    S3AccessCreds:
      type: "S3"
      buckets:
        - "mybucket"
      roleName:
        Ref: "myRole"

terraform running ansible on ec2 instance

I have Terraform that is trying to run Ansible when creating an EC2 instance.
resource "aws_instance" "jenkins-master" {
depends_on = [aws_main_route_table_association.set-master-default-rt-assoc, aws_kms_alias.master_ebs_cmk]
provider = aws.region-master
ami = data.aws_ssm_parameter.linuxAmi.value
instance_type = var.instance-type
key_name = aws_key_pair.master-key.key_name
associate_public_ip_address = true
vpc_security_group_ids = [aws_security_group.jenkins-sg.id]
subnet_id = aws_subnet.master_subnet_1.id
ipv6_address_count = 1
root_block_device {
encrypted = false
volume_size = 30
}
provisioner "local-exec" {
command = <<EOF
aws --profile myprofile ec2 wait instance-status-ok --region us-east-1 --instance-ids ${self.id} \
&& ansible-playbook --extra-vars 'passed_in_hosts=tag_Name_${self.tags.Name}' ansible_templates/install_jenkins.yaml
EOF
}
}
My Terraform works if I export the key ID and secret of "myprofile" as environment variables:
export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=YYYYYYYYYYYYYYYYYYYYYYYYYYY
If I do not export "AWS_ACCESS_KEY_ID" and "AWS_SECRET_ACCESS_KEY", I get the following error:
....
aws_instance.jenkins-master (local-exec): Executing: ["/bin/sh" "-c" "aws --profile myprofile ec2 wait instance-status-ok --region us-east-1 --instance-ids i-04db214244937ed60 \\\n&& ansible-playbook --extra-vars 'passed_in_hosts=tag_Name_jenkins_master_tf' ansible_templates/install_jenkins.yaml\n"]
....
aws_instance.jenkins-master (local-exec): [WARNING]: * Failed to parse /home/pcooke/workspace/learn-
aws_instance.jenkins-master (local-exec): terraform/modules/ansible_templates/inventory_aws/tf_aws_ec2.yml with auto
aws_instance.jenkins-master (local-exec): plugin: Insufficient boto credentials found. Please provide them in your
aws_instance.jenkins-master (local-exec): inventory configuration file or set them as environment variables.
aws_instance.jenkins-master (local-exec): [WARNING]: * Failed to parse /home/pcooke/workspace/learn-
aws_instance.jenkins-master (local-exec): terraform/modules/ansible_templates/inventory_aws/tf_aws_ec2.yml with yaml
aws_instance.jenkins-master (local-exec): plugin: Plugin configuration YAML file, not YAML inventory
aws_instance.jenkins-master (local-exec): [WARNING]: * Failed to parse /home/pcooke/workspace/learn-
aws_instance.jenkins-master (local-exec): terraform/modules/ansible_templates/inventory_aws/tf_aws_ec2.yml with ini
aws_instance.jenkins-master (local-exec): plugin: Invalid host pattern '---' supplied, '---' is normally a sign this is a
aws_instance.jenkins-master (local-exec): YAML file.
aws_instance.jenkins-master (local-exec): [WARNING]: Unable to parse /home/pcooke/workspace/learn-
aws_instance.jenkins-master (local-exec): terraform/modules/ansible_templates/inventory_aws/tf_aws_ec2.yml as an
aws_instance.jenkins-master (local-exec): inventory source
aws_instance.jenkins-master (local-exec): [WARNING]: No inventory was parsed, only implicit localhost is available
aws_instance.jenkins-master (local-exec): [WARNING]: provided hosts list is empty, only localhost is available. Note that
aws_instance.jenkins-master (local-exec): the implicit localhost does not match 'all'
aws_instance.jenkins-master (local-exec): [WARNING]: Could not match supplied host pattern, ignoring:
aws_instance.jenkins-master (local-exec): tag_Name_jenkins_master_tf
Is there a simple way to pass the AWS profile to Ansible so that Ansible can get the right key ID and secret?
Which user is Ansible executing the task as? You should include the key ID and secret in a config file on the system under that user:
$ cat /home/myuser/.aws/config
[default]
aws_access_key_id=...
aws_secret_access_key=...
region=...
output=...
This way, Ansible should pick the credentials up from the system.
Another solution would be to add environment variables to the task:
- hosts: local_test
  gather_facts: false
  vars:
    env_vars:
      AWS_ACCESS_KEY_ID: abc
      AWS_SECRET_ACCESS_KEY: abc
  tasks:
    - name: Terraform stuff
      shell: env
      environment: "{{ env_vars }}"
      register: my_env
    - debug:
        msg: "{{ my_env }}"
Note that putting secrets in a playbook in plain text isn't considered safe. To do that safely, you can use Ansible Vault.
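For example, a minimal sketch (the vault file name and variable names are illustrative): keep the keys in a file encrypted with ansible-vault, load it with vars_files, and expose them to the task as above:
- hosts: local_test
  gather_facts: false
  vars_files:
    - vault.yml    # hypothetical file created with: ansible-vault create vault.yml
  tasks:
    - name: Terraform stuff
      shell: env
      environment:
        AWS_ACCESS_KEY_ID: "{{ vault_aws_access_key_id }}"
        AWS_SECRET_ACCESS_KEY: "{{ vault_aws_secret_access_key }}"
      register: my_env
Run the playbook with --ask-vault-pass or --vault-password-file so Ansible can decrypt vault.yml at runtime.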

S3 file not downloaded when triggering a Lambda function associated with EFS

I'm using the Serverless framework to create a Lambda function that, when triggered by an S3 upload (uploading test.vcf to s3://trigger-test/uploads/), downloads that uploaded file from S3 to EFS (specifically to the /mnt/efs/vcfs/ folder). I'm pretty new to EFS and followed AWS documentation for setting up the EFS access point, but when I deploy this application and upload a test file to trigger the Lambda function, it fails to download the file and gives this error in the CloudWatch logs:
[ERROR] FileNotFoundError: [Errno 2] No such file or directory: '/mnt/efs/vcfs/test.vcf.A0bA45dC'
Traceback (most recent call last):
File "/var/task/handler.py", line 21, in download_files_to_efs
result = s3.download_file('trigger-test', key, efs_loci)
File "/var/runtime/boto3/s3/inject.py", line 170, in download_file
return transfer.download_file(
File "/var/runtime/boto3/s3/transfer.py", line 307, in download_file
future.result()
File "/var/runtime/s3transfer/futures.py", line 106, in result
return self._coordinator.result()
File "/var/runtime/s3transfer/futures.py", line 265, in result
raise self._exception
File "/var/runtime/s3transfer/tasks.py", line 126, in __call__
return self._execute_main(kwargs)
File "/var/runtime/s3transfer/tasks.py", line 150, in _execute_main
return_value = self._main(**kwargs)
File "/var/runtime/s3transfer/download.py", line 571, in _main
fileobj.seek(offset)
File "/var/runtime/s3transfer/utils.py", line 367, in seek
self._open_if_needed()
File "/var/runtime/s3transfer/utils.py", line 350, in _open_if_needed
self._fileobj = self._open_function(self._filename, self._mode)
File "/var/runtime/s3transfer/utils.py", line 261, in open
return open(filename, mode)
My hunch is that this has to do with the local mount path specified in the Lambda function versus the Root directory path in the Details portion of the EFS access point configuration. Ultimately, I want the test.vcf file I upload to S3 to be downloaded to the EFS folder: /mnt/efs/vcfs/.
Relevant files:
serverless.yml:
service: LambdaEFS-trigger-test
frameworkVersion: '2'
provider:
  name: aws
  runtime: python3.8
  stage: dev
  region: us-west-2
  vpc:
    securityGroupIds:
      - sg-XXXXXXXX
      - sg-XXXXXXXX
      - sg-XXXXXXXX
    subnetIds:
      - subnet-XXXXXXXXXX
functions:
  cfnPipelineTrigger:
    handler: handler.download_files_to_efs
    description: Lambda to download S3 file to EFS folder.
    events:
      - s3:
          bucket: trigger-test
          event: s3:ObjectCreated:*
          rules:
            - prefix: uploads/
            - suffix: .vcf
          existing: true
    fileSystemConfig:
      localMountPath: /mnt/efs
      arn: arn:aws:elasticfilesystem:us-west-2:XXXXXXXXXX:access-point/fsap-XXXXXXX
    iamRoleStatements:
      - Effect: Allow
        Action:
          - s3:ListBucket
        Resource:
          - arn:aws:s3:::trigger-test
      - Effect: Allow
        Action:
          - s3:GetObject
          - s3:GetObjectVersion
        Resource:
          - arn:aws:s3:::trigger-test/uploads/*
      - Effect: Allow
        Action:
          - elasticfilesystem:ClientMount
          - elasticfilesystem:ClientWrite
          - elasticfilesystem:ClientRootAccess
        Resource:
          - arn:aws:elasticfilesystem:us-west-2:XXXXXXXXXX:file-system/fs-XXXXXX
plugins:
  - serverless-iam-roles-per-function
package:
  individually: true
  exclude:
    - '**/*'
  include:
    - handler.py
handler.py:
import json
import boto3

s3 = boto3.client('s3', region_name='us-west-2')

def download_files_to_efs(event, context):
    """
    Locates the S3 file name (i.e. the S3 object "key" value) that initiated the Lambda call, then downloads the file
    into the locally attached EFS drive at the target location.
    :param: event | S3 event record
    :return: dict
    """
    print(event)
    key = event.get('Records')[0].get('s3').get('object').get('key')  # bucket: trigger-test, key: uploads/test.vcf
    efs_loci = f"/mnt/efs/vcfs/{key.split('/')[-1]}"  # '/mnt/efs/vcfs/test.vcf'
    print("key: %s, efs_loci: %s" % (key, efs_loci))
    result = s3.download_file('trigger-test', key, efs_loci)
    if result:
        print('Download Success...')
    else:
        print('Download failed...')
    return {'status_code': 200}
EFS Access Point details:
Details
Root directory path: /vcfs
POSIX
USER ID: 1000
Group ID: 1000
Root directory creation permissions
Owner User ID: 1000
Owner Group ID: 1000
POSIX permissions to apply to the root directory path: 777
Your local mount path is localMountPath: /mnt/efs, so in your code you should use only this path (not /mnt/efs/vcfs):
efs_loci = f"/mnt/efs/{key.split('/')[-1]}" # '/mnt/efs/test.vcf

Copying a file from S3 into my codebase when using Elastic Beanstalk

I have the following script:
Parameters:
  bucket:
    Type: CommaDelimitedList
    Description: "Name of the Amazon S3 bucket that contains your file"
    Default: "my-bucket"
  fileuri:
    Type: String
    Description: "Path to the file in S3"
    Default: "https://my-bucket.s3.eu-west-2.amazonaws.com/oauth-private.key"
  authrole:
    Type: String
    Description: "Role with permissions to download the file from Amazon S3"
    Default: "aws-elasticbeanstalk-ec2-role"
files:
  /var/app/current/storage/oauth-private.key:
    mode: "000600"
    owner: webapp
    group: webapp
    source: { "Ref" : "fileuri" }
    authentication: S3AccessCred
Resources:
  AWSEBAutoScalingGroup:
    Type: "AWS::AutoScaling::AutoScalingGroup"
    Metadata:
      AWS::CloudFormation::Authentication:
        S3AccessCred:
          type: "S3"
          roleName: { "Ref" : "authrole" }
          buckets: { "Ref" : "bucket" }
The issue that I am having is that when this is deployed, the file isn't present in the /var/app/current/storage directory.
I thought that maybe this script was running too soon and the current directory wasn't ready yet, so I tried the ondeck directory and this also doesn't work.
If I change the path to anywhere other than my codebase directory it works, the file is copied from S3.
Any ideas? Thanks.
Directives under the "files" key are processed before your web application is set up. You will need to download the file to a temporary location and then use a container_command to move it into your application; container_commands run from the application staging directory, so a relative path like storage ends up inside the deployed app.
This AWS doc mentions near the top the order in which keys are processed. The files key is processed before commands, and commands are run before the application and web server are set up. However, the container_commands section notes that they are used "to execute commands that affect your application source code".
So you should modify your script to something like this:
Parameters: ...
Resources: ...
files:
  "/tmp/oauth-private.key":
    mode: "000600"
    owner: webapp
    group: webapp
    source: { "Ref" : "fileuri" }
    authentication: S3AccessCred
container_commands:
  file_transfer_1:
    command: "mkdir -p storage"
  file_transfer_2:
    command: "mv /tmp/oauth-private.key storage"

AWS Elastic Beanstalk: Add custom logs to CloudWatch?

How do I add custom logs to CloudWatch? Default logs are sent, but how do I add a custom one?
I already added a file like this (in .ebextensions):
files:
  "/opt/elasticbeanstalk/tasks/bundlelogs.d/applogs.conf" :
    mode: "000755"
    owner: root
    group: root
    content: |
      /var/app/current/logs/*
  "/opt/elasticbeanstalk/tasks/taillogs.d/cloud-init.conf" :
    mode: "000755"
    owner: root
    group: root
    content: |
      /var/app/current/logs/*
With bundlelogs.d and taillogs.d in place, these custom logs are now tailed or retrieved from the console or web, which is nice, but they don't persist and are not sent to CloudWatch.
In CloudWatch I have the defaults logs like
/aws/elasticbeanstalk/InstanceName/var/log/eb-activity.log
And I want to have another one like this
/aws/elasticbeanstalk/InstanceName/var/app/current/logs/mycustomlog.log
Both bundlelogs.d and taillogs.d only affect logs retrieved from the management console. What you want is to extend the default logs (e.g. eb-activity.log) that are shipped to CloudWatch Logs. To extend the log stream, you need to add another configuration under /etc/awslogs/config/. The configuration should follow the agent configuration file format.
I've successfully extended my logs for my custom Ubuntu/Nginx/PHP platform. Here is my extension file for reference, and here is an official sample.
In your case, it could look like this:
files:
  "/etc/awslogs/config/my_app_log.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [/var/app/current/logs/xxx.log]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/app/current/logs/xxx.log"]]}`
      log_stream_name = {instance_id}
      file = /var/app/current/logs/xxx.log*
Credits where due go to Sebastian Hsu and Abhyudit Jain.
This is the final config file I came up with for .ebextensions for our particular use case. Notes explaining some aspects are below the code block.
files:
  "/etc/awslogs/config/beanstalklogs_custom.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [/var/log/tomcat8/catalina.out]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Fn::Select" : [ "1", { "Fn::Split" : [ "-", { "Ref":"AWSEBEnvironmentName" } ] } ] }, "var/log/tomcat8/catalina.out"]]}`
      log_stream_name = `{"Fn::Join":["--", [{ "Ref":"AWSEBEnvironmentName" }, "{instance_id}"]]}`
      file = /var/log/tomcat8/catalina.out*
services:
  sysvinit:
    awslogs:
      files:
        - "/etc/awslogs/config/beanstalklogs_custom.conf"
commands:
  rm_beanstalklogs_custom_bak:
    command: "rm beanstalklogs_custom.conf.bak"
    cwd: "/etc/awslogs/config"
    ignoreErrors: true
log_group_name
We have a standard naming scheme for our EB environments which is exactly environmentName-environmentType. I'm using { "Fn::Split" : [ "-", { "Ref":"AWSEBEnvironmentName" } ] } to split that into an array of two strings (name and type).
Then I use { "Fn::Select" : [ "1", <<SPLIT_OUTPUT>> ] } to get just the type string. Your needs would obviously differ, so you may only need the following:
log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/log/tomcat8/catalina.out"]]}`
log_stream_name
I'm using the Fn::Join function to join the EB environment name with the instance ID. Note that the instance ID template is a string that gets echoed exactly as given.
services
The awslogs service is restarted automatically when the custom conf file is deployed.
commands
When the files block overwrites an existing file, it creates a backup file, like beanstalklogs_custom.conf.bak. This block erases that backup file because awslogs service reads both files, potentially causing conflict.
Result
If you log in to an EC2 instance and sudo cat the file, you should see something like this. Note that all the Fn functions have resolved. If you find that an Fn function didn't resolve, check it for syntax errors.
[/var/log/tomcat8/catalina.out]
log_group_name = /aws/elasticbeanstalk/environmentType/var/log/tomcat8/catalina.out
log_stream_name = environmentName-environmentType--{instance_id}
file = /var/log/tomcat8/catalina.out*
The awslogs agent looks in the configuration file for the log files which it's supposed to send. There are some defaults in it. You need to edit it and specify the files.
You can check and edit the configuration file located at:
/etc/awslogs/awslogs.conf
Make sure to restart the service:
sudo service awslogs restart
You can specify your own files there and create different groups and what not.
Please refer to the following link and you'll be able to get your logs in no time.
Resources:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AgentReference.html
Edit:
As you don't want to edit the files on the instance, you can add the relevant code to the .ebextensions folder in the root of your code. For example, this is my 01_cloudwatch.config:
packages:
  yum:
    awslogs: []
container_commands:
  01_get_awscli_conf_file:
    command: "aws s3 cp s3://project/awscli.conf /etc/awslogs/awscli.conf"
  02_get_awslogs_conf_file:
    command: "aws s3 cp s3://project/awslogs.conf.${NODE_ENV} /etc/awslogs/awslogs.conf"
  03_restart_awslogs:
    command: "sudo service awslogs restart"
  04_start_awslogs_at_system_boot:
    command: "sudo chkconfig awslogs on"
In this config, I am fetching the appropriate config file from an S3 bucket depending on the NODE_ENV. You can do anything you want in your config.
Some great answers are already here.
I've detailed how this all works in a Medium blog post, along with an example .ebextensions file and where to put it.
Below is an excerpt that you might be able to use; the article explains how to determine the right folder/file(s) to stream.
Note that if /var/app/current/logs/* contains many different files this may not work, e.g. if you have
database.log
app.log
random.log
Then you should consider adding a stream for each (see the sketch after the config block below); however, if you have
app.2021-10-18.log
app.2021-10-17.log
app.2021-10-16.log
Then you can use /var/app/current/logs/app.*
packages:
  yum:
    awslogs: []
option_settings:
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: StreamLogs
    value: true
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: DeleteOnTerminate
    value: false
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: RetentionInDays
    value: 90
files:
  "/etc/awslogs/awscli.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [plugins]
      cwlogs = cwlogs
      [default]
      region = `{"Ref":"AWS::Region"}`
  "/etc/awslogs/config/logs.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [/var/app/current/logs]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "/var/app/current/logs"]]}`
      log_stream_name = {instance_id}
      file = /var/app/current/logs/*
commands:
  "01":
    command: systemctl enable awslogsd.service
  "02":
    command: systemctl restart awslogsd
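As noted above, when the folder holds unrelated logs such as app.log and database.log, give each one its own stanza instead of the single [/var/app/current/logs] entry. A minimal sketch of the files entry, using the file names from the list above:
files:
  "/etc/awslogs/config/logs.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [/var/app/current/logs/app.log]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "/var/app/current/logs/app.log"]]}`
      log_stream_name = {instance_id}
      file = /var/app/current/logs/app.*
      [/var/app/current/logs/database.log]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "/var/app/current/logs/database.log"]]}`
      log_stream_name = {instance_id}
      file = /var/app/current/logs/database.log*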
Looking at the AWS docs it's not immediately apparent, but there are a few things you need to do.
(Our environment is an Amazon Linux AMI - Rails App on the Ruby 2.6 Puma Platform).
First, create a Policy in IAM to give your EB generated EC2 instances access to work with CloudWatch log groups and stream to them - we named ours "EB-Cloudwatch-LogStream-Access".
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:DescribeLogStreams",
        "logs:CreateLogGroup",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:log-group:/aws/elasticbeanstalk/*:log-stream:*"
    }
  ]
}
Once you have created this, make sure the policy is attached (in IAM > Roles) to your IAM Instance Profile and Service Role that are associated with your EB environment (check the environment's configuration page: Configuration > Security > IAM instance profile | Service Role).
Then, provide a .config file in your .ebextensions directory such as setup_stream_to_cloudwatch.config or 0x_setup_stream_to_cloudwatch.config. In our project we have made it the last extension .config file to run during our deploys by setting a high number for 0x (e.g. 09_setup_stream_to_cloudwatch.config).
Then, provide the following, replacing your_log_file with the appropriate filename, keeping in mind that some log files live in /var/log on an Amazon Linux AMI and some (such as those generated by your application) may live in a path such as /var/app/current/log:
files:
  '/etc/awslogs/config/logs.conf':
    mode: '000600'
    owner: root
    group: root
    content: |
      [/var/app/current/log/your_log_file.log]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/app/current/log/your_log_file.log"]]}`
      log_stream_name = {instance_id}
      file = /var/app/current/log/your_log_file.log*
commands:
  "01":
    command: chkconfig awslogs on
  "02":
    command: service awslogs restart # note that this works for Amazon Linux AMI only - other Linux instances likely use `systemd`
Deploy your application, and you should be set!

How to upload a folder to AWS S3 recursively using Ansible

I'm using Ansible to deploy my application.
I've come to the point where I want to upload my grunted assets to a newly created bucket. Here is what I have done:
{{hostvars.localhost.public_bucket}} is the bucket name, and
{{client}}/{{version_id}}/assets/admin is the path to a folder containing multi-level folders and assets to upload:
- s3:
    aws_access_key: "{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
    aws_secret_key: "{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
    bucket: "{{hostvars.localhost.public_bucket}}"
    object: "{{client}}/{{version_id}}/assets/admin"
    src: "{{trunk}}/public/assets/admin"
    mode: put
Here is the error message:
fatal: [x.y.z.t]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "s3"}, "module_stderr": "", "module_stdout": "\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1468581761.67-193149771659393/s3\", line 2868, in <module>\r\n main()\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1468581761.67-193149771659393/s3\", line 561, in main\r\n upload_s3file(module, s3, bucket, obj, src, expiry, metadata, encrypt, headers)\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1468581761.67-193149771659393/s3\", line 307, in upload_s3file\r\n key.set_contents_from_filename(src, encrypt_key=encrypt, headers=headers)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 1358, in set_contents_from_filename\r\n with open(filename, 'rb') as fp:\r\nIOError: [Errno 21] Is a directory: '/home/abcd/efgh/public/assets/admin'\r\n", "msg": "MODULE FAILURE", "parsed": false}
I went through the documentation and didn't find a recursive option for the Ansible s3 module.
Is this a bug or am I missing something?
As of Ansible 2.3, you can use s3_sync:
- name: basic upload
  s3_sync:
    bucket: tedder
    file_root: roles/s3/files/
Note: If you're using a non-default region, you should set region explicitly, otherwise you get a somewhat obscure error along the lines of: An error occurred (400) when calling the HeadObject operation: Bad Request
Here's a complete playbook matching what you were trying to do above:
- hosts: localhost
  vars:
    aws_access_key: "{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
    aws_secret_key: "{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
    bucket: "{{hostvars.localhost.public_bucket}}"
  tasks:
    - name: Upload files
      s3_sync:
        aws_access_key: '{{aws_access_key}}'
        aws_secret_key: '{{aws_secret_key}}'
        bucket: '{{bucket}}'
        file_root: "{{trunk}}/public/assets/admin"
        key_prefix: "{{client}}/{{version_id}}/assets/admin"
        permission: public-read
        region: eu-central-1
Notes:
You could probably remove region, I just added it to exemplify my point above
I've just added the keys to be explicit. You can (and probably should) use environment variables for this; see the sketch after the quote below:
From the docs:
If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence AWS_URL or EC2_URL, AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY or EC2_ACCESS_KEY, AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY or EC2_SECRET_KEY, AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN, AWS_REGION or EC2_REGION
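For example, the same upload task without inline keys, a sketch assuming the playbook variables above and credentials exported as AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY in the environment:
- name: Upload files
  s3_sync:
    bucket: "{{hostvars.localhost.public_bucket}}"
    file_root: "{{trunk}}/public/assets/admin"
    key_prefix: "{{client}}/{{version_id}}/assets/admin"
    permission: public-read
    region: eu-central-1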
The Ansible s3 module does not support directory uploads, or any recursion.
For this task, I'd recommend shelling out to the AWS CLI; check the syntax below.
command: "aws s3 cp {{trunk}}/public/assets/admin s3://{{hostvars.localhost.public_bucket}}/{{client}}/{{version_id}}/assets/admin --recursive"
By using Ansible, it looks like you wanted something idempotent, but Ansible doesn't yet support S3 directory uploads or any recursion, so you should probably use the AWS CLI to do the job, like this:
command: "aws s3 cp {{trunk}}/public/assets/admin s3://{{hostvars.localhost.public_bucket}}/{{client}}/{{version_id}}/assets/admin --recursive"
I was able to accomplish this using the s3 module by iterating over the output of the directory listing I wanted to upload. The little inline Python script I'm running via the command module just outputs the full list of file paths in the directory, formatted as JSON.
- name: upload things
  hosts: localhost
  connection: local
  tasks:
    - name: Get all the files in the directory I want to upload, formatted as a JSON list
      command: python -c 'import os, json; print json.dumps([os.path.join(dp, f)[2:] for dp, dn, fn in os.walk(os.path.expanduser(".")) for f in fn])'
      args:
        chdir: ../../styles/img
      register: static_files_cmd
    - s3:
        bucket: "{{ bucket_name }}"
        mode: put
        object: "{{ item }}"
        src: "../../styles/img/{{ item }}"
        permission: "public-read"
      with_items: "{{ static_files_cmd.stdout|from_json }}"