Ansible script dpkg lock on AWS launch, Ubuntu 16.04

I have a launch script (user data) that runs on startup in AWS with an Ubuntu 16.04 image. The problem is that when it gets to the part where it runs an Ansible playbook, the playbook fails with the basic error message Could not get lock /var/lib/dpkg/lock. When I log in and run the playbook manually it works, but when it runs from the AWS user data it fails with the error.
This is the full error
TASK [rabbitmq : install packages (Ubuntu default repo is used)] ***************
task path: /etc/ansible/roles/rabbitmq/tasks/main.yml:50
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1480352390.01-116502531862586 `" && echo ansible-tmp-1480352390.01-116502531862586="` echo $HOME/.ansible/tmp/ansible-tmp-1480352390.01-116502531862586 `" ) && sleep 0'
<localhost> PUT /tmp/tmpGHaVRP TO /.ansible/tmp/ansible-tmp-1480352390.01-116502531862586/apt
<localhost> EXEC /bin/sh -c 'chmod u+x /.ansible/tmp/ansible-tmp-1480352390.01-116502531862586/ /.ansible/tmp/ansible-tmp-1480352390.01-116502531862586/apt && sleep 0'
<localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /.ansible/tmp/ansible-tmp-1480352390.01-116502531862586/apt; rm -rf "/.ansible/tmp/ansible-tmp-1480352390.01-116502531862586/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {"cache_update_time": 0, "cache_updated": false, "changed": false,
"failed": true, "invocation": {"module_args": {"allow_unauthenticated": false, "autoremove": false,
"cache_valid_time": null, "deb": null, "default_release": null, "dpkg_options": "force-confdef,force-confold",
"force": false, "install_recommends": null, "name": "rabbitmq-server", "only_upgrade": false,
"package": ["rabbitmq-server"], "purge": false, "state": "present", "update_cache": false, "upgrade": null},
"module_name": "apt"}, "msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\" install 'rabbitmq-server'' failed: E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)\nE: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?\n",
"stderr": "E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)\nE: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?\n",
"stdout": "", "stdout_lines": []}

I ran into the same lock issue. I found that Ubuntu was installing some packages on first boot, which cloud-init did not wait for.
I use the following script to check that the lock file has been free for at least 15 seconds before trying to install anything.
#!/bin/bash
# Wait until the dpkg lock has been free for 15 consecutive seconds.
i=0
while [ "$i" -lt 15 ]; do
    # fuser exits non-zero when no process holds the lock
    if fuser /var/lib/dpkg/lock >/dev/null 2>&1; then
        i=0   # lock is held again; restart the count
    fi
    sleep 1
    i=$((i + 1))
done
The reason I prefer this over sleep 5m is that in an autoscaling group the instance may be removed before it's even provisioned.
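For reference, a minimal user-data sketch that runs the wait loop before the playbook (the script location and playbook path are placeholders, not from the question):
#!/bin/bash
# wait until first-boot package installs release the dpkg lock
/usr/local/bin/wait-for-dpkg-lock.sh   # the loop above, saved as a script (hypothetical path)
# then run the playbook exactly as you would by hand
ansible-playbook --connection=local /etc/ansible/site.yml   # placeholder playbook path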

Related

When creating a directory with Ansible it doesn't appear

I have this simple Ansible task: I want to create a directory on the host:
- name: Create rails app dir
  file: path=/etc/rails-app state=directory mode=0755
  register: rails_app_dir
And these are the logs when I run the playbook:
TASK [instance_deploy_app : Create rails app dir] *************************************************************************************************
task path: /etc/ansible/roles/instance_deploy_app/tasks/main.yml:39
<IPv4 of host> ESTABLISH LOCAL CONNECTION FOR USER: root
<IPv4 of host> EXEC /bin/sh -c 'echo ~root && sleep 0'
<IPv4 of host> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1645566978.53-25820-207749605236297 `" && echo ansible-tmp-1645566978.53-25820-207749605236297="` echo /root/.ansible/tmp/ansible-tmp-1645566978.53-25820-207749605236297 `" ) && sleep 0'
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py
<IPv4 of host> PUT /root/.ansible/tmp/ansible-local-25617Cg_rWo/tmpTPHs3p TO /root/.ansible/tmp/ansible-tmp-1645566978.53-25820-207749605236297/AnsiballZ_file.py
<IPv4 of host> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1645566978.53-25820-207749605236297/ /root/.ansible/tmp/ansible-tmp-1645566978.53-25820-207749605236297/AnsiballZ_file.py && sleep 0'
<IPv4 of host> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1645566978.53-25820-207749605236297/AnsiballZ_file.py && sleep 0'
<IPv4 of host> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1645566978.53-25820-207749605236297/ > /dev/null 2>&1 && sleep 0'
ok: [IPv4 of host] => {
    "changed": false,
    "diff": {
        "after": {
            "path": "/etc/rails-app"
        },
        "before": {
            "path": "/etc/rails-app"
        }
    },
    "gid": 0,
    "group": "root",
    "invocation": {
        "module_args": {
            "_diff_peek": null,
            "_original_basename": null,
            "access_time": null,
            "access_time_format": "%Y%m%d%H%M.%S",
            "attributes": null,
            "backup": null,
            "content": null,
            "delimiter": null,
            "directory_mode": null,
            "follow": true,
            "force": false,
            "group": null,
            "mode": "0755",
            "modification_time": null,
            "modification_time_format": "%Y%m%d%H%M.%S",
            "owner": null,
            "path": "/etc/rails-app",
            "recurse": false,
            "regexp": null,
            "remote_src": null,
            "selevel": null,
            "serole": null,
            "setype": null,
            "seuser": null,
            "src": null,
            "state": "directory",
            "unsafe_writes": null
        }
    },
    "mode": "0755",
    "owner": "root",
    "path": "/etc/rails-app",
    "size": 41,
    "state": "directory",
    "uid": 0
}
Read vars_file 'roles/instance_deploy_app/vars/instance_vars.yml'
Read vars_file 'roles/instance_deploy_app/vars/aws_cred.yml'
According to the logs, the directory should be there, but when I try to access /etc/rails-app/ it is not there. I currently have three users on the AWS EC2 instance: ec2-user, root, and user1. I checked under all of them, but the directory doesn't appear.
Am I doing something wrong? Thanks!
The reason it was not creating the folder, as β.εηοιτ.βε suggested, is that the playbook had connection: local, so it was "never connecting to my EC2 and always acting on my controller". Once I removed that, it worked.
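For reference, a minimal sketch of the corrected play, without connection: local (the group name app_servers is made up; use whatever group your EC2 host has in inventory):
---
- hosts: app_servers   # hypothetical inventory group for the EC2 host
  become: yes
  tasks:
    - name: Create rails app dir
      file:
        path: /etc/rails-app
        state: directory
        mode: "0755"
      register: rails_app_dir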

Packer provisioning by Ansible fails in AWS CodeBuild

I have a CodeBuild project that creates an AMI with Packer, using the Ansible provisioner.
The Packer build succeeds in my local environment and on an Amazon Linux 2 EC2 instance. However, when I run it in AWS CodeBuild with the aws/codebuild/amazonlinux2-x86_64-standard:1.0 image, it fails.
I already tried setting remote_tmp = /tmp or remote_tmp = /tmp/.ansible-${USER}/tmp, but neither worked.
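For reference, those were set in the [defaults] section of the ansible.cfg at the repository root (the one the Packer log below shows being picked up):
[defaults]
remote_tmp = /tmp/.ansible-${USER}/tmp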
The error is: Authentication or permission failure, did not have permissions on the remote directory.
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
  pre_build:
    commands:
      - python --version
      - pip --version
      - curl -qL -o packer.zip https://releases.hashicorp.com/packer/1.4.3/packer_1.4.3_linux_amd64.zip && unzip packer.zip
      - ./packer version
      - pip install --user ansible==2.8.5
      - ansible --version
      - echo 'Validate packer json'
      - ./packer validate packer.json
  build:
    commands:
      - ./packer build -color=false packer.json | tee build.log
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "ap-northeast-1",
    "ami_regions": "ap-northeast-1",
    "source_ami": "ami-0ff21806645c5e492",
    "instance_type": "t2.micro",
    "ssh_username": "ec2-user",
    "ami_name": "packer-quick-start {{timestamp}}",
    "ami_description": "created by packer at {{timestamp}}",
    "ebs_optimized": false,
    "tags": {
      "OS_Version": "Amazon Linux AMI 2018.03",
      "timestamp": "{{timestamp}}",
      "isotime": "{{isotime \"2006-01-02 03:04:05\"}}"
    },
    "disable_stop_instance": false
  }],
  "provisioners": [
    {
      "type": "ansible",
      "extra_arguments": [
        "-vvv"
      ],
      "playbook_file": "ansible/main.yaml"
    }
  ]
}
==> amazon-ebs: Prevalidating AMI Name: packer-quick-start 1569943272
amazon-ebs: Found Image ID: ami-0ff21806645c5e492
==> amazon-ebs: Creating temporary keypair: packer_5d936ee8-541f-5c9a-6955-9672526afc1a
==> amazon-ebs: Creating temporary security group for this instance: packer_5d936ef1-6546-d9d0-60ff-2dc4c011036f
==> amazon-ebs: Authorizing access to port 22 from [0.0.0.0/0] in the temporary security groups...
==> amazon-ebs: Launching a source AWS instance...
==> amazon-ebs: Adding tags to source instance
amazon-ebs: Adding tag: "Name": "Packer Builder"
amazon-ebs: Instance ID: i-04b00db56a8b3b6d0
==> amazon-ebs: Waiting for instance (i-04b00db56a8b3b6d0) to become ready...
==> amazon-ebs: Using ssh communicator to connect: 3.112.61.8
==> amazon-ebs: Waiting for SSH to become available...
==> amazon-ebs: Connected to SSH!
==> amazon-ebs: Provisioning with Ansible...
==> amazon-ebs: Executing Ansible: ansible-playbook --extra-vars packer_build_name=amazon-ebs packer_builder_type=amazon-ebs -o IdentitiesOnly=yes -i /tmp/packer-provisioner-ansible244097143 /codebuild/output/src965785042/src/github.com/repoUsername/reponame/ansible/main.yaml -e ansible_ssh_private_key_file=/tmp/ansible-key242793848 -vvv
amazon-ebs: ansible-playbook 2.8.5
amazon-ebs: config file = /codebuild/output/src965785042/src/github.com/repoUsername/reponame/ansible.cfg
amazon-ebs: configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
amazon-ebs: ansible python module location = /root/.local/lib/python3.7/site-packages/ansible
amazon-ebs: executable location = /root/.local/bin/ansible-playbook
amazon-ebs: python version = 3.7.4 (default, Sep 20 2019, 22:55:10) [GCC 7.3.1 20180303 (Red Hat 7.3.1-5)]
amazon-ebs: Using /codebuild/output/src965785042/src/github.com/repoUsername/reponame/ansible.cfg as config file
amazon-ebs: host_list declined parsing /tmp/packer-provisioner-ansible244097143 as it did not pass it's verify_file() method
amazon-ebs: script declined parsing /tmp/packer-provisioner-ansible244097143 as it did not pass it's verify_file() method
amazon-ebs: auto declined parsing /tmp/packer-provisioner-ansible244097143 as it did not pass it's verify_file() method
amazon-ebs: Parsed /tmp/packer-provisioner-ansible244097143 inventory source with ini plugin
amazon-ebs:
amazon-ebs: PLAYBOOK: main.yaml ************************************************************
amazon-ebs: 1 plays in /codebuild/output/src965785042/src/github.com/repoUsername/reponame/ansible/main.yaml
amazon-ebs:
amazon-ebs: PLAY [all] *********************************************************************
amazon-ebs: META: ran handlers
amazon-ebs:
amazon-ebs: TASK [be sure httpd is installed] **********************************************
amazon-ebs: task path: /codebuild/output/src965785042/src/github.com/repoUsername/reponame/ansible/main.yaml:6
amazon-ebs: <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: root
amazon-ebs: <127.0.0.1> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=35595 -o 'IdentityFile="/tmp/ansible-key242793848"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/02aaab1733 127.0.0.1 '/bin/sh -c '"'"'echo ~root && sleep 0'"'"''
amazon-ebs: <127.0.0.1> (0, b'/root\n', b"Warning: Permanently added '[127.0.0.1]:35595' (RSA) to the list of known hosts.\r\n")
amazon-ebs: <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: root
amazon-ebs: <127.0.0.1> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=35595 -o 'IdentityFile="/tmp/ansible-key242793848"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/02aaab1733 127.0.0.1 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1569943320.4544108-49329379039882 `" && echo ansible-tmp-1569943320.4544108-49329379039882="` echo /root/.ansible/tmp/ansible-tmp-1569943320.4544108-49329379039882 `" ) && sleep 0'"'"''
amazon-ebs: <127.0.0.1> (1, b'', b'mkdir: cannot create directory \xe2\x80\x98/root\xe2\x80\x99: Permission denied\n')
amazon-ebs: <127.0.0.1> Failed to connect to the host via ssh: mkdir: cannot create directory ‘/root’: Permission denied
amazon-ebs: fatal: [default]: UNREACHABLE! => {
amazon-ebs: "changed": false,
amazon-ebs: "msg": "Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\". Failed command was: ( umask 77 && mkdir -p \"` echo /root/.ansible/tmp/ansible-tmp-1569943320.4544108-49329379039882 `\" && echo ansible-tmp-1569943320.4544108-49329379039882=\"` echo /root/.ansible/tmp/ansible-tmp-1569943320.4544108-49329379039882 `\" ), exited with result 1",
amazon-ebs: "unreachable": true
amazon-ebs: }
amazon-ebs:
amazon-ebs: PLAY RECAP *********************************************************************
amazon-ebs: default : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
amazon-ebs:
==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: Cleaning up any extra volumes...
==> amazon-ebs: No volumes to clean up, skipping
==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Deleting temporary keypair...
I know it fails because it tried to mkdir /root and got Permission denied, but I don't know why it tried to mkdir /root. How can I change this behavior?
I solved it, and the cause was super simple. Because AWS CodeBuild runs the build as the root user, Ansible connects as the root user. I just wrote it like this and solved it:
"provisioners": [
{
"type" : "ansible",
"user": "ec2-user",
"playbook_file" : "ansible/main.yaml"
}
]
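Setting user makes the Packer Ansible provisioner connect as ec2-user (matching the builder's ssh_username above) instead of the root user CodeBuild runs as, so Ansible's temporary directory is created under that user's home rather than /root.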
My Ansible playbook is simple, for testing:
---
- hosts: all
  become: yes
  gather_facts: no
  tasks:
    - name: be sure httpd is installed
      yum: name=httpd state=installed
    - name: be sure httpd is running and enabled
      service: name=httpd state=started enabled=yes

Errors adding Environment Variables to NodeJS Elastic Beanstalk

My configuration worked up until yesterday. I have added the nginx NodeJS https redirect extension from AWS. Now, when I try to add a new Environment Variable through the Elastic Beanstalk configuration, I get this error:
[Instance: i-0364b59cca36774a0] Command failed on instance. Return code: 137 Output: + rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf + service nginx stop Stopping nginx: /sbin/service: line 66: 27395 Killed env -i PATH="$PATH" TERM="$TERM" "${SERVICEDIR}/${SERVICE}" ${OPTIONS}. Hook /opt/elasticbeanstalk/hooks/configdeploy/post/99_kill_default_nginx.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
When I look at the eb-activity.log, I see this error:
[2018-02-18T17:24:58.762Z] INFO [13848] - [Configuration update 1.0.61#112/ConfigDeployStage1/ConfigDeployPostHook/99_kill_default_nginx.sh] : Starting activity...
[2018-02-18T17:24:58.939Z] INFO [13848] - [Configuration update 1.0.61#112/ConfigDeployStage1/ConfigDeployPostHook/99_kill_default_nginx.sh] : Activity execution failed, because: + rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
+ service nginx stop
Stopping nginx: /sbin/service: line 66: 14258 Killed env -i PATH="$PATH" TERM="$TERM" "${SERVICEDIR}/${SERVICE}" ${OPTIONS} (ElasticBeanstalk::ExternalInvocationError)
caused by: + rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
+ service nginx stop
Stopping nginx: /sbin/service: line 66: 14258 Killed env -i PATH="$PATH" TERM="$TERM" "${SERVICEDIR}/${SERVICE}" ${OPTIONS} (Executor::NonZeroExitStatus)
What am I doing wrong? And what has changed recently, since this worked fine when I changed an environment variable a couple of months ago?
I had this problem as well and Amazon acknowledged the error in the documentation. This is a working restart script that you can use in your .ebextensions config file.
/opt/elasticbeanstalk/hooks/configdeploy/post/99_kill_default_nginx.sh:
  mode: "000755"
  owner: root
  group: root
  content: |
    #!/bin/bash -xe
    rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
    status=`/sbin/status nginx`
    if [[ $status = *"start/running"* ]]; then
      echo "stopping nginx..."
      stop nginx
      echo "starting nginx..."
      start nginx
    else
      echo "nginx is not running... starting it..."
      start nginx
    fi
service nginx stop exits with status 137 (Killed).
Your script starts with: #!/bin/bash -xe
The -e flag makes the script exit immediately whenever a command exits with a non-zero status.
If you want execution to continue, you need to catch the exit status (137).
/opt/elasticbeanstalk/hooks/configdeploy/post/99_kill_default_nginx.sh:
  mode: "000755"
  owner: root
  group: root
  content: |
    #!/bin/bash -xe
    rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
    status=`/sbin/status nginx`
    if [[ $status = *"start/running"* ]]; then
      set +e
      service nginx stop
      exitStatus=$?
      if [ $exitStatus -ne 0 ] && [ $exitStatus -ne 137 ]; then
        exit $exitStatus
      fi
      set -e
    fi
    service nginx start
The order of events looks like this to me:
Create a post-deploy hook to delete /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
Run a container command to delete /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
Run the post-deploy hook, which tries to delete /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
So it doesn't seem surprising to me that the post-deploy script fails, as the file you are trying to delete probably doesn't exist.
I would try one of two things:
Move the deletion of the temporary conf file from the container command to the 99_kill_default_nginx.sh script, then remove the whole container command section.
Remove the line rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf from the 99_kill_default_nginx.sh script, as sketched below.
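For example, the second option would leave the hook identical to the script above, minus the rm line:
/opt/elasticbeanstalk/hooks/configdeploy/post/99_kill_default_nginx.sh:
  mode: "000755"
  owner: root
  group: root
  content: |
    #!/bin/bash -xe
    status=`/sbin/status nginx`
    if [[ $status = *"start/running"* ]]; then
      echo "stopping nginx..."
      stop nginx
      echo "starting nginx..."
      start nginx
    else
      echo "nginx is not running... starting it..."
      start nginx
    fi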
/sbin/status nginx seems not to work anymore, so I updated the script to use service nginx status:
/opt/elasticbeanstalk/hooks/configdeploy/post/99_kill_default_nginx.sh:
  mode: "000755"
  owner: root
  group: root
  content: |
    #!/bin/bash -xe
    rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
    status=$(service nginx status)
    if [[ "$status" =~ "running" ]]; then
      echo "stopping nginx..."
      stop nginx
      echo "starting nginx..."
      start nginx
    else
      echo "nginx is not running... starting it..."
      start nginx
    fi
And the faulty script is STILL in Amazon's docs... I wonder when they are going to fix it. It's been long enough already.

Deploying cloudformation stack to AWS using Ansible Tower

I'm new to Ansible, Ansible Tower, and AWS CloudFormation, and I'm trying to have Ansible Tower deploy an EC2 Container Service using a CloudFormation template. When I run the deploy job, I run into the error below.
TASK [create/update stack] *****************************************************
task path: /var/lib/awx/projects/_6__api/tasks/create_stack.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: awx
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470427494.79-207756006727790 `" && echo ansible-tmp-1470427494.79-207756006727790="` echo $HOME/.ansible/tmp/ansible-tmp-1470427494.79-207756006727790 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpgAsKKv TO /var/lib/awx/.ansible/tmp/ansible-tmp-1470427494.79-207756006727790/cloudformation
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-coqlkeqywlqhagfixtfpfotjgknremaw; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 AWS_DEFAULT_REGION=us-west-2 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /var/lib/awx/.ansible/tmp/ansible-tmp-1470427494.79-207756006727790/cloudformation; rm -rf "/var/lib/awx/.ansible/tmp/ansible-tmp-1470427494.79-207756006727790/" > /dev/null 2>&1'"'"' && sleep 0'
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "cloudformation"}, "module_stderr": "/bin/sh: /usr/bin/sudo: Permission denied\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
This is the create/update task:
---
- name: create/update stack
  cloudformation:
    stack_name: my-stack
    state: present
    template: templates/stack.yml
    template_format: yaml
    template_parameters:
      VpcId: "{{ vpc_id }}"
      SubnetId: "{{ subnet_id }}"
      KeyPair: "{{ ec2_keypair }}"
      DbUsername: "{{ db_username }}"
      DbPassword: "{{ db_password }}"
      InstanceCount: "{{ instance_count | default(1) }}"
    tags:
      Environment: test
  register: cf_stack

- debug: msg={{ cf_stack }}
  when: debug is defined
The playbook that Ansible Tower executes is a site.yml file:
---
- name: Deployment Playbook
  hosts: localhost
  connection: local
  gather_facts: no
  environment:
    AWS_DEFAULT_REGION: "{{ lookup('env', 'AWS_DEFAULT_REGION') | default('us-west-2', true) }}"
  tasks:
    - include: tasks/create_stack.yml
    - include: tasks/deploy_app.yml
This is what my playbook folder structure looks like:
/deploy
    /group_vars
        all
    /library
        aws_ecs_service.py
        aws_ecs_task.py
        aws_ecs_taskdefinition.py
    /tasks
        stack.yml
    /templates
    site.yml
I'm basing everything on Justin Menga's Pluralsight course "Continuous Delivery using Docker and Ansible", but he uses Jenkins, not Ansible Tower, which is probably where the disconnect comes from. Anyway, hopefully that is enough information; let me know if I should also provide the stack.yml file. The files under the library directory are Menga's customized modules from his video course.
Thanks for reading all this and for any potential help! This is a link to his deploy playbook repository that I closely modeled everything after: https://github.com/jmenga/todobackend-deploy. The things that I took out are the DB RDS stuff.
If you look at the two last lines of the error message you can see that it is attempting to escalate privileges but failing:
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-coqlkeqywlqhagfixtfpfotjgknremaw; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 AWS_DEFAULT_REGION=us-west-2 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /var/lib/awx/.ansible/tmp/ansible-tmp-1470427494.79-207756006727790/cloudformation; rm -rf "/var/lib/awx/.ansible/tmp/ansible-tmp-1470427494.79-207756006727790/" > /dev/null 2>&1'"'"' && sleep 0'
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "cloudformation"}, "module_stderr": "/bin/sh: /usr/bin/sudo: Permission denied\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
As this is a local task, it is attempting to switch to the root user on the box that Ansible Tower is running on, and the Tower user presumably (and for good reason) doesn't have the privileges to do this.
With normal Ansible you can avoid this by not specifying the --become or -b flags on the command line or by specifying become: false in the task/play definition.
As you pointed out in the comments, with Ansible Tower it's a case of unticking the "Enable Privilege Escalation" option in the job template.
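In playbook terms, the equivalent is to disable escalation explicitly on the play (a minimal sketch based on the site.yml above):
---
- name: Deployment Playbook
  hosts: localhost
  connection: local
  become: false     # the cloudformation module only needs AWS API credentials, not root
  gather_facts: no
  tasks:
    - include: tasks/create_stack.yml
    - include: tasks/deploy_app.yml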

How to install nginx 1.9.15 on the Amazon Linux distro

I'm trying to install the latest version of nginx (>= 1.9.5) on a fresh Amazon Linux instance to make use of HTTP/2. I followed the instructions that are described here -> http://nginx.org/en/linux_packages.html
I created a repo file /etc/yum.repos.d/nginx.repo with this content:
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/mainline/centos/7/$basearch/
gpgcheck=0
enabled=1
If I run yum update and yum install nginx I get this:
nginx x86_64 1:1.8.1-1.26.amzn1 amzn-main 557 k
It seems that it still fetches from the amzn-main repo. How do I install a newer version of nginx?
-- edit --
I added "priority=10" to the nginx.repo file and now I can install 1.9.15 with yum install nginx with this result:
Loaded plugins: priorities, update-motd, upgrade-helper
Resolving Dependencies
--> Running transaction check
---> Package nginx.x86_64 1:1.9.15-1.el7.ngx will be installed
--> Processing Dependency: systemd for package: 1:nginx-1.9.15-1.el7.ngx.x86_64
--> Processing Dependency: libpcre.so.1()(64bit) for package: 1:nginx-1.9.15-1.el7.ngx.x86_64
--> Finished Dependency Resolution
Error: Package: 1:nginx-1.9.15-1.el7.ngx.x86_64 (nginx)
Requires: libpcre.so.1()(64bit)
Error: Package: 1:nginx-1.9.15-1.el7.ngx.x86_64 (nginx)
Requires: systemd
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
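For reference, the complete repo file after the edit (the only change from the file shown above is the added priority line):
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/mainline/centos/7/$basearch/
gpgcheck=0
enabled=1
priority=10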
If you're using Amazon Linux 2, you have to install nginx from the AWS "Extras Repository". To see a list of the packages available:
# View list of packages to install
amazon-linux-extras list
You'll see a list similar to:
0 ansible2 disabled [ =2.4.2 ]
1 emacs disabled [ =25.3 ]
2 memcached1.5 disabled [ =1.5.1 ]
3 nginx1.12 disabled [ =1.12.2 ]
4 postgresql9.6 disabled [ =9.6.6 ]
5 python3 disabled [ =3.6.2 ]
6 redis4.0 disabled [ =4.0.5 ]
7 R3.4 disabled [ =3.4.3 ]
8 rust1 disabled [ =1.22.1 ]
9 vim disabled [ =8.0 ]
10 golang1.9 disabled [ =1.9.2 ]
11 ruby2.4 disabled [ =2.4.2 ]
12 nano disabled [ =2.9.1 ]
13 php7.2 disabled [ =7.2.0 ]
14 lamp-mariadb10.2-php7.2 disabled [ =10.2.10_7.2.0 ]
Use the amazon-linux-extras install command to install it, like:
sudo amazon-linux-extras install nginx1.12
More details are here: https://aws.amazon.com/amazon-linux-2/faqs/.
At the time of writing, the latest version of nginx available from the AWS yum repo is 1.8, so the best thing to do for now is to build any newer version from source. The AWS Linux AMI already has the necessary build tools.
For example, for nginx 1.10 (I've assumed you're logged in as the regular ec2-user; anything needing superuser rights is preceded with sudo):
cd /tmp #so we can clean-up easily
wget http://nginx.org/download/nginx-1.10.0.tar.gz
tar zxvf nginx-1.10.0.tar.gz && rm -f nginx-1.10.0.tar.gz
cd nginx-1.10.0
sudo yum install pcre-devel openssl-devel #required libs, not installed by default
./configure \
--prefix=/etc/nginx \
--conf-path=/etc/nginx/nginx.conf \
--pid-path=/var/run/nginx.pid \
--lock-path=/var/run/nginx.lock \
--with-http_ssl_module \
--with-http_v2_module \
--user=nginx \
--group=nginx
make
sudo make install
sudo groupadd nginx
sudo useradd -M -G nginx nginx
rm -rf nginx-1.10.0
You'll then want a service file, so that you can start/stop nginx, and load it on boot.
Here's one that matches the above config. Put it in /etc/rc.d/init.d/nginx:
#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig:   - 85 15
# description: NGINX is an HTTP(S) server, HTTP(S) reverse \
#              proxy and IMAP/POP3 proxy server
# processname: nginx
# config:      /etc/nginx/nginx.conf
# config:      /etc/sysconfig/nginx
# pidfile:     /var/run/nginx.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/etc/nginx/sbin/nginx"
prog=$(basename $nginx)

NGINX_CONF_FILE="/etc/nginx/nginx.conf"

[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx

lockfile=/var/run/nginx.lock

make_dirs() {
    # make required directories
    user=`$nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
    if [ -z "`grep $user /etc/passwd`" ]; then
        useradd -M -s /bin/nologin $user
    fi
    options=`$nginx -V 2>&1 | grep 'configure arguments:'`
    for opt in $options; do
        if [ `echo $opt | grep '.*-temp-path'` ]; then
            value=`echo $opt | cut -d "=" -f 2`
            if [ ! -d "$value" ]; then
                # echo "creating" $value
                mkdir -p $value && chown -R $user $value
            fi
        fi
    done
}

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    make_dirs
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    configtest || return $?
    stop
    sleep 1
    start
}

reload() {
    configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}

force_reload() {
    restart
}

configtest() {
    $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac
Set the service file to be executable:
sudo chmod 755 /etc/rc.d/init.d/nginx
Now you can start it with:
sudo service nginx start
To load it automatically on boot:
sudo chkconfig nginx on
Finally, don't forget to edit /etc/nginx/nginx.conf to match your requirements and run sudo service nginx reload to refresh the changes.
Note, there is no 1.10 where you're looking. You can see the list here
http://nginx.org/packages/mainline/centos/7/x86_64/RPMS/
After you run yum update, use yum search nginx to see the different versions available and choose a specific one:
yum search nginx
On CentOS 6 this gives:
nginx.x86_64 : A high performance web server and reverse proxy server
nginx16.x86_64 : A high performance web server and reverse proxy server
nginx18.x86_64 : A high performance web server and reverse proxy server
I have two versions to choose from, 1.6 and 1.8.
You're getting that error because those nginx RPMs are built for RHEL7, not Amazon Linux. Amazon Linux is a weird hybrid of RHEL6, RHEL7, and Fedora. You should contact Amazon and ask them to create a proper nginx19 RPM built specifically for their distro.