I have this simple Ansible task: I want to create a directory on the host:
- name: Create rails app dir
  file: path=/etc/rails-app state=directory mode=0755
  register: rails_app_dir
And these are the logs when I run the playbook:
TASK [instance_deploy_app : Create rails app dir] *************************************************************************************************
task path: /etc/ansible/roles/instance_deploy_app/tasks/main.yml:39
<IPv4 of host> ESTABLISH LOCAL CONNECTION FOR USER: root
<IPv4 of host> EXEC /bin/sh -c 'echo ~root && sleep 0'
<IPv4 of host> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1645566978.53-25820-207749605236297 `" && echo ansible-tmp-1645566978.53-25820-207749605236297="` echo /root/.ansible/tmp/ansible-tmp-1645566978.53-25820-207749605236297 `" ) && sleep 0'
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py
<IPv4 of host> PUT /root/.ansible/tmp/ansible-local-25617Cg_rWo/tmpTPHs3p TO /root/.ansible/tmp/ansible-tmp-1645566978.53-25820-207749605236297/AnsiballZ_file.py
<IPv4 of host> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1645566978.53-25820-207749605236297/ /root/.ansible/tmp/ansible-tmp-1645566978.53-25820-207749605236297/AnsiballZ_file.py && sleep 0'
<IPv4 of host> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1645566978.53-25820-207749605236297/AnsiballZ_file.py && sleep 0'
<IPv4 of host> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1645566978.53-25820-207749605236297/ > /dev/null 2>&1 && sleep 0'
ok: [IPv4 of host] => {
    "changed": false,
    "diff": {
        "after": {
            "path": "/etc/rails-app"
        },
        "before": {
            "path": "/etc/rails-app"
        }
    },
    "gid": 0,
    "group": "root",
    "invocation": {
        "module_args": {
            "_diff_peek": null,
            "_original_basename": null,
            "access_time": null,
            "access_time_format": "%Y%m%d%H%M.%S",
            "attributes": null,
            "backup": null,
            "content": null,
            "delimiter": null,
            "directory_mode": null,
            "follow": true,
            "force": false,
            "group": null,
            "mode": "0755",
            "modification_time": null,
            "modification_time_format": "%Y%m%d%H%M.%S",
            "owner": null,
            "path": "/etc/rails-app",
            "recurse": false,
            "regexp": null,
            "remote_src": null,
            "selevel": null,
            "serole": null,
            "setype": null,
            "seuser": null,
            "src": null,
            "state": "directory",
            "unsafe_writes": null
        }
    },
    "mode": "0755",
    "owner": "root",
    "path": "/etc/rails-app",
    "size": 41,
    "state": "directory",
    "uid": 0
}
Read vars_file 'roles/instance_deploy_app/vars/instance_vars.yml'
Read vars_file 'roles/instance_deploy_app/vars/aws_cred.yml'
According to the logs, the directory should be there, but when I try to access /etc/rails-app/ it is not there. I currently have 3 users in the AWS EC2 instance: ec2-user, root and user1. I checked under all of them, but the directory doesn't appear.
Am I doing something wrong? Thanks!
The reason it was not creating the folder, as β.εηοιτ.βε suggested, is that the playbook had connection: local, so it was "never connecting to my EC2 and always acting on my controller". Once I removed that, it worked.
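For reference, a minimal sketch of a play without that setting (the inventory group name and the use of become are assumptions about the original playbook):

```yaml
# Without "connection: local", tasks run on the remote EC2 host over SSH
# instead of on the Ansible controller.
- hosts: ec2_hosts          # hypothetical inventory group
  become: true
  roles:
    - instance_deploy_app
```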
I want to use Ansible to back up license files from switches, but I have a problem. The idea is to capture the output of ls *.lic in a variable and then use that variable to copy the file from the switch to a computer via a TFTP server. Everything should run fully automated. I post the playbook and the output below; then it will probably be clearer. Many thanks in advance.
I want to clarify that I only want the output D3456234.lic and not
[
    "D3456234.lic"
]
as you can see below in the logs.
Playbook:
- name: copy license in home dir
  os10_command:
    commands:
      - system "cp /mnt/license/*.lic /home/admin"

- name: create var
  os10_command:
    commands:
      - "system ls"
  register: licensevar

- debug:
    var: licensevar

- name: backup license
  os10_command:
    commands:
      - copy home://{{ licensevar.stdout }} tftp://10.x.x.xx/Sicherung/lizenz/{{ licensevar.stdout }}
Output of the ansible-playbook run with -vvv:
TASK [debug] *******************************************************************
ok: [hostname] => {
    "licensevar.stdout": [
        "DTH67C3.lic"
    ]
}
redirecting (type: action) dellemc.os10.os10_command to dellemc.os10.os10
TASK [backup license] **********************************************************
<10.0.0.81> ANSIBLE_NETWORK_IMPORT_MODULES: Result: {'changed': False, 'stdout': ["copy home://['D3456234.lic'] tftp://10.0.0.43/Sicherung/lizenz/['D3456234.li\x1bEc']\nFailed parsing URI filename"], 'stdout_lines': [["copy home://['D3456234.lic'] tftp://10.0.0.43/Sicherung/lizenz/['D3456234.li\x1bEc']", 'Failed parsing URI filename']], 'invocation': {'module_args': {'commands': ["copy home://['D3456234.lic'] tftp://10.0.0.43/Sicherung/lizenz/['D3456234.lic']"], 'match': 'all', 'retries': 10, 'interval': 1, 'wait_for': None, 'provider': None}}, '_ansible_parsed': True}
ok: [hostname] => {
    "changed": false,
    "invocation": {
        "module_args": {
            "commands": [
                "copy home://['D3456234.lic'] tftp://10.0.0.43/Sicherung/lizenz/['D3456234.lic']"
            ],
            "interval": 1,
            "match": "all",
            "provider": null,
            "retries": 10,
            "wait_for": null
        }
    },
    "stdout": [
        "copy home://['D3456234.lic'] tftp://10.0.0.43/Sicherung/lizenz/['D3456234.li\u001bEc']\nFailed parsing URI filename"
    ],
    "stdout_lines": [
        [
            "copy home://['D3456234.lic'] tftp://10.0.0.43/Sicherung/lizenz/['D3456234.li\u001bEc']",
            "Failed parsing URI filename"
        ]
    ]
}
META: ran handlers
META: ran handlers
I think it can be solved with a regex.
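The stray brackets and quotes come from interpolating the whole registered list rather than its first element, so indexing fixes it without a regex. A small Python sketch of what the Jinja2 substitution produces (the filename is taken from the logs above):

```python
# licensevar.stdout, as registered by os10_command, is a list of one string.
stdout = ["D3456234.lic"]

# Interpolating the whole list reproduces the broken URI from the logs:
broken = "copy home://%s" % stdout
# -> copy home://['D3456234.lic']

# Indexing the first element yields a clean filename:
fixed = "copy home://%s" % stdout[0]
# -> copy home://D3456234.lic
print(broken)
print(fixed)
```

In the playbook, the equivalent change would be writing {{ licensevar.stdout[0] }} in the backup license task, assuming the system ls command returns a single line.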
I am getting the below error while deploying to AWS Elastic Beanstalk from Travis CI.
Service:AmazonECS, Code:ClientException, Message:Container list cannot be empty., Class:com.amazonaws.services.ecs.model.ClientException
.travis.yml:
sudo: required
language: generic
services:
  - docker
before_install:
  - docker build -t sathishpskdocker/react-test -f ./client/Dockerfile.dev ./client
script:
  - docker run -e CI=true sathishpskdocker/react-test npm test
after_success:
  - docker build -t sathishpskdocker/multi-client ./client
  - docker build -t sathishpskdocker/multi-nginx ./nginx
  - docker build -t sathishpskdocker/multi-server ./server
  - docker build -t sathishpskdocker/multi-worker ./worker
  # Log in to the docker CLI
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_ID" --password-stdin
  # Take those images and push them to docker hub
  - docker push sathishpskdocker/multi-client
  - docker push sathishpskdocker/multi-nginx
  - docker push sathishpskdocker/multi-server
  - docker push sathishpskdocker/multi-worker
deploy:
  provider: elasticbeanstalk
  region: 'us-west-2'
  app: 'multi-docker'
  env: 'Multidocker-env'
  bucket_name: elasticbeanstalk-us-west-2-194531873493
  bucket_path: docker-multi
  on:
    branch: master
  access_key_id: $AWS_ACCESS_KEY
  secret_access_key: $AWS_SECRET_KEY
Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefintions": [
    {
      "name": "client",
      "image": "sathishpskdocker/multi-client",
      "hostname": "client",
      "essential": false,
      "memory": 128
    },
    {
      "name": "server",
      "image": "sathishpskdocker/multi-server",
      "hostname": "api",
      "essential": false,
      "memory": 128
    },
    {
      "name": "worker",
      "image": "sathishpskdocker/multi-worker",
      "hostname": "worker",
      "essential": false,
      "memory": 128
    },
    {
      "name": "nginx",
      "image": "sathishpskdocker/multi-nginx",
      "hostname": "nginx",
      "essential": true,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ],
      "links": ["client", "server"],
      "memory": 128
    }
  ]
}
Only the deploy part is failing, with the error:
Service:AmazonECS, Code:ClientException, Message:Container list cannot be empty., Class:com.amazonaws.services.ecs.model.ClientException
Ah, never mind, it was my mistake. There is a typo in the Dockerrun config file, which wrongly reads containerDefintions instead of containerDefinitions.
Thanks everyone who took a look at my question. Cheers!
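Since this class of typo just makes the container list look empty, a tiny pre-deploy sanity check can catch it; here is a sketch (the required-key set and the inline JSON snippets are illustrative):

```python
import json

# Top-level keys a v2 Dockerrun.aws.json needs for ECS to get a container list.
REQUIRED_KEYS = {"AWSEBDockerrunVersion", "containerDefinitions"}

def missing_keys(text):
    """Return the required top-level keys absent from a Dockerrun body."""
    return REQUIRED_KEYS - set(json.loads(text))

# The misspelled key from the question shows up as missing:
print(missing_keys('{"AWSEBDockerrunVersion": 2, "containerDefintions": []}'))
# The corrected spelling passes:
print(missing_keys('{"AWSEBDockerrunVersion": 2, "containerDefinitions": []}'))
```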
I have a shell provisioner in Packer connected to a box with user vagrant:
{
  "environment_vars": [
    "HOME_DIR=/home/vagrant"
  ],
  "expect_disconnect": true,
  "scripts": [
    "scripts/foo.sh"
  ],
  "type": "shell"
}
where the content of the script is:
whoami
sudo su
whoami
and the output strangely remains:
==> virtualbox-ovf: Provisioning with shell script: scripts/configureProxies.sh
virtualbox-ovf: vagrant
virtualbox-ovf: vagrant
Why can't I switch to the root user?
How can I execute statements as root?
Note: I do not want to prefix every statement, like sudo "statement |foo", but rather globally switch user, as demonstrated with sudo su.
You should override the execute_command. Example:
"provisioners": [
  {
    "execute_command": "echo 'vagrant' | {{.Vars}} sudo -S -E sh -eux '{{.Path}}'",
    "scripts": [
      "scripts/foo.sh"
    ],
    "type": "shell"
  }
],
There is another solution that simply uses two provisioners together. Packer's shell provisioner can run bash with sudo privileges. First copy your script file from the local machine to the remote machine with the file provisioner, then run it with the shell provisioner.
packer.json
{
  "vars": [...],
  "builders": [
    {
      # ...
      "ssh_username": "<some_user_other_than_root_with_passwordless_sudo>",
    }
  ],
  "provisioners": [
    {
      "type": "file",
      "source": "scripts/foo.sh",
      "destination": "~/shell.tmp.sh"
    },
    {
      "type": "shell",
      "inline": ["sudo bash ~/shell.tmp.sh"]
    }
  ]
}
foo.sh
# ...
whoami
sudo su root
whoami
# ...
output
<some_user_other_than_root_with_passwordless_sudo>
root
After the provisioner completes its task, you can delete the file with the shell provisioner.
packer.json updated
{
  "type": "shell",
  "inline": ["sudo bash ~/shell.tmp.sh", "rm ~/shell.tmp.sh"]
}
One possible answer seems to be:
https://unix.stackexchange.com/questions/70859/why-doesnt-sudo-su-in-a-shell-script-run-the-rest-of-the-script-as-root
sudo su <<HERE
ls /root
whoami
HERE
Maybe there is a better answer?
Assuming that the shell provisioner you are using runs a bash script, you can add my technique to your script.
function if_not_root_rerun_as_root(){
    if [[ "$(id -u)" -ne 0 ]]; then
        run_as_root_keeping_exports "$0" "$@"
        exit $?
    fi
}

function run_as_root_keeping_exports(){
    eval sudo $(for x in $_EXPORTS; do printf '%s=%q ' "$x" "${!x}"; done;) "$@"
}

export _EXPORTS="PACKER_BUILDER_TYPE PACKER_BUILD_NAME"

if_not_root_rerun_as_root "$@"

There is a pretty good explanation of "$@" here on StackOverflow.
I am using the Packer amazon-instance builder to create an image from an AMI. I am passing all parameters correctly, but I don't know which value to pass for --manifest. I am getting the following error:
amazon-instance: --manifest has invalid value '/tmp/ami-257e6b5c.manifest.xml': File does not exist or is not a file.
I am using the following template for the conversion:
{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": ""
  },
  "builders": [{
    "type": "amazon-instance",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-west-2",
    "source_ami": "ami-257e6b5c",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "account_id": "12345678",
    "bundle_upload_command": "sudo ec2-upload-bundle -b packer-images -m /tmp/manifest.xml -a access_key -s secret_key -d /tmp --batch --retry",
    "s3_bucket": "packer-images",
    "x509_cert_path": "server.crt",
    "x509_key_path": "server.key",
    "x509_upload_path": "/tmp",
    "ami_name": "packer-example {{timestamp}}"
  }]
}
Don't replace the template, copy it from the docs and modify it.
sudo ec2-upload-bundle \
-b {{.BucketName}} \
-m {{.ManifestPath}} \
-a {{.AccessKey}} \
-s {{.SecretKey}} \
-d {{.BundleDirectory}} \
--batch \
--retry
See bundle_upload_command.
The reason you have to remove --region is that you have an old version of the AMI Tools. I recommend that you try to install a newer version from source; see Set Up AMI Tools.
I have a launch script (user data) that runs on startup in AWS with an Ubuntu 16.04 image. The issue is that when it gets to the part where it runs an Ansible playbook, the playbook fails with the error Could not get lock /var/lib/dpkg/lock. When I log in and run the Ansible script manually it works, but when it is run from the AWS user data, it fails with that error.
This is the full error
TASK [rabbitmq : install packages (Ubuntu default repo is used)] ***************
task path: /etc/ansible/roles/rabbitmq/tasks/main.yml:50
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1480352390.01-116502531862586 `" && echo ansible-tmp-1480352390.01-116502531862586="` echo $HOME/.ansible/tmp/ansible-tmp-1480352390.01-116502531862586 `" ) && sleep 0'
<localhost> PUT /tmp/tmpGHaVRP TO /.ansible/tmp/ansible-tmp-1480352390.01-116502531862586/apt
<localhost> EXEC /bin/sh -c 'chmod u+x /.ansible/tmp/ansible-tmp-1480352390.01-116502531862586/ /.ansible/tmp/ansible-tmp-1480352390.01-116502531862586/apt && sleep 0'
<localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /.ansible/tmp/ansible-tmp-1480352390.01-116502531862586/apt; rm -rf "/.ansible/tmp/ansible-tmp-1480352390.01-116502531862586/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {"cache_update_time": 0, "cache_updated": false, "changed": false, "failed": true, "invocation": {"module_args": {"allow_unauthenticated": false, "autoremove": false, "cache_valid_time": null, "deb": null, "default_release": null, "dpkg_options": "force-confdef,force-confold", "force": false, "install_recommends": null, "name": "rabbitmq-server", "only_upgrade": false, "package": ["rabbitmq-server"], "purge": false, "state": "present", "update_cache": false, "upgrade": null}, "module_name": "apt"}, "msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\" install 'rabbitmq-server'' failed: E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)\nE: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?\n", "stderr": "E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)\nE: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?\n", "stdout": "", "stdout_lines": []}
I ran into the same lock issue. I found that Ubuntu was installing some packages on first boot, which cloud-init did not wait for.
I use the following script to check that the lock file has been free for at least 15 seconds before trying to install anything.
#!/bin/bash
# Wait until the dpkg lock has been free for 15 consecutive seconds.
i=0
while [ "$i" -lt 15 ]
do
  if fuser /var/lib/dpkg/lock >/dev/null 2>&1; then
    i=0
  fi
  sleep 1
  i=$((i+1))
done
I prefer this over sleep 5m because, in an auto-scaling group, the instance may be removed before it is even provisioned.
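The same "free for N consecutive polls" idea can be expressed as a small reusable function; here is a sketch in Python (the predicate, window, and poll interval are parameters, and the fake lock below is purely illustrative):

```python
import time

def wait_for_stable(is_free, window=15, poll=1.0):
    """Block until is_free() has returned True for `window` consecutive polls.

    Any poll that finds the resource busy resets the counter, so we only
    proceed once it has been free for the full window, which is the same
    logic the shell loop above applies to /var/lib/dpkg/lock.
    """
    streak = 0
    while streak < window:
        streak = streak + 1 if is_free() else 0
        time.sleep(poll)

# Illustration with a fake lock that is busy for the first 3 polls:
calls = {"n": 0}
def fake_is_free():
    calls["n"] += 1
    return calls["n"] > 3

wait_for_stable(fake_is_free, window=5, poll=0.001)
print(calls["n"])  # 8: three busy polls, then five consecutive free polls
```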