How to install Ansible in Amazon Linux EC2? - amazon-web-services

I picked this Amazon Linux image: Amazon Linux AMI 2017.09.1 (HVM), SSD Volume Type - ami-1a033c7a.
I installed Ansible using the command:
sudo pip install ansible
It reported that the install completed. When I run ansible --version, it shows:
ansible 2.4.1.0
config file = None
configured module search path = [u'/home/ec2-user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.12 (default, Nov 2 2017, 19:20:38) [GCC 4.8.5 20150623 (Red Hat 4.8.5-11)]
Why config file = None? Shouldn't it show /etc/ansible/ansible.cfg? I do not see /etc/ansible/hosts, or even the folder /etc/ansible. Did I install correctly? Where is the folder /etc/ansible?

Why config file = None?
Because at the time of running ansible --version no config file was found.
Shouldn't it show /etc/ansible/ansible.cfg?
No. It should show the ansible.cfg actually being used.
Per documentation, Ansible tries to find the config file in:
ANSIBLE_CONFIG (an environment variable)
ansible.cfg (in the current directory)
.ansible.cfg (in the home directory)
/etc/ansible/ansible.cfg
ansible --version will show you the exact path to the one being used.
Strictly speaking the last point is not always true, as package managers and virtual environment managers might cause the /etc directory to be located elsewhere.
Did I install correctly?
You didn't mention any error or warning during the installation, and ansible --version returned a proper response.
There is no reason not to believe it's installed properly.
Where is the folder /etc/ansible?
It does not exist on your system. The installation package creates neither a default inventory file nor a configuration file.
Create one.
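If you want the conventional layout anyway, you can create it yourself; a minimal sketch (replace <remote_host> with your own host):
sudo mkdir -p /etc/ansible
echo "<remote_host>" | sudo tee /etc/ansible/hosts
Ansible falls back to /etc/ansible/hosts as the default inventory once it exists.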

Here I answer the question myself.
There are many ways to install Ansible, and you get different default settings depending on the OS. Many tutorials just assume ansible_hosts and ansible.cfg are already in /etc/ansible, which is not the case if you install Ansible using pip.
In fact, if you install Ansible using pip, you will not see ansible.cfg and ansible_hosts in /etc/ansible; even the folder /etc/ansible does not exist. But never mind, you can create these two files yourself as follows.
Suppose you want to store ansible_hosts and ansible.cfg in /home/ec2-user. Then you can:
echo <remote_host> > /home/ec2-user/ansible_hosts
export ANSIBLE_INVENTORY=/home/ec2-user/ansible_hosts
wget https://raw.githubusercontent.com/ansible/ansible/devel/examples/ansible.cfg
mv ansible.cfg /home/ec2-user/
export ANSIBLE_CONFIG=/home/ec2-user/ansible.cfg
Then if you run ansible --version, you will see:
ansible 2.4.1.0
config file = /home/ec2-user/ansible.cfg
....
and if you test an Ansible ad-hoc command (my remote_host runs Ubuntu, so I use -u ubuntu; change it to match yours):
ansible all -m ping -u ubuntu
then you will see Ansible ping the remote_host successfully.
This shows Ansible does work.
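For reference, a successful ping looks something like this (the hostname will be your own):
<remote_host> | SUCCESS => {
    "changed": false,
    "ping": "pong"
}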

Ansible aws_ec2 inventory plugin did not pass its verify_file() method

Unfortunately our aws_ec2 inventory plugin does not work anymore and I can't figure out why.
It worked over the last few days, but after an update on the Ansible VM it now always shows the same error.
Error:
/opt/ansible/bin/ansible-inventory -i inventory/ec2.aws.yml --graph -vvvvvv
ansible-inventory 2.9.2
config file = /home/XXXXX/ansible/ansible.cfg
configured module search path = [u'/home/XXXXX/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib/python2.7/site-packages/ansible
executable location = /opt/ansible/bin/ansible-inventory
python version = 2.7.5 (default, Jun 28 2022, 15:30:04) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
Using /home/XXXXX/ansible/ansible.cfg as config file
setting up inventory plugins
ansible_collections.amazon.aws.plugins.inventory.aws_ec2 declined parsing /home/XXXXX/ansible/XXXXX/ec2.aws.yml as it did not pass its verify_file() method
I already checked if boto3 and botocore are installed, and they are for the python2.7 version that Ansible uses:
python2.7 -m pip freeze
boto3==1.26.69
botocore==1.29.69
This is the inventory yaml file:
plugin: amazon.aws.aws_ec2
cache: yes
cache_timeout: 600
regions:
  - eu-central-1
validate_certs: False
keyed_groups:
  - prefix: os
    key: tags['OS']
hostnames:
  - tag:Name
compose:
  ansible_host: private_ip_address
I am using this in the "/home/XXXXX/ansible/ansible.cfg":
[inventory]
enable_plugins = vmware_vm_inventory, amazon.aws.aws_ec2
Also the amazon.aws collection is installed:
/opt/ansible/bin/ansible-galaxy collection install amazon.aws
Process install dependency map
Starting collection install process
Skipping 'amazon.aws' as it is already installed
Also the credentials are exported as env vars:
env
AWS_ACCESS_KEY_ID=XXXXXXXX
AWS_SECRET_ACCESS_KEY=XXXXXXXXXX
AWS_SESSION_TOKEN=XXXXXXXXXX
Does anyone have an idea what the issue is?
I was trying to run a playbook and every time the same error comes up:
ansible_collections.amazon.aws.plugins.inventory.aws_ec2 declined parsing /home/XX/ansible/XX/ec2.aws.yml as it did not pass its verify_file() method
ec2.aws.yml has never been a valid filename for use with the aws_ec2 inventory plugin.
Inventory files for this plugin must end in either aws_ec2.yml or aws_ec2.yaml.
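So the fix is to rename the inventory file to match one of those suffixes and point at the new name; for example, reusing the path from the question's command:
mv inventory/ec2.aws.yml inventory/aws_ec2.yml
/opt/ansible/bin/ansible-inventory -i inventory/aws_ec2.yml --graph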

Unable to execute a step on a running EMR

I have an EMR cluster (5.28.1) running in AWS, but I forgot to install some Python libraries as part of the bootstrap action. Now that the cluster is running, I was simply attempting to add a step via the EMR console. Here are my settings:
JAR: s3://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar
Main class: None
Arguments: s3://xxxx/install_python_libraries.sh
Unfortunately, I get the following error.
Cannot run program "s3://xxxxx/install_python_libraries.sh" (in directory "."): error=2, No such file or directory
I am not sure what I am doing wrong. The shell script looks like this.
#!/bin/bash -xe
# Non-standard and non-Amazon Machine Image Python modules:
sudo pip-3.6 install boto3
sudo pip-3.6 install xmltodict
I also tried this simply using 'command-runner.jar', but I get the same error. Can you please help me figure out the problem so I can do this via the console? I would like to install the libraries on all nodes - master and core.
Thanks
The issue is the .sh file's EOL (end-of-line) type.
In other words, if it uses Windows line endings ("\r\n"), the step will fail with the "file not found" error above.
Convert it to Unix line endings ("\n") using something like Notepad++ and it will run fine.
(In Notepad++: Edit > EOL Conversion > Unix (LF), hit save, and try again.)
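If you prefer the command line, the same conversion can be done before uploading the script to S3, e.g. with dos2unix or sed (using the script name from the question):
dos2unix install_python_libraries.sh
# or, without dos2unix:
sed -i 's/\r$//' install_python_libraries.sh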

Running conda with proxy

I'm using Anaconda 2.7 on Windows, and my internet connection goes through a proxy.
Previously, when using plain Python 2.7 (not Anaconda), I installed packages like this:
pip install {packagename} --proxy proxy-us.bla.com:123
Is there a way to run conda with a proxy argument? I didn't see it in the conda help.
Thanks
Or you can use the commands below (available from version 4.4.x on):
conda config --set proxy_servers.http http://id:pw@address:port
conda config --set proxy_servers.https https://id:pw@address:port
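To confirm what was written, newer conda versions can also display the setting:
conda config --show proxy_servers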
You can configure a proxy with conda by adding it to the .condarc, like
proxy_servers:
    http: http://user:pass@corp.com:8080
    https: https://user:pass@corp.com:8080
or by setting the HTTP_PROXY and HTTPS_PROXY environment variables. Note that in your case you need to add the scheme to the proxy URL, like https://proxy-us.bla.com:123.
See http://conda.pydata.org/docs/config.html#configure-conda-for-use-behind-a-proxy-server.
One mistake I was making was saving the file as a.condarc or b.condarc.
Save it only as .condarc, paste the following into it, and save the file in your home directory. Make the necessary changes to hostname, user, etc.:
channels:
  - defaults
show_channel_urls: True
allow_other_channels: True
proxy_servers:
    http: http://user:pass@hostname:port
    https: http://user:pass@hostname:port
ssl_verify: False
The best approach I settled on is to set the proxy environment variables right before running conda or pip install/update commands. Simply run:
set HTTP_PROXY=http://username:password@proxy_url:port
For example, your actual command could look like:
set HTTP_PROXY=http://yourname:your_password@proxy.your_company.com:8080
If your company uses an https proxy, then also:
set HTTPS_PROXY=https://username:password@proxy_url:port
Once you exit the Anaconda prompt this setting is gone, so your username/password won't be saved after the session.
I didn't choose the other methods mentioned in the Anaconda documentation or elsewhere, because they all require hardcoding the username/password into one of:
Windows environment variables (this also requires restarting the Anaconda prompt the first time)
the conda .condarc or .netrc configuration files (these also won't work for pip)
a batch/script file loaded while starting the Anaconda prompt (this might also require configuring the path)
All of these are unsafe and will require constant updates later. And if you forget where to update them? More troubleshooting will come your way...
I was able to get it working without putting in the username and password:
conda config --set proxy_servers.https https://address:port
After adding the proxy_servers section to .condarc as shown above, run the following in the Anaconda Prompt ((base) PS C:\Users\user>):
conda update -n root conda
On Mac, what worked for me was going to Keychain and updating the password for the key for the company's internal repo.

elastic beanstalk: incremental push git

When I try to push incremental changes to my AWS Elastic Beanstalk solution, I get the following:
$ git aws.push
Updating the AWS Elastic Beanstalk environment None...
Error: Failed to get the Amazon S3 bucket name
I've already added FULLS3Access to my AWS user's policies.
I had a similar issue today; here are the steps I followed to investigate:
I modified line 133 of .git/AWSDevTools/aws/dev_tools.py to print the exception, like:
except Exception, e:
    print e
(Please be careful with the whitespace, as Python is indentation-sensitive.)
I ran the command git aws.push again,
and here is the exception printed:
BotoServerError: 403 Forbidden
{"Error":{"Code":"SignatureDoesNotMatch","Message":"Signature not yet current: 20150512T181122Z is still later than 20150512T181112Z (20150512T180612Z + 5 min.)","Type":"Sender"},"
The issue was a time difference between the server and my machine; I corrected it and it started working fine.
Basically, the exception helps you find the exact root cause; it may be related to the secret key as well.
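For reference, one way to resync a drifted clock on Linux (assuming the ntpdate utility is installed):
sudo ntpdate pool.ntp.org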
It may have something to do with the boto library (related thread). If you are on Ubuntu/Debian, try this:
Remove old version of boto
sudo apt-get remove python-boto
Install newer version
sudo apt-get install python-pip
sudo pip install -U boto
Other systems (e.g. Mac)
Via easy_install
sudo easy_install pip
pip install boto
Or simply build from source
git clone git://github.com/boto/boto.git
cd boto
python setup.py install
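Afterwards, a quick way to check which boto version Python picks up:
python -c "import boto; print(boto.__version__)"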
Had the same problem a moment ago.
Note:
I just noticed your environment is called None. Did you follow all instructions and execute eb config/eb init?
One more try:
Add export PATH=$PATH:<path to unzipped eb CLI package>/AWSDevTools/Linux/ to your path and execute AWSDevTools-RepositorySetup.sh; maybe something is wrong with your repository setup (notice the None weirdness). Other possible solutions:
Double-check your AWS credentials (maybe you are using different key IDs or a wrong credentials-file format)
Old/mismatching versions of the eb client & Python (check with eb -v and python -V) (current client is this)
Use Amazon's policy validator to double-check that your AWS user is allowed to perform all actions
If all that doesn't help, I'm out of options. Good luck.

Vagrant Rsync Error before provisioning

So I'm having some adventures with the vagrant-aws plugin, and I'm now stuck on the issue of syncing folders. This is necessary to provision the machines, which is the ultimate goal. However, running vagrant provision on my machine yields
[root@vagrant-puppet-minimal vagrant]# vagrant provision
[default] Rsyncing folder: /home/vagrant/ => /vagrant
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mkdir -p '/vagrant'
I'm almost positive the error occurs because ssh-ing in manually and running that command yields 'permission denied' (obviously: a non-root user is trying to make a directory in the root directory). I tried ssh-ing as root, but that seems like bad practice (and Amazon doesn't allow it). How can I change the folder to be rsynced with vagrant-aws? I can't seem to find the setting for that. Thanks!
Most likely you are running into the known vagrant-aws issue #72: Failing with EC2 Amazon Linux Images.
Edit 3 (Feb 2014): Vagrant 1.4.0 (released Dec 2013) and later versions now support the boolean configuration parameter config.ssh.pty. Set the parameter to true to force Vagrant to use a PTY for provisioning. Vagrant creator Mitchell Hashimoto points out that you must not set config.ssh.pty on the global config, you must set it on the node config directly.
This new setting should fix the problem, and you shouldn't need the workarounds listed below anymore. (But note that I haven't tested it myself yet.) See Vagrant's CHANGELOG for details -- unfortunately the config.ssh.pty option is not yet documented under SSH Settings in the Vagrant docs.
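For illustration, a minimal sketch of what that looks like (the machine name :default is just an example):
Vagrant.configure("2") do |config|
  config.vm.define :default do |node|
    node.ssh.pty = true  # set on the node config, not the global config
  end
end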
Edit 2: Bad news. It looks as if even a boothook will not run "fast enough" (to update /etc/sudoers.d/ for !requiretty) before Vagrant tries to rsync. During my testing today I started seeing sporadic "mkdir -p /vagrant" errors again when running vagrant up --no-provision. So we're back to the previous point where the most reliable fix seems to be a custom AMI image that already includes the applied patch to /etc/sudoers.d.
Edit: Looks like I found a more reliable way to fix the problem. Use a boothook to perform the fix. I manually confirmed that a script passed as a boothook is executed before Vagrant's rsync phase starts. So far it has been working reliably for me, and I don't need to create a custom AMI image.
Extra tip: And if you are relying on cloud-config, too, you can create a Mime Multi Part Archive to combine the boothook and the cloud-config. You can get the latest version of the write-mime-multipart helper script from GitHub.
Usage sketch:
$ cd /tmp
$ wget https://raw.github.com/lovelysystems/cloud-init/master/tools/write-mime-multipart
$ chmod +x write-mime-multipart
$ cat boothook.sh
#!/bin/bash
SUDOERS_FILE=/etc/sudoers.d/999-vagrant-cloud-init-requiretty
echo "Defaults:ec2-user !requiretty" > $SUDOERS_FILE
echo "Defaults:root !requiretty" >> $SUDOERS_FILE
chmod 440 $SUDOERS_FILE
$ cat cloud-config
#cloud-config
packages:
- puppet
- git
- python-boto
$ ./write-mime-multipart boothook.sh cloud-config > combined.txt
You can then pass the contents of 'combined.txt' to aws.user_data, for instance via:
aws.user_data = File.read("/tmp/combined.txt")
Sorry for not mentioning this earlier, but I am literally troubleshooting this right now myself. :)
Original answer (see above for a better approach)
TL;DR: The most reliable fix is to "patch" a stock Amazon Linux AMI image, save it and then use the customized AMI image in your Vagrantfile. See below for details.
Background
A potential workaround is described (and linked in the bug report above) at https://github.com/mitchellh/vagrant-aws/pull/70/files. In a nutshell, add the following to your Vagrantfile:
aws.user_data = "#!/bin/bash\necho 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty && chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty\nyum install -y puppet\n"
Most importantly this will configure the OS to not require a tty for user ec2-user, which seems to be the root of the problem. I /think/ that the additional installation of the puppet package is not required for the actual fix (although Vagrant may use Puppet for provisioning the machine later, depending on how you configured Vagrant).
My experience with the described workaround
I have tried this workaround but Vagrant still occasionally fails with the same error. It might be a "race condition" where Vagrant happens to run its rsync phase faster than cloud-init (which is what aws.user_data is passing information to) can prepare the workaround for #72 on the machine for Vagrant. If Vagrant is faster you will see the same error; if cloud-init is faster it works.
What will work (but requires more effort on your side)
What definitely works is to run the command on a stock Amazon Linux AMI image, and then save the modified image (= create an image snapshot) as a custom AMI image of yours.
# Start an EC2 instance with a stock Amazon Linux AMI image and ssh-connect to it
$ sudo su - root
$ echo 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty
$ chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty
# Note: Installing puppet is mentioned in the #72 bug report but I /think/ you do not need it
# to fix the described Vagrant problem.
$ yum install -y puppet
You must then use this custom AMI image in your Vagrantfile instead of the stock Amazon one. The obvious drawback is that you are not using a stock Amazon AMI image anymore -- whether this is a concern for you or not depends on your requirements.
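For reference, using the custom image then just means pointing the vagrant-aws provider block at its AMI ID (the ID below is a placeholder):
config.vm.provider :aws do |aws, override|
  aws.ami = "ami-xxxxxxxx"  # your custom AMI with the sudoers fix baked in
end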
What I tried but didn't work out
For the record: I also tried to pass a cloud-config to aws.user_data that included a bootcmd to set !requiretty in the same way as the embedded shell script above. According to the cloud-init docs bootcmd is run "very early" in the startup cycle for an EC2 instance -- the idea being that bootcmd instructions would be run earlier than Vagrant would try to run its rsync phase. But unfortunately I discovered that the bootcmd feature is not implemented in the outdated cloud-init version of current Amazon's Linux AMIs (e.g. ami-05355a6c has cloud-init 0.5.15-69.amzn1 but bootcmd was only introduced in 0.6.1).