AWS EMR bootstrap action as sudo - amazon-web-services

I need to update /etc/hosts for all instances in my EMR cluster (EMR AMI 4.3).
The whole script is nothing more than:
#!/bin/bash
echo -e 'ip1 uri1' >> /etc/hosts
echo -e 'ip2 uri2' >> /etc/hosts
...
This script needs to run as sudo or it fails.
From here: https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-bootstrap.html#bootstrapUses
Bootstrap actions execute as the Hadoop user by default. You can execute a bootstrap action with root privileges by using sudo.
Great news... but I can't figure out how to do this, and I can't find an example.
I've tried a bunch of things... including...
running as Hadoop and adding 'sudo' to each of the 'echo' statements in the script
using a shell script to copy and chmod the above ('echo' statements with no 'sudo') and running the local copy via a run-if bootstrap action that calls: 1=1 sudo bash /home/hadoop/myDir/myScript.sh
hard coding the whole script as a one-liner into a run-if bootstrap action
I consistently get:
On the master instance (i-xxx), bootstrap action 2 returned a non-zero return code
If I check the logs for the "Setup hadoop debugging" step, there's nothing there.
From here: https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-overview.html#emr-overview-cluster-lifecycle
Summary of EMR setup (in order):
provisions ec2 instances
runs bootstrap actions
installs native applications... like hadoop, spark, etc.
So it seems like there's some risk that since I'm mucking around as user Hadoop before hadoop is installed, I could be messing something up there, but I can't imagine what.
I think it must be that my script isn't running as 'sudo' and it's failing to update /etc/hosts.
My question... how can I use bootstrap actions (or something else) on EMR to run a simple shell script as sudo? ...specifically to update /etc/hosts?

I've not had problems using sudo from within a shell script run as an EMR bootstrap action, so it should work. You can test that it works with a simple script that just does "sudo ls /root".
Your script is trying to append to /etc/hosts by redirecting stdout with:
sudo echo -e 'ip1 uri1' >> /etc/hosts
The problem here is that while the echo is run with sudo, the redirection (>>) is not. It's run by the underlying hadoop user, who does not have permission to write to /etc/hosts. The fix is:
sudo sh -c 'echo -e "ip1 uri1" >> /etc/hosts'
This runs the entire command, including the stdout redirection, in a shell with sudo.
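A common alternative, if you prefer not to wrap the whole command in a sub-shell, is to pipe into tee with sudo (this is a general shell pattern rather than anything EMR-specific); the -a flag appends instead of overwriting:
echo 'ip1 uri1' | sudo tee -a /etc/hosts
echo 'ip2 uri2' | sudo tee -a /etc/hosts
Here only tee needs root, because tee itself opens /etc/hosts for writing, so the redirection problem never comes up.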

Related

AWS EC2 User Data script isn't working as expected

When I SSH into my EC2 instance and run the following commands, my SpringServer.jar file executes and I can access my Spring application by going to myawsaccount:8080/times. When I specify the same commands in User Data, I can't access my application at myawsaccount:8080/times, and I'm not sure why. Any help would be appreciated.
Commands
#!/bin/bash    # only present in the user data script
sudo su
wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u141-b15/336fa29ff2bb4ef291e347e091f7f4a7/jdk-8u141-linux-x64.rpm
yum install -y jdk-8u141-linux-x64.rpm
wget https://myawsaccount.s3-eu-west-1.amazonaws.com/SpringServer-1-0.0.1-SNAPSHOT.jar
java -jar SpringServer-1-0.0.1-SNAPSHOT.jar
To troubleshoot UserData issues, the best thing to do is to log in to the instance and inspect the UserData log files.
Most importantly, /var/log/cloud-init-output.log:
The cloud-init output log file (/var/log/cloud-init-output.log) captures console output so it is easy to debug your scripts following a launch if the instance does not behave the way you intended.
Also your UserData script will be located in /var/lib/cloud/instances/<instance-id>/. Thus, once you are in the instance you can manually try to run it and fix/debug while in the instance.
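For example, once you have SSHed in, something along these lines is usually enough to see what happened (assuming the standard cloud-init layout on Amazon Linux; exact file names can vary between distributions):
sudo tail -n 100 /var/log/cloud-init-output.log          # console output from the user data run
sudo cat /var/lib/cloud/instance/scripts/part-001        # the script cloud-init actually executed
sudo bash -x /var/lib/cloud/instance/scripts/part-001    # re-run it with tracing to debug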
Setting environment variables using export doesn't work as expected in user data, because it only sets them for the shell session the script runs in. You can fix this by adding them to your profile configuration:
#!/bin/bash
...
echo 'export JAVA_HOME=/opt/jdk1.8.0_141' >> /etc/profile
echo 'export JRE_HOME=/opt/jdk1.8.0_141/jre' >> /etc/profile
echo 'export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin' >> /etc/profile
...
This way, the environment variables will be available in every session.
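A slightly tidier variant of the same idea, assuming a distribution that sources /etc/profile.d/ (most do), is to drop the exports into their own file rather than appending to /etc/profile:
cat > /etc/profile.d/jdk.sh <<'EOF'
export JAVA_HOME=/opt/jdk1.8.0_141
export JRE_HOME=/opt/jdk1.8.0_141/jre
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
EOF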

Is it possible to run an Ansible playbook from a Chef AWS/Opworks cookbook?

I'm trying to figure out whether it's possible to create a Chef cookbook that SSHes into an Ansible server and runs an Ansible playbook from AWS OpsWorks on the current node.
I'm thinking of a script that I can put in an execute resource like this:
define :foobar_magento2_deploy do
  release_path = node[:app_release_path]
  execute 'Ansible playbook' do
    command "ssh -i key ansible-server 'ansible-playbook arg1 arg2'"
  end
end
Do you think it's possible? Are there any caveats? Hints?
Edit from #coderanger answer:
define :foobar_magento2_deploy do
  release_path = node[:app_release_path]
  execute 'Ansible playbook' do
    # an execute resource takes a single command attribute, so the steps are chained
    command "git clone ansible-playbook && cd ansible-playbook && ansible-playbook -l localhost playbook.yml"
  end
end
So a couple of things:
OpsWorks Stacks is dangerously out of date and using it should be considered highly suspect.
I don't actually recognize that define block thing in there, maybe that's an older OpsWorks syntax?
You can definitely run an Ansible playbook from Chef code, but I would probably go a little simpler than you have there. Probably just run ansible-playbook locally and aim it at localhost.
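As a rough sketch of that suggestion, the command run on the node could look like this (the repository URL, checkout path, and playbook name are placeholders, and Ansible is assumed to already be installed on the node):
git clone https://example.com/ansible-playbook.git /opt/ansible-playbook   # placeholder repo and path
cd /opt/ansible-playbook
ansible-playbook -i localhost, -c local playbook.yml
The -i localhost, form passes an inline one-host inventory and -c local skips SSH entirely; the whole thing could be wrapped in a single execute resource rather than several.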

AWS EC2 cloud-init script run as ec2-user

I am baking an image on top of the Amazon Linux image.
I need to run a service as ec2-user.
Is it possible to run a launch script of any kind as a user other than root?
I'm assuming you're going to put the command under UserData.
Scripts entered as user data are executed as the root user, so do not use the sudo command in the script. Remember that any files you create will be owned by root; if you need non-root users to have file access, you should modify the permissions accordingly in the script. Also, because the script is not run interactively, you cannot include commands that require user feedback (such as yum update without the -y flag).
Here's the full documentation discussing the topic.
Use this:
su ec2-user -c 'your commands go here'
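For example, a minimal user data sketch along those lines (start-my-service.sh is a hypothetical script baked into the image, not something from the question):
#!/bin/bash
# user data runs as root, so switch explicitly for anything that must run as ec2-user
su ec2-user -c 'whoami > /home/ec2-user/launched-as.txt'   # file contains "ec2-user" and is owned by ec2-user
su ec2-user -c '/home/ec2-user/start-my-service.sh'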

compute engine startup script can't execute as a non-root user

Boiling my issue down to the simplest case, I'm using Compute Engine with the following startup-script:
#! /bin/bash
sudo useradd -m drupal
su drupal
cd /home/drupal
touch test.txt
I can confirm the drupal user exists after this command, and so does the test file. I expect the owner of the test file to be 'drupal' (hence the su). However, when I use this as a startup script, I can still confirm that root is the owner of the file, meaning my
su drupal
did not work. sudo su drupal also does not make any difference. I'm using Google Container OS, but the same happens on a Debian 8 image.
sudo su is not a command run within a shell -- it starts a new shell.
That new shell is no longer running your script, and the old shell that is running the script waits for the new one to exit before it continues.
The sudo su command will start a new shell. The old shell waits for the new one to exit and then continues executing the rest of the code.
Your script is running in the 'old' shell, which means these commands:
cd /home/drupal
touch test.txt
are still executed as root and thus the owner of these files is root as well.
You can modify your script to this:
#! /bin/bash
sudo useradd -m drupal
sudo -u drupal bash -c 'cd ~/; touch text2.txt'
and it should work.
The -u flag executes the command as the specified user, in this case 'drupal'.
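An alternative sketch, since the startup script already runs as root anyway, is to create the file as root and simply hand ownership to the new user afterwards (same end result, just a different route than the answer above):
#! /bin/bash
useradd -m drupal
touch /home/drupal/test.txt
chown drupal:drupal /home/drupal/test.txt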
I wrote some stuff underneath, but it looks like this should work: how to run a script as another user without a password.
The other option would be to SSH into your own machine as the other user; you can use sshpass to send the password, or set up your own public key.
When I test a similar script:
su [my username]
touch test.txt
It actually logs in as me, and doesn't finish until I press Ctrl-D.
Further testing reveals that the only way to own the file is if I invoke the script from the shell, i.e.:
su me
touch test.txt
./test2.sh
test2.sh:
touch test2.txt
gives both files to root, even if I own both scripts.
It follows that everything you do is yours; you can't create something on behalf of someone else.

CloudFormation Composer install

So I am using CloudFormation for my AWS setup, and I am trying to run Composer, but no matter what command I put in my UserData section I always get an error. This is my error:
php /usr/local/bin/composer.phar create-project composer/satis /var/www/satis --stability=dev
[RuntimeException]
The HOME or COMPOSER_HOME environment variable must be set for composer to run correctly
This is my code within the userdata section:
"#composer\n",
"curl -sS https://getcomposer.org/installer | php\n",
"mv composer.phar /usr/local/bin/composer.phar\n",
"#satis\n",
"php /usr/local/bin/composer.phar create-project composer/satis /var/www/satis --stability=dev\n",
Does anyone have any ideas why this might not work and what I should be doing?
Composer is looking for the location of the .composer directory. Export the HOME or COMPOSER_HOME env variable, e.g. HOME=/root php /usr/local/bin/composer.phar create-project composer/satis /var/www/satis --stability=dev, and it will work fine.
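Applied to the UserData snippet from the question, only the satis line needs to change (this is the question's own line with the suggested HOME prefix added):
"HOME=/root php /usr/local/bin/composer.phar create-project composer/satis /var/www/satis --stability=dev\n",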
I had a similar issue with the Amazon Linux AMI 2: the log was showing All settings correct for using Composer. The HOME or COMPOSER_HOME environment variable must be set for composer to run correctly, but Composer was not installed at all. Below is the way to fix it. It might be helpful to somebody, rather than wasting 2-3 hours!
sudo curl -sS https://getcomposer.org/installer | sudo php
sudo mv composer.phar /usr/bin/composer
sudo chmod +x /usr/bin/composer
export COMPOSER_HOME=/root
Agree with Ntwobike's answer.
When launching AWS EC2 instances, I was installing Composer by running an Ansible playbook during the user data script run. (The user data script is called by cloud-init during the instance build process.)
For some reason at this point in the build the $HOME environment variable is not set. So I needed to add 'export HOME=/root' - e.g.
# These need to be set to enable the composer installer to run. It is probably due to an issue
# with the $HOME variable not yet being set at this point in the instance creation.
export HOME=/root
ansible-playbook --extra-vars "target=localhost" playbooks/debian-9/drush.yml