Pass my local environment variable values to my EC2 user data - amazon-web-services

As simple as it sounds, I would like to pass my local environment variable's value into my EC2 user data script. So for instance, I run this locally:
export PASSWORD=mypassword
printenv PASSWORD
mypassword
Then once I SSH to my EC2 instance and run
printenv PASSWORD
I should see the same value, mypassword. I haven't found a way to inject the right code into my user data script. Please help if you can.
This is my user data. I am basically installing some packages and then authenticating to my vault with the password value I would like to pass from my laptop to my EC2 instance. I just don't want to hardcode mypassword in my user data script. (Not even sure if it's doable?)
# User Data for ASG
user_data = <<EOF
#!/usr/bin/env bash
set -x -v
exec > >(tee -i user-data.log 2>/dev/console) 2>&1
# Install latest AWS cli
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --update
# Install VAULT cli
sudo wget https://releases.hashicorp.com/vault/1.8.2/vault_1.8.2_linux_amd64.zip
sudo unzip vault_1.8.2_linux_amd64.zip
sudo mv vault /usr/local/bin/vault
sudo chmod +x /usr/local/bin/vault
vault -v
# Vault env var
export VAULT_ADDR=https://myvault.test
export VAULT_SKIP_VERIFY=true
export VAULT_NAMESPACE=test
# Vault login (to authenticate to vault, the local value of $PASSWORD must be exported)
export VAULT_PASSWORD=$PASSWORD
vault login -namespace=test -method=userpass username=myuser password=$VAULT_PASSWORD
EOF

user_data runs under the root user and has its own shell environment. Thus when you ssh to the instance as ec2-user or ubuntu, you have your own, different local environment. This is the reason why your export does not work.
To rectify the issue, your user_data must modify the .bashrc (or equivalent, depending on the OS) of your ssh user (often ec2-user or ubuntu). Only then will your exports take effect.
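For example, a user_data snippet along these lines makes the variable visible to ec2-user's interactive shells (a minimal sketch, assuming the ssh user is ec2-user; the actual value still has to reach the script somehow, e.g. via the Terraform variable shown in the next answer):
# Running as root inside user_data: append the export to ec2-user's .bashrc
# so every subsequent ssh login picks it up (user name and value are assumptions).
echo 'export PASSWORD=mypassword' >> /home/ec2-user/.bashrc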

I was able to make it work by setting all my sensitive values locally as TF_VAR_ environment variables and defining matching variables in my variables.tf. Then in my user data field I just exported the value of the Terraform variable. See below:
Local setup
export TF_VAR_password=password
TF code --> variables.tf
variable "password" {
description = "my password"
type = string
default = ""
}
Now in my app user data script
export MYPASSWORD=${var.password}
VOILA :)
Here is the website as a point of reference --> https://learn.hashicorp.com/tutorials/terraform/sensitive-variables?in=terraform/0-14 (look for "Set values with environment variables").
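Tying the pieces together, the relevant lines of the user_data heredoc might look like this (a sketch; the vault login line mirrors the original question, and MYPASSWORD is just the name used above):
# Terraform interpolates ${var.password} when it renders user_data, and the value
# itself comes from the locally exported TF_VAR_password, so nothing is hardcoded.
export MYPASSWORD='${var.password}'
vault login -namespace=test -method=userpass username=myuser password="$MYPASSWORD"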

Related

How to set environmental variables on aws ec2 mern app deployment

I want to deploy my MERN app on an AWS EC2 instance. I cloned my folder onto the instance, but I don't know how to set the env variables. I tried to create an .env file and store my variables there, but that didn't work either. So is there any other method to do the same, or should I use another AWS service to store my env variables?
You could use Parameter Store, or you could add the environment variables to the instance's /home/ec2-user/.bashrc.
You could also do this using User Data when you launch the instance.
[ec2-user@ip-172-31-42-105 ~]$ cat /home/ec2-user/.bashrc
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=
# User specific aliases and functions
export ENV1=TestEnv1
export ENV2=TestEnv2
You need to execute the following to set the variables:
source /home/ec2-user/.bashrc
[ec2-user@ip-172-31-42-105 ~]$ echo $ENV1
TestEnv1
[ec2-user@ip-172-31-42-105 ~]$ echo $ENV2
TestEnv2
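If you go with the Parameter Store option instead, user data can fetch the value at boot and persist it in the same way (a sketch, assuming a parameter named /myapp/ENV1 exists and the instance profile allows ssm:GetParameter):
# Read the value from SSM Parameter Store and append it to ec2-user's .bashrc.
ENV1=$(aws ssm get-parameter --name /myapp/ENV1 --with-decryption \
  --query 'Parameter.Value' --output text)
echo "export ENV1=$ENV1" >> /home/ec2-user/.bashrc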

$HOME is not set for ec2-user during commands in User Data run

I put the following commands in the user data of an EC2 instance running the RedHat 8 AMI (ami-0fc841be1f929d7d1). When they run, mkdir tries to create .kube at the root of the filesystem, which looks to me like $HOME is not set at that time.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
The following are the logs from /var/log/user-data.log:
+ mkdir -p /.kube
+ sudo cp -i /etc/kubernetes/admin.conf /.kube/config
++ id -u
++ id -g
+ sudo chown 0:0 /.kube/config
When I SSH to the instance, the $HOME is set correctly to /home/ec2-user.
Could you advise what I did wrong here?
Thank you
When your EC2 server is provisioned, the user data script runs as user root, so $HOME is empty. What you could do, is to define the HOME env var at the top of your user data script, like this (insert your user's home directory here):
export HOME=/home/ubuntu
I've tried it and it works (I install NVM, SDKMAN, sbt, Java, Git, Docker; all work fine). You might need to do some chown at the end of your user data script to change the owner of some files back to your user. For example, if your user data sets up some files in your home directory:
chown ubuntu ~/.foo/bar.properties
$HOME refers to the home directory of the logged-in user. User data runs under the root user in a non-login shell, where $HOME is not set, so $HOME/.kube expands to /.kube. That is the result you are seeing.
Instead of the variable $HOME, your script should refer to the home directory as a literal path (for example /home/ec2-user).
See https://superuser.com/questions/271925/where-is-the-home-environment-variable-set
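For instance, the user data could spell out the target user's home directory and fix ownership afterwards (a minimal sketch; ec2-user is an assumption for this RedHat AMI, and the -i flag is dropped because user data is non-interactive):
# Use an explicit path instead of $HOME, then hand the files to the login user.
mkdir -p /home/ec2-user/.kube
cp /etc/kubernetes/admin.conf /home/ec2-user/.kube/config
chown -R ec2-user:ec2-user /home/ec2-user/.kube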
You are running under sudo, which is known to reset environment variables that are established by your user's shell (such as $HOME), as well as shell context such as the ssh-agent.
Generally you can ensure $HOME persists when you run sudo by adding it to the env_keep settings in your sudoers configuration, i.e. by adding the line below within /etc/sudoers. Be careful about modifying this file.
Defaults env_keep=HOME
Otherwise, if you don't want to make the above change, ensure you have the permissions to carry this out without running sudo, or pass an absolute path in.
I would generally steer clear of user data for important configuration anyway; instead, build a pre-baked AMI ahead of time with the configuration the way you want it, using a configuration tool such as Ansible, Chef, or Puppet.
Alternatively, since this is within the User Data anyway (and it is unlikely you have already adjusted the sudoers configuration at that point), you should instead just specify the path explicitly.
I faced the same issue. Adding this to the user data script helped resolve it: with this change to the profile, subshells will have HOME set.
cat > /etc/profile.d/set_home.sh << 'EOF'
export HOME=~
EOF
chmod a+x /etc/profile.d/set_home.sh

AWS EC2 User Data script isn't working as expected

When I SSH into my EC2 instance and run the following commands, my SpringServer.jar file executes and I can access my Spring application by going to myawsaccount:8080/times. When I specify the same commands in User Data, I can't access my application at myawsaccount:8080/times and I'm not sure why. Any help would be appreciated.
Commands
#!/bin/bash --> only in the user data script
sudo su
wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u141-b15/336fa29ff2bb4ef291e347e091f7f4a7/jdk-8u141-linux-x64.rpm
yum install -y jdk-8u141-linux-x64.rpm
wget https://myawsaccount.s3-eu-west-1.amazonaws.com/SpringServer-1-0.0.1-SNAPSHOT.jar
java -jar SpringServer-1-0.0.1-SNAPSHOT.jar
To troubleshoot UserData issues, the best thing to do is to log in to the instance and inspect one of the UserData log files.
Most importantly, /var/log/cloud-init-output.log:
The cloud-init output log file (/var/log/cloud-init-output.log) captures console output so it is easy to debug your scripts following a launch if the instance does not behave the way you intended.
Also, your UserData script will be located in /var/lib/cloud/instances/<instance-id>/. Thus, once you are on the instance, you can manually try to run it and fix/debug it there.
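For example, once on the instance, something like this is usually enough to see what happened and to re-run the script with tracing (the scripts/part-001 path is the usual cloud-init convention and an assumption here):
sudo tail -n 50 /var/log/cloud-init-output.log          # console output from the user data run
sudo ls /var/lib/cloud/instance/scripts/                # /var/lib/cloud/instance links to the current <instance-id> directory
sudo bash -x /var/lib/cloud/instance/scripts/part-001   # re-run the rendered script with tracing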
Setting environment variables using export doesn't work in user data as it only sets them for the current shell session. You can fix this by copying them to your profile configuration:
#!/bin/bash
...
echo 'export JAVA_HOME=/opt/jdk1.8.0_141' >> /etc/profile
echo 'export JRE_HOME=/opt/jdk1.8.0_141/jre' >> /etc/profile
echo 'export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin' >> /etc/profile
...
This way, the environment variables will be available in every session.

Creating user in ubuntu from AWS

Using AWS (Amazon Web Services) I have created an Ubuntu 16.10 instance and I am able to login using a pem file like this:
ssh -i key.pem ubuntu@52.16.73.14.54
After I am logged in, I can see that I am able to execute:
sudo su
(with no password); however, the file /etc/sudoers does NOT contain any reference to the current user: ubuntu.
How can I create another user with exactly the same behavior (without touching the sudoers file) from terminal in a NON interactive way?
I tried:
sudo useradd -m -c "adding a test user" -G sudo,adm -s /bin/bash testuser
But after I become "testuser" if I invoke:
sudo su
I have to provide a password, which is exactly what I want to avoid.
You can't do this without touching the sudo configuration, because the ubuntu user is given passwordless access specifically.
$ for group in `groups ubuntu`; do sudo grep -r ^[[:space:]]*[^#]*$group[[:space:]] /etc/sudoers* ; done
/etc/sudoers.d/90-cloud-init-users:ubuntu ALL=(ALL) NOPASSWD:ALL
/etc/sudoers.d/90-cloud-init-users:ubuntu ALL=(ALL) NOPASSWD:ALL
/etc/sudoers:%sudo ALL=(ALL:ALL) ALL
But what you can do is create a new sudoers file without touching any existing files. sudo is typically configured these days to read all the configuration files in a directory, usually /etc/sudoers.d/, precisely so that one failing config doesn't affect the rest of sudo.
In your case, you might want to give an admin group passwordless sudo access rather than your individual user. Then you can grant access to other users in the future without changing the sudo config.
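A minimal sketch of that drop-in approach (the group name and file name are assumptions; check the syntax with visudo before relying on it):
# Create a group with passwordless sudo via its own file under /etc/sudoers.d/,
# then add the new user to that group.
sudo groupadd -f admins
sudo useradd -m -c "adding a test user" -G admins,adm -s /bin/bash testuser
echo '%admins ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/91-admins-nopasswd
sudo chmod 440 /etc/sudoers.d/91-admins-nopasswd
sudo visudo -c -f /etc/sudoers.d/91-admins-nopasswd   # syntax-check the new file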

cloudformation composer install

So I am using CloudFormation for my AWS setup. I am trying to run Composer, but for some reason no matter what command I put in my userdata section I always get an error. This is the error:
php /usr/local/bin/composer.phar create-project composer/satis /var/www/satis --stability=dev
[RuntimeException]
The HOME or COMPOSER_HOME environment variable must be set for composer to run correctly
This is my code within the userdata section:
"#composer\n",
"curl -sS https://getcomposer.org/installer | php\n",
"mv composer.phar /usr/local/bin/composer.phar\n",
"#satis\n",
"php /usr/local/bin/composer.phar create-project composer/satis /var/www/satis --stability=dev\n",
Does anyone have any ideas why this might not work and what I should be doing?
Composer is looking for the location of the .composer directory. Export the HOME or COMPOSER_HOME env variable, e.g. HOME=/root php /usr/local/bin/composer.phar create-project composer/satis /var/www/satis --stability=dev, and it will work fine then.
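In the UserData snippet above, that amounts to changing only the last line (a sketch):
"# satis: prefix the command with HOME so Composer can find its home directory\n",
"HOME=/root php /usr/local/bin/composer.phar create-project composer/satis /var/www/satis --stability=dev\n",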
I had a similar issue with the Amazon Linux 2 AMI. The log was showing All settings correct for using Composer. The HOME or COMPOSER_HOME environment variable must be set for composer to run correctly, but it was not installed at all. Below is the way to fix it. Might be helpful to somebody rather than wasting 2-3 hours!
sudo curl -sS https://getcomposer.org/installer | sudo php
mv composer.phar /usr/bin/composer
chmod +x /usr/bin/composer
export COMPOSER_HOME=/root
Agree with Ntwobike's answer.
When launching AWS EC2 instances, I was installing Composer by running an Ansible playbook during the user data script run. (The user data script is called by cloud-init during the instance build process.)
For some reason at this point in the build the $HOME environment variable is not set. So I needed to add 'export HOME=/root' - e.g.
# These need to be set to enable the composer installer to run. It is probably due to an issue
# with the $HOME variable not yet being set at this point in the instance creation.
export HOME=/root
ansible-playbook --extra-vars "target=localhost" playbooks/debian-9/drush.yml