How to set an environment variable in Amazon EC2

I created a tag on the AWS console for one of my EC2 instances.
However, when I look on the server, no such environment variable is set.
The same thing works with Elastic Beanstalk: env shows the tags I created in the console.
$ env
[...]
DB_PORT=5432
How can I set environment variables in Amazon EC2?

You can retrieve this information from the instance metadata and then run your own commands to set the environment variables.
First, get the instance ID from the instance metadata (see here for details: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html#instancedata-data-retrieval):
curl http://169.254.169.254/latest/meta-data/instance-id
Then call describe-tags using the pre-installed AWS CLI (or install it on your AMI). Note that the filter for a tag key is named key:
aws ec2 describe-tags --filters "Name=resource-id,Values=i-5f4e3d2a" "Name=key,Values=DB_PORT"
Then set the environment variable with the shell's export command:
export DB_PORT=/what/you/got/from/the/previous/call
You can run all that in your user-data script. See here for details: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
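Putting these steps together, here is a minimal sketch (untested); it assumes the instance profile allows ec2:DescribeTags, and the tag key DB_PORT and the region are examples:
#!/bin/bash
# Look up this instance's ID from the instance metadata service.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
# Read the value of the DB_PORT tag (tag key and region are examples).
DB_PORT=$(aws ec2 describe-tags --region us-east-1 \
  --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=DB_PORT" \
  --query 'Tags[0].Value' --output text)
export DB_PORT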

Lately, it seems AWS Parameter Store is a better solution.
There is now even a Secrets Manager, which automatically manages sensitive configuration such as database keys.
See this script using SSM Parameter Store, based on the previous solutions by Guy and PJ Bergeron:
https://github.com/lezavala/ec2-ssm-env
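For example, a minimal sketch of reading one value from Parameter Store (the parameter name /myapp/DB_PORT is hypothetical, and the instance role must allow ssm:GetParameter):
export DB_PORT=$(aws ssm get-parameter --name /myapp/DB_PORT --with-decryption --query 'Parameter.Value' --output text)
The Secrets Manager equivalent would be aws secretsmanager get-secret-value --secret-id <name> --query SecretString --output text.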

I used a combination of the following tools:
Install jq library (sudo apt-get install -y jq)
Install the EC2 Instance Metadata Query Tool
Here's the gist of the code below in case I update it in the future: https://gist.github.com/marcellodesales/a890b8ca240403187269
######
# Author: Marcello de Sales (marcello.desales@gmail.com)
# Description: Create Environment Variables in EC2 Hosts from EC2 Host Tags
#
### Requirements:
# * Install jq library (sudo apt-get install -y jq)
# * Install the EC2 Instance Metadata Query Tool (http://aws.amazon.com/code/1825)
#
### Installation:
# * Add the Policy EC2:DescribeTags to a User
# * aws configure
# * Source it from the ~/.profile of the user that has permissions
####
# Reboot and verify the result of $(env).
# Loads the Tags from the current instance
getInstanceTags () {
  # http://aws.amazon.com/code/1825 EC2 Instance Metadata Query Tool
  INSTANCE_ID=$(./ec2-metadata | grep instance-id | awk '{print $2}')
  # Describe the tags of this instance
  aws ec2 describe-tags --region sa-east-1 --filters "Name=resource-id,Values=$INSTANCE_ID"
}
# Convert the tags to environment variables.
# Based on https://github.com/berpj/ec2-tags-env/pull/1
tags_to_env () {
  tags=$1
  for key in $(echo $tags | /usr/bin/jq -r ".[][].Key"); do
    value=$(echo $tags | /usr/bin/jq -r ".[][] | select(.Key==\"$key\") | .Value")
    key=$(echo $key | /usr/bin/tr '-' '_' | /usr/bin/tr '[:lower:]' '[:upper:]')
    echo "Exporting $key=$value"
    export "$key=$value"
  done
}
# Execute the commands
instanceTags=$(getInstanceTags)
tags_to_env "$instanceTags"
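To apply the exports at every login, the script can be sourced from the user's ~/.profile, per the installation notes above; a sketch (the script path is hypothetical):
# In ~/.profile:
source /opt/scripts/ec2-tags-to-env.sh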

If your EC2 instance runs Linux or macOS, go to your home directory and open the profile file:
vim .bash_profile
Press 'i' to enter insert mode, then add the line:
export DB_PORT="5432"
Press 'Esc', then type ':w' to save the file without exiting; to exit, type ':q'. Run source ~/.bash_profile (or log in again) so the change takes effect. To check that your environment variable is set, run:
python
>>> import os
>>> os.environ.get('DB_PORT')
'5432'

Following the instructions given by Guy, I wrote a small shell script that uses the AWS CLI and jq. It lets you import your AWS instance and AMI tags as shell environment variables.
I hope it can help a few people.
https://github.com/12moons/ec2-tags-env

Related

How to set environment variables on AWS EC2 MERN app deployment

I want to deploy my MERN app on an AWS EC2 instance. I cloned my folder onto the instance, but I don't know how to set the env variables. I tried to create an .env file and store my variables there, but that didn't work either. Is there any other method to do this, or should I use another AWS service to store my env variables?
You could use Parameter Store, or you could add the environment variables to the instance's /home/ec2-user/.bashrc.
You could also do this using User Data when you launch the instance (see the sketch after the .bashrc example below).
[ec2-user@ ~]$ cat /home/ec2-user/.bashrc
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=
# User specific aliases and functions
export ENV1=TestEnv1
export ENV2=TestEnv2
You need to execute the following to apply the variables in your current session:
source /home/ec2-user/.bashrc
[ec2-user@ip-172-31-42-105 ~]$ echo $ENV1
TestEnv1
[ec2-user@ip-172-31-42-105 ~]$ echo $ENV2
TestEnv2
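For the User Data route, a minimal sketch (untested) that appends the same exports at first boot, assuming an Amazon Linux AMI with the default ec2-user account:
#!/bin/bash
# User Data runs as root at first boot; persist the variables for ec2-user logins.
cat >> /home/ec2-user/.bashrc <<'EOF'
export ENV1=TestEnv1
export ENV2=TestEnv2
EOF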

Pass my local environment variables values to my ec2 user data

As simple as it sounds, I would like to pass my local environment variable's value into my EC2 user data script. For instance, I run this locally:
export PASSWORD=mypassword
printenv PASSWORD
mypassword
then once I ssh to my EC2 instance and run
printenv PASSWORD
I should see the same value, mypassword. I haven't found a way to inject the right code into my user data script. Please help if you can.
This is my user data. I am basically installing some packages, then authenticating to my Vault with the password value I would like to pass from my laptop to my EC2 instance. I just don't want to hardcode mypassword in my user data script (not even sure if it's doable?):
# User Data for ASG
user_data = <<EOF
#!/usr/bin/env bash
set -x -v
exec > >(tee -i user-data.log 2>/dev/console) 2>&1
# Install latest AWS cli
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --update
# Install VAULT cli
sudo wget https://releases.hashicorp.com/vault/1.8.2/vault_1.8.2_linux_amd64.zip
sudo unzip vault_1.8.2_linux_amd64.zip
sudo mv vault /usr/local/bin/vault
sudo chmod +x /usr/local/bin/vault
vault -v
# Vault env var
export VAULT_ADDR=https://myvault.test
export VAULT_SKIP_VERIFY=true
export VAULT_NAMESPACE=test
# Vault login (to authenticate to Vault, must export the local value of $PASSWORD)
export VAULT_PASSWORD=$PASSWORD
vault login -namespace=test -method=userpass username=myuser password=$VAULT_PASSWORD
user_data runs as the root user and in its own shell environment. When you later ssh to the instance as ec2-user or ubuntu, you get a different, local environment. This is why your export does not carry over.
To rectify the issue, your user_data must modify the .bashrc (or equivalent, depending on the OS) of your ssh user (often ec2-user or ubuntu). Only then will your exports take effect.
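For example, a one-line sketch inside a Terraform-templated user_data (using the var.password variable defined in the follow-up below):
# Persist the value for ssh sessions instead of only the root boot shell.
echo "export PASSWORD=${var.password}" >> /home/ec2-user/.bashrc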
I was able to make it work by setting all of my sensitive data locally as environment variables and defining them in my variables.tf. Then, in my user data field, I just exported the Terraform variable. See below:
Local setup
export TF_VAR_password=password
TF code --> variables.tf
variable "password" {
  description = "my password"
  type        = string
  default     = ""
}
Now in my app user data script
export MYPASSWORD=${var.password}
VOILA :)
Here is the website as a point of reference --> https://learn.hashicorp.com/tutorials/terraform/sensitive-variables?in=terraform/0-14 (look for "Set values with environment variables").

How can I get the region and DNS to appear in my HTML document?

I have a YAML script that sets up an Ubuntu server. Under its UserData portion, I have an HTML doc that contains information. I want the AWS region and the public DNS name of the server to be displayed on the web page once it is created.
The variables below (EC2_AVAIL_ZONE, EC2_REGION) are supposed to find the EC2 Availability Zone and parse it to find the specific region; EC2_DNS holds the public DNS. Initially I tried the sed command to replace the placeholders (%AWS_REGION% and %DNS_HOSTNAME%) on the HTML page with the variables. When I checked the page after running the script, nothing was replaced (i.e. "AWS region: %AWS_REGION%" was displayed).
Then I tried the code below: I replaced %AWS_REGION% with $EC2_REGION in hopes that the variable would just get substituted in, but when I ran the script, it was blank (i.e. after "AWS region:" there was nothing, where last time %AWS_REGION% was there).
UserData:
  'Fn::Base64': |
    #!/bin/bash -x
    # set timezone
    timedatectl set-timezone America/New_York
    # get region
    EC2_AVAIL_ZONE=curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone
    EC2_REGION="`echo \"$EC2_AVAIL_ZONE\" | sed 's/[a-z]$//'`"
    # get DNS
    EC2_DNS=`curl http://169.254.169.254/latest/meta-data/public-hostname`
    # install and setup apache
    apt-get update
    apt-get install -y nginx
    cd /var/www/html
    echo "<title>Jonah Ryder</title> <h1>Jonah Ryder</h1> <p>AWS region: $EC2_REGION</p> <p>Public hostname: %DNS_HOSTNAME%</p>" > index.html
    sed 's/%AWS_REGION%/EC2_REGION/g' index.html
    sed 's/%DNS_HOSTNAME%/EC2_DNS/g' index.html
    service nginx start
I want the HTML page to take the variables and display them. I don't know where my mistake is.
To use the EC2_REGION variable after you have set its value, you need to write $EC2_REGION, not EC2_REGION; this is how you read shell variables in general. For the sed approach, that also means double quotes (so the variable expands) and the -i flag (so the file is edited in place), e.g. sed -i "s/%AWS_REGION%/$EC2_REGION/g" index.html.
Note also that the EC2_AVAIL_ZONE assignment needs command substitution around curl, and it's worth echoing $EC2_AVAIL_ZONE and $EC2_REGION to stdout after you set them so that you can debug this later, if needed, using the EC2 instance console log. For example:
# Command substitution is required; a bare curl here leaves the variable empty.
EC2_AVAIL_ZONE=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
EC2_REGION=$(echo "$EC2_AVAIL_ZONE" | sed 's/[a-z]$//')
EC2_DNS=$(curl -s http://169.254.169.254/latest/meta-data/public-hostname)
echo "AZ: $EC2_AVAIL_ZONE"
echo "Region: $EC2_REGION"
echo "DNS: $EC2_DNS"

How to migrate elasticsearch data to AWS elasticsearch domain?

I have Elasticsearch 5.5 running on a server with some data indexed in it. I want to migrate this ES data to an AWS Elasticsearch cluster. How can I perform this migration? I learned that one way is by creating a snapshot of the ES cluster, but I am not able to find any proper documentation for this.
The best way to migrate is by using snapshots. You will need to snapshot your data to Amazon S3 and then perform a restore from there. Documentation for snapshots to S3 can be found here. Alternatively, you can also re-index your data, though this is a longer process and there are limitations depending on the version of AWS ES.
I also recommend looking at Elastic Cloud, the official hosted offering on AWS that includes the additional X-Pack monitoring, management, and security features. The migration guide for moving to Elastic Cloud also goes over snapshots and re-indexing.
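For reference, a minimal sketch (untested) of the snapshot flow; the endpoint, repository, bucket, and role names are placeholders, and on AWS ES the repository registration must be sent as a signed request from an IAM role with access to the bucket:
# Register an S3 snapshot repository.
curl -X PUT "https://my-es-domain/_snapshot/my-s3-repo" \
  -H 'Content-Type: application/json' \
  -d '{"type": "s3", "settings": {"bucket": "my-snapshot-bucket", "region": "us-east-1", "role_arn": "arn:aws:iam::123456789012:role/es-snapshot-role"}}'
# Take a snapshot of all indices.
curl -X PUT "https://my-es-domain/_snapshot/my-s3-repo/snapshot-1"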
I recently created a shell script for this -
Github - https://github.com/vivekyad4v/aws-elasticsearch-domain-migration/blob/master/migrate.sh
#!/bin/bash
#### Make sure you have Docker engine installed on the host ####
###### TODO - Support parameters ######
export AWS_ACCESS_KEY_ID=xxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=xxxxxxxxx
export AWS_DEFAULT_REGION=ap-south-1
export AWS_DEFAULT_OUTPUT=json
export S3_BUCKET_NAME=my-es-migration-bucket
export DATE=$(date +%d-%b-%H_%M)
old_instance="https://vpc-my-es-ykp2tlrxonk23dblqkseidmllu.ap-southeast-1.es.amazonaws.com"
new_instance="https://vpc-my-es-mg5td7bqwp4zuiddwgx2n474sm.ap-south-1.es.amazonaws.com"
delete=(.kibana)
es_indexes=$(curl -s "${old_instance}/_cat/indices" | awk '{ print $3 }')
es_indexes=${es_indexes//$delete/}
es_indexes=$(echo $es_indexes|tr -d '\n')
echo "index to be copied are - $es_indexes"
for index in $es_indexes; do
  # Export ES data to S3 (using s3urls)
  docker run --rm -ti taskrabbit/elasticsearch-dump \
    --s3AccessKeyId "${AWS_ACCESS_KEY_ID}" \
    --s3SecretAccessKey "${AWS_SECRET_ACCESS_KEY}" \
    --input="${old_instance}/${index}" \
    --output "s3://${S3_BUCKET_NAME}/${index}-${DATE}.json"
  # Import data from S3 into ES (using s3urls)
  docker run --rm -ti taskrabbit/elasticsearch-dump \
    --s3AccessKeyId "${AWS_ACCESS_KEY_ID}" \
    --s3SecretAccessKey "${AWS_SECRET_ACCESS_KEY}" \
    --input "s3://${S3_BUCKET_NAME}/${index}-${DATE}.json" \
    --output="${new_instance}/${index}"
  new_indexes=$(curl -s "${new_instance}/_cat/indices" | awk '{ print $3 }')
  echo $new_indexes
  curl -s "${new_instance}/_cat/indices"
done

Decrypted vars when installing a new AWS instance via a user-data script

I have Ansible playbooks ready; they include several encrypted vars. In the normal process, I can feed a vault password file to decrypt them with --vault-password-file ~/.vault_pass.txt and deploy the change to a remote EC2 instance, so I needn't expose the password file.
But my request is different here: I need to include the ansible-playbook run in the user-data script when creating a new EC2 instance. Ideally, all settings should be ready automatically once the instance is running.
I deploy the instances with Terraform by below simple user-data script:
#!/usr/bin/bash
yum -y update
/usr/local/bin/aws s3 cp s3://<BUCKET>/ansible.tar.gz ansible.tar.gz
gtar zxvf ansible.tar.gz
cd ansible
ansible-playbook -i inventory/ec2.py -c local ROLE.yml
So if there are encrypted vars in the playbook, I would have to ship my password file in the user-data script as well.
Anything I can do to avoid that? Would Ansible Tower help with this?
I did test with CredStash, but it's still a chicken-and-egg issue.
If you want your instances to configure themselves, they are going to need either all the credentials or another way to get the credentials, ideally with some form of one-time pass.
The best I can think of off the top of my head is to use Hashicorp's Vault to store the credentials (potentially all of your secrets, or maybe just the Ansible Vault password that can then be used to un-vault your Ansible variables) and have your deploy process create a one-time-use token that is injected into the user-data script via Terraform's templating.
To do this you'll probably want to wrap your Terraform apply command with some form of helper script that might look like this (untested):
#!/bin/bash
vault_host="10.0.0.3"
vault_port="8200"
response=`curl \
  -X POST \
  -H "X-Vault-Token: $VAULT_TOKEN" \
  -d '{"num_uses": 1}' \
  http://${vault_host}:${vault_port}/v1/auth/token/create/ansible_vault_read`
vault_token=`echo ${response} | jq '.auth.client_token' --raw-output`
terraform apply \
  -var "vault_host=${vault_host}" \
  -var "vault_port=${vault_port}" \
  -var "vault_token=${vault_token}"
And then your user data script will want to be templated in Terraform with something like this (also untested):
template.tf:
resource "template_file" "init" {
template = "${file("${path.module}/init.tpl")}"
vars {
vault_host = "${var.vault_host}"
vault_port = "${var.vault_port}"
vault_token = "${var.vault_token}"
}
}
init.tpl:
#!/usr/bin/bash
yum -y update
response=`curl \
  -H "X-Vault-Token: ${vault_token}" \
  -X GET \
  http://${vault_host}:${vault_port}/v1/secret/ansible_vault_pass`
ansible_vault_password=`echo ${response} | jq '.data.ansible_vault_pass' --raw-output`
echo ${ansible_vault_password} > ~/.vault_pass.txt
/usr/local/bin/aws s3 cp s3://<BUCKET>/ansible.tar.gz ansible.tar.gz
gtar zxvf ansible.tar.gz
cd ansible
ansible-playbook -i inventory/ec2.py -c local ROLE.yml --vault-password-file ~/.vault_pass.txt
Alternatively, you could simply have the instances call something such as Ansible Tower to trigger the playbook run against them. This allows you to keep the secrets on the central box doing the configuration, rather than having to distribute them to every instance you deploy.
With Ansible Tower this is done using callbacks. You will need to set up job templates and then have your user data script curl Tower to trigger the configuration run. You could change your user data script to something like this instead:
template.tf:
resource "template_file" "init" {
template = "${file("${path.module}/init.tpl")}"
vars {
ansible_tower_host = "${var.ansible_tower_host}"
ansible_host_config_key = "${var.ansible_host_config_key}"
}
}
init.tpl:
#!/usr/bin/bash
curl \
  -X POST \
  --data "host_config_key=${ansible_host_config_key}" \
  http://${ansible_tower_host}/api/v1/job_templates/1/callback/
The host_config_key may seem to be a secret at first glance, but it's a shared key that can be used by multiple hosts to access a job template, and Ansible Tower will still only run the job if the host is defined in a static inventory for the job template or, if you are using dynamic inventories, if the host is found in that lookup.