How to set environment variables on AWS EC2 MERN app deployment

I want to deploy my MERN app on an AWS EC2 instance. I cloned my project folder onto the instance, but I don't know how to set the environment variables. I tried creating a .env file to store my variables, but that didn't work either. Is there another way to do this, or should I use some other AWS service to store my env variables?

You could use SSM Parameter Store, or you could add the environment variables to the instance's /home/ec2-user/.bashrc.
You could also do this using User Data when you launch the instance.
[ec2-user@ip-172-31-42-105 ~]$ cat /home/ec2-user/.bashrc
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
	. /etc/bashrc
fi
# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=
# User specific aliases and functions
export ENV1=TestEnv1
export ENV2=TestEnv2
You need to execute the following to apply the variables in your current shell:
source /home/ec2-user/.bashrc
[ec2-user@ip-172-31-42-105 ~]$ echo $ENV1
TestEnv1
[ec2-user@ip-172-31-42-105 ~]$ echo $ENV2
TestEnv2
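If you go the Parameter Store route instead, a minimal sketch (the parameter path, the .env location, and the ssm:GetParametersByPath permission are assumptions, not from the question):
# Hedged sketch: fetch app secrets from SSM Parameter Store into the app's .env file.
aws ssm get-parameters-by-path --path /myapp --with-decryption \
  --query 'Parameters[*].[Name,Value]' --output text |
while read -r name value; do
  # strip the /myapp/ prefix so plain NAME=value lines land in the .env file
  echo "${name##*/}=$value" >> /home/ec2-user/app/.env
done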

Related

How to pull in additional private repositories in Amplify

Trying to use AWS Amplify to deploy a multi-repo Dendron wiki which has a mix of public and private GitHub repositories.
Amplify can be associated with a single repo but there doesn't seem to be a built-in way to pull in additional private repositories.
Create a custom deploy key for the private repo in GitHub
generate the key
ssh-keygen -f deploy_key -N ""
Encode the deploy key as a base64-encoded env variable for Amplify
cat deploy_key | base64 | tr -d '\n'
add this as a hosting environment variable (e.g. DEPLOY_KEY)
Modify the amplify.yml file to make use of the deploy key
There are two key steps:
adding the deploy key to ssh-agent
WARNING: this implementation will print the $DEPLOY_KEY to stdout
disabling StrictHostKeyChecking
NOTE: Amplify does not have a $HOME/.ssh folder by default, so you'll need to create one as part of the deployment process
Relevant excerpt below:
- ...
- eval "$(ssh-agent -s)"
- ssh-add <(echo "$DEPLOY_KEY" | base64 -d)
- echo "disable strict host key check"
- mkdir ~/.ssh
- touch ~/.ssh/config
- 'echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
- ...
full build file here
Now you should be able to use git to clone the private repo.
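For example, in a later build step (the repo URL below is a placeholder, not from the original):
- git clone git@github.com:your-org/your-private-repo.git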
For a more detailed writeup as well as alternatives and gotchas, see here
For GitLab repositories:
Create a deploy token in GitLab
Set env variables (e.g. DEPLOY_TOKEN_USERNAME, DEPLOY_TOKEN_PASSWORD) in the Amplify panel
Add this line to the amplify config
- git config --global url."https://$DEPLOY_TOKEN_USERNAME:$DEPLOY_TOKEN_PASSWORD@gitlab.com/.../repo.git".insteadOf "https://gitlab.com/.../repo.git"
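Roughly, in the amplify.yml build commands this line goes before the clone, which can then use the plain URL (a sketch; the elided repo paths are kept from the original):
- ...
- git config --global url."https://$DEPLOY_TOKEN_USERNAME:$DEPLOY_TOKEN_PASSWORD@gitlab.com/.../repo.git".insteadOf "https://gitlab.com/.../repo.git"
- git clone https://gitlab.com/.../repo.git
- ...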

Pass my local environment variable values to my EC2 user data

As simple as it sounds, I would like to pass my local environment variable value inside my ec2 user data script. So for instance I run this locally:
export PASSWORD=mypassword
printenv PASSWORD
mypassword
then once I ssh to my ec2 and run
printenv PASSWORD
I should see the same value mypassword. I haven't found a way to inject the right code into my user data script. Please help if you can.
This is my user data: I am basically installing some packages, then authenticating to my Vault with the password value I would like to pass from my laptop to my EC2 instance. I just don't want to hardcode mypassword in my user data script (not even sure if it's doable?).
# User Data for ASG
user_data = <<EOF
#!/usr/bin/env bash
set -x -v
exec > >(tee -i user-data.log 2>/dev/console) 2>&1
# Install latest AWS cli
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --update
# Install VAULT cli
sudo wget https://releases.hashicorp.com/vault/1.8.2/vault_1.8.2_linux_amd64.zip
sudo unzip vault_1.8.2_linux_amd64.zip
sudo mv vault /usr/local/bin/vault
sudo chmod +x /usr/local/bin/vault
vault -v
# Vault env var
export VAULT_ADDR=https://myvault.test
export VAULT_SKIP_VERIFY=true
export VAULT_NAMESPACE=test
# Vault login (to authenticate to Vault we must export the local value of $PASSWORD)
export VAULT_PASSWORD=$PASSWORD
vault login -namespace=test -method=userpass username=myuser password=$VAULT_PASSWORD
EOF
user_data runs under the root user, and it has its own shell environment. Thus, when you ssh to the instance as ec2-user or ubuntu, you have your own, different local environment. This is the reason why your export does not work.
To rectify the issue, your user_data must modify the .bashrc (or equivalent, depending on the OS) of your ssh user (often ec2-user or ubuntu). Only then will your exports take effect, as in the sketch below.
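A minimal sketch of that approach (assuming Amazon Linux, where the login user is ec2-user):
#!/usr/bin/env bash
# user_data runs as root, so append the export to the login user's .bashrc;
# it takes effect on the next ssh login (or after source ~/.bashrc)
echo 'export PASSWORD=mypassword' >> /home/ec2-user/.bashrc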
I was able to make it work by setting all my sensitive values locally as environment variables and defining them in my variables.tf. Then in my user data field I just referenced the Terraform variable. See below:
Local setup
export TF_VAR_password=password
TF code --> variables.tf
variable "password" {
description = "my password"
type = string
default = ""
}
Now in my app user data script
export MYPASSWORD=${var.password}
VOILA :)
Here is the website as a point of reference --> https://learn.hashicorp.com/tutorials/terraform/sensitive-variables?in=terraform/0-14 (look for "Set values with environment variables").
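In other words, Terraform picks up any TF_VAR_<name> variable from the calling shell, so the secret never lands in the code; a quick usage sketch:
export TF_VAR_password=mypassword
terraform apply    # var.password now holds "mypassword" inside the user_data template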

Environment variables with AWS SSM Run Command

I am using AWS SSM Run Command with the AWS-RunShellScript document to run a script on an Amazon Linux 1 instance. Part of the script includes using an environment variable. When I run the script myself, everything is fine, but when I run the script with SSM, it can't see the environment variable.
This variable needs to be passed to a Python script. I had originally been trying os.environ['VARIABLE'] to no effect.
I know that AWS SSM uses root privileges, so I have put a line exporting the variable in the root ~/.bashrc file, yet it still cannot see the variable. The root user can see it when I run the script myself.
Is it not possible for AWS SSM to use environment variables, or am I not exporting it correctly? If it is not possible, I'll try using AWS KMS instead to store my variable.
~/.bashrc
export VARIABLE="VALUE"
script.sh
"$VARIABLE"
Security is important, hence why I don't want to just store the variable in the script.
SSM does not open an actual SSH session, so passing environment variables won't work. It's essentially a daemon running on the box that takes your requests and processes them. It's a very basic product: it doesn't support any of the standard features that come with SSH, such as SCP, port forwarding, tunneling, passing of env variables, etc. An alternative way of passing a value to a script is to store it in AWS Systems Manager Parameter Store and have your script pull the variable from the store.
You'll need to update your instance role permissions to include ssm:GetParameters so the script you run can access the stored value.
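For example, at the top of the script you run via SSM (the parameter name is an assumption):
# Hedged sketch: pull the value from Parameter Store instead of the environment
VARIABLE=$(aws ssm get-parameter --name /myapp/VARIABLE --with-decryption \
  --query 'Parameter.Value' --output text)
export VARIABLE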
My solution to this problem:
set -o allexport; source /etc/environment; set +o allexport
-o allexport enables all variables in /etc/environment to be exported. +o allexport disables this feature.
For more information see the Set builtin documentation
I have tested this solution by using the AWS CLI command aws ssm send-command:
"commands": [
"set -o allexport; source /etc/environment; set +o allexport",
"echo $TEST_VAR > /home/ec2-user/app.log"
]
I am running a bash script in my SSM command document, so I just source the profile/script to have the env variables ready for the subsequent commands. For example:
"runCommand": [
"#!/bin/bash",
". /tmp/setEnv.sh",
"echo \"myVar: $myVar, myVar2: $myVar2\""
]
You can refer to Can a shell script set environment variables of the calling shell? for sourcing your env variables. For Python, you will have to parse your sourced profile/script; see Emulating Bash 'source' in Python.
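The sourced file itself is just plain exports; a sketch of what /tmp/setEnv.sh might contain (the original answer does not show it):
#!/bin/bash
export myVar=value1
export myVar2=value2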

Setting environment variable for a Compute Engine VM

I need to set an environment variable within my virtual machine on Google Compute Engine. The variable I need to set is called "GOOGLE_APPLICATION_CREDENTIALS" and, according to Google documentation, I need to set its value to the path of a JSON file. I have two questions:
1: Can I set this variable within the Google Compute Engine interface on GCP?
2: Can I use System.Environment.SetEnvironmentVariable("GOOGLE_APPLICATION_CREDENTIALS", Resources.googlecredentials.credentials);? Whenever I try to set this variable on my local machine I use this technique, but I set the value to the path of the file (a local directory). However, because I am now using a virtual machine, I was wondering: can I set the environment variable to the actual contents of a resource file? Advantageously, this would allow me to embed the credentials into the app itself.
Cheers
Store your credentials in a file temporarily
$HOME/example/g-credentials.json
{
"foo": "bar"
}
Then upload it to your GCE project's metadata as a string:
gcloud compute project-info add-metadata \
--metadata-from-file g-credentials=$HOME/example/g-credentials.json
You can view your GCE project's metadata in the Cloud Console by searching for metadata, or you can view it using gcloud:
gcloud compute project-info describe
Then set the env var/load the config in your VM's startup script:
$HOME/example/startup.txt
#! /bin/bash
# gce project metadata key where the config json is stored as a string
meta_key=g-credentials
env_key=GOOGLE_APPLICATION_CREDENTIALS
config_file=/opt/g-credentials.json
env_file=/etc/profile
# command to set env variable
temp_cmd="export $env_key=$config_file"
# command to write $temp_cmd to the file if it doesn't already exist within it
perm_cmd="grep -q -F '$temp_cmd' $env_file || echo '$temp_cmd' >> $env_file"
# set the env var for only for the duration of this script.
# can delete this if you don't start processes at the end of
# this script that utilize the env var.
eval $temp_cmd
# set the env var permanently for any SUBSEQUENT shell logins
eval $perm_cmd
# load the config from the project's metadata
config=$(curl -f http://metadata.google.internal/computeMetadata/v1/project/attributes/$meta_key -H "Metadata-Flavor: Google" 2>/dev/null)
# write it to file (quoted so the JSON whitespace survives)
echo "$config" > "$config_file"
# start other processes below ...
Example instance:
gcloud compute instances create vm-1 \
--metadata-from-file startup-script=$HOME/example/startup.txt \
--zone=us-west1-a
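Once the VM is up, a quick hedged check (non-interactive SSH does not source /etc/profile, hence the explicit source):
gcloud compute ssh vm-1 --zone=us-west1-a \
  --command='source /etc/profile; echo $GOOGLE_APPLICATION_CREDENTIALS'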
You could also edit the user's profile:
nano ~/.bashrc
or even system-wide with /etc/profile, /etc/bash.bashrc, or /etc/environment
and then add:
export GOOGLE_APPLICATION_CREDENTIALS=...
Custom Metadata can also be used, which is rather GCE specific.
1: Yes, you could set it within your RDP/SSH session.
2: No, you should set the path in the variable according to the documentation. Alternatively, there are code examples that load the service account path into a variable to use the credentials within your applications.

How to set an environment variable in Amazon EC2

I created a tag on the AWS console for one of my EC2 instances.
However, when I look on the server, no such environment variable is set.
The same thing works with Elastic Beanstalk: env shows the tags I created on the console.
$ env
[...]
DB_PORT=5432
How can I set environment variables in Amazon EC2?
You can retrieve this information from the instance metadata and then run your own commands to set environment variables.
You can get the instance-id from the instance metadata (see here for details: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html#instancedata-data-retrieval)
curl http://169.254.169.254/latest/meta-data/instance-id
Then you can call the describe-tags using the pre-installed AWS CLI (or install it on your AMI)
aws ec2 describe-tags --filters "Name=resource-id,Values=i-5f4e3d2a" "Name=key,Values=DB_PORT"
Then you can use the OS command to set the environment variable:
export DB_PORT=/what/you/got/from/the/previous/call
You can run all that in your user-data script. See here for details: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
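Putting those steps together, a minimal user-data sketch (the region and tag key are assumptions; adjust to your setup):
#!/bin/bash
# Look up this instance's DB_PORT tag and export it
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
export DB_PORT=$(aws ec2 describe-tags --region us-east-1 \
  --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=DB_PORT" \
  --query 'Tags[0].Value' --output text)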
Lately, it seems AWS Parameter Store is a better solution.
Now there is even Secrets Manager, which automatically manages sensitive configuration such as database keys.
See this script using SSM Parameter Store, based on the previous solutions by Guy and PJ Bergeron:
https://github.com/lezavala/ec2-ssm-env
I used a combination of the following tools:
Install jq library (sudo apt-get install -y jq)
Install the EC2 Instance Metadata Query Tool
Here's the gist of the code below in case I update it in the future: https://gist.github.com/marcellodesales/a890b8ca240403187269
######
# Author: Marcello de Sales (marcello.desales@gmail.com)
# Description: Create Environment Variables in EC2 Hosts from EC2 Host Tags
#
### Requirements:
# * Install jq library (sudo apt-get install -y jq)
# * Install the EC2 Instance Metadata Query Tool (http://aws.amazon.com/code/1825)
#
### Installation:
# * Add the Policy EC2:DescribeTags to a User
# * aws configure
# * Source it to the user's ~/.profile that has permissions
####
# Reboot and verify the result of $(env).
# Loads the Tags from the current instance
getInstanceTags () {
  # http://aws.amazon.com/code/1825 EC2 Instance Metadata Query Tool
  INSTANCE_ID=$(./ec2-metadata | grep instance-id | awk '{print $2}')
  # Describe the tags of this instance
  aws ec2 describe-tags --region sa-east-1 --filters "Name=resource-id,Values=$INSTANCE_ID"
}
# Convert the tags to environment variables.
# Based on https://github.com/berpj/ec2-tags-env/pull/1
tags_to_env () {
  tags=$1
  for key in $(echo $tags | /usr/bin/jq -r ".[][].Key"); do
    value=$(echo $tags | /usr/bin/jq -r ".[][] | select(.Key==\"$key\") | .Value")
    key=$(echo $key | /usr/bin/tr '-' '_' | /usr/bin/tr '[:lower:]' '[:upper:]')
    echo "Exporting $key=$value"
    export $key="$value"
  done
}
# Execute the commands
instanceTags=$(getInstanceTags)
tags_to_env "$instanceTags"
If you are using Linux or macOS for your EC2 instance:
Go to your home directory and run:
vim .bash_profile
You can see your bash_profile file; now press 'i' to insert lines, then add
export DB_PORT="5432"
After adding this line you need to save the file: press 'Esc', then ':', then 'w' to save without exiting.
To exit, press ':' again and type 'q'. Then run source ~/.bash_profile (or log in again) so the variable is set in your shell. To check that your environment variable is set, run the commands below:
python
>>> import os
>>> os.environ.get('DB_PORT')
'5432'
Following the instructions given by Guy, I wrote a small shell script. This script uses the AWS CLI and jq. It lets you import your AWS instance and AMI tags as shell environment variables.
I hope it can help a few people.
https://github.com/12moons/ec2-tags-env