Environment variables with AWS SSM Run Command - amazon-web-services

I am using AWS SSM Run Command with the AWS-RunShellScript document to run a script on an Amazon Linux 1 instance. Part of the script uses an environment variable. When I run the script myself, everything is fine, but when I run it via SSM, it can't see the environment variable.
This variable needs to be passed to a Python script. I originally tried os.environ['VARIABLE'], to no effect.
I know that AWS SSM runs with root privileges, so I have put a line exporting the variable in the root ~/.bashrc file, yet it still cannot see the variable. The root user can see it when I run the script myself.
Is it not possible for AWS SSM to use environment variables, or am I not exporting it correctly? If it is not possible, I'll try using AWS KMS instead to store my variable.
~/.bashrc
export VARIABLE="VALUE"
script.sh
"$VARIABLE"
Security is important, hence why I don't want to just store the variable in the script.

SSM does not open an actual SSH session, so passing environment variables won't work. It's essentially a daemon running on the box that takes your requests and processes them. It's a very basic product: it doesn't support any of the standard features that come with SSH, such as SCP, port forwarding, tunneling, passing of env variables, etc. An alternative way of passing a value to a script is to store it in AWS Systems Manager Parameter Store and have your script pull the variable from the store.
You'll need to update your instance role permissions to include ssm:GetParameters so the script you run can access the stored value.
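As a sketch, assuming a parameter named "VARIABLE" stored in the same region (add kms:Decrypt as well if it's a SecureString), the script could fetch the value with the AWS CLI:
# read the value from Parameter Store into a shell variable
VARIABLE=$(aws ssm get-parameters --names "VARIABLE" --with-decryption \
  --query "Parameters[0].Value" --output text)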

My solution to this problem:
set -o allexport; source /etc/environment; set +o allexport
set -o allexport marks every variable assigned while it is active, including those sourced from /etc/environment, for export; set +o allexport turns the option back off.
For more information, see the documentation for the set builtin.
I have tested this solution by using the AWS CLI command aws ssm send-command:
"commands": [
"set -o allexport; source /etc/environment; set +o allexport",
"echo $TEST_VAR > /home/ec2-user/app.log"
]
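For reference, a full invocation might look like this (the instance ID is a placeholder):
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --instance-ids "i-0123456789abcdef0" \
  --parameters '{"commands":["set -o allexport; source /etc/environment; set +o allexport","echo $TEST_VAR > /home/ec2-user/app.log"]}'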

I am running a bash script in my SSM command document, so I just source the profile/script to make the env variables available to the subsequent commands. For example,
"runCommand": [
"#!/bin/bash",
". /tmp/setEnv.sh",
"echo \"myVar: $myVar, myVar2: $myVar2\""
]
You can refer to Can a shell script set environment variables of the calling shell? for sourcing your env variables. For Python, you will have to parse your sourced profile/script; see Emulating Bash 'source' in Python.
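For illustration, /tmp/setEnv.sh could simply be a file of exports (myVar and myVar2 are the hypothetical names used in the snippet above):
#!/bin/bash
# hypothetical contents of /tmp/setEnv.sh
export myVar="value1"
export myVar2="value2"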

Related

any "docker" command that i try to run on terminal throw this message "context requires credentials to be passed as environment variables"

I was reading the Docker documentation about deploying Docker containers on AWS ECS: https://docs.docker.com/cloud/ecs-integration/ . After I ran the command docker context create ecs myecscontext and selected the option AWS environment variables, every docker command I try to run throws this message in my terminal: context requires credentials to be passed as environment variables. I've tried to set the AWS environment variables with the Windows set command, but it doesn't work.
I've used like this:
set AWS_SECRET_ACCESS_KEY=any-value
set AWS_ACCESS_KEY_ID=any-value
I've been searching for how to solve this problem, and the only thing I've found is to set environment variables the way I already have. What do I have to do?
UPDATE:
I've found another way to set environment variables on Windows, described on this site: https://www.tutorialspoint.com/how-to-set-environment-variables-using-powershell
Instead of set (a cmd.exe builtin), I had to use the $env:VARIABLE_NAME = 'any-value' syntax, which is what PowerShell requires to actually update the variables.
Like this:
$env:AWS_ACCESS_KEY_ID = 'my-aws-access-key-id'
$env:AWS_SECRET_ACCESS_KEY = 'my-aws-secret-access-key'

How to export paths into PATH for running pyspark in Amazon EC2

Hi, I have installed Spark, Python, and Jupyter Notebook on an Amazon AWS EC2 instance, but when I run "jupyter notebook" in the command prompt, it just provides an address for the notebook, and when I open it I can't run pyspark commands, only plain Python commands.
I googled it and found these commands:
export SPARK_HOME=/home/ubuntu/spark-3.0.1-bin-hadoop3.2
export PATH=$SPARK_HOME/bin:$PATH
export PYTHONPATH=$SPARK_HOME/python:$PYTHONPATH
After executing them, when I typed "jupyter notebook", I got a Jupyter Notebook that supports pyspark.
However, when I close my command prompt and log in later, I have to type the above commands again to be able to use pyspark in the notebook.
My question: how can I permanently save those variables in the environment? And how can I see all of the environment variables, including the ones I set with the commands above?
It's the same situation as with environment variables in your local terminal: to have some extra variables exported in every session, save those export commands in the .bashrc file on your EC2 machine.
More details can easily be found on the Internet, for example here.
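For example, assuming the Spark paths from the question, you could append the exports to ~/.bashrc once; printenv then answers the second part of the question:
# persist the exports so every new login shell gets them
cat >> ~/.bashrc <<'EOF'
export SPARK_HOME=/home/ubuntu/spark-3.0.1-bin-hadoop3.2
export PATH=$SPARK_HOME/bin:$PATH
export PYTHONPATH=$SPARK_HOME/python:$PYTHONPATH
EOF
source ~/.bashrc   # apply to the current session
printenv | sort    # list all environment variables, including the new ones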

Where are my environment variables in Elastic Beanstalk for AL2?

I'm using Elastic Beanstalk to deploy a Django app. I'd like to SSH into the EC2 instance to execute some shell commands, but the environment variables don't seem to be there. I specified them via the AWS GUI (Configuration -> Environment properties) and they seem to work during the boot-up of my app.
I tried activating and deactivating the virtual env via:
source /var/app/venv/*/bin/activate
Is there some environment (or script I can run) that gives me a shell with all the properties set? Otherwise I can hardly run any command like python3 manage.py ..., since there is no settings module configured (I know how to specify it manually, but my app needs around 7 variables to work).
During deployment, the environment properties are readily available to your .platform hook scripts.
After deployment, e.g. when using eb ssh, you need to load the environment properties manually.
One option is to use the EB get-config tool. The environment properties can be accessed either individually (using the -k option), or as a JSON or YAML object with key-value pairs.
For example, one way to export all environment properties would be:
export $(/opt/elasticbeanstalk/bin/get-config --output YAML environment |
sed -r 's/: /=/' | xargs)
Here the get-config part returns all environment properties as YAML, the sed part replaces the ': ' in the YAML output with '=', and the xargs part fixes quoted numbers.
Note this does not require sudo.
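After running the export, you can spot-check a property (DJANGO_SETTINGS_MODULE here is just an assumed example name, substitute one of your own properties):
echo "$DJANGO_SETTINGS_MODULE"   # should print the value set in the EB console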
Alternatively, you could refer to this AWS knowledge center post:
Important: On Amazon Linux 2, all environment properties are centralized into a single file called /opt/elasticbeanstalk/deployment/env. You must use this file during Elastic Beanstalk's application deployment process only. ...
The post describes how to make a copy of the env file during deployment, using .platform hooks, and how to set permissions so you can access the file later.
You can also perform similar steps manually, using SSH. Once you have the copy set up, with the proper permissions, you can source it.
Beware:
Note: Environment properties with spaces or special characters are interpreted by the Bash shell and can result in a different value.
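As a rough sketch of that post's approach (the hook file name and copied path here are assumptions, check the post for the exact steps), a postdeploy hook could copy the file somewhere readable:
#!/bin/bash
# .platform/hooks/postdeploy/01_copy_env.sh (hypothetical name)
# copy the centralized env file so it can be sourced in later SSH sessions
cp /opt/elasticbeanstalk/deployment/env /opt/elasticbeanstalk/deployment/custom_env
chmod 644 /opt/elasticbeanstalk/deployment/custom_env
After eb ssh, running set -a; source /opt/elasticbeanstalk/deployment/custom_env; set +a would export the key=value pairs, subject to the quoting caveat above.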
Try running the command /opt/elasticbeanstalk/bin/get-config environment after you ssh into the EC2 instance.
If you are trying to access the environment properties in an Elastic Beanstalk script, use this (ENVURL is the name of the property you want):
$(/opt/elasticbeanstalk/bin/get-config environment -k ENVURL)
{ "Ref" : "AWSEBEnvironmentName" }
$(/opt/elasticbeanstalk/bin/get-config environment -k ENVURL)

How to set environment variable for root user at start-up?

I'm trying to add memory usage monitoring to the monitoring tab of an instance at console.aws.amazon.com. It's an instance running Amazon Linux AMI 2013.09.2. I have found the Amazon CloudWatch Monitoring Scripts for Linux, and specifically mon-put-instance-data.pl, which lets me collect memory stats and report them to CloudWatch as custom metrics.
To have this working I need to set the environment variable AWS_CREDENTIAL_FILE to point to a file containing my AWSAccessKeyId and AWSSecretKey. I do this by typing:
export AWS_CREDENTIAL_FILE=/home/ec2-user/aws-scripts-mon/awscreds.template
To avoid having to type this over and over again, I'm looking for a way to set the environment variable at startup. I have tried adding the export line to these files:
/etc/rc.local
/etc/profile
/home/ec2-user/.bash_profile
Adding the line to these files doesn't seem to make it work for the root user, so where should I put it? If I set the variable in /home/ec2-user/.bash_profile, it is set for ec2-user but not for root. If I then sudo -E su it works, but I don't know if this is the best way to go about it.
Create a .sh file containing the export line and put it in the /etc/profile.d/ directory.
Note: create this file as the root user.
This file will be sourced automatically at login, and the environment variable will be available to all users.
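For example, using the credential file path from the question (the script name is arbitrary; run this as root):
cat > /etc/profile.d/aws-credential-file.sh <<'EOF'
export AWS_CREDENTIAL_FILE=/home/ec2-user/aws-scripts-mon/awscreds.template
EOF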

How to check whether the user data passed to my EC2 instance is working

While creating a new AWS EC2 instance using the EC2 command line API, I passed some user data to the new instance.
How can I know whether that user data executed or not?
You can verify using the following steps:
SSH into the launched EC2 instance.
Check the logs of your user data script in:
/var/log/cloud-init.log and
/var/log/cloud-init-output.log
You can see all the output of your user data script there; cloud-init also creates the /etc/cloud folder.
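For example, to see the end of the user data output right after boot:
sudo tail -n 50 /var/log/cloud-init-output.log   # stdout/stderr from the user data script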
Just for reference, you can check whether the user data executed by taking a look at the system log from the EC2 console. Right-click on your instance:
In the new interface: Monitor and Troubleshoot > Get System Log
In the old interface: Instance Settings > Get System Log
This should open a modal window with the system logs.
It might also be useful to see what the user data looks like while it's being executed during the bootstrapping of the instance. This is especially true if you are passing in environment variables or flags from a CloudFormation template. You can see how the UserData is being executed in two different ways:
1. From within the instance:
# Get instance ID
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
# Print user data
sudo cat /var/lib/cloud/instances/$INSTANCE_ID/user-data.txt
2. From outside the instance
Note: this will only work if you have configured the UserData shell so that it outputs the commands it runs.
For bash, you can do this as follows:
"#!/bin/bash\n",
"set -x\n",
Right-click on the EC2 instance in the EC2 console -> Monitor and Troubleshoot -> Get system log. Download the log file and look for a section that looks like this:
ip-172-31-76-56 login: 2021/10/25 17:13:47Z: Amazon SSM Agent v3.0.529.0 is running
2021/10/25 17:13:47Z: OsProductName: Ubuntu
2021/10/25 17:13:47Z: OsVersion: 20.04
[ 45.636562] cloud-init[856]: Cloud-init v. 21.2-3...
[ 47.749983] cloud-init[896]: + echo hello world
This is what you would see if the UserData was configured like this:
"#!/bin/bash\n",
"set -x\n",
"echo hello world"
Debugging user data scripts on Amazon EC2 is a bit awkward indeed, as there is usually no way to actively hook into the process, so ideally one would like to gain real-time access to the user-data script output, as summarized in Eric Hammond's article Logging user-data Script Output on EC2 Instances:
The recent Ubuntu AMIs still send user-data script to the console output, so you can view it remotely, but it is no longer available in syslog on the instance. The console output is only updated a few minutes after the instance boots, reboots, or terminates, which forces you to wait to see the output of the user-data script as well as not capturing output that might come out after the snapshot.
Depending on your setup you might want to ship the logs to a remote logging facility like Loggly right away, but getting this installed early enough can obviously be kind of a chicken/egg problem (though it works great if the AMI happens to be configured like so already).
Enable logging for your user data
Eric Hammond, in "Logging user-data Script Output on EC2 Instances" (2010), suggests:
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
Take care to put a space between the two > > characters at the beginning of the statement.
Here’s a complete user-data script as an example:
#!/bin/bash -ex
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
echo BEGIN
date '+%Y-%m-%d %H:%M:%S'
echo END
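Once the instance is up, the captured output can then be read on the box (the path comes from the exec line above):
sudo tail /var/log/user-data.log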
Put this in your user data:
touch /tmp/file2.txt
Once the instance is up, you can check whether the file was created. Based on that you can tell whether the user data executed.
Have your user data create a file on the instance to see if it works:
bob.txt:
#!/bin/sh
echo 'Woot!' > /home/ec2-user/user-script-output.txt
Then launch with:
ec2-run-instances -f bob.txt -t t1.micro -g ServerPolicy ami-05cf5c6d -v
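After the instance boots, a quick check over SSH tells you whether the script ran:
cat /home/ec2-user/user-script-output.txt   # prints 'Woot!' if the user data executed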