Can we set an environment variable on an EC2 instance from the AWS console?
Or do we need to use another service to achieve this?
Also, can we load this variable inside Docker?
That appears to be out of scope for the AWS Console. However, EC2 instances are essentially AWS-hosted VMs, so anything you could normally do on that OS can be done.
Just connect to the machine and do what you would normally do. (See the AWS CLI Guide for further help)
Using the AWS Console:
Stop the ec2 instance
For that ec2 instance, from top dropdown - select Actions > Instance Settings > View/Change User Data
Create a script that adds the required environment variables to the init scripts:
#!/bin/sh
echo 'export MY_ENV_VARIABLE=value' >> ~/.bashrc
or add a script to /etc/init.d or another similar location.
Start the EC2 instance; the environment variables will then be available.
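The user-data step above can be sketched as a script. This is a minimal sketch, assuming the variable should land in the login user's ~/.bashrc as in the answer; TARGET is parameterised only for illustration, and MY_ENV_VARIABLE/value are placeholders for your own name and value:

```shell
#!/bin/sh
# User-data sketch: persist MY_ENV_VARIABLE for interactive shells.
# TARGET defaults to ~/.bashrc; override it only for testing.
TARGET="${TARGET:-$HOME/.bashrc}"
# Append the export once, skipping it if an earlier boot already added it.
grep -q '^export MY_ENV_VARIABLE=' "$TARGET" 2>/dev/null || \
    echo 'export MY_ENV_VARIABLE=value' >> "$TARGET"
```

Note that variables added to ~/.bashrc are only visible in interactive shells; for system services, /etc/environment or a /etc/profile.d/ snippet is the more usual location.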
If you want to pass the EC2 instance environment variable to the Docker container environment variable:
When you start a container through docker run, you can pass the value with the --env (-e) parameter on the command line.
eg: docker run -e DOCKER_ENV_VARIABLE=$MY_ENV_VARIABLE ...
https://docs.docker.com/engine/reference/commandline/run/#options
You can use user data to load the variable and afterwards pass it to your Docker container using docker run -e CONTAINER_ENV_VAR=$EC2_ENV_VAR, or put it in your Dockerfile.
For keys or any sensitive data I would advise you to use Parameter Store; you can put passwords, users, or whatever data you want there and read it with a tool called chamber.
Related
I am trying to use the new ecs context that the latest Docker CLI exposes. When I create the context using docker context create ecs my_context, I select the "use environment variables" method (the recommended way), but no matter how I try to call docker compose, it always returns the message context requires credentials to be passed as environment variables. Searching for this message on Google returns nothing :/
I have tried passing them using the -e and --environment flags, and I have tried a .env file with the --env-file flag.
docker --context my_ecs compose -f docker-compose.yml -f docker-compose.dev.yml -e AWS_ACCESS_KEY_ID=XXXXXXX -e AWS_SECRET_ACCESS_KEY=XXXXXXXX -e AWS_DEFAULT_REGION=XXXXXXXX up
If I don't use the environment variable method, it hits another known (frustrating) bug whereby it uses the wrong AWS region, even though the context is set up with the correct region. I feel the environment variable option would resolve that, if I could get it to see the environment variables.
I don't think passing the variables on the docker compose up command works. The way it works is that you export the access key ID, secret access key and region variables in your shell and then create the docker context by pointing to the variables, as shown here.
You can do that interactively by launching docker context create ecs myecscontext and then picking AWS environment variables or you can just run docker context create ecs myecscontext --from-env.
And yes the region bug is annoying, if this is what you are referring to.
A Google search led me to this post (I am not using Windows but the AWS Cloud9 IDE).
I could get rid of the message context requires credentials to be passed as environment variables by passing dummy values to the environment variables. It goes like this:
$ docker context create ecs myecscontext
? Create a Docker context using: AWS environment variables
Successfully created ecs context "myecscontext"
$ docker context use myecscontext
myecscontext
$ docker context ls
context requires credentials to be passed as environment variables
$ AWS_SECRET_ACCESS_KEY=dummy AWS_ACCESS_KEY_ID=dummy AWS_DEFAULT_REGION=dummy docker context ls
NAME TYPE DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default moby Current DOCKER_HOST based configuration unix:///var/run/docker.sock swarm
myecscontext * ecs credentials read from environment
$ AWS_SECRET_ACCESS_KEY=dummy AWS_ACCESS_KEY_ID=dummy AWS_DEFAULT_REGION=dummy docker context use default
default
$ docker context ls
NAME TYPE DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default * moby Current DOCKER_HOST based configuration unix:///var/run/docker.sock swarm
myecscontext ecs credentials read from environment
$ docker context rm myecscontext
myecscontext
Modify and export environment variables on Linux or macOS, as described in Environment variables to configure the AWS CLI:
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-west-2
Later, you can see the options with this command:
docker context create ecs --help
Description: Create a context for Amazon ECS
Usage: docker context create ecs CONTEXT [flags]
Flags:
  --access-keys string   Use AWS access keys from file
  --description string   Description of the context
  --from-env             Use AWS environment variables for profile, or credentials and region
  -h, --help             Help for ecs
  --local-simulation     Create context for ECS local simulation endpoints
  --profile string       Use an existing AWS profile
Answer: docker context create ecs CONTEXT --from-env
I had to re-source my shell with source ~/.bash_profile because I was sourcing .bashrc, which did not have my environment variables in it.
I've handled this situation by setting environment variables. You can use this guide for this.
I am currently unable to access my RDS environment variables in the EC2 instance. They are both linked using Elastic Beanstalk.
I am trying to use the RDS environment variables in a PHP script using the $_SERVER global variable but every time I check on the console these are always empty strings. Also if I run echo ${RDS_HOSTNAME} on the console I also get an empty string.
However when I run /opt/elasticbeanstalk/bin/get-config environment I get the following with the correct credentials.
{
"COMPOSER_HOME":"/root",
"RDS_DB_NAME":"dbname",
"RDS_HOSTNAME":"dbhost.rds.amazonaws.com",
"RDS_PASSWORD":"dbpassword",
"RDS_PORT":"3306",
"RDS_USERNAME":"dbusername"
}
I've also connected to the database via the mysql command just to make sure that the EC2 instance can access the RDS database and it worked.
Using mysql -u dbusername -h dbhost.rds.amazonaws.com -p
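Since get-config clearly returns the right values, one workaround is to turn its JSON output into exported shell variables yourself. A sketch of that transform, using the sample output from the question so it can be run anywhere (on the instance you would pipe from /opt/elasticbeanstalk/bin/get-config environment instead); it requires jq:

```shell
#!/bin/sh
# Turn a flat JSON object of strings into exported environment variables.
# The sample stands in for: /opt/elasticbeanstalk/bin/get-config environment
JSON='{"RDS_HOSTNAME":"dbhost.rds.amazonaws.com","RDS_PORT":"3306","RDS_DB_NAME":"dbname"}'
# jq emits one `export KEY="value"` line per entry; eval applies them.
eval "$(printf '%s' "$JSON" | jq -r 'to_entries[] | "export \(.key)=\"\(.value)\""')"
echo "$RDS_HOSTNAME"  # now set in this shell and inherited by children
```

Bear in mind that variables exported this way are only visible to processes started from that shell; PHP running under a web server gets its environment from the server's own configuration, which is why $_SERVER can still be empty.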
Here is an example, where I can run (locally) multiple instances of a docker image with different arguments to the echo command and to AUTHOR environment variable.
NAME=Peter
docker run alpine echo "Hello $NAME"
docker run --name static-site -e AUTHOR="$NAME" -d -P dockersamples/static-site
Here I can programmatically change the value of $NAME and run the containers.
I want the same flexibility while running those containers on Amazon ECS using the AWS Fargate launch type. Alas, I am unable to figure out how I can programmatically provide different values to the $NAME variable.
An ECS task acts like the docker run command, so when you're creating a task, at the bottom you can add container details; in there you have environment variables.
So when you're deploying programmatically, you have to create a new task per container.
You can override the environment variable settings when you run an existing task or create a new version of an existing task. If you click on 'create new revision' in the Task Definitions tab on the ECS Dashboard, you'll see that it lets you edit your container definitions as well. Navigate to Advanced container configuration>ENVIRONMENT and in there you should be able to add new env vars or update existing ones.
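The same per-run override is available from the CLI via run-task's --overrides field (containerOverrides in the API). A sketch, assuming a cluster my-cluster, task definition static-site-task and container name static-site (all hypothetical); the command is echoed rather than executed so the sketch runs without AWS credentials:

```shell
#!/bin/sh
# Build a containerOverrides payload that sets AUTHOR per run, mirroring
# `docker run -e AUTHOR="$NAME"`. Names below are made-up placeholders.
NAME=Peter
OVERRIDES=$(printf '{"containerOverrides":[{"name":"static-site","environment":[{"name":"AUTHOR","value":"%s"}]}]}' "$NAME")
# Echoed instead of executed so it can be inspected without credentials;
# a real Fargate run-task also needs --network-configuration.
echo aws ecs run-task \
    --cluster my-cluster \
    --launch-type FARGATE \
    --task-definition static-site-task \
    --overrides "$OVERRIDES"
```

Changing $NAME and re-running gives each task its own value without creating a new task definition revision.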
I am using AWS code deploy agent and deploying my project to the server through bitbucket plugin.
The code deployment agent first executes the script files which has the command to execute my spring-boot project.
Since I have two environments, one development and another production, I want the script to do things differently based on the environment, i.e. two different instances.
My plan is to fetch the AWS static IP address mapped to the instance and from that determine the environment (production or stage).
How do I fetch the elastic IP address through sh commands?
Static IP will work.
A more natural CodeDeploy way to solve this is to set up 2 CodeDeploy deployment groups, one for your development env and the other for your production env. Then in your script you can use environment variables that CodeDeploy sets during the deployment to understand which env you are deploying to.
Here is a blog post about how to use CodeDeploy environment variables: https://aws.amazon.com/blogs/devops/using-codedeploy-environment-variables/
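A sketch of such a hook script, branching on DEPLOYMENT_GROUP_NAME. The group names are hypothetical; CodeDeploy sets the variable during a real deployment, so it is defaulted here only so the sketch runs outside one:

```shell
#!/bin/sh
# CodeDeploy exports DEPLOYMENT_GROUP_NAME (among others, e.g. DEPLOYMENT_ID
# and LIFECYCLE_EVENT) to hook scripts. Default it for local testing.
DEPLOYMENT_GROUP_NAME="${DEPLOYMENT_GROUP_NAME:-development-group}"
case "$DEPLOYMENT_GROUP_NAME" in
    production-group)  ENVIRONMENT=production ;;
    development-group) ENVIRONMENT=development ;;
    *)                 ENVIRONMENT=unknown ;;
esac
echo "Deploying to the $ENVIRONMENT environment"
```

This avoids depending on IP addresses entirely, so the script keeps working if the Elastic IP mapping ever changes.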
You could do the following:
id=$( curl http://169.254.169.254/latest/meta-data/instance-id )
eip=$( aws ec2 describe-addresses --filters Name=instance-id,Values=${id} | jq -r '.Addresses[].PublicIp' )
The above gets the instance-id from metadata, then uses the aws cli to look for elastic IPs filtered by the id from metadata. Using jq this output can then be parsed down to the IP you are looking for.
Query the metadata server
eip=$(curl -s 169.254.169.254/latest/meta-data/public-ipv4)
echo $eip
The solution is completely off on a tangent from what I originally asked, but it was enough for my requirement.
I just needed to know the environment I am in to do certain actions. So what I did was to set an environment variable in an independent script file where the environment variable is set and the value is that of the environment.
ex: let's say in a file env-variables.sh
export profile=stage
In the script file where the commands have to be executed based on the environment I access it this way
source /test/env-variables.sh
echo current profile is $profile
if [ "$profile" = "stage" ]
then
echo stage
elif [ "$profile" = "production" ]
then
echo production
else
echo failure
fi
Hope someone finds it useful.
So I have a Docker container running Jenkins and an ECR registry on AWS. I would like to have Jenkins push containers back to the ECR registry.
To do this, I would like to automate the aws configure and get-login steps on container startup. I figured that I would be able to
export AWS_ACCESS_KEY_ID=*
export AWS_SECRET_ACCESS_KEY=*
export AWS_DEFAULT_REGION=us-east-1
export AWS_DEFAULT_OUTPUT=json
I expected this to cause aws configure to complete automatically, but it did not work. I then tried creating config files as per the AWS docs and repeating the process, which also did not work. I then tried using aws configure set, also with no luck.
I'm going bonkers here, what am I doing wrong?
There is no real need to issue aws configure, as long as you populate the env vars:
export AWS_ACCESS_KEY_ID=aaaa
export AWS_SECRET_ACCESS_KEY=bbbb
... also export zone and region
then issue
aws ecr get-login --region ${AWS_REGION}
you will achieve the same desired AWS login status. As far as troubleshooting goes, I suggest you remote into your running container using
docker exec -ti CONTAINER_ID_HERE bash
then manually issue the above aws-related commands interactively to confirm they run OK before putting the same into your Dockerfile.
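Putting that together, a container start script might look like the sketch below. The key values are placeholders (inject real ones via docker run -e or an IAM role); the get-login form matches the v1 CLI, and the login lines are left commented so the sketch itself runs without credentials:

```shell
#!/bin/sh
# Populate the credentials the AWS CLI reads straight from the environment.
# Placeholder values -- pass real ones in at `docker run` time instead of
# baking them into the image.
export AWS_ACCESS_KEY_ID=aaaa
export AWS_SECRET_ACCESS_KEY=bbbb
export AWS_REGION=us-east-1
# v1 CLI: get-login prints a `docker login ...` command; eval executes it.
#   eval "$(aws ecr get-login --no-include-email --region "$AWS_REGION")"
# v2 CLI replaces get-login with get-login-password:
#   aws ecr get-login-password --region "$AWS_REGION" | \
#     docker login --username AWS --password-stdin "$ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com"
```

With the variables exported, every subsequent aws call in the script picks them up with no aws configure step at all.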