I have an application that I need to deploy in AWS.
The application has default properties which should be overridden for each env (qa/prd etc.) using an overrides.properties file.
The application's source code is built into a Docker image and pushed to GPR.
I have a CDK repo which takes the Docker image from GPR, stores it in ECR, and creates a Fargate service with an Auto Scaling group.
Here I somehow have to override the default properties with environment-specific values.
Is there an option in CDK to add a file (overrides.properties) to the Docker image, or to pass it to the EC2 instances before running the Docker container?
If someone finds themselves in a similar situation - posting a workaround-ish solution proposed by @gshpychka:
You can set the required properties as environment variables, and add a script to the Dockerfile for your image that writes those variables to a specific file.
Sample script:
env | while IFS= read -r line; do
    echo "$line" >> /your/file/here
done
I am trying to create a cluster in ECS Fargate with a Docker Hub image. To spin up the container I have to upload a config.json file to the host so that its path can be mounted as -v /configfile/path:/etc/selenium.
Locally we can specify the path of the json file like below:
docker run -d --name selenium -p 4444:4444 -v /E/config.json:/etc/selenium/
Whereas I am not sure where to upload the config.json file in ECS Fargate and how to use that path in a volume.
It is not possible to do anything related to the host when using Fargate. The whole point of Fargate is that it is a "serverless" container runtime, so you have no access to the underlying server at all.
You need to look into other ways of achieving your requirement in Fargate. For example you could first copy the file to S3, and then configure the entrypoint script in your docker container to download the file from S3 before starting selenium.
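A minimal sketch of that entrypoint idea (the bucket name, destination path, and the FETCH_CMD override are assumptions, not part of the original answer):

```shell
#!/bin/sh
# fetch_config: copy config.json from S3 (or any source) to its mount path.
# FETCH_CMD defaults to the AWS CLI but can be overridden, e.g. for local testing.
fetch_config() {
    # $1 = source (e.g. s3://my-config-bucket/config.json), $2 = destination
    ${FETCH_CMD:-aws s3 cp} "$1" "$2"
}

# In the real image the entrypoint would run something like:
# fetch_config "s3://my-config-bucket/config.json" /etc/selenium/config.json
# exec /opt/bin/entry_point.sh    # then hand off to the container's real entrypoint
```

The task role attached to the Fargate task would need s3:GetObject permission on that bucket for the download to work.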
Can we set Environment variable against an EC2 instance from AWS console?
Or do we need load any other service to achieve this?
Also, can we load this variable inside Docker?
That appears to be out of scope for the AWS Console. EC2, however, is basically AWS-hosted VMs, so anything you could do normally on that OS can be done.
Just connect to the machine and do what you would normally do. (See the AWS CLI Guide for further help.)
Using the AWS Console:
Stop the ec2 instance
For that ec2 instance, from top dropdown - select Actions > Instance Settings > View/Change User Data
Create a script that would add required environment variables to init scripts.
#!/bin/sh
echo 'export MY_ENV_VARIABLE=value' >> ~/.bashrc
or add a script to /etc/init.d or another similar location.
Start the EC2 instance again for the environment variables to be available.
If you want to pass the EC2 instance environment variable to the Docker container environment variable:
When you start a docker container through "docker run" you can pass the value with the --env (-e) param on the command line.
eg: docker run -e DOCKER_ENV_VARIABLE=$MY_ENV_VARIABLE ...
https://docs.docker.com/engine/reference/commandline/run/#options
You can use user_data to load the variables and afterwards pass them to your docker container using docker run -e CONTAINER_ENV_VAR=$EC2_ENV_VAR, or set them in your Dockerfile.
For keys or any sensitive data I would advise you to use Parameter Store; you can put passwords, users, or whatever data you want there, and read it with a tool called chamber.
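As a sketch of the Parameter Store approach with the plain AWS CLI (the parameter name and the SSM_CMD override are hypothetical; chamber wraps the same API):

```shell
#!/bin/sh
# get_param: read one decrypted value from SSM Parameter Store.
# SSM_CMD defaults to the real AWS CLI call; override it to test locally.
get_param() {
    # $1 = parameter name, e.g. /myapp/db-password
    ${SSM_CMD:-aws ssm get-parameter} --name "$1" \
        --with-decryption --query Parameter.Value --output text
}

# Typical use before starting the container:
# DB_PASSWORD=$(get_param /myapp/db-password)
# docker run -e DB_PASSWORD="$DB_PASSWORD" myimage
```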
Here is an example where I can run (locally) multiple instances of a docker image with different arguments to the echo command and to the AUTHOR environment variable.
NAME=Peter
docker run alpine echo "Hello $NAME"
docker run --name static-site -e AUTHOR="$NAME" -d -P dockersamples/static-site
Here I can programmatically change the value of $NAME and run the containers.
I want the same flexibility when I run those containers on Amazon ECS using the Fargate launch type. Alas, I am unable to figure out how I can programmatically provide different values to the $NAME variable.
An ECS task acts like the docker run command, so
when you're creating a task, at the bottom you can add container details; in there you have environment variables.
So when you're deploying programmatically you have to create a new task definition per container.
You can override the environment variable settings when you run an existing task or create a new version of an existing task. If you click on 'create new revision' in the Task Definitions tab on the ECS Dashboard, you'll see that it lets you edit your container definitions as well. Navigate to Advanced container configuration>ENVIRONMENT and in there you should be able to add new env vars or update existing ones.
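The same override can be done per run from the CLI via run-task container overrides; a hedged sketch (the cluster, task definition, and container names are placeholders, and the required Fargate network configuration is elided):

```shell
#!/bin/sh
# Build a containerOverrides payload that injects AUTHOR for this run only.
NAME=Peter
OVERRIDES=$(printf '{"containerOverrides":[{"name":"static-site","environment":[{"name":"AUTHOR","value":"%s"}]}]}' "$NAME")

# aws ecs run-task --cluster my-cluster --launch-type FARGATE \
#     --task-definition static-site:1 \
#     --overrides "$OVERRIDES" \
#     --network-configuration '...'    # subnets/security groups elided
echo "$OVERRIDES"
```

Changing $NAME and re-running gives each task its own AUTHOR value without creating a new task definition revision.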
I am using the AWS CodeDeploy agent and deploying my project to the server through the Bitbucket plugin.
The CodeDeploy agent first executes the script files which have the commands to run my Spring Boot project.
Since I have two environments, one development and another production, I want the script to do things differently based on the environment, i.e. two different instances.
My plan is to fetch the AWS static IP address which is mapped to the instance and from that determine the environment
(production or stage).
How do I fetch the elastic IP address through sh commands?
Static IP will work.
However, a more natural CodeDeploy way to solve this is to set up two CodeDeploy deployment groups, one for your development env and the other for your production env. Then in your script you can use environment variables that CodeDeploy sets during the deployment to understand which env you are deploying to.
Here is a blog post about how to use CodeDeploy environment variables: https://aws.amazon.com/blogs/devops/using-codedeploy-environment-variables/
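A minimal sketch of that approach (the deployment group names are assumptions; DEPLOYMENT_GROUP_NAME is one of the variables CodeDeploy exposes to lifecycle hook scripts):

```shell
#!/bin/sh
# Map the CodeDeploy deployment group to an application profile.
profile_for_group() {
    case "$1" in
        production-group) echo production ;;
        staging-group)    echo stage ;;
        *)                echo unknown ;;
    esac
}

# In an appspec hook script:
# PROFILE=$(profile_for_group "${DEPLOYMENT_GROUP_NAME:-}")
# echo "deploying with profile: $PROFILE"
```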
You could do the following:
id=$( curl -s http://169.254.169.254/latest/meta-data/instance-id )
eip=$( aws ec2 describe-addresses --filters Name=instance-id,Values=${id} | jq -r '.Addresses[].PublicIp' )
The above gets the instance-id from instance metadata, then uses the AWS CLI to look up elastic IPs filtered by that id. jq then parses the output down to the IP you are looking for.
Query the metadata server
eip=`curl -s 169.254.169.254/latest/meta-data/public-ipv4`
echo $eip
The solution is completely off tangent to what I originally asked but it was enough for my requirement.
I just needed to know which environment I am in to do certain actions. So what I did was set an environment variable in an independent script file, with its value naming the environment.
ex: let's say in a file env-variables.sh
export profile=stage
In the script file where the commands have to be executed based on the environment, I access it this way:
source /test/env-variables.sh
echo current profile is $profile
if [ "$profile" = "stage" ]
then
    echo stage
elif [ "$profile" = "production" ]
then
    echo production
else
    echo failure
fi
Hope someone finds it useful.
How can I give a different filename other than "Dockerrun.aws.json" for an Elastic Beanstalk multi-container deployment?
I have different branches for different environments & they should all be in sync when releasing. For deploying to PROD I need to give a different ECR repo name in the JSON file.
Is there any way i can give a file as an input such as -
eb deploy -f Dockerrun.aws-prod.json ?
PS - Environment variables are an option, but is there any other straightforward option?
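eb deploy does not appear to take a file flag like that, so one workaround sketch is to keep one Dockerrun file per environment and copy the right one over the default name before deploying (the filenames and environment name below are assumptions):

```shell
#!/bin/sh
# select_dockerrun: copy the environment-specific file over the default name.
select_dockerrun() {
    # $1 = environment suffix, e.g. "prod" -> Dockerrun.aws-prod.json
    cp "Dockerrun.aws-$1.json" Dockerrun.aws.json
}

# select_dockerrun prod
# eb deploy my-prod-env
```

Running this in a per-branch CI step keeps the branches themselves in sync while still deploying a different ECR repo per environment.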