How to pass JVM arguments in AWS ECS?

I am trying to set up a microservice in Amazon ECS. How can JVM arguments be configured and passed to the microservice?

We do this sort of thing by passing them as environment variables on the task. When you edit your container in your task definition, scroll down to the Environment Variables section and add them there.
You can then reference these as normal environment variables on the command line when you launch your application.
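For example (a sketch only: JAVA_OPTS is an assumed variable name you would define on the task, and the entrypoint script is hypothetical), the variable can be expanded when the JVM is launched:
#!/bin/sh
# entrypoint.sh - sketch: expand the task-supplied JAVA_OPTS (assumed name,
# e.g. "-Xms1024m -Xmx1800m") into the java command line.
exec java $JAVA_OPTS -jar /app.jar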

You can set an ENTRYPOINT in your Dockerfile like:
ENTRYPOINT ["java","-Xms1024m","-Xmx1800m","-jar","/app.jar"]

Related

How to add a file to a container under an EC2 instance?

I have an application that I need to deploy in AWS.
The application has default properties which should be overridden for each environment (QA/PRD etc.) using an overrides.properties file.
The application's source code is built into a Docker image and pushed to GPR.
I have a CDK repo which takes the Docker image from GPR, stores it in ECR, and creates a Fargate service with an Auto Scaling group.
Somewhere here I have to override the default properties with environment-specific ones.
Is there an option in CDK to add a file (overrides.properties) to the Docker image, or to pass it to the EC2 instances before running the Docker container?
If someone finds themselves in a similar situation, here is a workaround-ish solution proposed by #gshpychka:
It's possible to set the required properties as environment variables and add a script to your image's Dockerfile that writes those variables to a specific file.
Sample script:
env | while IFS= read -r line; do
  echo "$line" >> /your/file/here
done
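A slightly more targeted variant of that idea, as a sketch (the APP_ prefix, the overrides path, and the wrapper-entrypoint approach are assumptions, not the poster's exact setup):
#!/bin/sh
# Copy only APP_-prefixed task environment variables into the overrides file,
# stripping the prefix, then hand off to the container's real command.
env | grep '^APP_' | while IFS= read -r line; do
  echo "${line#APP_}" >> /opt/app/overrides.properties
done
exec "$@"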

Passing environment variables to `docker-compose` when using ecs context

I am trying to use the new ecs context that the latest Docker CLI exposes. When I create the context using docker context create ecs my_context, I select the "use environment variables" method (the recommended way), but now no matter how I try to call docker compose, it always returns the message context requires credentials to be passed as environment variables. Searching for this message on Google returns nothing :/
I have tried passing them with the -e and --environment flags, with a .env file, and with the --env-file flag.
docker --context my_ecs compose -f docker-compose.yml -f docker-compose.dev.yml -e AWS_ACCESS_KEY_ID=XXXXXXX -e AWS_SECRET_ACCESS_KEY=XXXXXXXX -e AWS_DEFAULT_REGION=XXXXXXXX up
If I don't use the environment variable method, it hits another known (frustrating) bug, whereby it uses the wrong AWS region even though the context is set up with the correct region. I feel the environment variable option would resolve that, if I could get it to see the environment variables.
I don't think passing the variables on the docker compose up command works. The way it works is that you export the access key ID, secret access key, and region variables in your shell, and then you create the Docker context pointing to those variables, as shown here.
You can do that interactively by launching docker context create ecs myecscontext and then picking AWS environment variables or you can just run docker context create ecs myecscontext --from-env.
And yes the region bug is annoying, if this is what you are referring to.
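Putting that together, the workflow looks roughly like this (the key and region values are placeholders):
# Export real credentials in the shell first, then create the context from them.
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=eu-west-1
docker context create ecs myecscontext --from-env
# The same variables must also be present when compose runs against the context.
docker --context myecscontext compose up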
A Google search led me to this post (I am not using Windows but the AWS Cloud9 IDE).
I could get rid of the message context requires credentials to be passed as environment variables by passing dummy values in the environment variables. It goes like this:
$ docker context create ecs myecscontext
? Create a Docker context using: AWS environment variables
Successfully created ecs context "myecscontext"
$ docker context use myecscontext
myecscontext
$ docker context ls
context requires credentials to be passed as environment variables
$ AWS_SECRET_ACCESS_KEY=dummy AWS_ACCESS_KEY_ID=dummy AWS_DEFAULT_REGION=dummy docker context ls
NAME             TYPE   DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT   ORCHESTRATOR
default          moby   Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                         swarm
myecscontext *   ecs    credentials read from environment
$ AWS_SECRET_ACCESS_KEY=dummy AWS_ACCESS_KEY_ID=dummy AWS_DEFAULT_REGION=dummy docker context use default
default
$ docker context ls
NAME             TYPE   DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT   ORCHESTRATOR
default *        moby   Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                         swarm
myecscontext     ecs    credentials read from environment
$ docker context rm myecscontext
myecscontext
Modify and export the environment variables on Linux or macOS as described in Environment variables to configure the AWS CLI:
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-west-2
Later, you can see the options with this command:
docker context create ecs --help
Description: Create a context for Amazon ECS
Usage:
  docker context create ecs CONTEXT [flags]
Flags:
      --access-keys string   Use AWS access keys from file
      --description string   Description of the context
      --from-env              Use AWS environment variables for profile, or credentials and region
  -h, --help                  Help for ecs
      --local-simulation      Create context for ECS local simulation endpoints
      --profile string        Use an existing AWS profile
Answer: docker context create ecs CONTEXT --from-env
I had to re-source my terminal with source ~/.bash_profile because I was using .bashrc as my source file, and that did not have my environment variables in it.
I've handled this situation by setting environment variables. You can use this guide for this.

AWS - can we set Environment variable to read inside Docker?

Can we set an environment variable for an EC2 instance from the AWS console?
Or do we need to use any other service to achieve this?
Also, can we load this variable inside Docker?
That appears to be out of scope for the AWS console. EC2, however, is basically AWS-hosted VMs, so anything you could normally do on that OS can be done.
Just connect to the machine and do what you would normally do. (See the AWS CLI Guide for further help)
Using the AWS Console:
Stop the EC2 instance.
For that EC2 instance, from the top dropdown select Actions > Instance Settings > View/Change User Data.
Create a script that adds the required environment variables to the init scripts:
#!/bin/sh
echo 'export MY_ENV_VARIABLE=value' >> ~/.bashrc
or add a script to /etc/init.d or another similar location.
Start the EC2 instance; the environment variables will then be available.
If you want to pass the EC2 instance environment variable to the Docker container environment variable:
When you start a Docker container through docker run, you can pass the value with the --env (-e) parameter on the command line,
e.g. docker run -e DOCKER_ENV_VARIABLE=$MY_ENV_VARIABLE ...
https://docs.docker.com/engine/reference/commandline/run/#options
You can use user_data to load it and afterwards pass it to your Docker container using docker run -e CONTAINER_ENV_VAR=$EC2_ENV_VAR, or put it in your Dockerfile.
For keys or any sensitive data, I would advise you to use the Parameter Store; you can put passwords, users, or whatever data you want there and read it with a tool called chamber.
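For instance, a sketch only (the service name my-service, the variable DB_PASSWORD, and the image name are hypothetical): chamber reads values from Parameter Store and exposes them as environment variables to the wrapped command:
# docker run -e DB_PASSWORD (no value) forwards the variable from the host
# environment, which chamber populated from Parameter Store.
chamber exec my-service -- docker run -e DB_PASSWORD my-image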

How to provide argument to a task (container) on its launch on Amazon ECS using AWS Fargate

Here is an example where I can run (locally) multiple instances of a Docker image with different arguments to the echo command and to the AUTHOR environment variable.
NAME=Peter
docker run alpine echo "Hello $NAME"
docker run --name static-site -e AUTHOR="$NAME" -d -P dockersamples/static-site
Here I can programmatically change the value of $NAME and run the containers.
I want the same flexibility while I try to run those containers on Amazon ECS using the AWS Fargate launch type. Alas, I am unable to figure out how I can programmatically provide different values to the $NAME variable.
An ECS task acts like the docker run command, so when you're creating a task, at the bottom you can add container details; in there you have environment variables.
So when you're deploying programmatically, you have to create a new task per container.
You can override the environment variable settings when you run an existing task or create a new version of an existing task. If you click on 'create new revision' in the Task Definitions tab on the ECS Dashboard, you'll see that it lets you edit your container definitions as well. Navigate to Advanced container configuration > ENVIRONMENT and in there you should be able to add new env vars or update existing ones.
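For example, a sketch assuming the AWS CLI plus placeholder cluster, subnet, and task definition names, and the static-site container from the question; the environment can be overridden per run:
aws ecs run-task \
  --cluster my-cluster \
  --launch-type FARGATE \
  --task-definition static-site-task \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],assignPublicIp=ENABLED}' \
  --overrides '{"containerOverrides":[{"name":"static-site","environment":[{"name":"AUTHOR","value":"Peter"}]}]}'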

How to install and enable a service in amazon Elastic Beanstalk?

I'm banging my head against a wall trying to both install and then enable a service in Elastic Beanstalk. What I want to do is:
Install a service in /etc/init.d that points to my python app in /opt/python/current/app/
Have Elastic Beanstalk start and keep-alive the service, as specified in an .ebextensions/myapp.config file.
(Reference: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#customize-containers-format-services)
Here's my .ebextensions/myapp.config file:
container_commands:
  01_copy_service:
    command: "cp /opt/python/ondeck/app/my_service /etc/init.d/"
  02_chmod_service:
    command: "chmod +x /etc/init.d/my_service"
services:
  sysvinit:
    my_service:
      enabled: true
      ensureRunning: true
      files: [/etc/init.d/my_service]
This fails because services are run before container_commands. If I comment out services, deploy, then uncomment services, then deploy again, it will work. But I want to have a single-step deploy, because this will be an auto-scaling node.
Is there a solution? Thanks!
Nate, I have the exact same scenario as you and I solved it this way:
Drop the "services" section and add a "restart" command.
container_commands:
  ...
  03_restart_service:
    command: /sbin/service my_service restart
You can cause the service to restart after a command is run by using a commands: key under the services: key. The documentation for the services: key is here:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#customize-containers-format-services
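A rough sketch of what that might look like, based on the linked documentation (whether the 01_copy_service container command from the question can be referenced this way is an assumption I have not verified):
services:
  sysvinit:
    my_service:
      enabled: true
      ensureRunning: true
      # Restart my_service whenever the named command runs (name assumed here).
      commands:
        - 01_copy_service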
I haven't done it myself, but I want to give you some ideas which should work. It's just a matter of convenience and workflow.
Since it is not really an application file, but rather an EC2 file, and it is unlikely to change often, you can do one of the following:
Use the files key's content to create the service init script (see the sketch after this list). You can even have a dedicated config file just for that script.
Store the service init script on S3 and copy its contents down with a command.
Create a dummy service script, replace its contents with the deployed one via a container command, and add a dependency on that command to the service.
(this one is heavy) Create a custom AMI and specify it in the Auto Scaling configuration.
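A sketch of the first option using the files key (the script body is just a placeholder):
files:
  "/etc/init.d/my_service":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/sh
      # start/stop logic for my_service goes here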
Hope it helps.