AWS EC2 Container program command line arguments

I am trying to run a Play Framework application on AWS EC2 Containers. I am using sbt-ecr to build and upload the image.
Now I would like to pass different command line parameters to Play, for instance -Dconfig.resource=production.conf.
Usually when I run it locally my command looks like this:
docker run -p 80:9000 myimage -Dconfig.resource=production.conf
The port settings can be configured separately in AWS. How can I set Play's command line parameter for AWS EC2 containers?

Apparently my problem was of a completely different nature and had nothing at all to do with the entrypoint or cmd arguments.
The task didn't start because the log group configured for the container didn't exist.
Here is how to pass parameters to an image on ECS, just like on the command line or with the Docker CMD instruction: put them in the "Command" field in the "Environment" section of the container configuration, like so:
-Dconfig.resource=production.conf,-Dhttps.port=9443
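If you define the task outside the console, the same comma-separated arguments become the command array of the container definition, and the log group mentioned above has to exist (or be created) first. A minimal sketch with the AWS CLI, where the names, region and image URI are placeholders:
# All names, the region and the image URI below are placeholders -- adjust to your setup.
aws logs create-log-group --log-group-name /ecs/myimage
aws ecs register-task-definition \
  --family myimage-task \
  --container-definitions '[{
    "name": "myimage",
    "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/myimage:latest",
    "command": ["-Dconfig.resource=production.conf", "-Dhttps.port=9443"],
    "memory": 512,
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "/ecs/myimage",
        "awslogs-region": "eu-west-1",
        "awslogs-stream-prefix": "ecs"
      }
    }
  }]'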

Related

CDK Ec2 MultipartUserData control script order

I am using CDK to provision some EC2 instances and configure them using user data.
My user data consists of two parts:
a cloud-config file
a shell script
What I have been noticing is that the shell script executes before my cloud-config finishes, so the script fails because its dependencies have not finished downloading.
Is there a way to control the run order? The reason I did not do all the configuration in the cloud-config is that I need to pass some arguments to the script, which was easy using ec2.UserData.forLinux().addExecuteFileCommand.
const multipartUserData = new ec2.MultipartUserData();
// cloud-config part is added first...
multipartUserData.addUserDataPart(
  this.createBootstrapConfig(),
  'text/cloud-config; charset="utf8"'
);
// ...followed by the shell script part
multipartUserData.addUserDataPart(
  this.runInstallationScript(),
  'text/x-shellscript; charset="utf8"'
);
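One ordering-agnostic workaround is to make the shell-script part wait for whatever the cloud-config part downloads before it proceeds; a rough sketch, where the marker path and timeout are assumptions:
#!/bin/bash
# Wait up to ~5 minutes for a marker that the cloud-config part writes once its downloads finish.
# /opt/deps/.ready is an assumed path -- substitute the real artifact the script depends on.
for i in $(seq 1 60); do
  [ -f /opt/deps/.ready ] && break
  sleep 5
done
# ...then run the actual installation steps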

any "docker" command that i try to run on terminal throw this message "context requires credentials to be passed as environment variables"

I was reading the Docker documentation about deploying Docker containers on AWS ECS (https://docs.docker.com/cloud/ecs-integration/). After I ran the command docker context create ecs myecscontext and selected the option "AWS environment variables", every docker command that I try to run prints this message in my terminal: context requires credentials to be passed as environment variables. I've tried to set the AWS environment variables with the Windows set command, but it doesn't work.
I've used them like this:
set AWS_SECRET_ACCESS_KEY=any-value
set AWS_ACCESS_KEY_ID=any-value
I've been searching for how to solve this problem, and the only thing I've found is to set the environment variables like I've already done. What do I have to do?
UPDATE:
I've found another way to set environment variables on Windows on this site: https://www.tutorialspoint.com/how-to-set-environment-variables-using-powershell
Instead of using set, I had to use the $env:VARIABLE_NAME = 'any-value' syntax to actually update the variables.
Like this:
$env:AWS_ACCESS_KEY_ID = 'my-aws-access-key-id'
$env:AWS_SECRET_ACCESS_KEY = 'my-aws-secret-access-key'
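For completeness, on a Linux/macOS shell the equivalent would be exporting the variables before switching to the ECS context; the values below are placeholders:
# Placeholder credentials -- use your real IAM access key pair.
export AWS_ACCESS_KEY_ID=my-aws-access-key-id
export AWS_SECRET_ACCESS_KEY=my-aws-secret-access-key
docker context use myecscontext   # the context created with "docker context create ecs myecscontext"
docker context ls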

AWS - Activating conda environment with cloud-init (User Data Field)

We are trying to run batch scripts at launch on an AWS EC2 instance using user data (which I understand is based on cloud-init). Since the code runs in a conda environment, we are trying to activate it prior to running the Python/Pandas code. We noticed that the PATH variable isn't getting set correctly (even though it was set correctly prior to making the image, and is set correctly for all users after SSHing into the instance).
We've tried:
#!/bin/bash
source activate path/to/conda_env
bash path/to/script.sh
and
#!/bin/bash
conda run -n path/to/conda_env bash path/to/script.sh
Nothing appears to work. This code runs the script when SSHing into an EC2 instance, but not from EC2 cloud-init user data (launching the script at instance launch). I've verified that user data does run at launch by creating a simple text file from it, so user data itself is working when the instance starts...
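One detail that often matters here: user data runs as root in a non-login shell, so conda's shell setup is not on PATH. A minimal sketch, assuming a Miniconda install under /home/ec2-user/miniconda3 (adjust the paths to your image):
#!/bin/bash
# User data runs as root with a minimal PATH, so reference conda by absolute path.
# The install location below is an assumption -- point it at your actual conda install.
source /home/ec2-user/miniconda3/etc/profile.d/conda.sh
conda activate /path/to/conda_env
bash /path/to/script.sh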

Same Application in a docker compose configuration mapping different ports on AWS EC2 instance

The app has the following containers
php-fpm
nginx
local mysql
app's API
datadog container
In the dev process, many feature branches are created to add new features, such as:
app-feature1
app-feature2
app-feature3
...
I have one AWS EC2 instance per feature branch, running Docker Engine v18 and Docker Compose to build and run the Docker stack that makes up the PHP app.
To save operating costs, one AWS EC2 instance should host three feature branches at the same time. I was thinking there should be a custom docker-compose file with its own port mapping and Docker image tag for each feature branch.
The goal of this configuration is to be able to test 3 feature branches and access the app through different ports while saving money.
I also thought about using docker networks by keeping the same ports and using an nginx to redirect traffic to the different docker network ports.
What recommendations do you give?
One straightforward way I can think of in this case is to use an .env file for your docker-compose.
Your docker-compose.yaml file will look something like this:
...
  ports:
    - "${NGINX_PORT}:80"
...
  ports:
    - "${API_PORT}:80"
The .env file for each stack will look something like this:
NGINX_PORT=30000
API_PORT=30001
and
NGINX_PORT=30100
API_PORT=30101
for different projects.
Note:
.env must be in the same folder as your docker-compose.yaml.
Make sure the ports inside the .env files don't conflict with each other. You can adopt some kind of convention, like a port prefix per feature, e.g. feature1 gets ports starting with 301, i.e. 301xx.
In this way, your docker-compose.yaml can be as generic as you may like.
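To run several stacks side by side from the same compose file, each branch can then point docker-compose at its own env file and project name. A sketch, assuming a docker-compose version that supports --env-file and hypothetical file/project names:
# Hypothetical env-file and project names -- one pair per feature branch.
docker-compose --env-file .env.feature1 -p app-feature1 up -d
docker-compose --env-file .env.feature2 -p app-feature2 up -d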
You're making things harder than they have to be. Your app is containerized, so use a container orchestration system.
ECS is very easy to get going with. It's a JSON file that defines your deployment, basically analogous to docker-compose (they actually supported Compose files at some point; not sure if that feature stayed around). You can deploy an arbitrary number of services with different container images. We like to use a Terraform module with the image tag as a parameter, but it's easy enough to write a shell script or whatever.
Since you're trying to save money, create a single Application Load Balancer: each app gets a hostname, and each container gets a subpath. For short-lived feature-branch deployments, you can even deploy on Fargate and not have an ongoing server cost.
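The hostname-per-app part of that setup corresponds to host-header rules on the shared ALB listener; roughly something like the following, where every ARN and hostname is a placeholder:
# Placeholders throughout -- substitute your listener ARN, hostname and target group ARN.
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:...:listener/... \
  --priority 10 \
  --conditions Field=host-header,Values=feature1.example.com \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/...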
It turns out the solution involved capabilities built into docker-compose. In the Docker docs the concept is called "Multiple isolated environments on a single host".
To achieve this:
I used an .env file with quite a few env vars. The main one is CONTAINER_IMAGE_TAG, which holds the git branch ID that identifies the stack.
A separate docker-compose-dev file defines ports, image tags, and extra dev-related metadata.
Finally, the use of --project-name in the docker-compose command allows different stacks to coexist.
An example Bash function that wraps the docker-compose command:
docker_compose() {
  # Forward all arguments to docker-compose, namespacing the stack by branch tag.
  docker-compose -f docker/docker-compose.yaml -f docker/docker-compose-dev.yaml \
    --project-name "project${CONTAINER_IMAGE_TAG}" --project-directory . "$@"
}
The separation should be done in the image tags, container names, network names, volume names and project name.
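Usage then looks roughly like this, with the tag value coming from the per-branch .env file or the CI environment (the values here are assumptions):
# Hypothetical branch tag -- each tag yields an isolated project/stack on the same host.
export CONTAINER_IMAGE_TAG=feature1
docker_compose up -d --build
docker_compose ps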

Where is the "staging directory" on Elastic Beanstalk instances?

I'm trying to locate the staging directory as mentioned in the Elastic Beanstalk documentation.
The specified commands run as the root user, and are processed in alphabetical order by name. Container commands are run from the staging directory, where your source code is extracted prior to being deployed to the application server. Any changes you make to your source code in the staging directory with a container command will be included when the source is deployed to its final location.
For anyone else who ends up here, this worked for me:
/opt/elasticbeanstalk/bin/get-config platformconfig -k AppStagingDir
Amazon's documentation actually provides that exact command on this page (you have to expand the platformconfig – Constant configuration values section to see it).
The staging directory is /var/app/ondeck, or at least that is the case on the managed platform "Puma with Ruby 2.5 running on 64bit Amazon Linux/2.11.0".
To check your own, SSH into your Beanstalk instance and run:
/opt/elasticbeanstalk/lib/ruby/bin/get-config container -k app_staging_dir
You can also create a script in .ebextensions/ and issue a command that will be something like:
echo "We are here: $(pwd)"
You'll then be able to check eb-activity.log for that line.
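Putting the answers together, a quick check over SSH might look like this; which get-config binary exists depends on the platform generation, and the log path shown is the one used by the older Amazon Linux platforms:
# Try the Amazon Linux 2 location first, then fall back to the older platform's binary.
/opt/elasticbeanstalk/bin/get-config platformconfig -k AppStagingDir 2>/dev/null \
  || /opt/elasticbeanstalk/lib/ruby/bin/get-config container -k app_staging_dir
# If you used the .ebextensions echo approach, look for the line in the activity log.
grep "We are here" /var/log/eb-activity.log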