Docker links with awsvpc network mode

I have a Java webapp deployed in ECS using the tomcat:8.5-jre8-alpine image. The network mode for this task is awsvpc; I have many of these tasks running across 3 EC2 instances fronted by an ALB.
This is working fine but now I want to add an nginx reverse-proxy in front of each tomcat container, similar to this example: https://github.com/awslabs/ecs-nginx-reverse-proxy/tree/master/reverse-proxy.
My abbreviated container definition file is:
{
  "containerDefinitions": [
    {
      "name": "nginx",
      "image": "<NGINX reverse proxy image URL>",
      "memory": 256,
      "cpu": 256,
      "essential": true,
      "portMappings": [
        {
          "containerPort": 80,
          "protocol": "tcp"
        }
      ],
      "links": [
        "app"
      ]
    },
    {
      "name": "app",
      "image": "<app image URL>",
      "memory": 1024,
      "cpu": 1024,
      "essential": true
    }
  ],
  "volumes": [],
  "networkMode": "awsvpc",
  "placementConstraints": [],
  "family": "application-stack"
}
When I try to save the new task definition, I receive the error: "links are not supported when the network type is awsvpc".
I am using the awsvpc network mode because it gives me granular control over the inbound traffic via a security group.
Is there any way to create a task definition with 2 linked containers when using awsvpc network mode?

You don't need the links at all. With the awsvpc network mode, all containers in a task share the same network namespace, so you can reference another container simply by using
localhost:8080
(or whatever port the other container listens on)
in your nginx config file.
So remove links from your JSON and use localhost:{container port} in the nginx config. Simple as that.
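As a minimal sketch (assuming the Tomcat container listens on its default port 8080), the nginx server block could look like this:
server {
    listen 80;

    location / {
        # Forward to the app container in the same task; with awsvpc
        # both containers share the task's network namespace.
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}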

Actually, if you want to use a reverse proxy you can stop using links altogether: either rely on service discovery, or let the reverse proxy itself route traffic to your dependency.
If you still want link-style behaviour instead of a reverse proxy, you can use Consul and Fabio; both run well as containers.
With this approach there is no need for awsvpc at all, and Consul handles the service discovery.
Hope it helps!

Related

How to scale ECS sidecar containers independently

How can I create and independently scale sidecar containers in ECS Fargate using the AWS console?
The task creation step allows adding multiple containers with different CPU and memory configurations but not an independent scaling option. On the other hand, the ECS Service launch allows the option to scale only at the task level. Also, ECS doesn't clearly mention how a container can be specified as a sidecar.
You can't independently scale a sidecar in ECS. The unit of scaling in ECS is at the task level.
You can specify the cpu and memory of the Fargate task (e.g. 512/1024) - these are the resources assigned to that task and what you will pay for on your bill.
Within that task you can have 1..n containers - each can have its own cpu and memory configuration, but these are assigned within the constraints of the task: the combined cpu/memory values for all containers cannot exceed those assigned to the task (e.g. you couldn't have a 512/1024 task and assign 2048 memory to a container within it).
This effectively allows you to give weighting to the containers in your task, e.g. giving an nginx sidecar less of a weighting than your main application.
ECS doesn't clearly mention how a container can be specified as a sidecar.
A 'sidecar' is just a container that shares resources (network, disk etc) with another container. Creating a task definition with 2 containers gives you a sidecar.
Below is a sample task-definition that has nginx fronting a Flask app. It contains:
A Flask app listening on port 5000.
An nginx container listening on port 80. This has an env var upstream=http://localhost:5000. As both containers share the same network namespace they can communicate via localhost (so nginx can forward to the Flask app).
Both have access to a shared volume ("shared-volume").
{
  "containerDefinitions": [
    {
      "name": "main-app",
      "image": "ghcr.io/my-flask-app",
      "portMappings": [
        {
          "containerPort": 5000,
          "hostPort": 5000,
          "protocol": "tcp"
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "shared-volume",
          "containerPath": "/scratch"
        }
      ]
    },
    {
      "name": "sidecar",
      "image": "ghcr.io/my-nginx",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80,
          "protocol": "tcp"
        }
      ],
      "environment": [
        {
          "name": "upstream",
          "value": "http://localhost:5000"
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "shared-volume",
          "containerPath": "/scratch"
        }
      ]
    }
  ],
  "networkMode": "awsvpc",
  "revision": 1,
  "volumes": [
    {
      "name": "shared-volume",
      "host": {}
    }
  ],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "cpu": "512",
  "memory": "1024"
}
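How nginx consumes that upstream env var is left open above. One hedged sketch, assuming the sidecar is built on the official nginx image (which runs envsubst over /etc/nginx/templates/*.template at startup), would be a template like:
# /etc/nginx/templates/default.conf.template
server {
    listen 80;

    location / {
        # ${upstream} is replaced with the env var value (http://localhost:5000)
        # by the image's envsubst step before nginx starts.
        proxy_pass ${upstream};
    }
}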

AWS Fargate modify /etc/hosts on startup

I am calling an external API, and the only way to reach it is with a hosts file entry. Right now I use ECS Exec to add the entry by hand. I want to automate this so that when the service autoscales I don't have to ECS Exec into each task to add the hosts file entry.
Below is the part of my task definition that has entryPoint/command. Both are empty. I believe I can use one of them to do this, but I'm not 100% sure.
"entryPoint": null,
"portMappings": [
  {
    "hostPort": 8000,
    "protocol": "tcp",
    "containerPort": 8000
  }
],
"command": null,
Posting this for the sake of others with similar needs. There are three ways to make this work.
As the commenters have pointed out, Route 53 private hosted zones (costs money and affects the entire VPC, not just your containers).
Create a startup script with the logic to append the hosts entry, add it to your container image, and invoke it from the Dockerfile (a sketch of the script follows the snippet below):
ADD RunStartUp.sh RunStartUp.sh
CMD ["./RunStartUp.sh"]
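A minimal sketch of RunStartUp.sh; the final exec line is an assumption, since what the image normally runs isn't shown in the question:
#!/bin/sh
# Append the required hosts entry at container startup.
echo "122.123.423.12 google.com" >> /etc/hosts
# Hand off to the real application process (hypothetical command -
# replace with whatever your image's entrypoint actually is).
exec ./start-app.sh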
Directly add the hosts entry using entryPoint/command in the task definition JSON:
"entryPoint": [
  "sh",
  "-c"
],
"command": [
  "echo '122.123.423.12 google.com' >> /etc/hosts"
]
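Note that overriding entryPoint/command replaces the image's normal startup, so in practice you would chain the application's start command after the echo. A hedged sketch, where ./start-app.sh is a hypothetical stand-in for your image's real command:
"entryPoint": [
  "sh",
  "-c"
],
"command": [
  "echo '122.123.423.12 google.com' >> /etc/hosts && exec ./start-app.sh"
]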

How to provide a config file to a Fargate Task?

What is the easiest way to provide one or several external configuration file(s) to an app running as an AWS Fargate task?
The files cannot be part of the Docker image because they depend on the stage environment and may contain secrets.
Creating an EFS volume just for this seems over-engineered (we only need read access to a few KB of properties).
Using the AWS SDK to access an S3 bucket at startup means that the app has a dependency on the SDK, and one has to manage S3 buckets.*
Using AWS AppConfig would still require the app to use the AWS SDK to access the config values.*
Having hundreds of key-value pairs in the Parameter Store would be ugly.
*It is an important aspect of our applications to not depend on the AWS SDK, because we need to be able to deploy to different cloud platforms, so solutions that avoid this are preferable.
It would be nice to be able to define this directly in the task definition, so that Fargate mounts a couple of files into the container. Is this or a similarly lightweight solution available?
There's a specific feature of AWS Systems Manager for that purpose, called AWS AppConfig. It helps you deploy application configuration just like code deployments, but without the need to re-deploy the code if a configuration value changes.
The following article illustrates the integration between containers and AWS AppConfig: Application configuration deployment to container workloads using AWS AppConfig.
Not an answer to the question, but in case someone comes here looking for solutions: we had the same requirements but did not find an easy way to deploy a configuration file directly on the ECS instance for the container to read. I'm sure it's possible; it just looked difficult to configure, so we didn't consider it worth the effort.
What we did:
Added EnvironmentConfigBuilder as described in the MS docs here
Passed in configuration values using environment variables as described in AWS docs here.
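For illustration, a hedged sketch of what that can look like in a container definition; the variable names and the SSM parameter ARN are placeholders, and the secrets block uses ECS's built-in Parameter Store integration so the app itself never touches the AWS SDK:
"environment": [
  { "name": "LOG_LEVEL", "value": "info" }
],
"secrets": [
  {
    "name": "DB_PASSWORD",
    "valueFrom": "arn:aws:ssm:eu-west-1:123456789012:parameter/myapp/prod/db-password"
  }
]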
You can specify your AWS AppConfig dependency as a separate container. AWS gives you the option to set container dependency execution conditions in your Task Definition. See: https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerDependency.html
You could set the container dependency condition to COMPLETE for the container that pulls the config files from AppConfig and then just treat the files as a dumb mount, separating the AWS dependency completely. For example:
"containerDefinitions": [
  {
    "name": "app-config-script",
    "image": "1234567890.dkr.ecr.SOME_REGION.amazonaws.com/app-config-script:ver",
    "essential": false,
    "mountPoints": [
      {
        "sourceVolume": "config",
        "containerPath": "/data/config/nginx",
        "readOnly": false
      }
    ],
    "dependsOn": null,
    "repositoryCredentials": {
      "credentialsParameter": ""
    }
  },
  {
    "name": "nginx",
    "image": "nginx",
    "essential": true,
    "portMappings": [
      {
        "containerPort": 80,
        "protocol": "tcp"
      },
      {
        "containerPort": 443,
        "protocol": "tcp"
      }
    ],
    "mountPoints": [
      {
        "sourceVolume": "config",
        "containerPath": "/etc/nginx",
        "readOnly": true
      }
    ],
    "dependsOn": [
      {
        "containerName": "app-config-script",
        "condition": "COMPLETE"
      }
    ],
    "repositoryCredentials": {
      "credentialsParameter": ""
    }
  }
],
Your Entrypoint/CMD script in the bootstrap container would then be something like:
#!/bin/sh
# Start an AppConfig session, then fetch the latest configuration and
# write it to the volume shared with the nginx container.
token=$(aws appconfigdata start-configuration-session --application-identifier "${APPLICATION_ID}" --environment-identifier "${ENVIRONMENT_ID}" --configuration-profile-identifier "${CONFIGURATION_ID}" | jq -r .InitialConfigurationToken)
aws appconfigdata get-latest-configuration --configuration-token "${token}" /data/config/nginx/nginx.conf

ECS task_definition environment variable needs IP address

So I have two container definitions for a service that I am trying to run on ECS. One of the services (Kafka) requires the IP address of the other service (Zookeeper). In the pure Docker world we can achieve this by using the container name, but in AWS the container name has a suffix appended by AWS to make it unique, so how do we achieve the same behaviour?
Currently my Terraform task definitions look like:
[
  {
    "name": "${service_name}",
    "image": "zookeeper:latest",
    "cpu": 1024,
    "memory": 1024,
    "essential": true,
    "portMappings": [
      { "containerPort": ${container_port}, "protocol": "tcp" }
    ],
    "networkMode": "awsvpc"
  },
  {
    "name": "kafka",
    "image": "ches/kafka:latest",
    "environment": [
      { "name": "ZOOKEEPER_IP", "value": "${service_name}" }
    ],
    "cpu": 1024,
    "memory": 1024,
    "essential": true,
    "networkMode": "awsvpc"
  }
]
I don't know enough about the rest of the setup to give really concrete advice, but there are a few options:
Put both containers in the same task; with awsvpc they share the task's network namespace and can reach each other over localhost (links are only supported in bridge mode)
Use Route 53 auto naming to get DNS names for each service task and specify those in the task definition environment, also described as ECS service discovery (see the sketch after this list)
Put the service tasks behind a load balancer, use DNS names from Route 53 and possibly host matching on the load balancer, and specify the DNS names in the task definition environment
Consider using some kind of service discovery / service mesh framework (Consul, for instance)
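For the second option, a minimal sketch: assuming a Cloud Map namespace example.local (a placeholder) and a Zookeeper service registered there as zookeeper, the Kafka container's environment would simply point at the discovery DNS name instead of an IP:
"environment": [
  { "name": "ZOOKEEPER_IP", "value": "zookeeper.example.local" }
]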
There are posts describing some of the alternatives. Here's one:
How to setup service discovery in Amazon ECS

AWS ECS Service for Wordpress

I created a service for WordPress on AWS ECS with the following container definitions:
{
  "containerDefinitions": [
    {
      "name": "wordpress",
      "links": [
        "mysql"
      ],
      "image": "wordpress",
      "essential": true,
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80
        }
      ],
      "memory": 250,
      "cpu": 10
    },
    {
      "environment": [
        {
          "name": "MYSQL_ROOT_PASSWORD",
          "value": "password"
        }
      ],
      "name": "mysql",
      "image": "mysql",
      "cpu": 10,
      "memory": 250,
      "essential": true
    }
  ],
  "family": "wordpress"
}
Then I went over to the public IP and completed the WordPress installation. I also added a few posts.
But now, when I update the service to use an updated task definition (updated mysql container image)
"image": "mysql:latest"
I lose all the posts and data I created, and WordPress prompts me to install again.
What am I doing wrong?
I also tried to use host volumes, but to no avail; it creates a bind mount and a Docker-managed volume (I ran docker inspect on the container).
So every time I update the task, it resets WordPress.
If your container needs access to the original data each time it starts, you require a file system that your containers can connect to regardless of which instance they're running on. That's where EFS comes in.
EFS allows you to persist data onto a durable shared file system that all of the ECS container instances in the ECS cluster can use.
Step-by-step Instructions to Setup an AWS ECS Cluster
Using Data Volumes in Tasks
Using Amazon EFS to Persist Data from Amazon ECS Containers
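As a minimal sketch of the relevant task definition pieces (the file system ID is a placeholder, and mounting /var/lib/mysql assumes the mysql image's default data directory):
"containerDefinitions": [
  {
    "name": "mysql",
    "image": "mysql",
    "essential": true,
    "mountPoints": [
      {
        "sourceVolume": "mysql-data",
        "containerPath": "/var/lib/mysql"
      }
    ]
  }
],
"volumes": [
  {
    "name": "mysql-data",
    "efsVolumeConfiguration": {
      "fileSystemId": "fs-12345678",
      "rootDirectory": "/"
    }
  }
]
With this, the database files live on EFS rather than in the container's writable layer, so they survive task replacement when you roll out a new task definition.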