AWS Fargate modify /etc/hosts on startup

I am calling an external API, and the only way to reach it is with a hosts file entry. Right now I use ECS Exec to add the entry manually. I want to automate this so that when the service autoscales I don't have to ECS Exec into each task to add the hosts file entry.
Below is the part of my task definition that has entryPoint / command. They are both empty. I believe I can use one of them to do this, but I'm not 100% sure.
"entryPoint": null,
"portMappings": [
{
"hostPort": 8000,
"protocol": "tcp",
"containerPort": 8000
}
],
"command": null,

Posting this for the sake of others with similar needs. There are three ways to make this work.
As the commenters have pointed out, Route 53 private hosted zones (costs money and affects the entire VPC, not just your containers); a CLI sketch follows this list
Create a startup script (e.g. RunStartUp.sh) with the logic to append to the hosts file, add it to your container image, and invoke it from the Dockerfile
ADD RunStartUp.sh RunStartUp.sh
CMD["./RunStartUp.sh"]
Directly add the hosts entry using entryPoint/command in the task definition JSON (in practice, chain your normal start command after the echo, e.g. with &&, so the container still runs your application)
"entryPoint": [
"sh",
"-c"
],
"command": [
"echo '122.123.423.12 google.com' >> /etc/hosts"
]
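For option 1, a rough AWS CLI sketch (the VPC ID, hosted zone ID, and IP address are placeholders, not values from the post):

# Create a private hosted zone attached to the VPC so the name resolves VPC-wide
aws route53 create-hosted-zone \
  --name google.com \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0123456789abcdef0 \
  --caller-reference "hosts-override-$(date +%s)"

# Upsert an A record pointing at the address you would otherwise put in /etc/hosts
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"google.com","Type":"A","TTL":60,"ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'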

Related

How to provide a config file to a Fargate Task?

What is the easiest way to provide one or several external configuration file(s) to an app running as an AWS Fargate task?
The files cannot be part of the Docker image because they depend on the stage environment and may contain secrets.
Creating an EFS volume just for this seems to be overengineered (we only need read access to some kb of properties).
Using the AWS SDK to access an S3 bucket at startup means that the app has a dependency on the SDK, and one has to manage S3 buckets.*
Using AWS AppConfig would still require the app to use the AWS SDK to access the config values.*
Having hundreds of key-value pairs in the Parameter Store would be ugly.
*It is an important aspect of our applications to not depend on the AWS SDK, because we need to be able to deploy to different cloud platforms, so solutions that avoid this are preferable.
It would be nice to just be able to define this in the task definition, so that Fargate mounts a couple of files in the container. Is this or a similar low-key solution available?
There's a specific feature of AWS Systems Manager for that purpose, called AWS AppConfig. It helps you deploy application configuration just like code deployments, but without the need to re-deploy the code if a configuration value changes.
The following article illustrates the integration between containers and AWS AppConfig: Application configuration deployment to container workloads using AWS AppConfig.
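One way to keep the AWS SDK out of the application (an assumption on my part, along the lines of what the linked article covers, not something stated in this answer) is to run the AWS AppConfig agent as a sidecar container and have the app fetch configuration over plain HTTP. A minimal sketch, assuming the agent's default port 2772 and placeholder application/environment/profile names:

# Fetch the latest configuration from the AppConfig agent sidecar over localhost
curl -s "http://localhost:2772/applications/my-app/environments/prod/configurations/my-profile" \
  -o /data/config/app.properties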
Not an answer to the question, but in case someone comes here looking for solutions: we had the same requirements but did not find an easy way to deploy a configuration file directly onto the ECS instance for the container to read. I'm sure it's possible, it just would have been difficult to configure, so we did not consider it worth the effort.
What we did:
Added EnvironmentConfigBuilder as described in the MS docs here
Passed in configuration values using environment variables as described in the AWS docs here.
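As a rough sketch of that second step (the names and parameter ARN are placeholders, not from the original post): plain values go under environment, and sensitive values can be injected from SSM Parameter Store via secrets, provided the task execution role is allowed to read the parameter.

"containerDefinitions": [
  {
    "name": "app",
    "environment": [
      { "name": "Stage", "value": "production" }
    ],
    "secrets": [
      { "name": "DbPassword", "valueFrom": "arn:aws:ssm:eu-west-1:123456789012:parameter/app/db-password" }
    ]
  }
]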
You can specify your AWS AppConfig dependency as a separate container. AWS gives you the option to set container dependency execution conditions in your Task Definition. See: https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerDependency.html
You could set the container dependency condition to COMPLETE for the container that pulls the config files from AppConfig and then just treat the files as a dumb mount, separating the AWS dependency completely. For example:
"containerDefinitions": [
{
"name": "app-config-script",
"image": "1234567890.dkr.ecr.SOME_REGION.amazonaws.com/app-config-script:ver",
"essential": false,
"mountPoints": [
{
"sourceVolume": "config",
"containerPath": "/data/config/nginx",
"readOnly": ""
}
],
"dependsOn": null,
"repositoryCredentials": {
"credentialsParameter": ""
}
},
{
"name": "nginx",
"image": "nginx",
"essential": true,
"portMappings": [
{
"containerPort": "80",
"protocol": "tcp"
},
{
"containerPort": "443",
"protocol": "tcp"
}
],
"mountPoints": [
{
"sourceVolume": "config",
"containerPath": "/etc/nginx",
"readOnly": true
}
],
"dependsOn": [
{
"containerName": "app-config-script",
"condition": "COMPLETE"
}
],
"repositoryCredentials": {
"credentialsParameter": ""
}
}
],
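For the mountPoints above to resolve, the task definition also needs a matching task-level volume named config. A minimal sketch (no host path, so it is an ephemeral volume shared between the two containers for the life of the task):

"volumes": [
  {
    "name": "config"
  }
]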
Your Entrypoint/CMD script in the bootstrap container would then be something like:
#!/bin/sh
# Start an AppConfig configuration session and capture the initial token (requires the AWS CLI and jq)
token=$(aws appconfigdata start-configuration-session --application-identifier "${APPLICATION_ID}" --environment-identifier "${ENVIRONMENT_ID}" --configuration-profile-identifier "${CONFIGURATION_ID}" | jq -r .InitialConfigurationToken)
# Write the latest configuration to the shared volume for the nginx container to pick up
aws appconfigdata get-latest-configuration --configuration-token "${token}" /data/config/nginx/nginx.conf

Docker links with awsvpc network mode

I have a Java webapp deployed in ECS using the tomcat:8.5-jre8-alpine image. The network mode for this task is awsvpc; I have many of these tasks running across 3 EC2 instances fronted by an ALB.
This is working fine but now I want to add an nginx reverse-proxy in front of each tomcat container, similar to this example: https://github.com/awslabs/ecs-nginx-reverse-proxy/tree/master/reverse-proxy.
My abbreviated container definition file is:
{
"containerDefinitions": [
{
"name": "nginx",
"image": "<NGINX reverse proxy image URL>",
"memory": "256",
"cpu": "256",
"essential": true,
"portMappings": [
{
"containerPort": "80",
"protocol": "tcp"
}
],
"links": [
"app"
]
},
{
"name": "app",
"image": "<app image URL>",
"memory": "1024",
"cpu": "1024",
"essential": true
}
],
"volumes": [],
"networkMode": "awsvpc",
"placementConstraints": [],
"family": "application-stack"
}
When I try to save the new task definition I receive the error: "links are not supported when the network type is awsvpc"
I am using the awsvpc network mode because it gives me granular control over the inbound traffic via a security group.
Is there any way to create a task definition with 2 linked containers when using awsvpc network mode?
You don't need the linking part at all, because awsvpc lets you reach other containers in the same task simply by using
localhost:8080
(or whatever port your other container is mapped to)
in your nginx config file.
So remove links from your JSON and use localhost:{container port} in the nginx config. Simple as that.
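A minimal sketch of what that looks like, assuming the tomcat container listens on 8080 (the port and file path are illustrative, not from the answer):

# Minimal nginx server block proxying to the app container; in awsvpc mode
# all containers in a task share the same network namespace, so localhost works
cat > /etc/nginx/conf.d/app.conf <<'EOF'
server {
    listen 80;
    location / {
        proxy_pass http://localhost:8080;
    }
}
EOF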
Actually, if you want to use a reverse proxy you can stop using links, because you can rely on service discovery or let the reverse proxy route traffic to your dependency.
If you still want link-like behaviour instead of that reverse proxy, you can use Consul and Fabio; both can run as Docker containers.
With this there is no need for awsvpc, and you can use Consul for service discovery.
Hope it helps!

Mounting an Elastic File System to an AWS Batch Compute Environment

I'm trying to get my Elastic File System (EFS) mounted in my Docker container so it can be used with AWS Batch. Here is what I did:
Create a new AMI that is optimized for Elastic Container Service (ECS). I followed this guide here to make sure it had ECS on it. I also put the mount into the /etc/fstab file and verified that my EFS was being mounted (/mnt/efs) after reboot.
Tested an EC2 instance with my new AMI and verified I could pull the docker container and pass it my mount point via
docker run --volume /mnt/efs:/home/efs -it mycontainer:latest
Interactively running the Docker image shows me my data inside EFS
Set up a new compute environment with my new AMI that mounts EFS on boot.
Create a job definition file:
{
"jobDefinitionName": "MyJobDEF",
"jobDefinitionArn": "arn:aws:batch:us-west-2:#######:job-definition/Submit:8",
"revision": 8,
"status": "ACTIVE",
"type": "container",
"parameters": {},
"retryStrategy": {
"attempts": 1
},
"containerProperties": {
"image": "########.ecr.us-west-2.amazonaws.com/mycontainer",
"vcpus": 1,
"memory": 100,
"command": [
"ls",
"/home/efs",
],
"volumes": [
{
"host": {
"sourcePath": "/mnt/efs"
},
"name": "EFS"
}
],
"environment": [],
"mountPoints": [
{
"containerPath": "/home/efs",
"readOnly": false,
"sourceVolume": "EFS"
}
],
"ulimits": []
}
}
Run Job, view log
Anyway, while it does not say "no file /home/efs found", it does not list anything in my EFS, which is populated; I'm interpreting this as the container mounting an empty EFS. What am I doing wrong? Is my AMI not mounting the EFS in the compute environment?
I covered this in a recent blog post
https://medium.com/arupcitymodelling/lab-note-002-efs-as-a-persistence-layer-for-aws-batch-fcc3d3aabe90
You need to set up a launch template for your batch instances, and you need to make sure that your subnets/security groups are configured properly.
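As a rough sketch of the user data such a launch template might carry (the filesystem ID is a placeholder; AWS Batch launch templates expect this wrapped in MIME multi-part user data, only the shell part is shown):

#!/bin/bash
# Install the EFS mount helper and mount the filesystem where the job definition expects it
yum install -y amazon-efs-utils
mkdir -p /mnt/efs
mount -t efs fs-0123456789abcdef0:/ /mnt/efs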

ECS task_definition environment variable needs IP address

So I have two container definitions for a service that I am trying to run on ECS. One of the containers (Kafka) requires the IP address of the other (Zookeeper). In the pure Docker world we can achieve this using the name of the container; however, in AWS the container name gets a suffix appended by AWS to create a unique name, so how do we achieve the same behaviour?
Currently my Terraform task definitions look like:
[
{
"name": "${service_name}",
"image": "zookeeper:latest",
"cpu": 1024,
"memory": 1024,
"essential": true,
"portMappings": [
{ "containerPort": ${container_port}, "protocol": "tcp" }
],
"networkMode": "awsvpc"
},
{
"name": "kafka",
"image": "ches/kafka:latest",
"environment": [
{ "name": "ZOOKEEPER_IP", "value": "${service_name}" }
],
"cpu": 1024,
"memory": 1024,
"essential": true,
"networkMode": "awsvpc"
}
]
I don't know enough about the rest of the setup to give really concrete advice, but there are a few options:
Put both containers in the same task, and use links between them
Use Route 53 auto naming to get DNS names for each service task and specify those in the task definition environment; this is also described as ECS service discovery
Put the service tasks behind a load balancer, use DNS names from Route 53 (and possibly host matching on the load balancer), and specify the DNS names in the task definition environment
Consider using some kind of service discovery / service mesh framework (Consul, for instance)
There are posts describing some of the alternatives. Here's one:
How to setup service discovery in Amazon ECS
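For the service discovery option, a rough AWS CLI sketch (the namespace, VPC ID, cluster, and ARNs are placeholders; the --network-configuration required for awsvpc services is omitted for brevity):

# Create a private DNS namespace in the VPC (ECS service discovery / Cloud Map)
aws servicediscovery create-private-dns-namespace \
  --name ecs.internal \
  --vpc vpc-0123456789abcdef0

# Register a discovery service for zookeeper...
aws servicediscovery create-service \
  --name zookeeper \
  --dns-config 'NamespaceId=ns-0123456789abcdef,DnsRecords=[{Type=A,TTL=60}]'

# ...and attach it to the ECS service; Kafka can then use zookeeper.ecs.internal as ZOOKEEPER_IP
aws ecs create-service \
  --cluster my-cluster \
  --service-name zookeeper \
  --task-definition zookeeper:1 \
  --service-registries 'registryArn=arn:aws:servicediscovery:us-east-1:123456789012:service/srv-0123456789abcdef'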

AWS ECS Service for Wordpress

I created a service for wordpress on AWS ECS with the following container definitions
{
"containerDefinitions": [
{
"name": "wordpress",
"links": [
"mysql"
],
"image": "wordpress",
"essential": true,
"portMappings": [
{
"containerPort": 0,
"hostPort": 80
}
],
"memory": 250,
"cpu": 10
},
{
"environment": [
{
"name": "MYSQL_ROOT_PASSWORD",
"value": "password"
}
],
"name": "mysql",
"image": "mysql",
"cpu": 10,
"memory": 250,
"essential": true
}
],
"family": "wordpress"
}
Then went over to the public IP and completed the Wordpress installation. I also added a few posts.
But now, when I update the service to use an updated task definition (updated mysql container image)
"image": "mysql:latest"
I lose all the posts and data I created, and Wordpress prompts me to install again.
What am I doing wrong?
I also tried to use host volumes, but to no avail - it creates a bind mount and a Docker-managed volume (did a docker inspect on the container).
So, every time I update the task it resets Wordpress.
If your container needs access to the original data each time it starts, you require a file system that your containers can connect to regardless of which instance they’re running on. That’s where EFS comes in.
EFS allows you to persist data onto a durable shared file system that all of the ECS container instances in the ECS cluster can use.
Step-by-step Instructions to Setup an AWS ECS Cluster
Using Data Volumes in Tasks
Using Amazon EFS to Persist Data from Amazon ECS Containers
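A minimal sketch of what that looks like in a task definition (the filesystem ID is a placeholder, and mounting the WordPress content directory, plus /var/lib/mysql for the database container, is my illustration rather than something from the linked guides):

"volumes": [
  {
    "name": "wordpress-data",
    "efsVolumeConfiguration": {
      "fileSystemId": "fs-0123456789abcdef0",
      "rootDirectory": "/wordpress"
    }
  }
],
"containerDefinitions": [
  {
    "name": "wordpress",
    "mountPoints": [
      {
        "sourceVolume": "wordpress-data",
        "containerPath": "/var/www/html",
        "readOnly": false
      }
    ]
  }
]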