I am building a project using Terraform and AWS ECS with two containers: a Django app and Nginx (to serve the static files). Currently it works great; however, I am seeing an error in the Nginx logs (via CloudWatch Logs) saying:
CommandError: You must set settings.ALLOWED_HOSTS if DEBUG is False.
I know this has to do with Django's ALLOWED_HOSTS, since DEBUG is set to False in settings.py, but as far as I can tell everything is configured correctly. Here is what my settings.py has for ALLOWED_HOSTS:
ALLOWED_HOSTS = os.getenv('ALLOWED_HOSTS', '').split()
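For context, that line expects ALLOWED_HOSTS to arrive as a space-separated string; a minimal sketch of the behaviour (the host values here are only examples, not my real config):
import os

# space-separated value, e.g. what the ECS environment block would inject
os.environ['ALLOWED_HOSTS'] = '.example.org app.example.org'
print(os.getenv('ALLOWED_HOSTS', '').split())  # ['.example.org', 'app.example.org']

# if the variable is missing or empty, split() yields an empty list,
# which is what makes Django complain when DEBUG is False
print(''.split())  # []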
From there, here is my ECS task definition file, container-def.json:
[
{
"name": "django-app",
"image": "${django_docker_image}",
"cpu": 10,
"memory": 256,
"memoryReservation": 128,
"links": [],
"essential": true,
"portMappings": [
{
"hostPort": 0,
"containerPort": 8000,
"protocol": "tcp"
}
],
"command": ["gunicorn", "-w", "3", "-b", ":8000", "project.wsgi:application"],
"environment": [
{
"name": "RDS_DB_NAME",
"value": "${rds_db_name}"
},
{
"name": "RDS_USERNAME",
"value": "${rds_username}"
},
{
"name": "RDS_PASSWORD",
"value": "${rds_password}"
},
{
"name": "RDS_PORT",
"value": "5432"
},
{
"name": "ALLOWED_HOSTS",
"value": "${allowed_hosts}"
}
],
"mountPoints": [
{
"containerPath": "/usr/src/app/staticfiles",
"sourceVolume": "static_volume"
}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group" : "/ecs/frontend-container",
"awslogs-region": "us-east-1"
}
}
},
{
"name": "nginx",
"image": "${ngnix_docker_image}",
"essential": true,
"cpu": 10,
"memory": 128,
"links": ["django-app"],
"portMappings": [
{
"hostPort": 0,
"containerPort": 80,
"protocol": "tcp"
}
],
"mountPoints": [
{
"containerPath": "/usr/src/app/staticfiles",
"sourceVolume": "static_volume"
}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/nginx",
"awslogs-region": "us-east-1"
}
}
}
]
The variable is defined in my var.tf file as follows:
####### Input URL of ALLOWED_HOSTS in Django's settings ############
variable "allowed_hosts" {
description = "Domain name for allowed hosts"
default = ".example.org"
}
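If more than one host is needed, the variable can hold a space-separated string so that the split() in settings.py produces several entries; a sketch of a terraform.tfvars override (the second hostname is only an example):
# terraform.tfvars
allowed_hosts = ".example.org app.example.org"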
And lastly, here is the Terraform data template where I pass all of these variables in:
### Renders container-definitions/container-def.json with the parameters
### each container needs.
data "template_file" "ecs-containers" {
template = file("container-definitions/container-def.json")
vars = {
django_docker_image = var.django_docker_image
ngnix_docker_image = var.ngnix_docker_image
rds_db_name = var.rds_db_name
rds_username = var.rds_username
rds_password = var.rds_password
allowed_hosts = var.allowed_hosts
}
}
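For completeness, the rendered JSON is then passed to the task definition resource; a minimal sketch of how that wiring typically looks (the resource name, family, and volume block here are placeholders rather than my exact config):
resource "aws_ecs_task_definition" "django" {
  family                = "django-nginx"
  container_definitions = data.template_file.ecs-containers.rendered

  # matches the sourceVolume referenced by both containers
  volume {
    name = "static_volume"
  }
}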
I would appreciate any feedback on this. I know I'm almost there. Thanks all.
When deploying a task in an ECS cluster from a public Docker Hub repository, the task always stops with this error in the task container:
Stopped reason
CannotPullContainerError:
pull image manifest has been retried 5 time(s):
failed to resolve ref docker.io/username/repo:
failed to do request:
Head "https://registry-1.docker.io/v2/username/repo/manifests/latest":
dial tcp 44.205.64.79:443: i/o timeout
This is my Task Definition:
{
"taskDefinitionArn": "arn:aws:ecs:ap-southeast-1:...:task-definition/taskname_task:6",
"containerDefinitions": [
{
"name": "containername_container",
"image": "username/repo",
"cpu": 0,
"links": [],
"portMappings": [
{
"name": "containername_container-8888-tcp",
"containerPort": 8888,
"hostPort": 8888,
"protocol": "tcp",
"appProtocol": "http"
}
],
"essential": true,
"entryPoint": [],
"command": [],
"environment": [],
"environmentFiles": [],
"mountPoints": [],
"volumesFrom": [],
"secrets": [],
"dnsServers": [],
"dnsSearchDomains": [],
"extraHosts": [],
"dockerSecurityOptions": [],
"dockerLabels": {},
"ulimits": [],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-create-group": "true",
"awslogs-group": "/ecs/taskname_task",
"awslogs-region": "ap-southeast-1",
"awslogs-stream-prefix": "ecs"
},
"secretOptions": []
},
"systemControls": []
}
],
"family": "taskname_task",
"taskRoleArn": "arn:aws:iam::...:role/ecsTaskExecutionRole",
"executionRoleArn": "arn:aws:iam::...:role/ecsTaskExecutionRole",
"networkMode": "awsvpc",
"revision": 6,
"volumes": [],
"status": "ACTIVE",
"requiresAttributes": [
{
"name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
},
{
"name": "ecs.capability.execution-role-awslogs"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.17"
},
{
"name": "com.amazonaws.ecs.capability.task-iam-role"
},
{
"name": "ecs.capability.extensible-ephemeral-storage"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
},
{
"name": "ecs.capability.task-eni"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.29"
}
],
"placementConstraints": [],
"compatibilities": [
"EC2",
"FARGATE"
],
"requiresCompatibilities": [
"FARGATE"
],
"cpu": "1024",
"memory": "2048",
"ephemeralStorage": {
"sizeInGiB": 21
},
"runtimePlatform": {
"cpuArchitecture": "X86_64",
"operatingSystemFamily": "LINUX"
},
"registeredAt": "...",
"registeredBy": "arn:aws:iam::...:root",
"tags": [
{
"key": "ecs:taskDefinition:createdFrom",
"value": "ecs-console-v2"
},
{
"key": "ecs:taskDefinition:stackId",
"value": "arn:aws:cloudformation:ap-southeast-1:...:stack/ECS-Console-V2-TaskDefinition-.../..."
}
]
}
I'm new to ECS and AWS as well. I tried the request https://registry-1.docker.io/v2/username/repo/manifests/latest from the task container error above and received this:
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":[{"Type":"repository","Class":"","Name":"username/repo","Action":"pull"}]}]}
Is it something about how the docker.io request is configured? I have done a lot of research but couldn't figure anything out.
You can use Docker Hub images from within Amazon ECS tasks.
The format of a Docker Hub image reference is [registry-url]/[namespace]/[image]:[tag]; you do not need the registry-url for Docker Hub, because the Docker client assumes Docker Hub when none is specified.
Alternatively, Docker official images are also published to ECR Public in addition to Docker Hub, so you can reference the ECR Public repositories from within your ECS tasks instead.
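As a quick sketch of the two reference styles (nginx is used here only as an example of a Docker official image):
# Docker Hub, registry URL implied
docker pull username/repo:latest
# equivalent explicit form
docker pull docker.io/username/repo:latest

# the same official images are also available on ECR Public
docker pull public.ecr.aws/docker/library/nginx:latest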
Now, Fargate uses the awsvpc network mode, so there are essentially two options for running the task on Fargate:
If the task runs in a public subnet, Auto-assign public IP must be enabled, and the public subnet's route table needs an Internet Gateway so the task has the internet connectivity required to pull the container image from the public Docker repository (see the CLI sketch below for this case).
If the task runs in a private subnet, Auto-assign public IP must be disabled, and the private subnet's route table needs an associated NAT Gateway so the task in the private subnet can pull the container image from the public Docker repository.
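For the public-subnet case, a rough CLI sketch of what that looks like when running the task (cluster name, subnet and security-group IDs are placeholders):
aws ecs run-task \
  --cluster my-cluster \
  --launch-type FARGATE \
  --task-definition taskname_task:6 \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}"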
After lots of tries, I solved the problem by changing the launch type from FARGATE to EC2 and the network mode from awsvpc to bridge. Although my original intention was to use Fargate, this worked around the problem.
I still don't know what caused the problem, why it happened, or how this change fixed it. Help me understand.
This is my Task Definition in EC2:
{
"taskDefinitionArn": "arn:aws:ecs:ap-southeast-1:...:task-definition/taskname_task:16",
"containerDefinitions": [
{
"name": "containername_container",
"image": "username/repo",
"cpu": 0,
"links": [
"aws-otel-collector"
],
"portMappings": [
{
"name": "containername_container-8888-tcp",
"containerPort": 8888,
"hostPort": 8888,
"protocol": "tcp",
"appProtocol": "http"
}
],
"essential": true,
"entryPoint": [],
"command": [],
"environment": [],
"environmentFiles": [],
"mountPoints": [],
"volumesFrom": [],
"secrets": [],
"dnsServers": [],
"dnsSearchDomains": [],
"extraHosts": [],
"dockerSecurityOptions": [],
"dockerLabels": {},
"ulimits": [],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-create-group": "true",
"awslogs-group": "/ecs/taskname_task",
"awslogs-region": "ap-southeast-1",
"awslogs-stream-prefix": "ecs"
},
"secretOptions": []
},
"systemControls": []
}
],
"family": "taskname_task",
"executionRoleArn": "arn:aws:iam::...:role/ecsTaskExecutionRole",
"networkMode": "bridge",
"revision": 16,
"volumes": [],
"status": "ACTIVE",
"requiresAttributes": [
{
"name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
},
{
"name": "ecs.capability.execution-role-awslogs"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.17"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.29"
}
],
"placementConstraints": [],
"compatibilities": [
"EC2"
],
"requiresCompatibilities": [
"EC2"
],
"cpu": "1024",
"memory": "922",
"runtimePlatform": {
"cpuArchitecture": "X86_64",
"operatingSystemFamily": "LINUX"
},
"registeredAt": "...",
"registeredBy": "arn:aws:iam::...:root",
"tags": [
{
"key": "ecs:taskDefinition:createdFrom",
"value": "ecs-console-v2"
},
{
"key": "ecs:taskDefinition:stackId",
"value": "arn:aws:cloudformation:ap-southeast-1:...:stack/ECS-Console-V2-TaskDefinition-.../..."
}
]
}
I want to run a Rust application in ECS. The Rust application has a health API and a websocket API. I cannot connect to either of them from my machine, and ECS itself even fails to query the health API using localhost, so I am doing a lot of things wrong and I do not know what. It works on my machine.
I checked my security groups; they allow everything. The whole Rust application can be found here: https://github.com/Jasperav/ssh-dockerfile/blob/master/src/main.rs. The relevant code (main.rs) can be found below.
I uploaded the Rust image to ECR Public as well: https://eu-central-1.console.aws.amazon.com/ecr/repositories/public/465984508737/hello?region=eu-central-1. It can be deployed on an ARM machine.
The steps I took:
Made a Rust application that serves two endpoints: websocket and HTTP.
Created an image and uploaded it to ECR.
In ECS, I created a new cluster with one public subnet and used Fargate as the infrastructure.
In the task definition, I mapped ports 3014 and 3015 as TCP/HTTP and selected my image. For the health check, I added CMD-SHELL curl -f localhost:3015/health.
I deployed the task as a service to the cluster.
I verified from the server's logging that it deployed successfully. However, I cannot connect through the public IPv4 address to either the websocket API or the health API.
What did I do wrong? This is the relevant code:
use std::net::SocketAddr;
use tokio::net::TcpListener;
use warp::Filter;
use warp::hyper::StatusCode;
#[tokio::main]
async fn main() {
println!("Starting up...");
let url = "0.0.0.0:3014";
let listener = TcpListener::bind(url)
.await
.unwrap();
println!("Listing on default URL");
tokio::spawn(async move {
run_health_check().await;
});
loop {
match listener.accept().await {
Ok((stream, _)) => {
let addr = stream.peer_addr().expect("connected streams should have a peer address");
println!("Peer address: {}", addr);
let ws_stream = tokio_tungstenite::accept_async(stream)
.await
.expect("Error during the websocket handshake occurred");
println!("New WebSocket connection: {}", addr);
drop(ws_stream);
}
Err(e) => panic!("{:#?}", e),
}
}
}
async fn run_health_check() {
let routes = warp::get()
.and(warp::path("health"))
.map(move || Ok(warp::reply::with_status("", StatusCode::OK)))
.with(warp::cors().allow_any_origin());
let socket_address: SocketAddr = "0.0.0.0:3015".to_string().parse().unwrap();
warp::serve(routes).run(socket_address).await;
}
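For reference, this is roughly how the two endpoints can be exercised locally (websocat is just one possible websocket client; the commands are a sketch):
# health endpoint served by warp on port 3015
curl -f http://localhost:3015/health

# raw websocket handshake against the tungstenite listener on port 3014
websocat ws://localhost:3014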
This is the ECS configuration:
{
"taskDefinitionArn": "arn:aws:ecs:eu-central-1:XXXX:task-definition/websocket:3",
"containerDefinitions": [
{
"name": "websocket",
"image": "XXX.dkr.ecr.eu-central-1.amazonaws.com/hello3",
"cpu": 0,
"links": [],
"portMappings": [
{
"name": "websocket-3015-tcp",
"containerPort": 3015,
"hostPort": 3015,
"protocol": "tcp",
"appProtocol": "http"
},
{
"name": "websocket-3014-tcp",
"containerPort": 3014,
"hostPort": 3014,
"protocol": "tcp",
"appProtocol": "http"
}
],
"essential": true,
"entryPoint": [],
"command": [],
"environment": [],
"environmentFiles": [],
"mountPoints": [],
"volumesFrom": [],
"secrets": [],
"dnsServers": [],
"dnsSearchDomains": [],
"extraHosts": [],
"dockerSecurityOptions": [],
"dockerLabels": {},
"ulimits": [],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-create-group": "true",
"awslogs-group": "/ecs/websocket",
"awslogs-region": "eu-central-1",
"awslogs-stream-prefix": "ecs"
},
"secretOptions": []
},
"healthCheck": {
"command": [
"CMD-SHELL curl -f localhost:3015/health"
],
"interval": 30,
"timeout": 5,
"retries": 3
},
"systemControls": []
}
],
"family": "websocket",
"executionRoleArn": "arn:aws:iam::XXX:role/ecsTaskExecutionRole",
"networkMode": "awsvpc",
"revision": 3,
"volumes": [],
"status": "ACTIVE",
"requiresAttributes": [
{
"name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.24"
},
{
"name": "ecs.capability.execution-role-awslogs"
},
{
"name": "com.amazonaws.ecs.capability.ecr-auth"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.17"
},
{
"name": "ecs.capability.container-health-check"
},
{
"name": "ecs.capability.execution-role-ecr-pull"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
},
{
"name": "ecs.capability.task-eni"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.29"
}
],
"placementConstraints": [],
"compatibilities": [
"EC2",
"FARGATE"
],
"requiresCompatibilities": [
"FARGATE"
],
"cpu": "1024",
"memory": "3072",
"runtimePlatform": {
"cpuArchitecture": "ARM64",
"operatingSystemFamily": "LINUX"
},
"registeredAt": "2022-12-11T11:19:54.622Z",
"registeredBy": "arn:aws:iam::465984508737:root",
"tags": [
{
"key": "ecs:taskDefinition:createdFrom",
"value": "ecs-console-v2"
},
{
"key": "ecs:taskDefinition:stackId",
"value": "arn:aws:cloudformation:eu-central-1:XX:stack/ECS-Console-V2-TaskDefinition-c246faa1-4ba5-4b54-ac4a-49ae00ab2f0d/8fd98e10-7944-11ed-abb2-0a564ca58c2a"
}
]
}
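For reference, the ECS healthCheck command is normally expressed as an array whose first element is CMD or CMD-SHELL; a sketch of that shape (the exact curl invocation is my assumption):
"healthCheck": {
  "command": ["CMD-SHELL", "curl -f http://localhost:3015/health || exit 1"],
  "interval": 30,
  "timeout": 5,
  "retries": 3
}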
Dockerfile:
# Build stage: compile the binary with the nightly toolchain
FROM rustlang/rust:nightly-bullseye AS builder
WORKDIR app
COPY . .
RUN cargo build --bin hello -Z sparse-registry

# Runtime stage: copy the debug build into a plain Debian image
FROM debian:11.5
COPY --from=builder ./app/target/debug/hello .
CMD ["./hello"]
I'm trying to create an ArangoDB cluster in ECS using the default arangodb/arangodb-starter container, but when I start my ECS task I get an error saying that /usr/sbin/arangod was not found.
I pulled the arangodb/arangodb-starter image locally with docker pull, tagged it according to the push commands from ECR, and pushed it to ECR (the commands are sketched further below). I then created a Fargate ECS task for it and a service to start that task. The container starts, but the ECS service logs show this error:
|INFO| Starting arangodb version 0.15.5, build 7832707 component=arangodb
|ERROR| Cannot find arangod (expected at /usr/sbin/arangod). component=arangodb
How to solve this:
1 - Install ArangoDB locally or run the ArangoDB starter in docker. (see README for details).
I started the exact same container by tag locally and it works. Why doesn't it work in ECS?
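The pull/tag/push flow looked roughly like this (account ID, region and repository name are taken from the task definition below; treat the exact commands as a sketch):
docker pull arangodb/arangodb-starter
aws ecr get-login-password --region eu-west-1 \
  | docker login --username AWS --password-stdin 123456789.dkr.ecr.eu-west-1.amazonaws.com
docker tag arangodb/arangodb-starter:latest 123456789.dkr.ecr.eu-west-1.amazonaws.com/arangodb:latest
docker push 123456789.dkr.ecr.eu-west-1.amazonaws.com/arangodb:latest

# running the same tag locally works
docker run -p 8529:8529 123456789.dkr.ecr.eu-west-1.amazonaws.com/arangodb:latest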
Edit: The ECS task definition is in the snippet below:
{
"taskDefinitionArn": "arn:aws:ecs:eu-west-1:123456789:task-definition/dev-arangodb-server:1",
"containerDefinitions": [
{
"name": "dev-arangodb-server",
"image": "123456789.dkr.ecr.eu-west-1.amazonaws.com/arangodb:latest",
"cpu": 0,
"links": [],
"portMappings": [
{
"containerPort": 8529,
"hostPort": 8529,
"protocol": "tcp"
}
],
"essential": true,
"entryPoint": [],
"command": [],
"environment": [
{
"name": "ARANGO_ROOT_PASSWORD",
"value": "password"
}
],
"environmentFiles": [],
"mountPoints": [
{
"sourceVolume": "storage",
"containerPath": "/mnt/storage",
"readOnly": false
}
],
"volumesFrom": [],
"secrets": [],
"dnsServers": [],
"dnsSearchDomains": [],
"extraHosts": [],
"dockerSecurityOptions": [],
"dockerLabels": {},
"ulimits": [],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-create-group": "true",
"awslogs-group": "/ecs/dev-arangodb-server",
"awslogs-region": "eu-west-1",
"awslogs-stream-prefix": "ecs"
},
"secretOptions": []
},
"systemControls": []
}
],
"family": "dev-arangodb-server",
"taskRoleArn": "arn:aws:iam::123456789:role/dev-aws-ecs-ecr-power-user",
"executionRoleArn": "arn:aws:iam::123456789:role/ecsTaskExecutionRole",
"networkMode": "awsvpc",
"revision": 1,
"volumes": [
{
"name": "storage",
"host": {}
}
],
"status": "ACTIVE",
"requiresAttributes": [
{
"name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
},
{
"name": "ecs.capability.execution-role-awslogs"
},
{
"name": "com.amazonaws.ecs.capability.ecr-auth"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.17"
},
{
"name": "com.amazonaws.ecs.capability.task-iam-role"
},
{
"name": "ecs.capability.execution-role-ecr-pull"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
},
{
"name": "ecs.capability.task-eni"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.29"
}
],
"placementConstraints": [],
"compatibilities": [
"EC2",
"FARGATE"
],
"requiresCompatibilities": [
"FARGATE"
],
"cpu": "1024",
"memory": "3072",
"runtimePlatform": {
"cpuArchitecture": "X86_64",
"operatingSystemFamily": "LINUX"
},
"registeredAt": "2022-11-03T08:43:25.264Z",
"registeredBy": "arn:aws:iam::123456789:user/MY_USER",
"tags": [
{
"key": "ecs:taskDefinition:createdFrom",
"value": "ecs-console-v2"
},
{
"key": "ecs:taskDefinition:stackId",
"value": "arn:aws:cloudformation:eu-west-1:123456789:stack/ECS-Console-V2-TaskDefinition-e1519bf7-ff78-423a-951d-2bc8d79242ec/925d88d0-5b53-11ed-97a3-066ee48e3b9b"
}
]
}
I tested on my cluster, and it seems that image does not run with default options like your task definition uses. That image's startup is not documented, so we don't know how to start it correctly.
Please try the official arangodb image instead and follow the same process. Remember to set the environment variables, or you will face this issue:
error: database is uninitialized and password option is not specified
You need to specify one of ARANGO_ROOT_PASSWORD, ARANGO_ROOT_PASSWORD_FILE, ARANGO_NO_AUTH and ARANGO_RANDOM_ROOT_PASSWORD
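A minimal sketch of the relevant part of the container definition when using the official arangodb image (image tag and password value are placeholders):
"image": "arangodb:latest",
"environment": [
  {
    "name": "ARANGO_ROOT_PASSWORD",
    "value": "changeme"
  }
]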
When I run a Jenkins build, my back-end code is not updating in AWS. There seems to be an error when the Elastic Beanstalk environment deploys:
Encountered error starting new ECS task: { "failures": [ { "reason": "RESOURCE:PORTS", "arn": "arn:aws:ecs:eu-west-1:863820595425:container-instance/d6b92955-eb16-4911-b874-683155fcd630" } ], "tasks": [] }
Has anyone encountered this before? The port setup in the dockerrun.aws.json file has not changed in over a year. If I manually restart the task definitions, this backend eventually updates, but I need to understand why the ports issue is happening.
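For context on the RESOURCE:PORTS reason: a fixed "hostPort" reserves that port on the container instance, so a second copy of the task cannot be placed while the old one still holds the port, whereas "hostPort": 0 lets ECS assign a free ephemeral port. A sketch of the dynamic variant (whether Elastic Beanstalk's proxy setup supports it in this case is an open question):
"portMappings": [
  {
    "hostPort": 0,
    "containerPort": 3001
  }
]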
My Dockerrun.aws.json file:
{
"AWSEBDockerrunVersion": 2,
"containerDefinitions": [
{
"name": "customerportal-backend",
"image": "<AWS_ACCOUNT_ID>.dkr.ecr.eu-west-1.amazonaws.com/<ECR_REPO_NAME>:latest",
"essential": true,
"memory": 1024,
"portMappings": [
{
"hostPort": 80,
"containerPort": 3001
}
],
"mountPoints": [
{
"sourceVolume": "store-efs",
"containerPath": "/efs-mount-point"
}
],
"links": [
"clamav-rest"
]
},
{
"name": "clamav-server",
"image": "mkodockx/docker-clamav:latest",
"essential": true,
"memory": 1536
},
{
"name": "clamav-rest",
"image": "lokori/clamav-rest",
"essential": true,
"memory": 1024,
"links": [
"clamav-server:clamav-server"
],
"portMappings": [
{
"hostPort": 3100,
"containerPort": 8080
}
],
"environment" : [
{ "name" : "CLAMD_HOST", "value" : "clamav-server" }
]
}
],
"volumes": [
{
"name" : "store-efs",
"host": {
"sourcePath": "/var/app/efs"
}
}
]
}
My current approach:
The Dockerrun.aws.json file looks like this:
{
"AWSEBDockerrunVersion": 2,
"volumes": [
{
"name": "mysql-volume",
"host": {
"sourcePath": "/var/app/current/web-app"
}
}
],
"containerDefinitions": [
{
"name": "mysql",
"image": "mysql:5.7",
"environment": [
{
"name": "MYSQL_ROOT_PASSWORD",
"value": "root"
},
{
"name": "MYSQL_USER",
"value": "user"
},
{
"name": "MYSQL_PASSWORD",
"value": "pass"
},
{
"name": "MYSQL_DATABASE",
"value": "db"
}
],
"essential": true,
"memory": 256,
"portMappings": [
{
"hostPort": 3306,
"containerPort": 3306
}
],
"mountPoints": [
{
"sourceVolume": "mysql-volume",
"containerPath": "/var/lib/mysql"
}
]
},
{
"name": "web-application",
"image": "some image",
"essential": true,
"memory": 512,
"portMappings": [
{
"hostPort": 80,
"containerPort": 8080
}
],
"links": [
"mysql"
]
}
]
}
When I connect to the EC2 instance via SSH, I see that no volumes are created. I'm not sure what the reason is or how I can change the Dockerrun.aws.json file to make it work. Also, when I stop and restart the instance, the data is lost.