I am trying to run a service with the following task definition on an AWS ECS cluster, but I am getting this error: Invalid 'volumesFrom' setting. Unknown container: 'docker-data'. I have launched the docker-data service separately, but my other service is still unable to find the data container. Below are the task definitions.
{
  "containerDefinitions": [{
    "name": "docker-data",
    "image": "jaigouk/data-only-container",
    "memory": 128
  }],
  "family": "docker-data"
}
{
  "containerDefinitions": [{
    "image": "my/work-image",
    "name": "image2",
    "memory": 512,
    "portMappings": [{
      "hostPort": 8040,
      "containerPort": 8040,
      "protocol": "tcp"
    }],
    "volumesFrom": [{
      "sourceContainer": "docker-data"
    }]
  }],
  "family": "console"
}
It looks like you're defining the two containers in two separate task definitions. volumesFrom can only reference another container in the same task definition, so docker-data is not known to the console task.
Here is one of my working examples:
{
"taskDefinitionArn":"arn:aws:ecs:ap-southeast-1:xxxx:task-definition/vc-p-tim:15",
"networkMode":"bridge",
"status":"ACTIVE",
"revision":15,
"taskRoleArn":null,
"containerDefinitions":[
{
"volumesFrom":[
],
"memory":2048,
"extraHosts":null,
"dnsServers":null,
"disableNetworking":null,
"dnsSearchDomains":null,
"hostname":null,
"essential":true,
"entryPoint":null,
"mountPoints":[
{
"containerPath":"/apps",
"sourceVolume":"tinymce-data",
"readOnly":null
}
],
"name":"tim-srv",
"ulimits":null,
"dockerSecurityOptions":null,
"environment":[
],
"links":null,
"workingDirectory":null,
"readonlyRootFilesystem":null,
"image":"xxxxxx.dkr.ecr.ap-southeast-1.amazonaws.com/tinymce-server:latest",
"command":null,
"user":null,
"dockerLabels":null,
"logConfiguration":null,
"cpu":0,
"privileged":null,
"memoryReservation":null
},
{
"volumesFrom":[
{
"readOnly":null,
"sourceContainer":"tim-srv"
}
],
"memory":256,
"extraHosts":null,
"dnsServers":null,
"disableNetworking":null,
"dnsSearchDomains":null,
"hostname":null,
"essential":true,
"entryPoint":null,
"mountPoints":[
],
"name":"tim-api",
"ulimits":null,
"dockerSecurityOptions":null,
"links":null,
"workingDirectory":null,
"readonlyRootFilesystem":null,
"image":"xxxxxx.dkr.ecr.ap-southeast-1.amazonaws.com/tinymce-api:0118171510",
"command":null,
"user":null,
"dockerLabels":null,
"logConfiguration":null,
"cpu":0,
"privileged":null,
"memoryReservation":null
}
],
"volumes":[
{
"host":{
"sourcePath":"tinymce-data"
},
"name":"tinymce-data"
}
],
"family":"vc-p-tim"
}
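Applied to your case, that means putting both containers into one task definition so volumesFrom can resolve docker-data by name. A minimal sketch reusing your names and images (the essential flag on the data container is my assumption, not something from your definitions):
{
  "family": "console",
  "containerDefinitions": [
    {
      "name": "docker-data",
      "image": "jaigouk/data-only-container",
      "memory": 128,
      "essential": false
    },
    {
      "name": "image2",
      "image": "my/work-image",
      "memory": 512,
      "portMappings": [
        {
          "hostPort": 8040,
          "containerPort": 8040,
          "protocol": "tcp"
        }
      ],
      "volumesFrom": [
        {
          "sourceContainer": "docker-data"
        }
      ]
    }
  ]
}
Marking docker-data as non-essential keeps the task running if the data-only container exits after setting up its volumes.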
Related
I want to run a Rust application in ECS. The Rust application has a health API and a websocket API. I cannot connect to either of them from my machine, and ECS itself even fails to query the health API using localhost, so I am doing a lot of things wrong and I do not know what. It works on my machine.
I checked my security groups; they allow everything. The whole Rust application can be found here: https://github.com/Jasperav/ssh-dockerfile/blob/master/src/main.rs. The relevant code (main.rs) can be found below.
I uploaded the Rust image to ECR public as well: https://eu-central-1.console.aws.amazon.com/ecr/repositories/public/465984508737/hello?region=eu-central-1. This can be deployed on an ARM machine.
The steps I took:
Made a Rust application which serves 2 endpoints, websocket and HTTP
Created an image and uploaded to ECR
In ECS, I created a new cluster with 1 public subnet. I used Fargate as infrastructure.
In the task definition, I mapped ports 3014 and 3015 as TCP/HTTP and selected my image. For the health check, I added CMD-SHELL curl -f localhost:3015/health
I deployed the task as a service to the cluster.
I verified from the server logs that it started successfully. However, I cannot connect through the public IPv4 address to either the websocket API or the health API.
What did I do wrong? This is the relevant code:
use std::net::SocketAddr;
use tokio::net::TcpListener;
use warp::Filter;
use warp::hyper::StatusCode;
#[tokio::main]
async fn main() {
println!("Starting up...");
let url = "0.0.0.0:3014";
let listener = TcpListener::bind(url)
.await
.unwrap();
println!("Listing on default URL");
tokio::spawn(async move {
run_health_check().await;
});
loop {
match listener.accept().await {
Ok((stream, _)) => {
let addr = stream.peer_addr().expect("connected streams should have a peer address");
println!("Peer address: {}", addr);
let ws_stream = tokio_tungstenite::accept_async(stream)
.await
.expect("Error during the websocket handshake occurred");
println!("New WebSocket connection: {}", addr);
drop(ws_stream);
}
Err(e) => panic!("{:#?}", e),
}
}
}
async fn run_health_check() {
let routes = warp::get()
.and(warp::path("health"))
.map(move || Ok(warp::reply::with_status("", StatusCode::OK)))
.with(warp::cors().allow_any_origin());
let socket_address: SocketAddr = "0.0.0.0:3015".to_string().parse().unwrap();
warp::serve(routes).run(socket_address).await;
}
This is the ECS configuration:
{
"taskDefinitionArn": "arn:aws:ecs:eu-central-1:XXXX:task-definition/websocket:3",
"containerDefinitions": [
{
"name": "websocket",
"image": "XXX.dkr.ecr.eu-central-1.amazonaws.com/hello3",
"cpu": 0,
"links": [],
"portMappings": [
{
"name": "websocket-3015-tcp",
"containerPort": 3015,
"hostPort": 3015,
"protocol": "tcp",
"appProtocol": "http"
},
{
"name": "websocket-3014-tcp",
"containerPort": 3014,
"hostPort": 3014,
"protocol": "tcp",
"appProtocol": "http"
}
],
"essential": true,
"entryPoint": [],
"command": [],
"environment": [],
"environmentFiles": [],
"mountPoints": [],
"volumesFrom": [],
"secrets": [],
"dnsServers": [],
"dnsSearchDomains": [],
"extraHosts": [],
"dockerSecurityOptions": [],
"dockerLabels": {},
"ulimits": [],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-create-group": "true",
"awslogs-group": "/ecs/websocket",
"awslogs-region": "eu-central-1",
"awslogs-stream-prefix": "ecs"
},
"secretOptions": []
},
"healthCheck": {
"command": [
"CMD-SHELL curl -f localhost:3015/health"
],
"interval": 30,
"timeout": 5,
"retries": 3
},
"systemControls": []
}
],
"family": "websocket",
"executionRoleArn": "arn:aws:iam::XXX:role/ecsTaskExecutionRole",
"networkMode": "awsvpc",
"revision": 3,
"volumes": [],
"status": "ACTIVE",
"requiresAttributes": [
{
"name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.24"
},
{
"name": "ecs.capability.execution-role-awslogs"
},
{
"name": "com.amazonaws.ecs.capability.ecr-auth"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.17"
},
{
"name": "ecs.capability.container-health-check"
},
{
"name": "ecs.capability.execution-role-ecr-pull"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
},
{
"name": "ecs.capability.task-eni"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.29"
}
],
"placementConstraints": [],
"compatibilities": [
"EC2",
"FARGATE"
],
"requiresCompatibilities": [
"FARGATE"
],
"cpu": "1024",
"memory": "3072",
"runtimePlatform": {
"cpuArchitecture": "ARM64",
"operatingSystemFamily": "LINUX"
},
"registeredAt": "2022-12-11T11:19:54.622Z",
"registeredBy": "arn:aws:iam::465984508737:root",
"tags": [
{
"key": "ecs:taskDefinition:createdFrom",
"value": "ecs-console-v2"
},
{
"key": "ecs:taskDefinition:stackId",
"value": "arn:aws:cloudformation:eu-central-1:XX:stack/ECS-Console-V2-TaskDefinition-c246faa1-4ba5-4b54-ac4a-49ae00ab2f0d/8fd98e10-7944-11ed-abb2-0a564ca58c2a"
}
]
}
Dockerfile:
FROM rustlang/rust:nightly-bullseye AS builder
WORKDIR app
COPY . .
RUN cargo build --bin hello -Z sparse-registry
FROM debian:11.5
COPY --from=builder ./app/target/debug/hello .
CMD ["./hello"]
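One detail worth checking against the healthCheck above: in the ECS task definition parameters, the health check command is an array whose first element is CMD or CMD-SHELL and whose shell command is a separate element, and the command runs inside your container, so curl has to be present there (the plain debian:11.5 base image does not ship it by default). A sketch of that shape, keeping the interval, timeout and retries already used above and assuming curl is installed in the image:
"healthCheck": {
  "command": [
    "CMD-SHELL",
    "curl -f http://localhost:3015/health || exit 1"
  ],
  "interval": 30,
  "timeout": 5,
  "retries": 3
}
This only addresses the in-container health check, not external reachability from your machine.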
Using AWS CodePipeline with a Source and Build stage, and passing taskdef.json and appspec.yaml as artifacts, the Amazon ECS (Blue/Green) deployment action fails with the error:
STRING_VALUE can not be converted to an Integer
The error does not say where it occurs, which makes it hard to fix.
For reference, the files look like this:
# appspec.yaml
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: <TASK_DEFINITION>
        LoadBalancerInfo:
          ContainerName: "my-project"
          ContainerPort: 3000
// taskdef.json
{
"family": "my-project-web",
"taskRoleArn": "arn:aws:iam::1234567890:role/ecsTaskRole-role",
"executionRoleArn": "arn:aws:iam::1234567890:role/ecsTaskExecutionRole-web",
"networkMode": "awsvpc",
"cpu": "256",
"memory": "512",
"containerDefinitions":
[
{
"name": "my-project",
"memory": "512",
"image": "01234567890.dkr.ecr.us-east-1.amazonaws.com/my-project:a09b7d81",
"environment": [],
"secrets":
[
{
"name": "APP_ENV",
"valueFrom": "arn:aws:secretsmanager:us-east-1:1234567890:secret:web/my-project-NBcsLj:APP_ENV::"
},
{
"name": "PORT",
"valueFrom": "arn:aws:secretsmanager:us-east-1:1234567890:secret:web/my-project-NBcsLj:PORT::"
},
{
"name": "APP_NAME",
"valueFrom": "arn:aws:secretsmanager:us-east-1:1234567890:secret:web/my-project-NBcsLj:APP_NAME::"
},
{
"name": "LOG_CHANNEL",
"valueFrom": "arn:aws:secretsmanager:us-east-1:1234567890:secret:web/my-project-NBcsLj:LOG_CHANNEL::"
},
{
"name": "APP_KEY",
"valueFrom": "arn:aws:secretsmanager:us-east-1:1234567890:secret:web/my-project-NBcsLj:APP_KEY::"
},
{
"name": "APP_DEBUG",
"valueFrom": "arn:aws:secretsmanager:us-east-1:1234567890:secret:web/my-project-NBcsLj:APP_DEBUG::"
}
],
"essential": true,
"logConfiguration":
{
"logDriver": "awslogs",
"options":
{
"awslogs-group": "",
"awslogs-region": "",
"awslogs-stream-prefix": ""
}
},
"portMappings":
[
{
"hostPort": 3000,
"protocol": "tcp",
"containerPort": 3000
}
],
"entryPoint": [ "web" ],
"command": []
}
],
"requiresCompatibilities": [ "FARGATE", "EC2" ],
"tags":
[
{
"key": "project",
"value": "my-project"
}
]
}
Any insights on this issue are highly appreciated!
Please refer to the following guide that outlines the supported data type for each parameter: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html. It appears that you've provided a string where an integer is expected.
If I were to guess, looking at the above, the value of memory under containerDefinitions should be an integer, not a string: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definition_memory
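Concretely, that would mean the container-level entry in the taskdef.json above changes from a quoted value to a number, roughly like this (only the memory line differs; everything else stays as posted):
"containerDefinitions": [
  {
    "name": "my-project",
    "memory": 512,
    ...
  }
]
The task-level "cpu": "256" and "memory": "512" near the top are a different parameter and are allowed to be strings.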
When I run a Jenkins build, my back-end code does not update in AWS. There seems to be an error when Elastic Beanstalk tries to start the new ECS task:
Encountered error starting new ECS task: { "failures": [ { "reason": "RESOURCE:PORTS", "arn": "arn:aws:ecs:eu-west-1:863820595425:container-instance/d6b92955-eb16-4911-b874-683155fcd630" } ], "tasks": [] }
Has anyone encountered this before? The port setup in the dockerrun.aws.json file has not changed in over a year. If I manually restart the task definitions, the back end eventually updates, but I need to understand why the ports issue is happening.
My dockerrun.aws.json file:
{
"AWSEBDockerrunVersion": 2,
"containerDefinitions": [
{
"name": "customerportal-backend",
"image": "<AWS_ACCOUNT_ID>.dkr.ecr.eu-west-1.amazonaws.com/<ECR_REPO_NAME>:latest",
"essential": true,
"memory": 1024,
"portMappings": [
{
"hostPort": 80,
"containerPort": 3001
}
],
"mountPoints": [
{
"sourceVolume": "store-efs",
"containerPath": "/efs-mount-point"
}
],
"links": [
"clamav-rest"
]
},
{
"name": "clamav-server",
"image": "mkodockx/docker-clamav:latest",
"essential": true,
"memory": 1536
},
{
"name": "clamav-rest",
"image": "lokori/clamav-rest",
"essential": true,
"memory": 1024,
"links": [
"clamav-server:clamav-server"
],
"portMappings": [
{
"hostPort": 3100,
"containerPort": 8080
}
],
"environment" : [
{ "name" : "CLAMD_HOST", "value" : "clamav-server" }
]
}
],
"volumes": [
{
"name" : "store-efs",
"host": {
"sourcePath": "/var/app/efs"
}
}
]
}
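For what it's worth, the RESOURCE:PORTS failure reason means the container instance could not supply the requested host ports, which typically happens when the previous task is still bound to the fixed hostPort values (80 and 3100 here) while the replacement task is being placed. If a load balancer fronts the containers, one hedged way to avoid the collision is a dynamic host port, for example:
"portMappings": [
  {
    "hostPort": 0,
    "containerPort": 3001
  }
]
With hostPort 0 in bridge networking, Docker picks an ephemeral host port, so the old and new tasks can coexist on the same instance; whether that fits depends on how your Beanstalk environment routes traffic to the container.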
My current approach:
The Dockerrun.aws.json file looks like this:
{
"AWSEBDockerrunVersion": 2,
"volumes": [
{
"name": "mysql-volume",
"host": {
"sourcePath": "/var/app/current/web-app"
}
}
],
"containerDefinitions": [
{
"name": "mysql",
"image": "mysql:5.7",
"environment": [
{
"name": "MYSQL_ROOT_PASSWORD",
"value": "root"
},
{
"name": "MYSQL_USER",
"value": "user"
},
{
"name": "MYSQL_PASSWORD",
"value": "pass"
},
{
"name": "MYSQL_DATABASE",
"value": "db"
}
],
"essential": true,
"memory": 256,
"portMappings": [
{
"hostPort": 3306,
"containerPort": 3306
}
],
"mountPoints": [
{
"sourceVolume": "mysql-volume",
"containerPath": "/var/lib/mysql"
}
]
},
{
"name": "web-application",
"image": "some image",
"essential": true,
"memory": 512,
"portMappings": [
{
"hostPort": 80,
"containerPort": 8080
}
],
"links": [
"mysql"
]
}
]
}
When I connect via SSH to the EC2 instance, I see that no volumes are created. I am not sure what the reason is or how I can change the Dockerrun.aws.json file to make it work. Also, when I stop and restart the instance, the data is lost.
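A hedged observation on the sourcePath: /var/app/current is where Elastic Beanstalk places the deployed application bundle, and it gets replaced on every deployment, so pointing the MySQL volume somewhere outside that directory is usually the first step. A sketch with a hypothetical path (/var/data/mysql is my placeholder, not something from your setup):
"volumes": [
  {
    "name": "mysql-volume",
    "host": {
      "sourcePath": "/var/data/mysql"
    }
  }
]
Even then, a host volume only lives on that instance's disk, so data will not survive the instance being replaced; for durable storage an EFS mount (as in the store-efs example further up) is the more common approach.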
But running
aws ecs register-task-definition --family --container-definitions wordpress file: //wordpress.json"
gives the following error:
Error parsing parameter '--container-definitions': Invalid JSON: Expecting object: line 1 column 1 (char 0)
JSON received: {
My wordpress.json:
{
"containerDefinitions": [
{
"name": "wordpress",
"links": [
"mysql"
],
"image": "wordpress",
"essential": true,
"portMappings": [
{
"containerPort": 80,
"hostPort": 80
}
],
"memory": 500,
"cpu": 10
},
{
"environment": [
{
"name": "MYSQL_ROOT_PASSWORD",
"value": "password"
}
],
"name": "mysql",
"image": "mysql",
"cpu": 10,
"memory": 500,
"essential": true
}
],
"family": "wordpress"
}
Any suggestions?
The correct AWS CLI command to register the task definition is:
aws ecs register-task-definition --cli-input-json file://wordpress.json