AWS Fargate Containers with multiple open ports

This is resolved, probably through one of the security group changes I made.
I have a container that spawns multiple programs, each listening on a unique port. Ports are assigned round-robin, and in a regular Docker environment we expose the possible range and everything works just fine. Another container runs an app that attaches to each of the little agents in the first container; from there it's normal socket communication.
Now we're trying to migrate to Fargate. I've set up the port mappings when creating the task definition, although there's a note that they might be getting ignored by Fargate. I'm seeing hints that Fargate really only lets you open a single port, referred to as the containerPort, and that's all you get. That seems... insane.
nmap shows the ports as filtered.
Am I just doing something wrong? Does anyone have hints what I should look at?
I read one paper that talked about a network load balancer. That seems like a crazy solution.
I don't want to spawn multiple containers, for two basic reasons. First, we'd have to entirely rewrite the app that spawns these agents. Second, container startup time is far too long for a responsive environment.
Any suggestions on what I should look at?
Per a request, here's the relevant JSON, edited for brevity.
{
  "family": "agents",
  "executionRoleArn": "ecsTaskExecutionRole",
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "agent-container",
      "image": "agent-continer:latest",
      "cpu": 256,
      "memory": 1024,
      "portMappings": [
        {
          "containerPort": 22,
          "hostPort": 22,
          "protocol": "tcp"
        },
        {
          "containerPort": 80,
          "hostPort": 80,
          "protocol": "tcp"
        },
        {
          "containerPort": 15000,
          "hostPort": 15000,
          "protocol": "tcp"
        },
        {
          "containerPort": 15001,
          "hostPort": 15001,
          "protocol": "tcp"
        },
        ...
      ],
      "essential": true,
      "environment": [ ... ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/ct-test-agents",
          "awslogs-region": "",
          "awslogs-stream-prefix": "ct-test-agents"
        }
      }
    }
  ],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "cpu": "256",
  "memory": "1024"
}
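Since the agent ports are a contiguous range, the same task definition can also be registered programmatically instead of listing every mapping by hand. A minimal boto3 sketch, with the port range, names, and image as placeholders rather than the real values:

import boto3

ecs = boto3.client("ecs")

# One portMapping per agent port. In awsvpc mode (required on Fargate) the
# hostPort must either be omitted or equal the containerPort.
agent_ports = [22, 80] + list(range(15000, 15011))   # placeholder range
port_mappings = [
    {"containerPort": p, "hostPort": p, "protocol": "tcp"} for p in agent_ports
]

ecs.register_task_definition(
    family="agents",
    networkMode="awsvpc",
    requiresCompatibilities=["FARGATE"],
    cpu="256",
    memory="1024",
    executionRoleArn="ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "agent-container",
            "image": "agent-container:latest",   # placeholder image
            "essential": True,
            "portMappings": port_mappings,
        }
    ],
)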

Could it be an issue with the security group attached to the service / task? Did you add rules that allow incoming traffic on the specified ports?
Since you could reach the service with nmap, I assume it already has a public IP address and is publicly reachable, but the security group may not allow access to those ports. nmap reporting the ports as "filtered" is consistent with a security group silently dropping the traffic.
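If it helps, a minimal boto3 sketch of the kind of ingress rule that would open the whole agent range on the task's security group; the group ID, CIDR, and port range are placeholders to replace with your own values:

import boto3

ec2 = boto3.client("ec2")

# Allow inbound TCP on the agent port range (placeholder values) on the
# security group attached to the Fargate service / task ENI.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 15000,
            "ToPort": 15100,
            "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "agent ports"}],
        }
    ],
)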

Related

Communicate between docker containers in a docker network in AWS Beanstalk

I'm using AWS Elastic Beanstalk to deploy my project on the 'Multi-container Docker running on 64bit Amazon Linux' platform.
Here's my Dockerrun.aws.json, as per the documentation:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "child",
      "image": "nithinsgowda/child",
      "essential": true,
      "memory": 256,
      "portMappings": [
        {
          "hostPort": 9000,
          "containerPort": 9000
        }
      ],
      "links": [
        "master"
      ]
    },
    {
      "name": "master",
      "image": "nithinsgowda/master",
      "essential": true,
      "memory": 512,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 8080
        }
      ],
      "links": [
        "child"
      ]
    }
  ]
}
I can access my master container on port 80 from the public internet.
Inside my master container I need to make an API call to the child container.
I have tried the options below; none of them worked:
fetch('http://child/api')
fetch('http://child:9000/api')
fetch('http://15.14.13.12:9000/api') //My public DNS for the beanstalk application (Example)
In a local docker-compose environment, 'http://child/api' works perfectly fine, but it doesn't work on Beanstalk.
How do I communicate with the child container from my master container?
I have even tried the bindIP attribute, assigned a local IP, and tried accessing the child with that IP; it still doesn't work.
Looking at the server logs, docker ps was executed by the environment: both containers were up and running, and the port mappings were displayed correctly.
Here's what you need to specify in Dockerrun.aws.json
"containerDefinitions": [
{
"name": "child",
"image": "nithinsgowda/child",
"essential": true,
"memory": 256,
"portMappings": [
{
"hostPort": 9000,
"containerPort": 9000
}
],
"links": [
"master"
],
"environment": [
{
"name": "Container",
"value": "child"
}
]
},
The environment variable named Container will be the name given to your container inside the network.
"environment": [
{
"name": "Container",
"value": "child" //Any custom name accepted
}
]
Hence, after specifying the environment variable, I can now access the child container with fetch('http://child:9000/api').
Here's the official AWS documentation describing this: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html#create_deploy_docker_v2config_dockerrun
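To sanity-check the alias from inside the master container you can do something like the following (a sketch in Python rather than the Node fetch above; /api is the endpoint from the question):

import requests

# The name set via the "Container" environment variable ("child") resolves as
# a hostname on the network Beanstalk creates for the task, so this call only
# works from inside the master container, not from the public internet.
resp = requests.get("http://child:9000/api", timeout=5)
print(resp.status_code, resp.text[:200])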

Cannot talk to another container inside same task on ECS Fargate using awsvpc networking type

I am running a standard SPA frontend app and a Node server in the same ECS task.
Everything I have read about awsvpc says that two containers inside the same task can use localhost to talk to each other.
However, with this setup I can't seem to return data from my Node server to my UI. All my requests fail immediately. I have confirmed that the browser is actually trying to hit localhost on my own computer.
Browser failure:
https://pasteboard.co/JFJLnLO.png
For testing purposes I exposed port 8080 to see if I could interact with the Node server directly, and that works as expected. I just can't get the UI to talk to it.
Any help would be much appreciated
EDIT:
My task definition looks like this:
"containerDefinitions": [
{
"essential": true,
"image": "[my-account-id].dkr.ecr.eu-west-1.amazonaws.com/[my-account]/app-ui:latest",
"name": "app-ui",
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/app-ui",
"awslogs-region": "eu-west-1",
"awslogs-stream-prefix": "ecs",
"awslogs-create-group": "true"
}
},
"portMappings": [
{
"containerPort": 3000,
"hostPort": 3000,
"protocol": "tcp"
}
]
},
{
"essential": true,
"image": "[my-account-id].dkr.ecr.eu-west-1.amazonaws.com/[my-account]/app-api:latest",
"name": "app-api",
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/app-api",
"awslogs-region": "eu-west-1",
"awslogs-stream-prefix": "ecs",
"awslogs-create-group": "true"
}
},
"portMappings": [
{
"containerPort": 8080,
"hostPort": 8080,
"protocol": "tcp"
}
]
}
],
"cpu": "256",
"executionRoleArn": "arn:aws:iam::[my-account-id]:role/AWSServiceRoleECS",
"family": "app",
"memory": "512",
"networkMode": "awsvpc",
"requiresCompatibilities": ["FARGATE"]
}
Screenshot: https://i.stack.imgur.com/S7zl3.png
Based on the comments.
The issue is caused by calling the localhost API endpoint on the client side, in the browser. There, localhost resolves to the client machine, not to the ECS task. localhost only works when the API is called from inside the ECS task, not from outside it.
To call the API endpoint from the client side, a regular public IP address or public DNS name is required, not localhost.
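A small Python sketch of the distinction, assuming a /health endpoint on the API and an API_BASE_URL value injected at deploy time (both are hypothetical names):

import os
import requests

# Inside the task, all containers share one network namespace under awsvpc,
# so server-side code in the app-ui container can reach app-api on localhost.
print(requests.get("http://localhost:8080/health", timeout=2).status_code)

# Code running in the user's browser cannot: "localhost" there is the user's
# machine. The SPA has to be given the task's public IP or DNS name instead,
# e.g. via a value injected at build/deploy time.
api_base = os.environ["API_BASE_URL"]          # e.g. "http://203.0.113.10:8080"
print(requests.get(f"{api_base}/health", timeout=2).status_code)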

Problem with setting a service in AWS ECS

I was trying to set up an ECS service running a container image on a cluster, but could not get the setup working.
I have basically followed the guide on https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-blue-green.html, except that I was trying to host the containers on EC2 instances.
I wonder if the issue is related to the network mode (I used "awsvpc").
Expectation
It should show index.html when accessed via the ALB link.
Observation
When I tried to access it via the load balancer link, it gave HTTP 503, and the health check also showed unhealthy.
It also seems ECS keeps "re-creating" the containers? (Forgive me, I am still not familiar with ECS.)
I tried to access the container instance directly but could not reach it either.
I had a look at the ECS agent log (/var/logs/ecs-agent.log) on the container instance; the image appears to have been pulled successfully,
and the task appears to have been started.
In the ECS service events, it seems it kept registering and deregistering the target.
Security groups have been set to accept HTTP traffic
Setup
Tomcat server in the container starts on port 80
ALB
Listener
Target group
ECS task definition creation
{
  "family": "TestTaskDefinition",
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "TestContainer",
      "image": "<Image URI>",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80,
          "protocol": "tcp"
        }
      ],
      "essential": true
    }
  ],
  "requiresCompatibilities": [
    "EC2"
  ],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "<ECS execution role ARN>"
}
ECS service creation
{
  "cluster": "TestCluster",
  "serviceName": "TestService",
  "taskDefinition": "TestTaskDefinition",
  "loadBalancers": [
    {
      "targetGroupArn": "<target group ARN>",
      "containerName": "TestContainer",
      "containerPort": 80
    }
  ],
  "launchType": "EC2",
  "schedulingStrategy": "REPLICA",
  "deploymentController": {
    "type": "CODE_DEPLOY"
  },
  "networkConfiguration": {
    "awsvpcConfiguration": {
      "assignPublicIp": "DISABLED",
      "securityGroups": [ "sg-0f9b629686ca3bd08" ],
      "subnets": [ "subnet-05f47b367df4f50d4", "subnet-0fd76fc8e47ea3be7" ]
    }
  },
  "desiredCount": 1
}
Based on the comments.
To investigate, it was recommended to test the ECS service without the ALB. The test showed that the ALB was marking the ECS service unhealthy because of the application's long startup time.
The issue was solved by increasing the health check grace period (e.g. to 300 seconds).
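For reference, a hedged boto3 sketch of setting that grace period when the service is created; the names and ARNs are the ones from the question, and 300 seconds is the suggested value:

import boto3

ecs = boto3.client("ecs")

# healthCheckGracePeriodSeconds tells ECS to ignore ALB health check results
# for a while after a task starts, so a slow-starting Tomcat isn't killed
# and re-created before it can answer the health check.
ecs.create_service(
    cluster="TestCluster",
    serviceName="TestService",
    taskDefinition="TestTaskDefinition",
    desiredCount=1,
    launchType="EC2",
    schedulingStrategy="REPLICA",
    deploymentController={"type": "CODE_DEPLOY"},
    healthCheckGracePeriodSeconds=300,
    loadBalancers=[
        {
            "targetGroupArn": "<target group ARN>",
            "containerName": "TestContainer",
            "containerPort": 80,
        }
    ],
    networkConfiguration={
        "awsvpcConfiguration": {
            "assignPublicIp": "DISABLED",
            "securityGroups": ["sg-0f9b629686ca3bd08"],
            "subnets": ["subnet-05f47b367df4f50d4", "subnet-0fd76fc8e47ea3be7"],
        }
    },
)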
not sure if EC2 launch type must use "bridge"
You can use awsvpc on EC2 instances as well, but bridge is easier to use in this case.
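If you do switch to bridge mode, the host port can simply be left dynamic. A sketch of what the container definition's port mapping would look like in that case (only the mapping changes; the rest of the task definition stays as above):

# With networkMode "bridge" on EC2, hostPort 0 lets Docker pick an ephemeral
# host port, and ECS registers that port with the ALB target group for you.
bridge_container_definition = {
    "name": "TestContainer",
    "image": "<Image URI>",
    "essential": True,
    "portMappings": [
        {"containerPort": 80, "hostPort": 0, "protocol": "tcp"}
    ],
}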

Force EBS to respect Dockerrun.aws.json exposed ports

Long story short: I'm struggling to set up a single RabbitMQ instance on Elastic Beanstalk. Locally everything works as expected and I'm able to connect to RabbitMQ via port 5672. When I deploy the image to Elastic Beanstalk, it seems that the first port from Dockerrun.aws.json is automatically bound to port 80, so AMQP becomes accessible via that port instead.
Is there any hack I can apply to correctly bind port 5672 to port 5672 of the EC2 host?
Dockerfile
FROM rabbitmq:3.7.7-management-alpine
ADD rabbitmq.config /etc/rabbitmq/
ADD definitions.json /etc/rabbitmq/
EXPOSE 5672
EXPOSE 15672
CMD ["rabbitmq-server"]
Dockerrun.aws.json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "some-image-name",
    "Update": "true"
  },
  "Ports": [
    {
      "HostPort": "5672",
      "ContainerPort": "5672"
    },
    {
      "HostPort": "15672",
      "ContainerPort": "15672"
    }
  ],
  "Volumes": []
}
The hack for that is very easy: simply expose a dummy port as the first entry.
The other ports are then mapped correctly.
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "some-image",
    "Update": "true"
  },
  "Ports": [
    {
      "HostPort": "80",
      "ContainerPort": "80"
    },
    {
      "HostPort": "5672",
      "ContainerPort": "5672"
    },
    {
      "HostPort": "15672",
      "ContainerPort": "15672"
    }
  ],
  "Volumes": []
}

HTTPS on Elastic Beanstalk (Docker Multi-container)

I've been looking around and haven't found much on best practices for setting up HTTPS/SSL on Amazon Elastic Beanstalk with a multi-container Docker environment.
There is plenty of material for single-container configurations, but nothing for multi-container.
My Dockerrun.aws.json looks like this:
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "app-frontend",
      "host": {
        "sourcePath": "/var/app/current/app-frontend"
      }
    },
    {
      "name": "app-backend",
      "host": {
        "sourcePath": "/var/app/current/app-backend"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "app-backend",
      "image": "xxxxx/app-backend",
      "memory": 512,
      "mountPoints": [
        {
          "containerPath": "/app/app-backend",
          "sourceVolume": "app-backend"
        }
      ],
      "portMappings": [
        {
          "containerPort": 4000,
          "hostPort": 4000
        }
      ],
      "environment": [
        {
          "name": "PORT",
          "value": "4000"
        },
        {
          "name": "MIX_ENV",
          "value": "dev"
        },
        {
          "name": "PG_PASSWORD",
          "value": "xxxx"
        },
        {
          "name": "PG_USERNAME",
          "value": "xx"
        },
        {
          "name": "PG_HOST",
          "value": "xxxxx"
        }
      ]
    },
    {
      "name": "app-frontend",
      "image": "xxxxxxx/app-frontend",
      "memory": 512,
      "links": [
        "app-backend"
      ],
      "command": [
        "npm",
        "run",
        "production"
      ],
      "mountPoints": [
        {
          "containerPath": "/app/app-frontend",
          "sourceVolume": "app-frontend"
        }
      ],
      "portMappings": [
        {
          "containerPort": 3000,
          "hostPort": 80
        }
      ],
      "environment": [
        {
          "name": "REDIS_HOST",
          "value": "xxxxxx"
        }
      ]
    }
  ],
  "family": ""
}
My thinking thus far is that I would need to bring an nginx container into the mix to proxy the two services and handle things like mapping different domain names to different services.
Would I go the usual route of just setting up nginx and configuring SSL as normal, or is there a better way, like the .ebextensions method I've seen for single containers (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-docker.html)?
This is more of an idea (I haven't actually done this and I'm not sure it would work), but the components all appear to be available to create an ALB that directs traffic to one process or another based on path rules.
Here is what I think could be done via .ebextensions config files, based on the options available at http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html (a sketch follows this list):
Use aws:elasticbeanstalk:environment:process:default to make sure the default application port and health check are set the way you intend (let's say port 80 is your default in this case).
Use aws:elasticbeanstalk:environment:process:process_name to create a backend process that goes to your second service (port 4000 in this case).
Create a rule for your backend with aws:elbv2:listenerrule:backend which would use something like /backend/* as the path.
Create the SSL listener with aws:elbv2:listener:443 (example at http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environments-cfg-applicationloadbalancer.html) that uses this new backend rule.
I am not sure whether additional rules need to be created for the default listener via aws:elbv2:listener:default. It seems like the default might just match /*, so anything sent to /backend/* would go to the port 4000 container and everything else to the port 3000 container.
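I haven't tried this end to end, but the option settings described above could be sketched roughly as follows, here applied through the API with boto3 rather than an .ebextensions file; the environment name, certificate ARN, and exact values are assumptions to adapt:

import boto3

eb = boto3.client("elasticbeanstalk")

# Rough sketch of the options described above: a default process on port 80,
# a "backend" process on port 4000, a /backend/* path rule pointing at it,
# and an HTTPS listener on 443 using that rule. Names and ARNs are placeholders.
eb.update_environment(
    EnvironmentName="my-env",
    OptionSettings=[
        {"Namespace": "aws:elasticbeanstalk:environment:process:default",
         "OptionName": "Port", "Value": "80"},
        {"Namespace": "aws:elasticbeanstalk:environment:process:backend",
         "OptionName": "Port", "Value": "4000"},
        {"Namespace": "aws:elbv2:listenerrule:backend",
         "OptionName": "PathPatterns", "Value": "/backend/*"},
        {"Namespace": "aws:elbv2:listenerrule:backend",
         "OptionName": "Process", "Value": "backend"},
        {"Namespace": "aws:elbv2:listener:443",
         "OptionName": "Protocol", "Value": "HTTPS"},
        {"Namespace": "aws:elbv2:listener:443",
         "OptionName": "SSLCertificateArns",
         "Value": "arn:aws:acm:eu-west-1:123456789012:certificate/placeholder"},
        {"Namespace": "aws:elbv2:listener:443",
         "OptionName": "Rules", "Value": "backend"},
    ],
)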
You will definitely need an nginx container, for the simple fact that a multi-container Elastic Beanstalk setup does not provide one by default. The reason you see single-container setups using those .ebextensions configs is that for that type of setup Elastic Beanstalk does provide nginx.
The benefit of having your own nginx container is that you won't need a frontend container (assuming you are serving static files). You can write your nginx config so that it serves the static files directly.
Here is my Dockerrun file:
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "dist",
      "host": {
        "sourcePath": "/var/app/current/frontend/dist"
      }
    },
    {
      "name": "nginx-proxy-conf",
      "host": {
        "sourcePath": "/var/app/current/compose/production/nginx/nginx.conf"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "backend",
      "image": "abc/xyz",
      "essential": true,
      "memory": 256
    },
    {
      "name": "nginx-proxy",
      "image": "nginx:latest",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ],
      "depends_on": ["backend"],
      "links": [
        "backend"
      ],
      "mountPoints": [
        {
          "sourceVolume": "dist",
          "containerPath": "/var/www/app/frontend/dist",
          "readOnly": true
        },
        {
          "sourceVolume": "awseb-logs-nginx-proxy",
          "containerPath": "/var/log/nginx"
        },
        {
          "sourceVolume": "nginx-proxy-conf",
          "containerPath": "/etc/nginx/nginx.conf",
          "readOnly": true
        }
      ]
    }
  ]
}
I also highly recommend using AWS services for setting up your SSL: Route 53 and Certificate Manager. They play nicely together and, if I understand correctly, allow you to terminate SSL at the load balancer level.