Force EBS to respect Dockerrun.aws.json exposed ports - amazon-web-services

Long story short: I'm struggling to set up a single-instance RabbitMQ on Elastic Beanstalk (EBS). Locally everything works as expected and I can connect to RabbitMQ on port 5672. When I deploy the image to EBS, it seems that the first port from Dockerrun.aws.json is automatically bound to port 80, so AMQP is only accessible via that port.
Is there any hack I can apply to correctly bind container port 5672 to port 5672 on the EC2 host?
Dockerfile
FROM rabbitmq:3.7.7-management-alpine
ADD rabbitmq.config /etc/rabbitmq/
ADD definitions.json /etc/rabbitmq/
EXPOSE 5672
EXPOSE 15672
CMD ["rabbitmq-server"]
Dockerrun.aws.json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "some-image-name",
    "Update": "true"
  },
  "Ports": [
    {
      "HostPort": "5672",
      "ContainerPort": "5672"
    },
    {
      "HostPort": "15672",
      "ContainerPort": "15672"
    }
  ],
  "Volumes": []
}

The hack for this is very easy: expose a throwaway port as the first entry. Elastic Beanstalk only auto-binds the first port in the list, so once a dummy 80:80 mapping occupies that slot, the remaining ports are mapped correctly.
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "some-image",
    "Update": "true"
  },
  "Ports": [
    {
      "HostPort": "80",
      "ContainerPort": "80"
    },
    {
      "HostPort": "5672",
      "ContainerPort": "5672"
    },
    {
      "HostPort": "15672",
      "ContainerPort": "15672"
    }
  ],
  "Volumes": []
}
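If the dummy first entry feels too hacky, another option (a hedged sketch, not from the original answer) is the multi-container v2 format, where every entry in portMappings is honored rather than just the first. The memory value below is an assumption, since v2 container definitions require one:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "rabbitmq",
      "image": "some-image-name",
      "essential": true,
      "memory": 512,
      "portMappings": [
        {
          "hostPort": 5672,
          "containerPort": 5672
        },
        {
          "hostPort": 15672,
          "containerPort": 15672
        }
      ]
    }
  ]
}
Note that the v2 format requires the 'Multi-container Docker' platform rather than the single-container one.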

Related

Traefik 502 Bad Gateway with WSS on AWS ECS Fargate

I'm trying to run a secured WebSocket (WSS) service, as well as an HTTPS service, in an AWS Fargate task.
I use Traefik with an ECS provider to do the routing inside the task.
My HTTPS service works great, but the WSS one doesn't: I get a 502 Bad Gateway error.
Here is the log I get:
level=debug msg="'502 Bad Gateway' caused by: EOF"
Here is my (simplified) JSON file that I use to deploy my task definition.
Container1 is the one with WSS; Container2 is the one with HTTPS:
{
  "containerDefinitions": [
    {
      "portMappings": [
        {
          "hostPort": 443,
          "protocol": "tcp",
          "containerPort": 443
        },
        {
          "hostPort": 8080,
          "protocol": "tcp",
          "containerPort": 8080
        }
      ],
      "image": "traefik",
      "name": "Traefik",
      "command": [
        "--api.dashboard=true",
        "--api.insecure=true",
        "--log.level=DEBUG",
        "--providers.ecs=true",
        "--providers.ecs.exposedbydefault=false",
        "--providers.ecs.autoDiscoverClusters=false",
        "--providers.ecs.clusters=ClusterName",
        "--entrypoints.websecure.address=:443",
        "--serversTransport.insecureSkipVerify=true"
      ]
    },
    {
      "portMappings": [
        {
          "hostPort": 1234,
          "protocol": "tcp",
          "containerPort": 1234
        }
      ],
      "image": "xxx",
      "name": "Container1",
      "dockerLabels": {
        "traefik.enable": "true",
        "traefik.port": "1234",
        "traefik.http.routers.container1-router.tls": "true",
        "traefik.http.routers.container1-router.rule": "PathPrefix(`/container1`)",
        "traefik.http.routers.container1-router.entrypoints": "websecure",
        "traefik.http.middlewares.container1-strip.stripprefix.prefixes": "/container1",
        "traefik.http.middlewares.container1-strip.stripprefix.forceSlash": "false",
        "traefik.http.routers.container1-router.middlewares": "container1-strip"
      }
    },
    {
      "portMappings": [
        {
          "hostPort": 5000,
          "protocol": "tcp",
          "containerPort": 5000
        }
      ],
      "image": "xxx",
      "name": "Container2",
      "dockerLabels": {
        "traefik.enable": "true",
        "traefik.port": "5000",
        "traefik.http.routers.container2-router.tls": "true",
        "traefik.http.routers.container2-router.rule": "PathPrefix(`/container2`)",
        "traefik.http.routers.container2-router.entrypoints": "websecure",
        "traefik.http.middlewares.container2-strip.stripprefix.prefixes": "/container2",
        "traefik.http.middlewares.container2-strip.stripprefix.forceSlash": "false",
        "traefik.http.routers.container2-router.middlewares": "container2-strip"
      }
    }
  ]
}
The Traefik dashboard displays my routers just fine, so I can't figure out what the issue might be.
Thanks,
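A hedged observation, not from the original thread: traefik.port is a Traefik v1 label, while the routers and middlewares above use v2 syntax. In v2, the backend port (and, for a backend that itself terminates TLS, the scheme) is declared per service; 'container1-service' below is a hypothetical name:
"dockerLabels": {
  "traefik.enable": "true",
  "traefik.http.routers.container1-router.rule": "PathPrefix(`/container1`)",
  "traefik.http.routers.container1-router.entrypoints": "websecure",
  "traefik.http.routers.container1-router.tls": "true",
  "traefik.http.routers.container1-router.service": "container1-service",
  "traefik.http.services.container1-service.loadbalancer.server.port": "1234",
  "traefik.http.services.container1-service.loadbalancer.server.scheme": "https"
}
If the WSS container serves TLS and Traefik connects to it over plain HTTP, the failed handshake can surface exactly as the '502 Bad Gateway caused by: EOF' in the debug log, so the scheme label is worth checking first.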

Communicate between docker containers in a docker network in AWS Beanstalk

I'm using AWS Beanstalk to deploy my project as 'Multi-container Docker running on 64bit Amazon Linux'.
Here's my Dockerrun.aws.json, as per the documentation:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "child",
      "image": "nithinsgowda/child",
      "essential": true,
      "memory": 256,
      "portMappings": [
        {
          "hostPort": 9000,
          "containerPort": 9000
        }
      ],
      "links": [
        "master"
      ]
    },
    {
      "name": "master",
      "image": "nithinsgowda/master",
      "essential": true,
      "memory": 512,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 8080
        }
      ],
      "links": [
        "child"
      ]
    }
  ]
}
I can access my master container at port 80 from the public internet.
Inside my master container, I need to make an API call to the child container.
I have tried the options below; none of them worked:
fetch('http://child/api')
fetch('http://child:9000/api')
fetch('http://15.14.13.12:9000/api') //My public DNS for the beanstalk application (Example)
In a local docker-compose environment, 'http://child/api' works perfectly fine, but it doesn't work on Beanstalk.
How do I communicate with the child container from my master container?
I have even tried the bindIP attribute: I assigned a local IP and tried accessing the container with it, and it still doesn't work.
When I looked into the server logs, docker ps had been executed by the environment, both containers were up and running, and the port mappings were displayed correctly.
Here's what you need to specify in Dockerrun.aws.json:
"containerDefinitions": [
{
"name": "child",
"image": "nithinsgowda/child",
"essential": true,
"memory": 256,
"portMappings": [
{
"hostPort": 9000,
"containerPort": 9000
}
],
"links": [
"master"
],
"environment": [
{
"name": "Container",
"value": "child"
}
]
},
The environment variable named Container will be the name given to your container inside the network.
"environment": [
{
"name": "Container",
"value": "child" //Any custom name accepted
}
]
Hence, after specifying the environment variable, I can access the child container with fetch('http://child:9000/api').
Here's the official AWS documentation specifying the above: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html#create_deploy_docker_v2config_dockerrun
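For symmetry, a hedged sketch of the master container's definition with the same pattern applied (only the environment block is new; the rest is from the question):
{
  "name": "master",
  "image": "nithinsgowda/master",
  "essential": true,
  "memory": 512,
  "portMappings": [
    {
      "hostPort": 80,
      "containerPort": 8080
    }
  ],
  "links": [
    "child"
  ],
  "environment": [
    {
      "name": "Container",
      "value": "master"
    }
  ]
}
With both containers named this way, the child can call the master as fetch('http://master:8080/api'); note the container port 8080 rather than the host port 80, since traffic inside the Docker network bypasses the host mapping.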

Docker Compose dependency pass to AWS Elastic Beanstalk

In Docker Compose, communication between containers uses the service name. For example, docker-compose.yml can define a dependency:
depends_on:
  - database
This dependency can then be used in a connection string:
"server=database;uid=root;pwd=root;database=database"
Essentially, the names of the services defined in docker-compose.yml act as hostnames. I use AWS Elastic Beanstalk to deploy my microservices architecture to the cloud, and when I do a local run with the Dockerrun.aws.json generated by container-transform, this dependency is not available.
My question is: am I doing something wrong? Is a dependency like the one in Docker Compose available in AWS Elastic Beanstalk?
Here are the relevant parts of my real docker-compose.yml:
version: '3'
services:
  rabbitmq: # login guest:guest
    image: rabbitmq:3-management
    hostname: "rabbitmq"
    labels:
      NAME: "rabbitmq"
    ports:
      - "4369:4369"
      - "5671:5671"
      - "5672:5672"
      - "25672:25672"
      - "15671:15671"
      - "15672:15672"
  xms.accounts:
    image: ditrikss/accounts
    build: ./Microservices/Account/Xms
    restart: always
    ports:
      - 6001:80
    depends_on:
      - xdb.accounts
      - rabbitmq
    environment:
      - ASPNETCORE_ENVIRONMENT=Production
  xdb.accounts:
    image: mysql/mysql-server
    restart: always
    environment:
      MYSQL_DATABASE: 'xdb_accounts'
      MYSQL_USER: 'root'
      MYSQL_PASSWORD: 'root'
      MYSQL_ROOT_PASSWORD: 'root'
    ports:
      - '6002:3306'
    volumes:
      - "./Databases/Scripts/xdb_Accounts/Create/1_accounts.sql:/docker-entrypoint-initdb.d/1.sql"
      - "./Databases/Scripts/xdb_Accounts/Create/2_passwords.sql:/docker-entrypoint-initdb.d/2.sql"
      - "./Databases/Scripts/xdb_Accounts/Create/3_channel_features.sql:/docker-entrypoint-initdb.d/3.sql"
      - "./Databases/Scripts/xdb_Accounts/Create/4_streaming_features.sql:/docker-entrypoint-initdb.d/4.sql"
And the corresponding part of the Dockerrun.aws.json file:
{
  "AWSEBDockerrunVersion": "2",
  "containerDefinitions": [
    {
      "dockerLabels": {
        "NAME": "rabbitmq"
      },
      "essential": true,
      "image": "rabbitmq:3-management",
      "name": "rabbitmq",
      "portMappings": [
        {
          "containerPort": 4369,
          "hostPort": 4369
        },
        {
          "containerPort": 5671,
          "hostPort": 5671
        },
        {
          "containerPort": 5672,
          "hostPort": 5672
        },
        {
          "containerPort": 25672,
          "hostPort": 25672
        },
        {
          "containerPort": 15671,
          "hostPort": 15671
        },
        {
          "containerPort": 15672,
          "hostPort": 15672
        }
      ]
    },
    {
      "environment": [
        {
          "name": "MYSQL_DATABASE",
          "value": "xdb_accounts"
        },
        {
          "name": "MYSQL_USER",
          "value": "root"
        },
        {
          "name": "MYSQL_PASSWORD",
          "value": "root"
        },
        {
          "name": "MYSQL_ROOT_PASSWORD",
          "value": "root"
        }
      ],
      "essential": true,
      "image": "mysql/mysql-server",
      "mountPoints": [
        {
          "containerPath": "/docker-entrypoint-initdb.d/1.sql",
          "sourceVolume": "_DatabasesScriptsXdb_AccountsCreate1_Accounts_Sql"
        },
        {
          "containerPath": "/docker-entrypoint-initdb.d/2.sql",
          "sourceVolume": "_DatabasesScriptsXdb_AccountsCreate2_Passwords_Sql"
        },
        {
          "containerPath": "/docker-entrypoint-initdb.d/3.sql",
          "sourceVolume": "_DatabasesScriptsXdb_AccountsCreate3_Channel_Features_Sql"
        },
        {
          "containerPath": "/docker-entrypoint-initdb.d/4.sql",
          "sourceVolume": "_DatabasesScriptsXdb_AccountsCreate4_Streaming_Features_Sql"
        }
      ],
      "name": "xdb.accounts",
      "portMappings": [
        {
          "containerPort": 3306,
          "hostPort": 6002
        }
      ]
    },
    {
      "environment": [
        {
          "name": "ASPNETCORE_ENVIRONMENT",
          "value": "Production"
        }
      ],
      "essential": true,
      "image": "ditrikss/accounts",
      "name": "xms.accounts",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 6001
        }
      ]
    }
  ]
}
Thanks in advance!
According to the Dockerrun.aws.json v2 reference, you should add a links section to your Dockerrun.aws.json file. The reference defines links as:
"List of containers to link to. Linked containers can discover each other and communicate securely."
Example usage:
{
  "name": "nginx-proxy",
  "image": "nginx",
  "essential": true,
  "memory": 128,
  "portMappings": [
    {
      "hostPort": 80,
      "containerPort": 80
    }
  ],
  "links": [
    "php-app"
  ],
  "mountPoints": [
    {
      "sourceVolume": "php-app",
      "containerPath": "/var/www/html",
      "readOnly": true
    }
  ]
}
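Applied to the question's file, a minimal sketch (container names taken from the docker-compose.yml above) gives xms.accounts links to both of its dependencies:
{
  "essential": true,
  "image": "ditrikss/accounts",
  "name": "xms.accounts",
  "links": [
    "rabbitmq",
    "xdb.accounts"
  ],
  "environment": [
    {
      "name": "ASPNETCORE_ENVIRONMENT",
      "value": "Production"
    }
  ],
  "portMappings": [
    {
      "containerPort": 80,
      "hostPort": 6001
    }
  ]
}
After that, the compose-style connection string should work with the linked container's name as the host, e.g. "server=xdb.accounts;uid=root;pwd=root;database=xdb_accounts".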

AWS Fargate Containers with multiple open ports

This is resolved, probably through one of the security group changes I made.
I have a container that spawns multiple programs. Each program listens on a unique port. It's a round-robin thing, and in a regular docker environment, we expose the possible range. Everything works just fine. Another container has an app that attaches to each of the little agents running in the first container. It's normal socket communications from there.
Now we're trying to migrate to Fargate. I've set up the port mappings when creating the task definition, although there's a note that they might be ignored by Fargate. I'm seeing hints that Fargate really only lets you open a single port, referred to as the containerPort, and that's all you get. That seems... insane.
nmap shows the ports as filtered.
Am I just doing something wrong? Does anyone have hints what I should look at?
I read one paper that talked about a network load balancer. That seems like a crazy solution.
I don't want to spawn multiple containers, for two basic reasons. First, we'd have to entirely rewrite the app that spawns these agents. Second, container startup time is way too long for a responsive environment.
Suggestions of what I should look at?
Per a request, here's the relevant JSON, edited for brevity.
{
  "family": "agents",
  "executionRoleArn": "ecsTaskExecutionRole",
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "agent-container",
      "image": "agent-container:latest",
      "cpu": 256,
      "memory": 1024,
      "portMappings": [
        {
          "containerPort": 22,
          "hostPort": 22,
          "protocol": "tcp"
        },
        {
          "containerPort": 80,
          "hostPort": 80,
          "protocol": "tcp"
        },
        {
          "containerPort": 15000,
          "hostPort": 15000,
          "protocol": "tcp"
        },
        {
          "containerPort": 15001,
          "hostPort": 15001,
          "protocol": "tcp"
        },
        ...
      ],
      "essential": true,
      "environment": [ ... ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/ct-test-agents",
          "awslogs-region": "",
          "awslogs-stream-prefix": "ct-test-agents"
        }
      }
    }
  ],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "cpu": "256",
  "memory": "1024"
}
Could it be an issue with the security group attached to the service/task? Did you add rules that allow incoming traffic on the specified ports?
Since you could reach the service with nmap, I assume it is already publicly reachable and has a public IP address, but maybe the security group does not allow access to the ports.
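For reference, a hedged sketch of such an ingress rule in CloudFormation; the 15000-15100 range and the open 0.0.0.0/0 source are assumptions, so scope them to the real agent port range and client network:
{
  "Type": "AWS::EC2::SecurityGroup",
  "Properties": {
    "GroupDescription": "Allow inbound traffic to the agent port range",
    "VpcId": "vpc-xxxxxxxx",
    "SecurityGroupIngress": [
      {
        "IpProtocol": "tcp",
        "FromPort": 15000,
        "ToPort": 15100,
        "CidrIp": "0.0.0.0/0"
      }
    ]
  }
}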

Elastic Beanstalk Multicontainer Docker environment: no entries in /etc/hosts for linked containers

I have an environment with a few containers, some of which are linked. When I run the environment with "docker-compose up -d", it creates entries in /etc/hosts for the linked containers. When I run it with "eb local run", no entries are created. Why is that?
My Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "api",
      "image": "php7",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 8080,
          "containerPort": 80
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "api",
          "containerPath": "/var/www/html/"
        }
      ]
    },
    {
      "name": "nodeapi",
      "image": "nodejs",
      "essential": true,
      "memory": 256,
      "portMappings": [
        {
          "hostPort": 5000,
          "containerPort": 5000
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "nodeapi",
          "containerPath": "/var/www/app/"
        }
      ],
      "Logging": "/var/eb_log"
    },
    {
      "name": "proxy",
      "image": "nginx",
      "essential": true,
      "memory": 128,
      "links": [
        "api",
        "nodeapi"
      ],
      "portMappings": [
        {
          "hostPort": 8443,
          "containerPort": 80
        }
      ]
    }
  ]
}
This generates docker-compose.yml:
api:
  image: php7
  ports:
    - 8080:80
nodeapi:
  image: nodejs
  ports:
    - 5000:5000
proxy:
  image: nginx
  links:
    - api:api
    - nodeapi:nodeapi
  ports:
    - 8443:80
Docker switched to DNS-based lookups a while back instead of adding entries to /etc/hosts. Linking is also discouraged in favor of putting the containers on a common network.
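As a hedged illustration of the common-network approach (the network name appnet is made up), the generated compose file would instead look something like:
version: '2'
services:
  api:
    image: php7
    networks:
      - appnet
  nodeapi:
    image: nodejs
    networks:
      - appnet
  proxy:
    image: nginx
    networks:
      - appnet
networks:
  appnet: {}
On a user-defined network, each service is reachable by its service name through Docker's embedded DNS, with no links and no /etc/hosts entries.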
OK, this was a local issue. I upgraded Docker and the EB CLI to the latest versions, which solved it. I'm not sure why the EB CLI previously failed to add aliases to /etc/hosts, but after the upgrade it does. Now I get the same results with either "docker-compose up" or "eb local run". All linked containers are linked now and work as expected.