I'm trying to run a secured WebSocket (WSS) service, as well as an HTTPS service, in an AWS Fargate task.
I use traefik with an ECS provider to do the routing inside the task.
My HTTPS service is working great, but the WSS isn't: I get a 502 Bad Gateway error.
Here is the log that I get:
level=debug msg="'502 Bad Gateway' caused by: EOF"
Here is my (simplified) JSON file that I use to deploy my task definition.
Container 1 is the one with WSS; Container 2 is the one with HTTPS.
{
"containerDefinitions": [
{
"portMappings": [
{
"hostPort": 443,
"protocol": "tcp",
"containerPort": 443
},
{
"hostPort": 8080,
"protocol": "tcp",
"containerPort": 8080
}
],
"image": "traefik",
"name": "Traefik",
"command": [
"--api.dashboard=true",
"--api.insecure=true",
"--log.level=DEBUG",
"--providers.ecs=true",
"--providers.ecs.exposedbydefault=false",
"--providers.ecs.autoDiscoverClusters=false",
"--providers.ecs.clusters=ClusterName",
"--entrypoints.websecure.address=:443",
"--serversTransport.insecureSkipVerify=true"
]
},
{
"portMappings": [
{
"hostPort": 1234,
"protocol": "tcp",
"containerPort": 1234
}
],
"image": "xxx",
"name": "Container1",
"dockerLabels": {
"traefik.enable": "true",
"traefik.port": "1234",
"traefik.http.routers.container1-router.tls": "true",
"traefik.http.routers.container1-router.rule": "PathPrefix(`/container1`)",
"traefik.http.routers.container1-router.entrypoints": "websecure",
"traefik.http.middlewares.container1-strip.stripprefix.prefixes": "/container1",
"traefik.http.middlewares.container1-strip.stripprefix.forceSlash": "false",
"traefik.http.routers.container1-router.middlewares": "container1-strip"
}
},
{
"portMappings": [
{
"hostPort": 5000,
"protocol": "tcp",
"containerPort": 5000
}
],
"image": "xxx",
"name": "Container2",
"dockerLabels": {
"traefik.enable": "true",
"traefik.port": "5000",
"traefik.http.routers.container2-router.tls": "true",
"traefik.http.routers.container2-router.rule": "PathPrefix(`/container2`)",
"traefik.http.routers.container2-router.entrypoints": "websecure",
"traefik.http.middlewares.container2-strip.stripprefix.prefixes": "/container2",
"traefik.http.middlewares.container2-strip.stripprefix.forceSlash": "false",
"traefik.http.routers.container2-router.middlewares": "container2-strip"
}
}
]
}
The Traefik dashboard displays my routers just fine, so I can't figure out what the issue might be.
Thanks,
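A guess rather than a verified fix: a "502 Bad Gateway caused by: EOF" from Traefik often means it dialed the backend over plain HTTP while the backend expected a TLS handshake. Since Container1 terminates TLS itself (it serves WSS, and serversTransport.insecureSkipVerify is already set), the Traefik v2 service labels would need to point at it with the https scheme. Note also that traefik.port is a Traefik v1 label; v2 uses the services syntax. A minimal sketch (the name container1-service is made up):
"dockerLabels": {
  "traefik.http.routers.container1-router.service": "container1-service",
  "traefik.http.services.container1-service.loadbalancer.server.port": "1234",
  "traefik.http.services.container1-service.loadbalancer.server.scheme": "https"
}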
Related
I have an Elastic Beanstalk multi-container environment with the following Dockerrun.aws.json, and the ELB address is http://{app-name-env}.ap-south-1.elasticbeanstalk.com/. Now I want to convert HTTP to HTTPS without buying a domain through Route 53 (or making any other paid purchase). In some examples, HTTPS is used with {app-name-env}.ap-south-1.elasticbeanstalk.com as the domain.
{
"AWSEBDockerrunVersion": 2,
"containerDefinitions": [{
"environment": [{
"name": "POSTGRES_USER",
"value": "admin"
},
{
"name": "POSTGRES_PASSWORD",
"value": "postgres"
},
{
"name": "POSTGRES_DB",
"value": "some-db"
}
],
"essential": true,
"image": "postgres:12-alpine",
"memory": 300,
"mountPoints": [{
"containerPath": "/var/lib/postgresql/data/",
"sourceVolume": "postgres_data"
}],
"name": "db",
"portMappings": [{
"containerPort": 5432,
"hostPort": 5432
}]
},
{
"essential": true,
"links": [
"db"
],
"name": "web",
"image": "**********.dkr.ecr.ap-south-1.amazonaws.com/***:***",
"memory": 300,
"portMappings": [{
"containerPort": 80,
"hostPort": 80
}]
}
],
"volumes": [{
"host": {
"sourcePath": "postgres_data"
},
"name": "postgres_data"
}
]
}
Sadly, you can't do this. For the valid public SSL certificate that HTTPS requires, you need your own domain; you can't use the AWS-provided domain for Elastic Beanstalk.
The tutorial you are following uses ACM for the SSL certificates, which can only be issued for a domain you own.
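If you do buy your own domain, the ACM certificate itself is free and can be requested, for example, via the CLI (example.com is a placeholder):
aws acm request-certificate \
    --domain-name example.com \
    --validation-method DNS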
I have a Prisma project that works fine locally when I run $ docker-compose up. I converted the docker-compose.yml file to Dockerrun.aws.json, but now when I try to run the project locally via $ eb local run I get an error:
mysql_1 | Version: '5.7.24' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server (GPL)
prisma_1 | Exception in thread "main" java.sql.SQLTransientConnectionException: database - Connection is not available, request timed out after 5001ms.
Below is my Dockerrun.aws.json file:
{
"AWSEBDockerrunVersion": "2",
"containerDefinitions": [
{
"environment": [
{
"name": "MYSQL_ROOT_PASSWORD",
"value": "prisma"
}
],
"essential": true,
"memory": 128,
"image": "mysql:5.7",
"mountPoints": [
{
"containerPath": "/var/lib/mysql",
"sourceVolume": "Mysql"
}
],
"name": "mysql",
"portMappings": [
{
"containerPort": 3306,
"hostPort": 3306
}
]
},
{
"environment": [
{
"name": "PRISMA_CONFIG",
"value": "port: 4466\ndatabases:\n default:\n connector: mysql\n host: mysql\n port: 3306\n user: root\n password: prisma\n migrations: true\n"
}
],
"essential": true,
"memory": 128,
"image": "prismagraphql/prisma:1.21",
"name": "prisma",
"portMappings": [
{
"containerPort": 4466,
"hostPort": 4466
}
]
}
],
"family": "",
"volumes": [
{
"host": {
"sourcePath": "mysql"
},
"name": "Mysql"
}
]
}
The error message leads me to believe that there's an issue connecting the prisma container to the mysql instance. If I had to guess, it's the PRISMA_CONFIG value, but I'm not 100% sure. Can someone tell me what I'm doing wrong here?
You can't have those \n in there. YAML cares about real line breaks and spaces.
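For reference, this is what the PRISMA_CONFIG value has to look like once Prisma reads it as YAML, with real line breaks and nested indentation (values taken from the question; the exact indentation widths are reconstructed):
port: 4466
databases:
  default:
    connector: mysql
    host: mysql
    port: 3306
    user: root
    password: prisma
    migrations: true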
Long story short: I'm struggling to set up a single RabbitMQ instance on Elastic Beanstalk. Locally everything works as expected and I can connect to RabbitMQ via port 5672. When I deploy the image to Elastic Beanstalk, the first port from Dockerrun.aws.json seems to be auto-bound to port 80, so AMQP is only accessible via that port.
Is there any hack I can apply to correctly bind port 5672 to port 5672 on the EC2 host?
Dockerfile
FROM rabbitmq:3.7.7-management-alpine
ADD rabbitmq.config /etc/rabbitmq/
ADD definitions.json /etc/rabbitmq/
EXPOSE 5672
EXPOSE 15672
CMD ["rabbitmq-server"]
Dockerrun.aws.json
{
"AWSEBDockerrunVersion": "1",
"Image": {
"Name": "some-image-name",
"Update": "true"
},
"Ports": [{
"HostPort": "5672",
"ContainerPort": "5672"
},
{
"HostPort": "15672",
"ContainerPort": "15672"
}
],
"Volumes": []
}
The hack for that is very easy: simply expose a throwaway port as the first entry (port 80 here, since that is where the first mapping gets bound anyway).
The other ports are then mapped correctly.
{
"AWSEBDockerrunVersion": "1",
"Image": {
"Name": "some-image",
"Update": "true"
},
"Ports": [{
"HostPort": "80",
"ContainerPort": "80"
},
{
"HostPort": "5672",
"ContainerPort": "5672"
},
{
"HostPort": "15672",
"ContainerPort": "15672"
}
],
"Volumes": []
}
I've been looking around and haven't found much content about best practices for setting up HTTPS/SSL on Amazon Elastic Beanstalk with a multi-container Docker environment.
There is plenty of material on single-container configuration, but nothing on multi-container.
My Dockerrun.aws.json looks like this:
{
"AWSEBDockerrunVersion": 2,
"volumes": [
{
"name": "app-frontend",
"host": {
"sourcePath": "/var/app/current/app-frontend"
}
},
{
"name": "app-backend",
"host": {
"sourcePath": "/var/app/current/app-backend"
}
}
],
"containerDefinitions": [
{
"name": "app-backend",
"image": "xxxxx/app-backend",
"memory": 512,
"mountPoints": [
{
"containerPath": "/app/app-backend",
"sourceVolume": "app-backend"
}
],
"portMappings": [
{
"containerPort": 4000,
"hostPort": 4000
}
],
"environment": [
{
"name": "PORT",
"value": "4000"
},
{
"name": "MIX_ENV",
"value": "dev"
},
{
"name": "PG_PASSWORD",
"value": "xxxx"
},
{
"name": "PG_USERNAME",
"value": "xx"
},
{
"name": "PG_HOST",
"value": "xxxxx"
}
]
},
{
"name": "app-frontend",
"image": "xxxxxxx/app-frontend",
"memory": 512,
"links": [
"app-backend"
],
"command": [
"npm",
"run",
"production"
],
"mountPoints": [
{
"containerPath": "/app/app-frontend",
"sourceVolume": "app-frontend"
}
],
"portMappings": [
{
"containerPort": 3000,
"hostPort": 80
}
],
"environment": [
{
"name": "REDIS_HOST",
"value": "xxxxxx"
}
]
}
],
"family": ""
}
My thinking thus far is that I would need to bring an nginx container into the mix to proxy the two services and handle things like mapping different domain names to different services.
Would I go the usual route of setting up nginx and configuring SSL as normal, or is there a better way, like the .ebextensions method I've seen for single containers (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-docker.html)?
This is more of an idea (I haven't actually done this and I'm not sure it would work), but the components all appear to be available to create an ALB that directs traffic to one process or another based on path rules.
Here is what I think could be done via .ebextensions config files, based on the options available at http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html:
Use aws:elasticbeanstalk:environment:process:default to make sure the default application port and health check are set the way you intend (let's say port 80 is your default in this case).
Use aws:elasticbeanstalk:environment:process:process_name to create a backend process that goes to your second service (port 4000 in this case).
Create a rule for your backend with aws:elbv2:listenerrule:backend which would use something like /backend/* as the path.
Create the SSL listener with aws:elbv2:listener:443 (example at http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environments-cfg-applicationloadbalancer.html) that uses this new backend rule.
I am not sure whether additional rules need to be created for the default listener of aws:elbv2:listener:default. It seems like the default might just match /*, so anything sent to /backend/* would go to the port 4000 container and everything else to the port 3000 container. A sketch of these options as an .ebextensions file follows.
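Untested, but the options above might look roughly like this in an .ebextensions config file (this assumes an Application Load Balancer environment; the certificate ARN is a placeholder):
option_settings:
  aws:elasticbeanstalk:environment:process:default:
    Port: '80'
    Protocol: HTTP
  aws:elasticbeanstalk:environment:process:backend:
    Port: '4000'
    Protocol: HTTP
  aws:elbv2:listenerrule:backend:
    PathPatterns: /backend/*
    Process: backend
    Priority: 1
  aws:elbv2:listener:443:
    ListenerEnabled: 'true'
    Protocol: HTTPS
    SSLCertificateArns: arn:aws:acm:us-east-1:123456789012:certificate/example
    Rules: backend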
You will definitely need an nginx container, for the simple fact that a multi-container Elastic Beanstalk setup does not provide one by default. The reason you see single-container setups with these .ebextensions configs is that for that type of setup Elastic Beanstalk does provide nginx.
The benefit of having your own nginx container is that you won't need a frontend container (assuming you are serving static files): you can write your nginx config to serve the static files directly (see the sketch after the Dockerrun file below).
Here is my Dockerrun file:
{
"AWSEBDockerrunVersion": 2,
"volumes": [
{
"name": "dist",
"host": {
"sourcePath": "/var/app/current/frontend/dist"
}
},
{
"name": "nginx-proxy-conf",
"host": {
"sourcePath": "/var/app/current/compose/production/nginx/nginx.conf"
}
}
],
"containerDefinitions": [
{
"name": "backend",
"image": "abc/xyz",
"essential": true,
"memory": 256,
},
{
"name": "nginx-proxy",
"image": "nginx:latest",
"essential": true,
"memory": 128,
"portMappings": [
{
"hostPort": 80,
"containerPort": 80
}
],
"depends_on": ["backend"],
"links": [
"backend"
],
"mountPoints": [
{
"sourceVolume": "dist",
"containerPath": "/var/www/app/frontend/dist",
"readOnly": true
},
{
"sourceVolume": "awseb-logs-nginx-proxy",
"containerPath": "/var/log/nginx"
},
{
"sourceVolume": "nginx-proxy-conf",
"containerPath": "/etc/nginx/nginx.conf",
"readOnly": true
}
]
}
]
}
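And a minimal nginx.conf to go with it, as mentioned above (a sketch only: the /api prefix and the backend port 4000 are assumptions, since the backend container exposes no ports in the Dockerrun):
events {}

http {
  include /etc/nginx/mime.types;

  server {
    listen 80;

    # Static frontend build, mounted from the "dist" volume
    root /var/www/app/frontend/dist;

    location / {
      try_files $uri $uri/ /index.html;
    }

    # API calls go to the linked "backend" container
    location /api/ {
      proxy_pass http://backend:4000;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
    }
  }
}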
I also highly recommend using AWS services for setting up your SSL: Route 53 and Certificate Manager. They play nicely together and, if I understand correctly, let you terminate SSL at the load-balancer level.
How do you pass environment variables to Docker containers running in an AWS Elastic Beanstalk multi-container configuration (different variables for different containers)?
Use the environment key in your container definitions.
{
"containerDefinitions": [
{
"name": "myContainer",
"image": "something",
"environment": [
{
"name": "MY_DB_PASSWORD",
"value": "password"
}
],
"portMappings": [
{
"containerPort": 80,
"hostPort": 80
}
],
"memory": 500,
"cpu": 10
}]
}
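Since each entry in containerDefinitions carries its own environment array, different containers simply get different lists. A trimmed sketch with two hypothetical containers:
{
  "containerDefinitions": [
    {
      "name": "web",
      "image": "something",
      "memory": 500,
      "environment": [
        { "name": "API_URL", "value": "http://api.internal" }
      ]
    },
    {
      "name": "worker",
      "image": "something-else",
      "memory": 500,
      "environment": [
        { "name": "QUEUE_NAME", "value": "jobs" }
      ]
    }
  ]
}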
For more information, see: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html