How to set up an SSL reverse proxy within Amazon ECS?

I have a web app in a Docker container, and I want to proxy it via another container (running NGINX) that is exposed to the outside world and only handles HTTPS traffic. Both containers would be hosted by ECS on the same EC2 instance. What's the best way to make the NGINX container retrieve the SSL private key from S3 or IAM?
Note that I don't want to include the private key in the Docker image for security reasons. I want to retrieve the private key from S3 or IAM when the container is created, taking the AWS user credentials from environment variables.
Amazon Elastic Beanstalk has a nice way to achieve this, but I don't think there's anything like that for ECS. I'm thinking that I'll need to write a Docker entrypoint (or wrap the Nginx command) in order to install an S3/IAM client and then download the key using the credentials in the environment. Does AWS provide a nicer way? It seems like Elastic Load Balancing is a solution, but I can't find any information regarding the security measures between the LB and the EC2 instance.

There are a few parts to this question:
Placing Nginx and the web app on the same instance can be achieved by defining both containers as part of the same ECS task. I would recommend creating a Docker link from the nginx container to the webapp container. This lets you avoid exposing the webapp container's port externally; the only port exposed to the host will be the nginx port.
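For illustration, a minimal compose file along these lines might look like the following (service and image names are hypothetical), written here as a shell heredoc:
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  nginx:
    image: my-nginx-proxy        # hypothetical proxy image
    ports:
      - "443:443"                # only the nginx HTTPS port is published on the host
    links:
      - webapp
  webapp:
    image: my-webapp             # hypothetical application image
    # no ports: section, so the app is reachable only from nginx via the link
EOF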
One way to retrieve the keys securely at container startup is to have a custom shell script start the nginx service in the container. For this you will need to build your own Docker image: in your Dockerfile, define the container's entrypoint as a custom shell script. In that script, you can retrieve the file from S3 using the AWS CLI and then start the nginx service. Using the AWS CLI lets you use the credentials available on the EC2 instance.
If your base Docker image does not include the AWS CLI, you will need to add it.
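For illustration, a minimal entrypoint script along those lines might look like this (the bucket and object names are hypothetical, and it assumes the AWS CLI is in the image and that the instance role, or credentials in the environment, allow s3:GetObject):
#!/bin/sh
# docker-entrypoint.sh: fetch the TLS key and certificate, then start nginx.
set -e

# Hypothetical bucket/object names; credentials come from the EC2 instance
# role or from AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY in the environment.
aws s3 cp s3://my-secret-bucket/ssl/server.key /etc/nginx/ssl/server.key
aws s3 cp s3://my-secret-bucket/ssl/server.crt /etc/nginx/ssl/server.crt
chmod 600 /etc/nginx/ssl/server.key

# Run nginx in the foreground so it remains the container's main process.
exec nginx -g 'daemon off;'
In the Dockerfile you would then point the entrypoint at this script, e.g. ENTRYPOINT ["/docker-entrypoint.sh"].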

Related

How to register multiple target groups via aws ecs cli service command

I have deployed a web server via the ecs-cli compose service up command, registered a domain in Route 53, and registered a certificate through AWS Certificate Manager. Using an ALB (Application Load Balancer), I am able to get dynamic port mapping and HTTPS for my web application, but here is the problem.
Using docker compose as the blueprint for my web application, which consists of 3 containers (frontend, loopback and database (mongo)), my frontend container's dynamic port mapping and HTTPS are up and running fine.
However, the problem comes with the loopback container: the frontend sometimes needs to fetch data via the loopback API server (which uses port 3002), but the loopback container is not configured for HTTPS, which causes the error below when calling the API.
Through the ecs-cli compose service up command, I can configure a target group so the ALB forwards requests to the frontend container (using the --target-group-arn, --container-name and --container-port options to tie the frontend container to a specific target group), but this command does not seem able to map a 2nd target group to my loopback container. Reading https://docs.aws.amazon.com/AmazonECS/latest/developerguide/register-multiple-targetgroups.html suggests that multiple target groups for a service are possible, but I cannot figure out how to use the create-service command to link up my Docker containers without using the ecs-cli compose service up command.
Is there a way to:
Use the ecs-cli compose service up command to register multiple target groups for my containers?
Apply HTTPS also on my loopback URL (whose domain is myDomain.com:3002)?
======================================================
Follow-up tasks
Created 2 target groups
Configured rules and listeners
Knowing ecs-cli service up cannot register multiple target groups, I tried to do it via the console; still, only 1 container can be registered.
Thanks, and I appreciate all the help.
As far as your question is concerned, it is possible to do this using the AWS console, but the ecs-cli does not support multiple target groups at the moment.
You can check ecs-cli compose service up with a load balancer, and also consider amazon-ecs-cli-register-service.
The second error occurs when the frontend application tries to load mixed HTTP and HTTPS resources. If you look into the error, there may be static files or API calls that use HTTP; convert all of these calls to HTTPS and it should work fine. From the error, it looks like a static file is being loaded from an HTTP site.
Once HTTPS is applied, it should point to https://example.com or https://api.example.com; the port is not required in an HTTPS call if it is bound to the standard HTTPS port.
Update:
An ALB routes traffic based on the target group, and each target group contains the desired container (screenshot not included here).
The ecs-cli compose service up command accepts a --target-groups parameter, allowing you to add multiple target groups at once:
ecs-cli compose --file "../../src/docker-compose.yml" `
    --ecs-params "../../src/ecs-params.yml" `
    --project-name xxxxx service up `
    --target-groups "targetGroupArn=arn:aws:elasticloadbalancing:eu-west-3:xxxxx:targetgroup/xxxx_tg1,containerPort=80,containerName=webapi" `
    --target-groups "targetGroupArn=arn:aws:elasticloadbalancing:eu-west-3:xxxxx:targetgroup/xxxx_tg2,containerPort=81,containerName=webapi2" `
    --cluster-config myconfig `
    --ecs-profile myprofile
ecs-cli compose service up documentation

Force DNS Redirect in AWS VPC for Public Hostname

I am trying to deploy a Kubernetes cluster into an AWS environment which does not support Route 53 queries for the generated hostname ($HostA). This environment requires an override of the endpoint configuration to resolve all Route 53 queries to $HostB. Note that I am not in control of either host, and both are reachable on the public internet. The protokube Docker image I am deploying is not aware of this; to make it aware, I would need to build and host the image myself, something I wish to avoid if I can (as I would probably have to do the same for every Docker image I deploy).
I am looking for a way to redirect all requests for $HostA to $HostB without having to change any Docker configuration. Ideally, I would like a way to override all requests to $HostA from within my VPC so that they go to $HostB. If this is not possible: I am in control of the EC2 user data that starts up the EC2 instances hosting the images, so perhaps there is a way I can set /etc/hosts aliases on the EC2 host and force those to be used by all running containers (instead of each container's own /etc/hosts). Again, please keep in mind that I need to be able to control this from the host instance and NOT by overriding the Docker image's configuration.
Thank you!
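One possible approach, sketched under the assumption that the user data can install packages: run dnsmasq on the host with an address override for $HostA, and point the Docker daemon's DNS at the docker0 bridge address so every container uses it. The hostnames below are placeholders for $HostA and $HostB:
#!/bin/bash
# EC2 user data sketch: make lookups of $HostA resolve to $HostB's address.
yum install -y dnsmasq bind-utils

HOST_A=hosta.example.com                              # placeholder for $HostA
HOST_B_IP=$(dig +short hostb.example.com | head -n1)  # placeholder for $HostB

echo "address=/${HOST_A}/${HOST_B_IP}" > /etc/dnsmasq.d/override.conf
systemctl enable --now dnsmasq

# Containers reach the host's dnsmasq via the docker0 bridge address.
echo '{"dns": ["172.17.0.1"]}' > /etc/docker/daemon.json
systemctl restart docker
Note this redirects at the IP level during resolution, so if $HostB serves TLS, its certificate will not match $HostA's name.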

Unable to access REST service deployed in docker swarm in AWS

I used the cloud formation template provided by Docker for AWS setup & prerequisites to set up a docker swarm.
I created a REST service using Tibco BusinessWorks Container Edition and deployed it into the swarm by creating a docker service.
docker service create --name aka-swarm-demo --publish 8087:8085 akamatibco/docker_swarm_demo:part1
The service starts successfully but the CloudWatch logs show the below exception:
I have tried passing the JVM environment variable in the Dockerfile as :
ENV JAVA_OPTS="-Dbw.rest.docApi.port=7778"
but it doesn't help.
The interesting fact is at the end the log says:
com.tibco.thor.frwk.Application - TIBCO-THOR-FRWK-300006: Started BW Application [SFDemo:1.0]
So I tried to access the application using CURL -
curl -X GET --header 'Accept: application/json' 'URL of AWS load balancer : port which I exposed while creating the service/resource URI'
But I am getting the below message:
The REST service works fine when I do docker run.
I have checked the security groups of the manager and the load balancer. The load balancer has inbound open to all traffic, and for the manager I opened HTTP connections.
I am not able to figure out what I have missed. Can anyone please help?
As mentioned in Deploy services to swarm, if you read along, you will find the following:
PUBLISH A SERVICE’S PORTS DIRECTLY ON THE SWARM NODE
Using the routing mesh may not be the right choice for your application if you need to make routing decisions based on application state or you need total control of the process for routing requests to your service’s tasks. To publish a service’s port directly on the node where it is running, use the mode=host option to the --publish flag.
Note: If you publish a service's ports directly on the swarm node using mode=host and also set published=<PORT> this creates an implicit limitation that you can only run one task for that service on a given swarm node. In addition, if you use mode=host and you do not use the --mode=global flag on docker service create, it will be difficult to know which nodes are running the service in order to route work to them.
Publishing ports for services works differently than for regular containers. The problem was: the image does not expose the port after running service create --publish, and hence the swarm routing layer cannot reach the REST service. To resolve this, use mode=host.
So I used the below command to create a service:
docker service create --name tuesday --publish mode=host,target=8085,published=8087 akamatibco/docker_swarm_demo:part1
Which eventually removed the exception.
Also make sure to configure the firewall settings of your load balancer to allow communication over the desired protocols, so that you can reach the applications deployed inside the containers.
In my case it was HTTP; enabling port 8087 on the load balancer served the purpose.
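For a quick check (the host names and resource URI below are placeholders), you can hit the node directly and then the load balancer:
curl -X GET --header 'Accept: application/json' http://<node-public-ip>:8087/<resource-uri>
curl -X GET --header 'Accept: application/json' http://<load-balancer-dns>:8087/<resource-uri>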

AWS ECS Production Docker Deployment

I've recently started using Docker for my own personal website. The design of my website is basically:
Nginx -> Frontend -> Backend -> Database
Currently, the database is hosted using AWS RDS. So we can leave that out for now.
So here are my questions:
I currently have my application separated into different repositories, frontend and backend respectively.
Where should I store my 'root' docker-compose.yml file? I can't decide whether to store it in the frontend or the backend repository.
In a docker-compose.yml file, can the nginx service mount a volume from my frontend service without any ports and serve that directory?
I have been trying for many days but I can't seem to get a proper production deployment of my 3-tier application in an ECS cluster. Is there any good example nginx.conf that I can refer to?
How do I auto-SSL my domain?
Thank you guys!
Where should I store my 'root' docker-compose.yml file?
Many orgs use a top-level repo for storing infrastructure-related metadata such as CloudFormation templates and docker-compose.yml files. Devs clone the top-level repo first, and that repo ideally contains either submodules or tooling for pulling down the sub repos for each component or microservice, so the layout would be something like the sketch below.
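A hypothetical layout (all names are illustrative):
$ tree infrastructure-repo
infrastructure-repo
├── docker-compose.yml
├── cloudformation
│   └── ecs-cluster.yml
├── frontend    <- git submodule pointing at the frontend repo
└── backend     <- git submodule pointing at the backend repo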
In a docker-compose.yml file, Can the nginx serve mount a volume from my frontend service without any ports and serve that directory?
Yes, you could do this, but it would be dangerous and the disk would be a bottleneck. If your intention is to get content from the frontend service and have it served by Nginx, then you should link your frontend service via a port to your Nginx server, and set up Nginx as a reverse proxy in front of your application container. You can also configure Nginx to cache the content from your frontend server on a disk volume (if there is too much content to fit in memory). This is safer than using the disk as the communication link. Here is an example of how to configure such a reverse proxy on AWS ECS: https://github.com/awslabs/ecs-nginx-reverse-proxy/tree/master/reverse-proxy
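For reference, a minimal nginx.conf along the lines of that example might look like this (the service name frontend and port 3000 are assumptions, matching a linked frontend container), written as a shell heredoc:
cat > nginx.conf <<'EOF'
events {}
http {
  server {
    listen 80;
    location / {
      # "frontend" resolves through the container link in the task definition
      proxy_pass http://frontend:3000;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
  }
}
EOF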
I can't seem to get a proper production deployment of my 3-tier application in an ECS cluster. Is there any good example nginx.conf that I can refer to?
The link in my previous answer contains a sample nginx.conf that should be helpful, as well as a sample task definition for deploying an application container and an nginx container, linked to each other, on Amazon ECS.
How do I auto-SSL my domain?
If you are on AWS, the best way to get SSL is to use the built-in SSL termination capabilities of the Application Load Balancer (ALB). AWS ECS integrates with the ALB as a way to get web traffic to your containers, and the ALB integrates with AWS Certificate Manager (https://aws.amazon.com/certificate-manager/). This service gives you a free SSL certificate which updates automatically, so you don't have to worry about your SSL certificate expiring ever again; it is automatically renewed and updated in your ALB.
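As a sketch, attaching an ACM certificate to an HTTPS listener on an ALB from the CLI looks roughly like this (all ARNs are placeholders):
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:region:account:loadbalancer/app/my-alb/123 \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn=arn:aws:acm:region:account:certificate/abc \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:region:account:targetgroup/my-tg/456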

running a docker loop device on aws

I'm new to aws and am having some issues with getting my mobile app back running again. Forgive me if this question seems vague.
For a school project we created a mobile app on AWS and deployed it using Docker containers (another student managed these tasks). When trying to get my own key pair to SSH into my EC2 instance, I detached the volume associated with my instance and reattached it after getting my own key pair. Now I can SSH into my instance, but my front end can't talk to my web server.
So my question is: do I create a new application on Elastic Beanstalk to deploy my app, even though when I run lsblk it shows I have a docker loop device, and when I run docker images I see several that match the name of my application? Or do I somehow get the container running again? docker run doesn't seem to be working.
No need, just upload a new update into Elastic Beanstalk. AWS will handle the rest.
FYI, here is the Elastic Beanstalk single-Docker-container update process (simple, under the hood); a sketch of triggering an update with the EB CLI follows the list:
You upload the update into AWS.
AWS will put it on your S3.
Inside your EC2, there is an Elastic Beanstalk agent. It will check for a new update.
If there is an update, the agent will download the update file and extract it.
The agent will build a new Docker image.
If the build succeeds, it will generate a new config to point Nginx (the web proxy) at the new web server container.
Nginx will be reloaded.
Your old docker container will be destroyed.
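A minimal sketch of pushing such an update with the EB CLI (the application and environment names are hypothetical):
cd my-app         # directory containing your Dockerfile or Dockerrun.aws.json
eb init my-app    # one-time setup that associates the directory with the application
eb deploy my-env  # zips the source, uploads it to S3, and kicks off the agent flow above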
Don't change anything inside the EC2 instances of Elastic Beanstalk unless you know what you are doing. Elastic Beanstalk is designed to automate deployment and scaling, so if you change something in your EC2 instance manually, it might be lost. Of course, you can modify your EC2 instance, but you need to automate the change using .ebextensions or take an image.