How to create an API endpoint after creating a Docker container on EC2 - amazon-web-services

I've created a Docker container for my server on EC2 using Node.js.
I wonder what the next step should be if I want to create a REST API endpoint for public access.
Dockerfile
FROM node:lts-alpine
WORKDIR /server
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3030
CMD ["npm", "run", "dev"]
docker-compose.yml
version: '2.1'
services:
  test-db:
    image: mysql:5.7
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=true
      - MYSQL_USER=admin
      - MYSQL_PASSWORD=12345
      - MYSQL_DATABASE=test
    volumes:
      - ./db-data:/var/lib/mysql
    ports:
      - 3306:3306
  test-web:
    environment:
      - NODE_ENV=local
      #- DEBUG=*
      - PORT=3030
    build: .
    command: >
      ./wait-for-db-redis.sh test-db npm run dev
    volumes:
      - ./:/server
    ports:
      - "3030:3030"
    depends_on:
      - test-db

From your comments it seems that you want to use HTTPS for your endpoint, and that is probably the biggest thing to set up, assuming you already own a domain, e.g. myapi.com. If you don't have a domain, you will have to buy one if you want a custom URL.
There are several possibilities for adding HTTPS:
1. Add nginx to your application as an extra container, which will accept connections on port 443 and forward them to your app on port 3030. nginx can be set up to use HTTPS. For that you need valid, public SSL certificates from a third party (e.g. Let's Encrypt). You could use AWS ACM to get them, but on EC2 instances ACM certificates only work with Nitro Enclaves. For the instance itself, you can allocate an Elastic IP (EIP) and point a Route53 record for your domain at the EIP. A sketch of this option is shown after this list.
2. Front your instance with a Load Balancer (LB). This is the easiest to set up, as you can get free SSL certs from ACM and deploy them on the LB. Point your Route53 domain at the LB's URL with an alias record.
3. Set up API Gateway or a CloudFront distribution in front of your EC2 instance. The issue is that connections between API Gateway/CloudFront and your instance will go over HTTP, unless you set up valid SSL certificates on the instance, as in the first option. You will also need an EIP for the instance, or to front it with an LB, before using API Gateway or CloudFront.
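For the first option, a minimal sketch of the extra container, assuming you have already obtained certificates (e.g. from Let's Encrypt) into ./certs and written a reverse-proxy config in ./nginx.conf; the paths, the service name nginx, and myapi.com are illustrative, not from the original setup:
nginx:
  image: nginx:alpine
  ports:
    - "443:443"
  volumes:
    - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro   # reverse-proxy config, sketched below
    - ./certs:/etc/nginx/certs:ro                      # fullchain.pem / privkey.pem
  depends_on:
    - test-web
And a matching nginx.conf sketch:
server {
    listen 443 ssl;
    server_name myapi.com;
    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;
    location / {
        proxy_pass http://test-web:3030;   # the compose service name resolves on the shared network
        proxy_set_header Host $host;
    }
}
With this in place, 443 is the only port that needs to be open in the instance's security group; 3030 can stay internal to the compose network.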

Related

How to map host port and container port in AWS Fargate and AWS Beanstalk for a docker deployment?

I am trying to deploy a web application to AWS Fargate as well as AWS Beanstalk.
My docker-compose file looks like this (just an example; please focus on the ports):
services:
  application-gateway:
    image: "gcr.io/docker-public/application:latest"
    container_name: application-name
    ports:
      - "443:9443"
      - "8443:8443"
**Issue with AWS Fargate**
I need to know how to map these ports: bridge networking doesn't get enabled, and the only option I see is how to change the host port (see the sketch below). I can see that once I deploy the public Docker image it gets deployed in Fargate, but how do I access the application's DNS URL?
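For background on the Fargate side: Fargate tasks run with the awsvpc network mode, where each task gets its own elastic network interface and the host port, when specified, must equal the container port, so there is no bridge-style remapping to configure. A sketch of the equivalent port mappings in a CloudFormation-style task definition (names taken from the compose file above, everything else assumed):
ContainerDefinitions:
  - Name: application-gateway
    Image: gcr.io/docker-public/application:latest
    PortMappings:
      - ContainerPort: 9443   # in awsvpc mode the host port must match the container port
        Protocol: tcp
      - ContainerPort: 8443
        Protocol: tcp
To reach the task over the network, assign it a public IP (or front it with a load balancer) and open the container ports in the task's security group; Fargate tasks do not get a DNS name on their own.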
**Issue facing in AWS Beanstalk**
I was able to deploy the application in a single-instance environment, but I am unable to deploy it in a load-balanced environment. Again, I suspect the issue is with the ports on the load balancer; I have opened these ports in the security group, though.
Thanks,

Docker Compose ECS Service fails when using a provided LoadBalancer

I am deploying a compose file to an AWS ECS context with the following docker-compose.yml:
x-aws-loadbalancer: "${LOADBALANCER_ARN}"
services:
  webapi:
    image: ${DOCKER_REGISTRY-}webapi
    build:
      context: .
      dockerfile: webapi/Dockerfile
    environment:
      ASPNETCORE_URLS: http://+:80
      ASPNETCORE_ENVIRONMENT: Development
    ports:
      - target: 80
        x-aws-protocol: http
When I create a load balancer using these instructions, the load balancer is assigned the default security group for the default VPC. Apparently that doesn't match the ingress rules for the Docker services, because if I go and look at the task in ECS I see it being killed over and over for failing an ELB health check.
The only way to fix it is to go into the AWS Console and assign the security group that docker compose created to represent the default network to the load balancer. But that's insane.
How do I create a load balancer with the correct minimum-access security group so it will be able to talk to compose-generated services created later?
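One way to avoid the console step (a sketch with placeholder IDs, not from the thread) is to create the load balancer yourself with a dedicated security group that already allows the ingress you need, and hand its ARN to compose via LOADBALANCER_ARN:
# all IDs below (vpc-..., subnet-..., sg-...) are placeholders
aws ec2 create-security-group --group-name compose-lb-sg \
  --description "Ingress for compose-managed ECS services" --vpc-id vpc-xxxxxxxx
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
aws elbv2 create-load-balancer --name compose-lb --type application \
  --subnets subnet-aaaa subnet-bbbb --security-groups sg-xxxxxxxx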

Can't connect to elasticache redis from docker container running in EC2

As a part of my CI process, I am creating a docker-machine EC2 instance and running 2 Docker containers inside of it via docker-compose. The server container's test script attempts to connect to an AWS ElastiCache Redis instance within the same VPC as the EC2 instance. When the test script is run, I get the following error:
1) Storage
check cache connection
should return seeded value in redis:
Error: Timeout of 2000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/usr/src/app/test/scripts/test.js)
at listOnTimeout (internal/timers.js:549:17)
at processTimers (internal/timers.js:492:7)
Update: I can connect via redis-cli from the EC2 itself:
redis-cli -c -h ***.cache.amazonaws.com -p 6379 ping
> PONG
It looks like I can't connect to my Redis instance because my Docker container is using an IP that is not within the same VPC as my ElastiCache instance. How can I set up my Docker config to use the same IP as the host machine while building my containers from remote images? Any help would be appreciated.
Relevant section of my docker-compose.yml:
version: '3.8'
services:
  server:
    build:
      context: ./
      dockerfile: Dockerfile
    image: docker.pkg.github.com/$GITHUB_REPOSITORY/$REPOSITORY_NAME-server:github_ci_$GITHUB_RUN_NUMBER
    container_name: $REPOSITORY_NAME-server
    command: npm run dev
    ports:
      - "8080:8080"
      - "6379:6379"
    env_file: ./.env
Server container Dockerfile:
FROM node:12-alpine
# create app dir
WORKDIR /usr/src/app
# install dependencies
COPY package*.json ./
RUN npm install
# bundle app source
COPY . .
EXPOSE 8080 6379
CMD ["npm", "run", "dev"]
ElastiCache Redis SG inbound rules and EC2 SG inbound rules: (screenshots omitted)
I solved the problem through extensive trial and error. The major hint that pointed me in the right direction was found in the Docker docs:
By default, the container is assigned an IP address for every Docker network it connects to. The IP address is assigned from the pool assigned to the network...
ElastiCache instances are only accessible internally, from within their VPC. Based on my config, the Docker container and the EC2 instance were running on two different IP addresses, but only the EC2 IP was whitelisted to connect to ElastiCache.
I had to bind the Docker container to the host EC2 instance's network in my docker-compose.yml by setting the container's network_mode to "host":
version: '3.8'
services:
  server:
    image: docker.pkg.github.com/$GITHUB_REPOSITORY/$REPOSITORY_NAME-server:github_ci_$GITHUB_RUN_NUMBER
    container_name: $REPOSITORY_NAME-server
    command: npm run dev
    # host networking makes the container share the instance's network stack,
    # so no port mappings are needed (Docker ignores them in this mode)
    network_mode: "host"
    env_file: ./.env
    ...
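A quick way to verify the fix from the EC2 host is a Node one-liner inside the container (net is a Node built-in; the Redis hostname stays redacted as in the question):
docker exec $REPOSITORY_NAME-server node -e "
const net = require('net');
const s = net.connect(6379, '***.cache.amazonaws.com');   // ElastiCache endpoint
s.on('connect', () => { console.log('tcp ok'); s.end(); });
s.on('error', (e) => { console.error(e.message); process.exit(1); });
"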

Deploy Node Express API via Docker Compose on EC2

In my EC2 instance, I pulled my Docker images from my ECR: API + WEB.
I then start both of them up via Docker Compose.
It seems to start fine, but I don't know why I can't reach my API.
I can go to my site. When I go to my API, I see an error page (screenshot omitted).
I already opened up port 3002 in my EC2 inbound rules.
docker-compose.yml
version: "2"
services:
iproject-api:
image: '616934057156.dkr.ecr.us-east-2.amazonaws.com/bheng-api-script:latest'
ports:
- '3002:3002'
iproject-web:
image: '616934057156.dkr.ecr.us-east-2.amazonaws.com/bheng-web-script:latest'
ports:
- '80:8080'
links:
- iproject-api
Did I forgot to restart any service?
The inbound rule looks fine. Check your API container's status on the EC2 instance: docker logs {API_Container_Id}, or telnet localhost 3002.
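Expanding on that, a short debugging sequence to run on the EC2 host (keeping the container-ID placeholder from the answer):
docker ps                          # is the API container running, with 0.0.0.0:3002->3002 in PORTS?
docker logs {API_Container_Id}     # did the app crash, or bind to a port other than 3002?
curl -v http://localhost:3002/     # does the API answer locally, before testing from outside?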

Kafka on AWS ECS, how to handle advertised.host without known instance?

I'm trying to get Kafka running in an AWS ECS container. I already have this setup working fine in my local Docker environment, using the spotify/kafka image.
To get this working locally, I needed to ensure the ADVERTISED_HOST environment variable was set. ADVERTISED_HOST needed to be set to the container's external IP, otherwise when I tried to connect it just gave me connection refused.
My local docker-compose.yaml has this for the kafka container:
kafka:
  image: spotify/kafka
  hostname: kafka
  environment:
    - ADVERTISED_HOST=192.168.0.70
    - ADVERTISED_PORT=9092
  ports:
    - "9092:9092"
    - "2181:2181"
  restart: always
Now the problem is, I don't know what the IP is going to be, as I don't know which instance this will run on. So how do I set that environment variable?
Your entrypoint script will need to call the EC2 metadata service on startup (in this case http://169.254.169.254/latest/meta-data/local-hostname) to get the external-to-Docker hostname and set that variable.
Sample:
[ec2-user ~]$ curl http://169.254.169.254/latest/meta-data/local-hostname
ip-10-251-50-12.ec2.internal
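A minimal sketch of such an entrypoint wrapper (the file name and the hand-off via exec are assumptions, not part of the spotify/kafka image):
#!/bin/sh
# entrypoint-wrapper.sh (hypothetical): fetch the instance hostname from the
# EC2 metadata service, export it for Kafka, then hand off to the original command
ADVERTISED_HOST=$(curl -s http://169.254.169.254/latest/meta-data/local-hostname)
export ADVERTISED_HOST
exec "$@"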