How to deploy multiple services to AWS ECS?

I have a docker-compose setup made up of 2+ containers that I use for local development. I'm trying to deploy it to AWS ECS; I successfully deployed the first service, but I'm struggling to deploy more. It seems like I'm missing something in the configuration.
To deploy the first service to ECS, I followed these steps:
Deployed docker image to ECR
Created an ECS cluster
Created a task that I attached my ECR image to
Configured a VPC
Configured ELB
Successfully launched a new instance inside of my ECS cluster
To deploy the second service, I'm trying to repeat steps 1, 3 and 6, and it seems like it deploys successfully. However, when I try to hit any endpoint of the second service, I get a 404 from the first one.
As I understand it, my current ELB configuration is set up to route all the traffic to the first instance, so my question is: which steps should I also repeat while deploying the 2nd service? Should it have a separate ELB and security group?
I tried googling it but all the articles are about deploying a single instance which I had no problems with.

You don't need to create another cluster for the second service. You need to create another task definition with the image of the second service (pushed to ECR). The next step is to create a service in the existing cluster and set up Service Discovery in the Configure network step.
Documentation
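As a rough sketch, the same two steps can also be done from the AWS CLI; the cluster name, task family, file name and registry ARN below are placeholders for your own values:

# Register a task definition for the second service (image already pushed to ECR)
aws ecs register-task-definition --cli-input-json file://service2-taskdef.json

# Create the second service in the existing cluster and attach it to a
# Cloud Map registry, which is what the Service discovery option in the
# console's Configure network step sets up for you
aws ecs create-service \
    --cluster my-existing-cluster \
    --service-name service2 \
    --task-definition service2 \
    --desired-count 1 \
    --service-registries "registryArn=arn:aws:servicediscovery:us-east-1:123456789012:service/srv-xxxxxxxx"

Other services in the same VPC can then reach the new service by its registered DNS name.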

You might want to consider this: https://docs.docker.com/engine/context/ecs-integration/. It's still in beta but it can deploy your compose file directly to ECS for you.
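If you go that route, the workflow is roughly the following; the context name is just an example:

# Create a Docker context backed by ECS (it will prompt for an AWS profile/credentials)
docker context create ecs myecscontext

# Deploy the existing docker-compose file to ECS/Fargate through that context
docker --context myecscontext compose up

# Tear the stack down again when you're done
docker --context myecscontext compose down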

Related

request times out when pinging aws load balancer

I have a dockerized Node.JS express application that I am migrating to AWS from Google Cloud. I had done this successfully on the same project before deciding Cloud Run was more cost effective because of its free tier. Now I want to switch back to Fargate, but I am unable to do it again, due to what I'm guessing is a crucial step I'm missing. For a minimal setup, I used the following guide: https://docs.docker.com/cloud/ecs-integration/ Essentially, using docker compose up with an aws context and a project name to deploy to ECS and Fargate.
The Load Balancer gives me a public DNS name in the format xxxxx.elb.us-west-2.amazonaws.com, and I have defined a port of 5002 in my Docker container. I know the issue is not related to exposing port numbers or any code-related issue, since I had this successfully running in Google Cloud Run. When I try to hit any of my express endpoints by sending a POST to xxxxx.elb.us-west-2.amazonaws.com:5002/my_endpoint, I end up with Error: Request Timed Out.
Note: I have already verified that my inbound security rules have been set to all traffic.
I am very new to AWS, so would love guidance if I am missing a critical step.
Thanks!
EDIT (SOLUTION): It turns out everything was deploying correctly, but after checking CloudWatch Logs I found that Fargate can't read environment variables defined inside the docker-compose file. Instead, they need to be defined in a .env file and passed to docker compose through the --env-file flag. My code was trying to listen on a port that came from an environment variable but was undefined, which is the error I was seeing in CloudWatch.
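A minimal sketch of that fix, reusing the port from the question and an example ECS context name:

# Put the variables in a .env file instead of hard-coding them in the
# compose file's environment section
cat > .env <<'EOF'
PORT=5002
EOF

# Pass the env file explicitly so the values are resolved before the
# compose file is converted and deployed to ECS/Fargate
docker --context myecscontext compose --env-file .env up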

Fargate - How do I make an API call to another container within the same task definition

When developing locally, I run docker-compose, where I have two services, Service1 and Service2. Service2 depends on Service1. When I deploy them to ECS, I create them within one task definition and provide a JSON array of container definitions to spin them up.
When I run them locally, within docker-compose, from Service2 I can call http://Service1:8080/v1/graphql (since they're in docker-compose together I can call it by the service name) ... however, when I deploy to ECS and I make that same API call, I get a 404.
Based on this: Docker links with awsvpc network mode I've also tried http://localhost:8080/v1/graphql ... I'd appreciate any help!
I'd try service discovery as mentioned here:
Amazon ECS now includes integrated service discovery. This makes it possible for an ECS service to automatically register itself with a predictable and friendly DNS name in Amazon Route 53. As your services scale up or down in response to load or container health, the Route 53 hosted zone is kept up to date, allowing other services to lookup where they need to make connections based on the state of each service.
See an example here.
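As a rough illustration, once Service1 is registered through Service Discovery in a Cloud Map namespace (the namespace name local below is just an assumption), Service2 would call it by the registered DNS name instead of the compose service name:

# From inside Service2, call Service1 via the DNS name that ECS Service
# Discovery registers in Route 53: <service name>.<namespace>
curl http://service1.local:8080/v1/graphql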

How to access the apache container of a task on AWS ECS?

I am setting up an infrastructure to deploy my application on AWS. I am using the ECS service because I am trying to deploy a Docker-based application. So far I have created a task definition with two containers, one for apache and another one for PHP. Then I launched an ECS cluster with an EC2 instance and a task running. They all seem to be up and running. Now, I am trying to figure out how I can access the apache container running on the EC2 instance in my cluster from the browser.
This is how I created the apache container.
And then I created the php container as follows.
Then I launched an EC2-based ECS cluster with one instance in it and ran one task within the cluster. Then I tried to open the public IP address of my instance. It just keeps loading and loading. What is wrong with my configuration? How can I access it in the browser?
It seems to me there are a couple of possible scenarios here you could check:
If you do reach the service and are stuck in an endless reloading loop, that might point to something in your code causing it.
If you have a long wait until the browser actually gives a timeout, that might be caused by not having the right port open on the Security Group associated with your task definition.
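For the second scenario, opening the port can be done from the console or with the AWS CLI; the security group ID is a placeholder, and port 80 assumes that's the host port your apache container is mapped to:

# Allow inbound HTTP traffic on the security group attached to the ECS instance
aws ec2 authorize-security-group-ingress \
    --group-id sg-xxxxxxxx \
    --protocol tcp \
    --port 80 \
    --cidr 0.0.0.0/0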

running a docker loop device on aws

I'm new to AWS and am having some issues with getting my mobile app back up and running again. Forgive me if this question seems vague.
For a school project we created a mobile app on AWS and deployed it using docker containers (another student managed these tasks). While trying to get my own key pair to ssh into my EC2 instance, I detached the volume associated with my instance and reattached it after getting my own key pair. Now I can ssh into my instance, but my front end can't talk to my web server.
So my question is: do I create a new application on Elastic Beanstalk to deploy my app, even though when I run lsblk it shows I have a docker loop device, and when I run docker images I see several that match the name of my application? Or do I somehow get the container running again? docker run doesn't seem to be working.
No need, just upload a new update into Elastic Beanstalk. AWS will handle the rest.
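For example, if you use the EB CLI, an update is usually just the following (the environment name is a placeholder); the console's Upload and deploy button does the same thing:

# Run from the project directory that was initialised with eb init;
# this packages the current version and pushes it to the environment
eb deploy my-env-name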
FYI, Elastic Beanstalk - Single Docker Container update process (simple under the hood):
You upload the update into AWS.
AWS will put it on your S3.
Inside your EC2, there is an Elastic Beanstalk agent. It will check for a new update.
If there is an update, the agent will download the update file and extract it.
The agent will build a new Docker image.
If the build succeeds, it will generate a new config to point Nginx (the web proxy) at the new web server container.
Nginx will be reloaded.
Your old docker container will be destroyed.
Don't change anything inside the EC2 instance of Elastic Beanstalk unless you know what you are doing. Elastic Beanstalk is designed to automate deployment and scaling, so if you change something in your EC2 instance manually, it might be lost. Of course, you can modify your EC2 instance, but you need to automate it using .ebextensions or take an image.

Pointing amazon AWS Elastic Beanstalk to existing EC2

I was wondering if someone can help with the Amazon AWS question below; it seems a basic item but I can't find any answers and am getting very frustrated.
1) I have an EC2 instance running that has a third party process running in the background, and when called from the command line it spits out a number.
2) I have a java web app that runs this command line call and uses the output for the web gui etc.
But for the life of me, I cannot figure out how to deploy my java web app on the SAME existing EC2 instance that's running the process; every time I try to create an Elastic Beanstalk environment it creates a new EC2 instance.
How do I make Elastic Beanstalk run off the same existing EC2 instance I already have? I understand there are other workarounds to pass the data remotely, but this seems a fundamental requirement that is missing from AWS: that you cannot run your web app and backend/batch processes on the same EC2 instance?
Thank you
Elastic Beanstalk is basically a higher abstraction layer on EC2 and it's tightly coupled with it. That means at a minimum every time you deploy your application it will spin up an EC2 server.
The advantage is that you don't need to manage your EC2 instances, for example it will autoscale automatically depending on your traffic demand.
The disadvantage is that it theoretically doesn't allow you to tweak little things in the EC2 instance, because you may mess up how Elastic Beanstalk interprets your app. Also, I believe you cannot force your Elastic Beanstalk deployment to use a specific AMI.
If you want more flexibility in your app (which it sounds like you do), I recommend doing your own deployment for your application (no Elastic Beanstalk). That way you can run your app and your jobs on the same EC2 instance.
You can use custom AMI with Elastic Beanstalk.
AWS documentation has a guide on how to create and use a customized AMI: Using Custom AMIs
But then again, nobody's stopping you from running your background processes on the standard Elastic Beanstalk instance. I run background cron jobs and a Flask application on one Elastic Beanstalk instance, using an .ebextensions config like this:
files:
  "/tmp/cronjob-for-foobar":
    mode: "000777"
    owner: ec2-user
    group: ec2-user
    content: |
      # skip
      # clean up files created by above cronjob
      30 23 * * * rm $HOME/cron*.log

container_commands:
  70-foobar-cronjobs:
    command: crontab /tmp/cronjob-for-foobar
Obviously, you can have anything scheduled in cron, as long as you stay within your instance limits.