How to access the apache container of a task on AWS ECS?

I am setting up an infrastructure to deploy my application on AWS. I am using the ECS service because I am trying to deploy a Docker-based application. So far I have created a task definition with two containers: one for Apache and another one for PHP. Then I launched an ECS cluster with an EC2 instance and a task running. They all seem to be up and running. Now I am trying to figure out how I can reach the Apache container running on the cluster's EC2 instance from the browser.
This is how I created the Apache container.
And then I created the PHP container as follows.
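Purely for illustration (these are not the poster's actual definitions), a comparable two-container task using the stock httpd and php:fpm images and the default bridge networking could be registered with the AWS CLI like this:

# Sketch with placeholder names and images -- not the real config.
# Apache publishes port 80 on the host; the php container is linked
# so Apache can forward requests to it over the bridge network.
aws ecs register-task-definition \
  --family apache-php \
  --container-definitions '[
    {
      "name": "apache",
      "image": "httpd:2.4",
      "memory": 256,
      "essential": true,
      "portMappings": [{"containerPort": 80, "hostPort": 80}],
      "links": ["php"]
    },
    {
      "name": "php",
      "image": "php:7.4-fpm",
      "memory": 256,
      "essential": true
    }
  ]'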
Then I launched an EC2-based ECS cluster with one instance in it and ran one task within the cluster. Then I tried to open the public IP address of my instance in the browser. It just keeps loading and loading. What is wrong with my configuration? How can I access it in the browser?

It seems to me there are a couple of possible scenarios here you could check:
Whether you do reach the service but are stuck in an endless loading loop, which might point to something in your code causing that behavior.
Whether you're seeing a long wait until the browser actually times out, which might be caused by not having the right port open on the security group associated with your container instance (or with the task itself, for awsvpc networking).
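If it's the second case, opening the port is a single CLI call. A sketch, assuming HTTP on port 80 and a placeholder group ID:

# sg-0123456789abcdef0 is a placeholder; use the security group that
# is attached to the container instance (or the task's ENI).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 80 \
  --cidr 0.0.0.0/0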

Related

request times out when pinging aws load balancer

I have a dockerized Node.js Express application that I am migrating to AWS from Google Cloud. I had done this successfully on the same project before deciding Cloud Run was more cost-effective because of its free tier. Now I want to switch back to Fargate, but I am unable to do it again due to what I'm guessing is a missed crucial step. For a minimal setup, I used the following guide: https://docs.docker.com/cloud/ecs-integration/ Essentially, it uses docker compose up with an AWS context and a project name to deploy to ECS and Fargate.
The Load Balancer gives me a public DNS name in the format xxxxx.elb.us-west-2.amazonaws.com, and I have defined port 5002 in my Docker container. I know the issue is not related to exposing port numbers or anything code-related, since I had this running successfully in Google Cloud Run. When I try to hit any of my Express endpoints by sending a POST to xxxxx.elb.us-west-2.amazonaws.com:5002/my_endpoint, I end up with Error: Request Timed Out.
Note: I have already verified that my inbound security rules have been set to all traffic.
I am very new to AWS, so would love guidance if I am missing a critical step.
Thanks!
EDIT (SOLUTION): It turns out everything was deploying correctly, but after checking the CloudWatch logs I found that Fargate can't read environment variables defined inside the docker-compose file. Instead, they need to be defined in a .env file and read into docker-compose through the --env-file flag. My code was trying to listen on a port taken from an environment variable that was undefined, which produced the error I saw in CloudWatch.
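Concretely, the fix looks something like this (the context name and the variable name are assumptions for illustration; 5002 is the port from the question):

# .env -- variables Fargate could not pick up from the compose file
PORT=5002

# deploy with the ECS docker context, passing the env file explicitly
docker --context myecscontext compose --env-file .env up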

AWS ECS Task can't connect to RDS Database

I'm a newer AWS user and today I got stuck while working on a sample project. I successfully created a Docker container that runs a simple R script that connects to my AWS RDS MySQL database and creates & writes some basic files to it. I built a public ECR repository, pushed my Docker image there, and built an ECS cluster & task, choosing Fargate and using the container image from my repository. My task ran, and I could see the R code being executed when I went through the logs, but it was never able to connect to the SQL database and exited afterwards.
I've had to whitelist my own IP address in the security group for the RDS database so that I can connect to it, so I'm aware I probably have to do that for my ECS task to establish that connection too. But won't that IP address constantly change, since I won't have a static IP for the Fargate server that is executing my task? I'm trying to stay on the free tier, so I'm not sure I want to set up an Elastic IP address for this server.
These two articles seem close to, if not the same as, the issue I'm having, but I can't figure out a solution. I haven't found any other info.
https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-task-database-connection/
https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-static-elastic-ip-address/
The end goal is to get this sample project running successfully on a fixed schedule, and then to run actual scripts there to help automate things and make my life easier, so this sample project is a first step toward that. Any help or info on the questions I'm having would be appreciated!
Yes, your task is ephemeral (whether you launch it manually or as part of an ECS service) and its private/public IP address may change over time if it gets replaced. The way to make the connectivity rules stick is to assign a security group to the task (with inbound access on whatever specific port you need, I assume, and outbound access to everything), and to assign another security group to the RDS database that allows inbound access on port 3306 from the security group you assigned to the task. This is the trick: the security group will not change, and you are telling RDS to allow access to ALL traffic coming from that group. I see the first article you posted doesn't talk about this part (it should).
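A sketch of that rule with the AWS CLI; both group IDs below are placeholders:

# Allow MySQL (3306) into the RDS security group from the task's
# security group. The rule follows the group, so it keeps working
# even when the task's IP address changes.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaaaaaaaaaaaaaaa1 \
  --protocol tcp \
  --port 3306 \
  --source-group sg-0bbbbbbbbbbbbbbb2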

How to deploy multiple services to AWS ECS?

I have a docker-compose setup made up of 2+ containers that I use for local development. I'm trying to deploy this to AWS ECS: I successfully deployed the first service, but I'm struggling to deploy more. It seems like I'm missing something in the configuration.
To deploy the first service to ECS, I followed these steps:
1. Deployed the docker image to ECR
2. Created an ECS cluster
3. Created a task that I attached my ECR image to
4. Configured a VPC
5. Configured an ELB
6. Successfully launched a new instance inside my ECS cluster
To deploy the second service, I'm trying to follow steps 1, 3, and 6, and it seems like it deploys successfully. However, when I try to hit any endpoint of the second service, I get a 404 from the first one.
As I understand it, my current ELB configuration routes all traffic to the first instance, so my question is: which steps should I also redo while deploying the second service? Should it have a separate ELB and security group?
I tried googling it, but all the articles are about deploying a single instance, which I had no problems with.
You don't need to create a cluster for the second service. You need to create another task definition with the new image of the second service (deployed to ECR). The next step is to create a service in the cluster and set up Service Discovery in the 'Configure network' step.
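A rough sketch of those steps with the AWS CLI (every name, account ID, port, and ARN below is a placeholder; it assumes the second image is already pushed to ECR and a Cloud Map service exists for discovery):

# Register a task definition pointing at the second service's image.
aws ecs register-task-definition \
  --family second-service \
  --container-definitions '[{
    "name": "second-service",
    "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/second-service:latest",
    "memory": 512,
    "essential": true,
    "portMappings": [{"containerPort": 5000}]
  }]'

# Create the service in the existing cluster and register it in
# Cloud Map so the other service can reach it by name.
aws ecs create-service \
  --cluster my-cluster \
  --service-name second-service \
  --task-definition second-service \
  --desired-count 1 \
  --service-registries "registryArn=arn:aws:servicediscovery:us-west-2:123456789012:service/srv-placeholder"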
Documentation
You might want to consider this: https://docs.docker.com/engine/context/ecs-integration/. It's still in beta but it can deploy your compose file directly to ECS for you.

running a docker loop device on aws

I'm new to AWS and am having some issues getting my mobile app up and running again. Forgive me if this question seems vague.
For a school project we created a mobile app on AWS and deployed it using Docker containers (another student managed these tasks). While trying to get my own key pair to SSH into my EC2 instance, I detached the volume associated with my instance and reattached it after getting my own key pair. Now I can SSH into my instance, but my front end can't talk to my web server.
So my question is: do I create a new application on Elastic Beanstalk to deploy my app, even though when I run lsblk it shows I have a Docker loop device, and when I run docker images I see several that match the name of my application? Or do I somehow get the container running again? docker run doesn't seem to be working.
No need, just upload a new update into Elastic Beanstalk. AWS will handle the rest.
FYI, this is the Elastic Beanstalk single-Docker-container update process (it's simple under the hood):
You upload the update into AWS.
AWS will put it on your S3.
Inside your EC2, there is an Elastic Beanstalk agent. It will check for a new update.
If there is an update, the agent will download the update file and extract it.
The agent will build a new Docker image.
If the build succeeds, it will generate a new config telling Nginx (the web proxy) about the new web server container.
Nginx will be reloaded.
Your old docker container will be destroyed.
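In practice, the whole cycle above is kicked off by a single command with the EB CLI (assuming eb init has already been run for the application):

# from the project directory containing the updated Dockerfile
eb deploy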
Don't change anything inside the EC2 instance of Elastic Beanstalk unless you know what you are doing. Elastic Beanstalk is designed to automate deployment and scaling, so if you change something in your EC2 instance manually, it might be lost. Of course, you can modify your EC2 instance, but you need to automate the change using .ebextensions or bake it into an image.

Pointing amazon AWS Elastic Beanstalk to existing EC2

I was wondering if someone can help with the Amazon AWS question below; it seems like a basic item, but I can't find any answers, and I'm getting very frustrated.
1) I have an EC2 instance running that has a third-party process running in the background, and when called from the command line it spits out a number.
2) I have a Java web app that runs this command line and uses the output for the web GUI, etc.
But for the life of me, I cannot figure out how to deploy my Java web app on the SAME existing EC2 instance that's running the process; every time I try to create an Elastic Beanstalk environment, it creates a new EC2 instance.
How do I make Elastic Beanstalk run off the same existing EC2 instance I already have? I understand there are other workarounds to pass the data remotely, but this seems like a fundamental requirement that is missing from AWS: that you cannot run your web app with backend/batch processes on the same EC2 instance?
Thank you
Elastic Beanstalk is basically a higher abstraction layer on top of EC2, and it's tightly coupled with it. That means that, at a minimum, every time you deploy your application it will spin up an EC2 server.
The advantage is that you don't need to manage your EC2 instances, for example it will autoscale automatically depending on your traffic demand.
The disadvantage is that in theory it doesn't allow you to tweak little things in the EC2 instance, because you may mess up how Elastic Beanstalk interprets your app. Also, I believe you cannot force your Elastic Beanstalk deployment to use a specific AMI.
If you want more flexibility in your app (which it sounds like you do), I recommend doing your own deployment for your application (no Elastic Beanstalk). That way you can run your app and your jobs on the same EC2 instance.
You can use a custom AMI with Elastic Beanstalk.
The AWS documentation has a guide on how to create and use a customized AMI: Using Custom AMIs
But then again, nobody's stopping you from running your background processes on the standard Elastic Beanstalk instance. I run background cron jobs and a Flask application on one Elastic Beanstalk instance.
files:
  "/tmp/cronjob-for-foobar":
    mode: "000777"
    owner: ec2-user
    group: ec2-user
    content: |
      # skip
      # clean up files created by above cronjob
      30 23 * * * rm $HOME/cron*.log

container_commands:
  70-foobar-cronjobs:
    command: crontab /tmp/cronjob-for-foobar
Obviously, you can have anything scheduled in cron, as long as you stay within your instance limits.