Scaling a Flask app on AWS ECS with nginx and uWSGI

I am trying to scale a Flask microservice on AWS ECS to handle production workloads. My application uses Flask-APScheduler to handle long-running tasks. I am using the uWSGI application server for deployment on ECS, so I am packaging the application inside the container along with the uWSGI server. The nginx container runs separately on the ECS cluster.
My uWSGI config currently uses a single process and a single thread.
I have successfully deployed it on AWS ECS, but I am wondering how to scale it for production workloads. I am debating between these options:
1) I can spin up multiple containers and have nginx round-robin across all of them, distributing requests equally, with Route 53 providing DNS for the service
2) I can increase the number of processes in the uWSGI config, but that interferes with Flask-APScheduler, as I only need one instance of it running. The workarounds I have found are not that clean
It would be great if someone could share how to go about this.

The docker mentality is more along the lines of "one process per container". Any time you have more than one process running in a container, you should rethink.
I would advise the first approach. Create an ECS service to wrap your task, and simply vary the "Desired count" of tasks for that service to scale it up and down as needed.
If you only need the scheduler running in one of the tasks, you should set up a separate service using the same image, but with an environment variable that tells your container whether to start the scheduler: make it true on the scheduler service/task and false on the worker service/tasks. Those env variables can be set in the container definitions inside your ECS task definition.
This would be the "docker way".
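For example, the gate inside the Flask app could look like this (a minimal sketch, assuming an environment variable named RUN_SCHEDULER and Flask-APScheduler; adjust the names to your setup):

```python
import os

from flask import Flask
from flask_apscheduler import APScheduler

app = Flask(__name__)

# Only the dedicated scheduler service sets RUN_SCHEDULER=true in its
# container definition; the worker services leave it unset or false,
# so exactly one scheduler instance runs across the whole deployment.
if os.environ.get("RUN_SCHEDULER", "false").lower() == "true":
    scheduler = APScheduler()
    scheduler.init_app(app)
    scheduler.start()

@app.route("/")
def index():
    return "ok"
```

Both services can then share one image and one codebase; only the environment variable differs between the two task definitions.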

Related

Does an AWS Fargate docker image with an Express app listening and waiting for requests consume CPU?

I configured an AWS Fargate cluster with a docker image that runs a Node.js Express app listening on port 80. Now I can browse to the public IP, and the request is successfully handled by AWS Fargate.
Is it right that the docker container is now running and still waiting for requests?
Isn't it consuming CPU, meaning I have to pay for as long as the docker container is running?
Do I have to build a docker image that just handles a single request and exits to be really serverless?
Thank you
Is it right that the docker container is now running and still waiting for requests? Isn't it consuming CPU, meaning I have to pay for as long as the docker container is running?
Yes, that's how ECS Fargate works. It's really no different from running a docker container on your local computer: it has to be up and running all the time in order to handle requests that come in. Note that Fargate bills you for the vCPU and memory allocated to the task for as long as the task runs, not for the actual CPU load it handles.
Do I have to build a docker image that just handles a single request and exits to be really serverless?
The term "serverless" is a vague marketing term and means different things depending on who you ask. Amazon calls ECS Fargate serverless because you don't have to manage, or even know the details of, the server that is running the container. In contrast to ECS EC2 deployments, where you have to have EC2 servers up and running ahead of time and ECS just starts the containers on those EC2 servers.
If you want something that only runs, and only charges you, when a request comes in, then you would need to reconfigure your application to run on AWS Lambda instead of ECS Fargate.
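For illustration, a Lambda-based setup replaces the always-running server with a handler that is invoked, and billed, per request. A minimal sketch, assuming an API Gateway proxy integration in front of the function:

```python
# Minimal AWS Lambda handler sketch (e.g. invoked via API Gateway).
# Unlike a Fargate task, nothing runs between invocations, so you are
# only billed for the time each request takes to process.
import json

def handler(event, context):
    # 'event' carries the HTTP request details when fronted by API Gateway.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "hello"}),
    }
```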

Fargate - How do I make an API call to another container within the same task definition

When developing locally, I run docker-compose, where I have two services, Service1 and Service2; Service2 depends on Service1. When I deploy them to ECS, I create them within one task definition and provide a JSON array of container definitions to spin them up.
When I run them locally with docker-compose, I can call http://Service1:8080/v1/graphql from Service2 (since they're in docker-compose together, I can reach it by the service name); however, when I deploy to ECS and make that same API call, I get a 404.
Based on this: Docker links with awsvpc network mode, I've also tried http://localhost:8080/v1/graphql. I'd appreciate any help!
I'd try service discovery as mentioned here:
Amazon ECS now includes integrated service discovery. This makes it possible for an ECS service to automatically register itself with a predictable and friendly DNS name in Amazon Route 53. As your services scale up or down in response to load or container health, the Route 53 hosted zone is kept up to date, allowing other services to lookup where they need to make connections based on the state of each service.
See an example here.
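With service discovery enabled, Service2 can then reach Service1 by its registered DNS name instead of the docker-compose alias. A sketch of the call; the namespace "local" and the service name "service1" are assumptions, so adjust both to your actual configuration:

```python
# Sketch: calling Service1 via its ECS service discovery DNS name.
# Assumes Service1 was registered as "service1" in a Route 53 private
# namespace called "local"; adjust both names to your setup.
import requests

resp = requests.get("http://service1.local:8080/v1/graphql")
resp.raise_for_status()
print(resp.json())
```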

AWS ECS (with EC2), running docker-compose

Say you want to run:
1) a web server (which can be nginx + uWSGI)
2) Filebeat to collect logs (it basically watches the nginx log files and sends the log contents to another host specified in its configuration)
I can run the two services with docker-compose.
The question is how I can run the two services on EC2-based ECS.
An ECS task seems to be associated with a docker image, and separate tasks (a web task / a filebeat task) seem to run on separate EC2 instances.
They need to run on the same EC2 host, because Filebeat needs to collect the logs output by the web server.
How can I achieve that?
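One way to achieve this is to declare both containers in a single task definition: all containers of one ECS task are always placed on the same container instance, and they can share a host volume for the log files. A minimal boto3 sketch; the image names, paths, and memory sizes here are assumptions:

```python
# Sketch: one task definition with two containers (web server + Filebeat)
# sharing the nginx log directory through a host volume. Containers in
# the same ECS task always land on the same EC2 instance.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="web-with-filebeat",
    volumes=[{"name": "nginx-logs", "host": {"sourcePath": "/var/log/nginx"}}],
    containerDefinitions=[
        {
            "name": "web",
            "image": "my-registry/nginx-uwsgi:latest",  # assumed image name
            "memory": 512,
            "essential": True,
            "portMappings": [{"containerPort": 80, "hostPort": 80}],
            "mountPoints": [
                {"sourceVolume": "nginx-logs", "containerPath": "/var/log/nginx"}
            ],
        },
        {
            "name": "filebeat",
            "image": "docker.elastic.co/beats/filebeat:7.17.0",
            "memory": 256,
            "essential": True,
            "mountPoints": [
                # Filebeat reads the same log files the web container writes.
                {
                    "sourceVolume": "nginx-logs",
                    "containerPath": "/var/log/nginx",
                    "readOnly": True,
                }
            ],
        },
    ],
)
```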

aws-ecs, how to add Filebeat to an existing container?

I'm running a web service (nginx + uWSGI) on ECS.
I'm running the two applications using supervisor.
Now I want to add another service (Filebeat) which will read the web server's logs and send them to Logstash on another machine.
I've been told it is a good idea to separate the applications (run each application in its own docker container and get rid of supervisor).
So I'm trying to add a Filebeat container alongside the already running web server container.
If I go to the "define task" tab of the ECS menu, it seems I'm launching a new EC2/Fargate instance, which is not what I want,
because Filebeat has to run on the same host as the web server.
How do I run the Filebeat docker container alongside the web server container?

Running multiple ECS tasks based on the same task definition on one host, using different ports

I have one ECS task definition. Can multiple tasks of this task definition run on one EC2 instance on several ports?
I already have several tasks running on several EC2 instances, and I want to reduce resource consumption, as one EC2 instance has more than enough resources to run several tasks. Is it possible to run the same task several times on different ports on one EC2 instance?
Yes, ECS has had good support for this since 2016. You can leave the host port empty (or set it to 0) in the container definition; this results in a random host port being chosen for your container. As a result, multiple instances of the same task definition can run on one ECS container instance.
You can configure your ECS service in combination with an Application Load Balancer so that when the service starts a new task, it registers the task's port number in the associated target group. This way you never have to deal with the random port yourself.
If you set up your service via the AWS console, the configuration is pretty straightforward.
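If you script it instead, a boto3 sketch might look like this; the cluster, service, and ARN values are placeholders:

```python
# Sketch: create an ECS service behind an ALB target group. With
# hostPort 0 (or empty) in the task definition, ECS picks a free host
# port per task and registers it in the target group automatically.
import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="my-cluster",            # placeholder names/ARNs throughout
    serviceName="web",
    taskDefinition="web-task:1",
    desiredCount=4,                  # several tasks can share one instance
    loadBalancers=[
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/web/abc123",
            "containerName": "web",
            "containerPort": 8080,
        }
    ],
)
```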
This can be configured by setting Host Port to 0 in the Port Mappings section of the container definitions while defining the task. This setting allows ECS to assign random host ports to tasks running on the same EC2 instance.
For more details, please check Setup Dynamic Port Mapping for ECS.