Do I need NGINX in Kubernetes for Flask/Django deployment?

So I have a bunch of Flask apps and a bunch of Django apps that need to be deployed onto K8s and then communicate with each other. Now, I understand I need a WSGI server in each of the containers I deploy. However, do I need to deploy an NGINX container to forward requests to the WSGI servers, or can I just put the pods behind a Service and let the Service sort it out?

You don't need NGINX in this case. You can also use an Ingress (https://kubernetes.io/docs/concepts/services-networking/ingress/) to manage external access to the services; a common Ingress controller implementation uses nginx internally.
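For illustration, a minimal Service plus Ingress for one of the Django apps might look like the following (the names, labels, hostname, and the assumption that the WSGI server listens on port 8000 are all placeholders, not something from your setup):

```yaml
# Service: gives the Django pods a stable in-cluster DNS name (django-svc)
apiVersion: v1
kind: Service
metadata:
  name: django-svc
spec:
  selector:
    app: django-app        # must match the pod labels in your Deployment
  ports:
    - port: 80
      targetPort: 8000     # the WSGI server's port inside the pod
---
# Ingress: routes external traffic for one hostname to the Service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: django-ingress
spec:
  rules:
    - host: django.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: django-svc
                port:
                  number: 80
```

The Flask-to-Django (or app-to-app) traffic inside the cluster only needs the Services; the Ingress is only for traffic coming from outside.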

Related

How to run symfony on aws ecs with loadbalancer

I'm new to AWS services and some things are not really clear.
On my local machine I have two services in my docker-compose.yaml (php - symfony6, database - mysql); all dependencies are loaded and installed through a Dockerfile. To start the application I have an entrypoint.sh with the command symfony server:start.
OK, that's fine for local.
Now, I have exactly this configuration to run up to 16 containers in AWS ECS behind a load balancer, but I'm sure that isn't the correct way to run this, because I can't configure or increase PHP settings, and I believe it's the worst option when I look at the performance.
Do I need a separate nginx for every container?
Is there any option in the load balancer settings to run a web server from there?
Any idea is welcome.
(I think my config files are not interesting for now. If they are, tell me and I can update and share.)
Do I need a separate nginx for every container?
Yes, that is the standard way to run this sort of thing, using two containers (nginx and php). Also, your MySQL server should not be deployed in the same ECS task. Ideally you would be running MySQL in RDS instead of ECS.
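As a rough sketch, such a two-container task definition might look like this (the family name and image are placeholders; with the awsvpc network mode the two containers share localhost, so nginx can fastcgi_pass to php-fpm on 127.0.0.1:9000):

```json
{
  "family": "symfony-app",
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "nginx",
      "image": "nginx:stable",
      "essential": true,
      "portMappings": [{ "containerPort": 80 }]
    },
    {
      "name": "php-fpm",
      "image": "your-symfony-image:latest",
      "essential": true
    }
  ]
}
```

The load balancer then targets the nginx container's port 80; PHP settings (php.ini, pool config) live in your own php-fpm image, which is exactly the configurability you're missing with symfony server:start.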
Is there any option in the load balancer settings to run a web server from there?
No, a load balancer is just a load balancer. A load balancer is not a web server.

how to deploy backend and frontend app in ecs?

I have dockerized our webapp into two different Dockerfiles:
1. Frontend
2. Backend
Both Docker apps have their own .env files, in which I have to define the deployed server IP address to connect them.
In the frontend's .env: configure the backend IP.
But when deploying as an ECS service, each container will get a different IP. How do I solve this issue when scaling out while still letting the services connect to each other?
So far:
Create separate ECS clusters for both frontend and backend, with an ALB.
Put the ALB address in the .env files to connect them or hit the API.
Any other solutions for this deployment?
You should be using Service Discovery to achieve this. You can read the announcement blog here. In a nutshell, the way it works is that if you have two ECS services, frontend and backend, you want to expose frontend with an ALB for public access, but by enabling service discovery all tasks that belong to frontend will be able to connect to the tasks that belong to backend by calling backend.<domain> (where <domain> is the SD namespace you defined). When you do so, ECS Service Discovery resolves backend.<domain> to the private IP addresses of the tasks in backend, thus eliminating the need for a load balancer in front of it.
If you want a practical example of how this works you can investigate this basic demo app:
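As a sketch of the wiring involved (all resource and logical names here are illustrative, not from the question), the CloudFormation for a discoverable backend service looks roughly like this:

```yaml
# Private DNS namespace + discovery service: tasks registered with
# BackendDiscovery become resolvable as backend.local inside the VPC.
Resources:
  Namespace:
    Type: AWS::ServiceDiscovery::PrivateDnsNamespace
    Properties:
      Name: local                  # the <domain> part of backend.<domain>
      Vpc: !Ref MyVpc
  BackendDiscovery:
    Type: AWS::ServiceDiscovery::Service
    Properties:
      Name: backend                # resolves as backend.local
      DnsConfig:
        NamespaceId: !GetAtt Namespace.Id
        DnsRecords:
          - Type: A
            TTL: 10
  BackendService:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref MyCluster
      TaskDefinition: !Ref BackendTaskDef
      ServiceRegistries:
        - RegistryArn: !GetAtt BackendDiscovery.Arn
```

The frontend's .env would then point at http://backend.local:<port> instead of a hard-coded IP, and scaling out just adds more A records.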

I want to connect nginx and django on openshift

So I have an instance of nginx running on my OpenShift and another pod for a Django app; the thing is, I don't know how to connect both services. I'm able to access the default URL for nginx and the URL for Django. Both are working fine, but I don't know how to connect them. Is there a way to do it by modifying the YAML of the services or the pods? I already tried to build the nginx container myself, but it gives me permission issues, so I'm using a version of nginx that comes preloaded in OpenShift. Any help would be greatly appreciated. Thank you so much.
To have access between pods you have to have a Service created for every pod.
Then you can use the Service names as DNS names to reach the pods. If the pods are placed in different projects you should additionally specify the project name, like service-name.project-name.svc.cluster.local.
Furthermore, there are environment variables for service discovery (see service discovery).
For example object definitions, check the Kubernetes documentation.
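For illustration, a minimal Service for the Django pod might look like this (the name, label selector, and port are assumptions about your deployment):

```yaml
# Exposes the Django pod inside the cluster as django-svc; nginx can then
# proxy_pass to http://django-svc:8000 (same project) or to
# django-svc.<project>.svc.cluster.local from another project.
apiVersion: v1
kind: Service
metadata:
  name: django-svc
spec:
  selector:
    app: django          # must match your Django pod's labels
  ports:
    - port: 8000
      targetPort: 8000
```

With that in place, the only change on the nginx side is pointing its proxy configuration at the Service name instead of a pod IP.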

AWS ECS Production Docker Deployment

I've recently started using Docker for my own personal website. The design of my website is basically:
Nginx -> Frontend -> Backend -> Database
Currently, the database is hosted using AWS RDS. So we can leave that out for now.
So here are my questions:
I currently have my application separated into different repositories, Frontend and Backend respectively.
Where should I store my 'root' docker-compose.yml file? I can't decide whether to store it in the frontend or the backend repository.
In a docker-compose.yml file, can the nginx service mount a volume from my frontend service without any ports and serve that directory?
I have been trying for so many days, but I can't seem to deploy a proper production setup with Docker for my 3-tier application in an ECS cluster. Is there any good example nginx.conf that I can refer to?
How do I auto-SSL my domain?
Thank you guys!
Where should I store my 'root' docker-compose.yml file?
Many orgs use a top-level repo for storing infrastructure-related metadata such as CloudFormation templates and docker-compose.yml files. Devs clone the top-level repo first, and that repo ideally contains either submodules or tooling for pulling down the sub-repos for each sub-component or microservice.
In a docker-compose.yml file, can the nginx service mount a volume from my frontend service without any ports and serve that directory?
Yes, you could do this, but it would be dangerous and the disk would be a bottleneck. If your intention is to get content from the frontend service and have it served by Nginx, then you should link your frontend service via a port to your Nginx server and set up Nginx as a reverse proxy in front of your application container. You can also configure Nginx to cache the content from your frontend server to a disk volume (if it is too much content to fit in memory). This is a safer approach than using the disk as the communication link. Here is an example of how to configure such a reverse proxy on AWS ECS: https://github.com/awslabs/ecs-nginx-reverse-proxy/tree/master/reverse-proxy
I can't seem to deploy a proper production setup with Docker for my 3-tier application in an ECS cluster. Is there any good example nginx.conf that I can refer to?
The link in my last answer contains a sample nginx.conf that should be helpful, as well as a sample task definition for deploying an application container and an nginx container, linked to each other, on Amazon ECS.
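In the same spirit, a minimal reverse-proxy nginx.conf might look like this (the upstream hostname "frontend" and port 3000 are assumptions; in a linked-container setup the app container's name resolves as a hostname):

```nginx
# Minimal reverse-proxy sketch: all requests are forwarded to the
# frontend app container, with the original host and client IP preserved.
events {}

http {
  server {
    listen 80;

    location / {
      proxy_pass http://frontend:3000;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
  }
}
```

From there you can layer in proxy_cache directives for static content, as described above.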
How do I auto-SSL my domain?
If you are on AWS, the best way to get SSL is to use the built-in SSL termination capabilities of the Application Load Balancer (ALB). AWS ECS integrates with the ALB as a way to get web traffic to your containers. The ALB also integrates with AWS Certificate Manager (https://aws.amazon.com/certificate-manager/). This service gives you a free SSL certificate that updates automatically, so you never have to worry about your SSL certificate expiring again: it is automatically renewed and updated in your ALB.
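For illustration, a CloudFormation fragment attaching an ACM certificate to an ALB HTTPS listener might look like this (all referenced resources are placeholders for things defined elsewhere in your template):

```yaml
# HTTPS listener: the ALB terminates TLS using the ACM certificate and
# forwards plain HTTP to the ECS target group behind it.
HttpsListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref MyALB
    Port: 443
    Protocol: HTTPS
    Certificates:
      - CertificateArn: !Ref MyAcmCertificateArn
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref MyTargetGroup
```

Your containers then only ever speak plain HTTP; the certificate lifecycle is entirely ACM's problem.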

How to map subdomains to multiple docker containers (as web servers) hosted using Elastic Bean Stack on AWS

I have seen an AWS example of hosting an nginx server and a PHP server running in separate Docker containers within an instance.
I want to use this infrastructure to host multiple Docker containers (each being its own web server on a different, unique port).
Each of these unique web applications needs to be available on the internet with a unique subdomain.
Since one instance will not be enough for all the Docker containers, I will need to spread them over multiple instances.
How can I host hundreds of Docker containers over several instances, while one nginx-proxy container does the routing, mapping a subdomain to each web application container using its unique port?
E.g.
app1.mydomain.com --> docker container exposing port 10001
app2.mydomain.com --> docker container exposing port 10002
app3.mydomain.com --> docker container exposing port 10003
....
...
If I use an nginx-proxy container, it would be easy to map each port number to a different subdomain. This would be true if all the Docker containers were in the same instance as the nginx-proxy container.
But can I map subdomains to Docker containers that are hosted on a different instance? I am planning to use Elastic Beanstalk for creating new instances for the extra Docker containers.
So nginx is running on one instance, while there are containers on different instances.
How do I achieve the end goal of hundreds of web applications hosted on separate docker containers mapped to unique subdomains?
To be honest, your question is not quite clear to me. It seems you could deploy an Nginx container in each instance with the proxy configuration for every app container you have on it, and as the cluster scales out, all of the instances would have an Nginx as well. So you could just set an ELB on top of it (Elastic Beanstalk supports this natively), and you would be good.
Nonetheless, I think you're trying to push Elastic Beanstalk too hard. I mean, it's not supposed to be used that way, like a big, generic Docker cluster. Elastic Beanstalk was built to facilitate application deployments, and nowadays containers are just one of the, let's say, platforms available for doing so (although a container is not a language or framework, of course). But Elastic Beanstalk is not a container manager.
So, in my opinion, what makes sense is to deploy a single container per Beanstalk application with an ELB on top of it, so you don't need to worry about the underlying machines and their IPs. That way, you can easily set up a frontend proxy to route requests, because you have a permanent address for your application pool. And being independent pools, they can scale independently, and so on.
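A minimal sketch of such a frontend proxy, assuming each Beanstalk application pool sits behind its own ELB (all hostnames below are placeholders):

```nginx
# One server block per subdomain, each proxying to that pool's ELB DNS
# name rather than to individual instance IPs, so scaling is transparent.
server {
  listen 80;
  server_name app1.mydomain.com;
  location / {
    proxy_pass http://app1-pool-elb.us-east-1.elb.amazonaws.com;
    proxy_set_header Host $host;
  }
}

server {
  listen 80;
  server_name app2.mydomain.com;
  location / {
    proxy_pass http://app2-pool-elb.us-east-1.elb.amazonaws.com;
    proxy_set_header Host $host;
  }
}
```

Because the proxy targets stable ELB addresses, adding or removing instances inside a pool never requires touching this configuration.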
There are some more complex solutions out there which try to solve the problem of deploying containers in a single wide cluster, like Google's Kubernetes, keeping track of them and providing endpoints for each application group. There are also solutions for dynamic reverse proxies, like this one, recently released, and probably many other solutions popping up every day, but all of them would demand a lot of customization. In that case, though, we are not talking about an AWS solution.