I want to connect nginx and django on openshift

So I have an instance of nginx running on my OpenShift cluster and another pod for a Django app, and the thing is I don't know how to connect the two services. I'm able to access the default URL for nginx and the URL for Django. Both are working fine, but I don't know how to connect them to each other. Is there a way to do it by modifying the YAML of the services or the pods? I already tried to build the nginx container myself, but it gives me permission issues, so I'm using a version of nginx that comes preloaded in OpenShift. Any help would be greatly appreciated, thank you so much.

To have access between pods you have to have a Service created for every pod (more precisely, for every set of pods behind a Deployment).
Then you can use the Service name as a DNS name to reach the pods. If the pods are placed in different projects, you should additionally specify the project name, e.g. service-name.project-name.svc.cluster.local.
Furthermore, there are environment variables for service discovery (see service discovery in the docs).
For example object definitions, check the Kubernetes documentation.
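As a sketch, a minimal Service selecting the Django pods might look like this (the names, labels and port are illustrative, not taken from your setup):

```yaml
# Hypothetical Service for the Django pods; adjust name, labels and ports.
apiVersion: v1
kind: Service
metadata:
  name: django
spec:
  selector:
    app: django          # must match the labels on the Django pods
  ports:
    - port: 8000         # port the Service exposes
      targetPort: 8000   # port the Django container listens on
```

nginx can then proxy to http://django:8000 from the same project, or http://django.project-name.svc.cluster.local:8000 from another project.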

Related

How to run symfony on aws ecs with loadbalancer

I'm new to AWS services and some things are not really clear.
On my local machine I have two services in my docker-compose.yaml (php - symfony6, database - mysql); all dependencies are loaded and installed through a Dockerfile. To start the application I have an entrypoint.sh with the command symfony server:start.
OK, that's fine for local.
Now, I have exactly this configuration to run up to 16 containers in AWS ECS behind a load balancer, but I'm sure that isn't the correct way to run this, because I can't configure or increase PHP settings, and so I believe it's the worst way when I look at the performance.
Do I need a separate nginx for every container?
Is there any option in the load balancer settings to run a webserver from there?
Any idea is welcome.
(I think my config files are not interesting for now. If they are, tell me and I can update and share.)
Do I need a separate nginx for every container?
Yes, that is the standard way to run this sort of thing: two containers per task (nginx and php). Also, your MySQL server should not be deployed in the same ECS task; ideally you would be running MySQL in RDS instead of ECS.
Is there any option in the loadbalancer settings to run a webserver from there?
No, a load balancer is just a load balancer. A load balancer is not a web server.
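A sketch of the two-container pattern in docker-compose terms, assuming php-fpm as the PHP runtime (the same pairing maps onto two containers in one ECS task definition; names and paths are illustrative):

```yaml
# Hypothetical two-container setup: nginx in front of php-fpm.
services:
  php:
    build: .
    # php-fpm listens on 9000 inside the task network; no public port needed.
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
```

In the nginx.conf, a fastcgi_pass php:9000 directive forwards PHP requests to the php-fpm container. PHP settings then live in your own php image (e.g. a custom php.ini), independently of the load balancer.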

how to deploy backend and frontend app in ecs?

I have dockerised our webapp into two different Dockerfiles:
1. Frontend
2. Backend
Both docker apps have their own .env files, and I have to define the deployed server IP address there to connect them.
In frontend-env: configure the backend IP.
But when deploying as an ECS service, each container will get a different IP. How do I solve this so that I can scale out and the services can still connect to each other?
So far:
Created separate ECS clusters for frontend and backend, with an ALB.
Put the ALB address in the .env files to connect them / hit the API.
Are there any other solutions for this deployment?
You should be using Service Discovery to achieve this. You can read the announcement blog here. In a nutshell, the way it works is: if you have two ECS services, frontend and backend, you want to expose frontend with an ALB for public access, but by enabling service discovery all tasks that belong to frontend will be able to connect to the tasks that belong to backend by calling backend.<domain> (where <domain> is the service discovery namespace you defined). When you do so, ECS Service Discovery resolves backend.<domain> to the private IP addresses of the tasks in backend, thus eliminating the need for a load balancer in front of it.
If you want a practical example of how this works you can investigate this basic demo app:
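Separately, a sketch in CloudFormation of the wiring involved (the namespace name, resource names, and the referenced VPC, cluster and task definition are all illustrative assumptions):

```yaml
# Hypothetical fragment: registers the backend service as backend.local
Namespace:
  Type: AWS::ServiceDiscovery::PrivateDnsNamespace
  Properties:
    Name: local
    Vpc: !Ref MyVpc                      # assumed VPC resource

BackendDiscovery:
  Type: AWS::ServiceDiscovery::Service
  Properties:
    Name: backend
    DnsConfig:
      NamespaceId: !Ref Namespace
      DnsRecords:
        - Type: A
          TTL: 10

BackendService:
  Type: AWS::ECS::Service
  Properties:
    Cluster: !Ref MyCluster              # assumed cluster resource
    TaskDefinition: !Ref BackendTask     # assumed task definition
    ServiceRegistries:
      - RegistryArn: !GetAtt BackendDiscovery.Arn
```

The frontend .env can then point at http://backend.local:<port> instead of a hard-coded IP, and the name keeps resolving correctly as tasks come and go during scale-out.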

Do I need NGINX in Kubernetes for Flask/Django deployment?

So I have a bunch of Flask apps and a bunch of Django apps that need to be thrown onto K8s and then communicate together. Now, I understand I need a WSGI server in each of the containers I deploy. However, do I need to deploy an NGINX container to forward the requests to the WSGI servers, or can I just deploy the pods containing the containers inside the Service, and the Service will sort it out?
No need for NGINX in this case. You can use an Ingress instead (https://kubernetes.io/docs/concepts/services-networking/ingress/) to manage external access to the services (a common Ingress controller implementation is itself based on nginx).
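As a sketch, an Ingress routing external traffic to one of the app Services might look like this (host, names and port are illustrative):

```yaml
# Hypothetical Ingress forwarding external traffic to a flask-app Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps-ingress
spec:
  rules:
    - host: flask.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: flask-app      # Service in front of the Flask pods
                port:
                  number: 8000
```

Pod-to-pod traffic between the Flask and Django apps inside the cluster still goes straight through their Services; the Ingress is only for external access.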

Deploy flask-socketio on beanstalk

I can't quite get Flask-SocketIO working with my instance on AWS Elastic Beanstalk. Given the requirement of running Flask-SocketIO with socketio.run(application), Elastic Beanstalk appears to make the calls to the global application object itself.
The Elastic Beanstalk documentation states: Using application.py as the filename and providing a callable application object (the Flask object, in this case) allows Elastic Beanstalk to easily find your application's code.
My instance logs show the error RuntimeError: You need to use the eventlet server. See the Deployment section of the documentation for more information.
Is there any way to approach this problem, assuming that AWS calls application.run()?
Thanks in advance
Flask-SocketIO has very specific requirements on the load balancer and the web server. I think you can configure the ELB load balancer with sticky sessions and that would make it work, but the part that I think does not work is using the eventlet or gevent web servers, since AWS invokes the callable in its own way. What you need is a way to use socketio.run() or an equivalent procedure that starts the eventlet/gevent web server.
There have been some changes to AWS Beanstalk lately. By default it uses gunicorn and nginx.
I got the setup working using a single-instance setup without load balancers. The load balancer config in Beanstalk allows stickiness configuration, but my application's design would work only on a single instance anyway, so I didn't care.
To create a single instance beanstalk environment:
eb create --single my_env
Then, configure the way gunicorn is started: create a file named Procfile (see the AWS docs).
For eventlet use this:
web: gunicorn --worker-class eventlet -w 1 application:application
And place this into requirements.txt:
gunicorn==20.1.0
eventlet==0.30.2
The particular versions are needed to prevent the cannot import name 'ALREADY_HANDLED' error, see here.
See flask-socketio doc for other deployment options besides gunicorn/eventlet.
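One more thing that often bites WebSocket apps on Beanstalk: the bundled nginx proxy must pass the HTTP upgrade headers through, or Socket.IO falls back to long polling. A sketch of an override, assuming an Amazon Linux 2 platform (where files under .platform/nginx/conf.d/elasticbeanstalk/ are included into the server block, per the Beanstalk docs on extending nginx) and gunicorn on its default port 8000; the filename is arbitrary:

```nginx
# .platform/nginx/conf.d/elasticbeanstalk/websocket.conf (hypothetical path)
# Allow WebSocket upgrade on the Socket.IO endpoint.
location /socket.io {
    proxy_pass http://127.0.0.1:8000/socket.io;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```

Without the Upgrade/Connection headers, the WebSocket handshake never reaches gunicorn/eventlet.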

AWS ECS Production Docker Deployment

I've recently started using Docker for my own personal website. The design of my website is basically
Nginx -> Frontend -> Backend -> Database
Currently, the database is hosted using AWS RDS. So we can leave that out for now.
So here's my questions
I currently have my application separated into different repositories, Frontend and Backend respectively.
Where should I store my 'root' docker-compose.yml file? I can't decide whether to store it in the frontend or the backend repository.
In a docker-compose.yml file, can the nginx service mount a volume from my frontend service without any ports and serve that directory?
I have been trying for so many days, but I can't seem to get a proper production deployment of my 3-tier application with Docker in an ECS cluster. Is there any good example nginx.conf that I can refer to?
How do I auto-SSL my domain?
Thank you guys!
Where should I store my 'root' docker-compose.yml file?
Many orgs use a top-level repo for storing infrastructure-related metadata such as CloudFormation templates and docker-compose.yml files. Devs clone the top-level repo first, and that repo ideally contains either submodules or tooling for pulling down the sub-repos for each sub-component or microservice.
In a docker-compose.yml file, can the nginx service mount a volume from my frontend service without any ports and serve that directory?
Yes, you could do this, but it would be dangerous and the disk would be a bottleneck. If your intention is to get content from the frontend service and have it served by Nginx, then you should link your frontend service to your Nginx server via a port, and set up Nginx as a reverse proxy in front of your application container. You can also configure Nginx to cache the content from your frontend server on a disk volume (if there is too much content to fit in memory). This is a safer approach than using the disk as the communication link. Here is an example of how to configure such a reverse proxy on AWS ECS: https://github.com/awslabs/ecs-nginx-reverse-proxy/tree/master/reverse-proxy
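A sketch of the port-linked approach in compose terms (service names and ports are illustrative; the nginx.conf mount is the proxy configuration, not the frontend's files):

```yaml
# Hypothetical: nginx reverse-proxies to the frontend over the network,
# instead of sharing a disk volume with it.
services:
  frontend:
    build: ./frontend
    expose:
      - "3000"            # internal only; nginx reaches it by service name
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - frontend
```

Only nginx publishes a public port; the frontend stays reachable solely on the internal network.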
I can't seem to get a proper production deployment of my 3-tier application with Docker in an ECS cluster. Is there any good example nginx.conf that I can refer to?
The link in my previous answer contains a sample nginx.conf that should be helpful, as well as a sample task definition for deploying an application container and an nginx container, linked to each other, on Amazon ECS.
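For reference, a minimal reverse-proxy nginx.conf along those lines might look like this (the upstream name, hostname and ports are illustrative, not taken from that repo):

```nginx
# Hypothetical minimal reverse proxy in front of the frontend app.
events {}

http {
    upstream app {
        server frontend:3000;   # linked container/service name and port
    }

    server {
        listen 80;

        location / {
            proxy_pass http://app;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
```

From here you can add caching, gzip, and static-file locations as needed for production.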
How do I auto-SSL my domain?
If you are on AWS, the best way to get SSL is to use the built-in SSL termination capabilities of the Application Load Balancer (ALB). AWS ECS integrates with the ALB as a way to get web traffic to your containers. The ALB also integrates with AWS Certificate Manager (https://aws.amazon.com/certificate-manager/). This service gives you a free SSL certificate which updates automatically, so you don't have to worry about your SSL certificate expiring ever again; it's just automatically renewed and updated in your ALB.
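A sketch of the ALB side in CloudFormation (the ALB, target group and ACM certificate are assumed to be defined elsewhere; ACM handles issuing and renewing the certificate):

```yaml
# Hypothetical HTTPS listener terminating SSL at the ALB.
HttpsListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref MyAlb          # assumed ALB resource
    Port: 443
    Protocol: HTTPS
    Certificates:
      - CertificateArn: !Ref MyAcmCert   # assumed ACM certificate
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref MyTargetGroup
```

Traffic between the ALB and your nginx/frontend containers can then stay on plain HTTP inside the VPC, since SSL is terminated at the load balancer.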