How to run Symfony on AWS ECS with a load balancer

I'm new to AWS services and some things are not really clear.
On my local machine I have two services in my docker-compose.yaml (php - Symfony 6, database - MySQL); all dependencies are loaded and installed through the Dockerfile. To start the application I have an entrypoint.sh that runs symfony server:start.
OK, that's fine for local.
Now I have exactly this configuration running up to 16 containers in AWS ECS behind a load balancer, but I'm sure that isn't the correct way to run this, because I can't configure or increase PHP settings, and when I look at the performance I believe it's the worst way to do it.
Do I need a separate nginx for every container?
Is there any option in the load balancer settings to run a web server from there?
Any idea is welcome.
(I don't think my config files are relevant for now. If they are, tell me and I'll update and share them.)

Do I need a separate nginx for every container?
Yes, that is the standard way to run this sort of thing, using two containers (nginx and php). Also, your MySQL server should not be deployed in the same ECS task. Ideally you would be running MySQL in RDS instead of ECS.
Is there any option in the load balancer settings to run a web server from there?
No, a load balancer is just a load balancer; it is not a web server. The web server has to run inside your ECS task.
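As a rough illustration of the two-container pattern, here is a minimal docker-compose-style sketch (the service names, image tags, and RDS endpoint are assumptions for illustration, not taken from the question); in an ECS task definition, the same pairing becomes two container definitions in one task:

services:
  php:
    build: .                          # your existing Symfony image, assumed to run php-fpm on port 9000
    environment:
      DATABASE_URL: "mysql://user:pass@your-rds-endpoint:3306/app"   # points at RDS, not a container
  nginx:
    image: nginx:1.25
    ports:
      - "80:80"                       # the load balancer's target group points here
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro   # nginx.conf does fastcgi_pass php:9000
    depends_on:
      - php

With the awsvpc network mode on ECS, the two containers in one task share a network namespace, so nginx would reach php-fpm at localhost:9000 rather than php:9000. PHP settings (memory limits, opcache, and so on) then live in your php image, where you can tune them freely.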

Related

I want to connect nginx and Django on OpenShift

So I have an instance of nginx running on my OpenShift cluster and another pod for a Django app. The thing is, I don't know how to connect the two services. I'm able to access the default URL for nginx and the URL for Django; both are working fine, but I don't know how to connect them. Is there a way to do it by modifying the YAML of the services or the pods? I already tried to build the nginx container myself, but it gives me permission issues, so I'm using a version of nginx that comes preloaded in OpenShift. Any help would be greatly appreciated. Thank you so much.
To have access between pods you have to have a Service created for every pod.
Then you can use the service name as a DNS name to reach the pod. If the pods are placed in different projects, you should additionally specify the project name, like <service-name>.<project>.svc.cluster.local.
Furthermore, there are environment variables for service discovery (see the service discovery documentation).
For example object definitions, check the Kubernetes documentation.
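A minimal Service definition for the Django pod might look like this (the name, labels, and port are assumptions for illustration):

apiVersion: v1
kind: Service
metadata:
  name: django              # reachable as plain "django" from pods in the same project
spec:
  selector:
    app: django             # must match the labels on the Django pod
  ports:
    - port: 8000            # port the Service exposes
      targetPort: 8000      # port the Django container listens on

nginx in the same project could then proxy_pass to http://django:8000; from another project it would be http://django.<project>.svc.cluster.local:8000.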

Deploy Flask-SocketIO on Beanstalk

I can't quite get Flask-SocketIO working on my AWS Elastic Beanstalk (EB) instance. Flask-SocketIO requires starting the server with socketio.run(application), but EB appears to call the global application object itself.
The EB documentation states: Using application.py as the filename and providing a callable application object (the Flask object, in this case) allows Elastic Beanstalk to easily find your application's code.
My EB instance logs show the error RuntimeError: You need to use the eventlet server. See the Deployment section of the documentation for more information.
Is there any way to approach this problem, assuming that AWS calls application.run()?
Thanks in advance
Flask-SocketIO has very specific requirements on the load balancer and the web server. I think you can configure the load balancer with sticky sessions, and that part would work; what I think does not work is using the eventlet or gevent web servers, since AWS invokes the callable in its own way. What you need is a way to use socketio.run() or an equivalent procedure that starts the eventlet/gevent web server.
There have been some changes to AWS Elastic Beanstalk lately. By default it uses gunicorn and nginx.
I got the setup working using a single-instance environment without load balancers. The load balancer config in Beanstalk allows stickiness configuration, but my application's design would only work on a single instance anyway, so I didn't care.
To create a single instance beanstalk environment:
eb create --single my_env
Then configure the way gunicorn is started: create a file named Procfile (see the AWS docs).
For eventlet use this:
web: gunicorn --worker-class eventlet -w 1 application:application
And place this into requirements.txt:
gunicorn==20.1.0
eventlet==0.30.2
The particular versions are needed to prevent the cannot import name 'ALREADY_HANDLED' error, see here.
See flask-socketio doc for other deployment options besides gunicorn/eventlet.
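For comparison, the Flask-SocketIO docs also describe a gevent-based deployment; a sketch of the corresponding Procfile entry (assuming gevent and gevent-websocket are added to requirements.txt) would be:

web: gunicorn --worker-class geventwebsocket.gunicorn.workers.GeventWebSocketWorker -w 1 application:application

The geventwebsocket worker class enables native WebSocket support; a plain -k gevent worker also works, but without gevent-websocket the clients fall back to long-polling.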

AWS ECS Production Docker Deployment

I've recently started using Docker for my own personal website. The design of my website is basically
Nginx -> Frontend -> Backend -> Database
Currently, the database is hosted using AWS RDS, so we can leave that out for now.
Here are my questions:
I currently have my application separated into different repositories, frontend and backend respectively.
Where should I store my 'root' docker-compose.yml file? I can't decide whether to store it in the frontend or the backend repository.
In a docker-compose.yml file, can the nginx service mount a volume from my frontend service without any ports and serve that directory?
I have been trying for many days, but I can't seem to get a proper production deployment of my 3-tier application with Docker on an ECS cluster. Is there any good example nginx.conf that I can refer to?
How do I auto-SSL my domain?
Thank you guys!
Where should I store my 'root' docker-compose.yml file.
Many orgs use a top-level repo for storing infrastructure-related metadata such as CloudFormation templates and docker-compose.yml files, laid out something like the sketch below. Devs clone the top-level repo first, and that repo ideally contains either submodules or tooling for pulling down the sub-repos for each component or microservice.
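A hypothetical layout (the repo and directory names are illustrative, not prescribed by the answer):

infrastructure/              # top-level repo that devs clone first
├── docker-compose.yml       # the 'root' compose file wiring all services together
├── cloudformation/          # infrastructure templates
├── frontend/                # git submodule pointing at the frontend repo
└── backend/                 # git submodule pointing at the backend repo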
In a docker-compose.yml file, can the nginx service mount a volume from my frontend service without any ports and serve that directory?
Yes, you could do this, but it would be dangerous and the disk would be a bottleneck. If your intention is to take content from the frontend service and have it served by nginx, then you should link your frontend service to your nginx server via a port, and set nginx up as a reverse proxy in front of your application container. You can also configure nginx to cache the content from your frontend server on a disk volume (if there is too much content to fit in memory). This is a safer approach than using the disk as the communication link. Here is an example of how to configure such a reverse proxy on AWS ECS: https://github.com/awslabs/ecs-nginx-reverse-proxy/tree/master/reverse-proxy
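A minimal sketch of such a reverse-proxy configuration (the upstream name and port are assumptions; the linked repository contains a fuller nginx.conf):

server {
    listen 80;

    location / {
        proxy_pass http://frontend:3000;                  # the linked frontend container and port
        proxy_set_header Host $host;                      # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;          # pass the client IP to the app
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}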
I can't seem to get a proper production deployment of my 3-tier application with Docker on an ECS cluster. Is there any good example nginx.conf that I can refer to?
The link in my last answer contains a sample nginx.conf that should be helpful, as well as a sample task definition for deploying an application container and an nginx container, linked to each other, on Amazon ECS.
How do I auto-SSL my domain?
If you are on AWS, the best way to get SSL is to use the built-in SSL termination capabilities of the Application Load Balancer (ALB). AWS ECS integrates with the ALB as a way to get web traffic to your containers. The ALB also integrates with AWS Certificate Manager (https://aws.amazon.com/certificate-manager/). This service will give you a free SSL certificate which is updated automatically. This way you don't have to worry about your SSL certificate ever expiring again, because it's automatically renewed and updated in your ALB.
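For example, requesting such a certificate with the AWS CLI looks like this (the domain is a placeholder; the issued certificate is then attached to the ALB's HTTPS listener):

aws acm request-certificate \
    --domain-name example.com \
    --validation-method DNS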

Recommendation: Deploy Docker application to AWS

I got a local Docker stack running Node.js, MongoDB and Nginx.
It runs perfectly using docker-compose up --build.
Now it's time to deploy my application to a production environment.
I have considered EC2 Container Service and EC2, but can you recommend an easier approach? The learning curve is steep!
For MongoDB -
Use the AWS Quick Start for MongoDB:
http://docs.aws.amazon.com/quickstart/latest/mongodb/overview.html
http://docs.aws.amazon.com/quickstart/latest/mongodb/architecture.html
For the rest of the Docker stack, i.e. Node.js & nginx -
Use the AWS Elastic Beanstalk multi-container Docker deployment:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_ecs.html
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html
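A skeletal Dockerrun.aws.json (version 2) for the Node.js + nginx pair might look like this (the image names, ports, and memory values are assumptions for illustration):

{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "node-app",
      "image": "your-account/node-app:latest",
      "memory": 256,
      "essential": true
    },
    {
      "name": "nginx",
      "image": "nginx:latest",
      "memory": 128,
      "essential": true,
      "portMappings": [
        { "hostPort": 80, "containerPort": 80 }
      ],
      "links": ["node-app"]
    }
  ]
}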
Elastic Beanstalk supports Docker, as documented here. Elastic Beanstalk would manage the EC2 resources for you, which should make things a bit easier on you.
You can install Kontena on AWS and use it to deploy your application to a production environment (other cloud providers are also supported). The transition from Docker Compose is very smooth, since kontena.yml uses similar syntax and keys to docker-compose.yml.
With Kontena you get a private image registry, a load balancer, and secrets management built in, all of which are very useful when running containers in production.

nginx HTTP proxy in front of a PHP app running on Beanstalk

I am very new to AWS and Beanstalk, so maybe my question is very easy...
I want to put up a Drupal page (with Boost) on Beanstalk as a PHP app (PHP + Apache).
And I would like to use nginx as a reverse proxy in front of it. My simple question:
How do I do this? And is it a good idea?
I searched the internet, and all the tutorials I found assume that I run the server myself. But I couldn't figure out how to use an nginx HTTP proxy in front of a PHP app running on Beanstalk... unfortunately.
You can set up a two-tier infrastructure within Elastic Beanstalk, and then perhaps put an ELB in front:
ELB -> nginx -> Drupal
However, given the added cost of that extra instance, not many people run this setup, at least not within the kind of small setup that Beanstalk is likely running.