Is there some easy way that I am missing to get an unchanging, accessible to the internet URL for something I deploy to ECS with docker compose up?
I've written a small web app using flask and Nginx, put the flask and nginx portions into Docker containers, and deployed the thing to AWS ECS using this workflow, which boils down to:
docker context use myecscontext
docker compose up
This deploys the whole thing using AWS Fargate and makes it accessible from the internet at timot-LoadB-xyzxyzxyzxyzx-xyzxyzxyz.us-east-2.elb.amazonaws.com. So far so good.
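For context, the compose file behind that workflow is roughly of this shape; the service names, images, and published port here are illustrative assumptions, not the actual file:

# docker-compose.yml (sketch); on ECS the published port becomes the load balancer listener
cat > docker-compose.yml <<'EOF'
services:
  flask:
    image: myregistry/flask-app:latest    # Flask app image, pushed somewhere ECS can pull from
  nginx:
    image: myregistry/nginx-site:latest   # Nginx image with the site config baked in
    ports:
      - "80:80"
EOF
docker context use myecscontext
docker compose up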
Now I'd like to make my-fancy-domain.com, registered with a non-AWS registrar, point to my web app. I know I can edit the DNS entry at my registrar to do this; here's the catch: that URL with all the xyzs changes every time I docker compose up after making changes to my web app. Must I really monkey around in my registrar's DNS settings every time I update something?
I had imagined I would simply slap an elastic IP on my new Fargate cluster when I'm satisfied that I want to replace the current live version with an update. I see now that I can't easily associate an elastic IP with the load balancer that Fargate sets up. And I would just as soon not move my-fancy-domain.com to Route53 simply to accomplish this.
For anyone who finds this in the future: what ended up working was to move the DNS records for my site from my domain registrar to AWS Route53.
Once I did that, the Route53 console made it straightforward to add an alias record pointing to the Application Load Balancer that the docker-compose/ECS integration set up. Those alias records are "a Route 53–specific extension to DNS".
I did not want to move DNS records to Route53, but it did solve the problem.
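For reference, the same alias record can be created from the command line; the hosted zone IDs below are placeholders, not values from my setup:

# ZDOMAINEXAMPLE = the Route53 hosted zone for my-fancy-domain.com;
# ZALBEXAMPLE = the load balancer's own canonical hosted zone ID (shown on its description page),
# which is not the same as the domain's hosted zone.
aws route53 change-resource-record-sets \
  --hosted-zone-id ZDOMAINEXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "my-fancy-domain.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "ZALBEXAMPLE",
          "DNSName": "timot-LoadB-xyzxyzxyzxyzx-xyzxyzxyz.us-east-2.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'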
Related
So I’ve just finished working on my first big personal project, bought a domain name, created an AWS account, watched a lot of AWS tutorials, but I still can’t figure out how to host my web app on AWS. The whole AWS thing is a mystery to me. No tutorial online seems to teach exactly what I need.
What I’m trying to do is this:
Host my dynamic web app on a secure https connection.
Host the web app using the personalized domain name I purchased.
Link my git repo to AWS so I can easily commit and push changes when needed.
Please assist me by pointing me to a resource that can help me achieve the above 3 tasks.
For now, the web app is still hosted on Heroku’s free service; feel free to take a look at the application, and provide some feedback if you can.
Link to web app: my web app
You mentioned that the web app is still hosted on Heroku’s free service.
So, if you want the same thing in AWS, use Elastic Beanstalk.
First Question: How to host my web app on AWS?
There are multiple options for hosting your web app:
S3 Bucket to host your website. How to Host in S3
Elastic Beanstalk. Link
ECS - using containers
Single EC2 Server to host your website.
EKS - Kubernetes
By the way, there are quite a few things you need to take care of before starting.
Second Question: Host the web app using the personalized domain name I purchased.
If you use S3, the hosted URL will be HTTP-only, and you can create a DNS entry for it in your purchased domain's settings; if your DNS is already on AWS, create a new record in Route53 (see the sketch after these options).
If you host your website on EC2, you will get a public IP address. Make a DNS entry pointing to that public IP.
If you use ECS or EKS, you will likely need a load balancer, which gives you a load balancer DNS name. Make a DNS entry pointing to that name. The question then becomes which kind of load balancer you want to use (Application, Classic, or Network Load Balancer).
If you use Elastic Beanstalk, it's a managed service, and you directly get an endpoint when you deploy. Make a DNS entry pointing to that endpoint.
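To make the S3 case concrete, a minimal sketch; the bucket name and region are placeholders, and for S3 website hosting the bucket name must match the domain (the bucket also needs a public-read bucket policy):

aws s3api create-bucket --bucket example.com --region us-east-2 \
  --create-bucket-configuration LocationConstraint=us-east-2
aws s3 website s3://example.com --index-document index.html
# The site is then served over plain HTTP from the region's website endpoint,
# e.g. http://example.com.s3-website.us-east-2.amazonaws.com,
# and that endpoint is what the DNS entry (at the registrar or in Route53) points to.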
Third Question: Link my git repo to AWS so I can easily commit and push changes when needed.
For this, you can use CodeBuild and connect GitHub as the source while creating the CodeBuild project (a buildspec sketch follows below). Link
For full CI/CD, there are again multiple pieces to set up.
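A minimal buildspec sketch for such a CodeBuild project, assuming a Python app; the runtime version and commands depend on your build image and repository:

cat > buildspec.yml <<'EOF'
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.11        # must be a version supported by the chosen build image
  build:
    commands:
      - pip install -r requirements.txt
      - python -m pytest  # placeholder; replace with your own build/test steps
EOF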
Heroku is a PaaS, which gives you the platform; AWS, by contrast, is IaaS. You get the infrastructure, and once it is provisioned there are many things you have to take care of yourself, so you have to think like an architect: prepare the architecture and then proceed. It also requires knowledge of other areas such as networking and security.
To answer your question, the best way to host a web app on AWS is Elastic Beanstalk.
But what is AWS Elastic Beanstalk and what does it do?
AWS Elastic Beanstalk handles the deployment of web apps into the cloud, as well as their scaling.
Elastic Beanstalk automates deployment by provisioning the required capacity, load balancing, auto scaling, and monitoring application health and performance. All that is left for the developer to do is upload the code. At the same time, the application owner retains full control over the capacity that AWS provisions for the software and can access it at any time.
So this is the best way to deploy the app; let's follow the steps (a CLI sketch follows after them):
Open the Elastic Beanstalk console and find the management page of your environment.
Select “Upload and Deploy”.
Select “Choose File” and pick your source bundle in the dialog box.
Deploy, then click the environment URL to open the new website.
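The same flow can also be scripted with the EB CLI once the application exists; the environment name below is a placeholder:

pip install awsebcli    # installs the "eb" command
eb init                 # pick the region and platform interactively
eb create my-app-env    # provisions the environment (placeholder name)
eb deploy               # packages the current directory and deploys it
eb open                 # opens the environment URL in a browser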
You can use CodeDeploy to connect your GitHub repository and deploy your code.
Conclusion
I have taken a simple approach and told you exactly what you need for the task at hand, without going into all the fuss of AWS. That said, there is still a lot that can be done to get real value out of your application in terms of load balancing, scaling, or improving performance.
I am trying to deploy a Kubernetes cluster into an AWS environment which does not support Route53 queries from the generated hostname ($HostA). This environment requires an override of the endpoint configuration to resolve all Route53 queries to $HostB. Note that I am not in control of either host, and they are both reachable on the public internet. The protokube docker image I am deploying is not aware of this; to make it aware, I would need to build and host the image myself, something I would rather avoid (as I would probably have to do the same for every docker image I am deploying).
I am looking for a way to redirect all requests to $HostA without having to change any docker configuration. Ideally, I would like a way to override all requests to $HostA from within my VPC so that they go to $HostB. If this is not possible, I am in control of the EC2 userdata which starts up the EC2 instances that host the images. Thus, perhaps there is a way I can set /etc/host.aliases on the EC2 host and force it to be used by all running containers (instead of each container's own /etc/hosts). Again, please keep in mind that I need to be able to control this from the host instance and NOT by overriding the docker image's configuration.
Thank you!
I've recently started using Docker for my own personal website. So the design of my website is basically
Nginx -> Frontend -> Backend -> Database
Currently, the database is hosted using AWS RDS. So we can leave that out for now.
So here are my questions:
I currently have my application separated into different repositories, frontend and backend respectively.
Where should I store my 'root' docker-compose.yml file? I can't decide whether to store it in the frontend or the backend repository.
In a docker-compose.yml file, can the nginx service mount a volume from my frontend service without exposing any ports and serve that directory?
I have been trying for many days, but I can't seem to get a proper production deployment of my 3-tier application with Docker on an ECS cluster. Is there any good example nginx.conf that I can refer to?
How do I auto-SSL my domain?
Thank you guys!
Where should I store my 'root' docker-compose.yml file?
Many orgs use a top-level repo for storing infrastructure-related metadata such as CloudFormation templates and docker-compose.yml files; a sketch of such a layout is below. Devs clone the top-level repo first, and that repo ideally contains either submodules or tooling for pulling down the sub-repos for each sub-component or microservice.
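A hypothetical layout for that top-level repo might be:

infrastructure/               # the top-level repo devs clone first
  docker-compose.yml          # the 'root' compose file wiring nginx, frontend and backend together
  cloudformation/             # CloudFormation templates and other infra metadata
  frontend/                   # git submodule (or pulled by tooling) -> the frontend repo
  backend/                    # git submodule (or pulled by tooling) -> the backend repo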
In a docker-compose.yml file, can the nginx service mount a volume from my frontend service without exposing any ports and serve that directory?
Yes, you could do this, but it would be dangerous and the disk would be a bottleneck. If your intention is to get content from the frontend service and have it served by Nginx, then you should link your frontend service to your Nginx server via a port and set up Nginx as a reverse proxy in front of your application container. You can also configure Nginx to cache the content from your frontend server on a disk volume (if it is too much content to fit in memory). This is safer than using the disk as the communication link. Here is an example of how to configure such a reverse proxy on AWS ECS: https://github.com/awslabs/ecs-nginx-reverse-proxy/tree/master/reverse-proxy
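A minimal sketch of that reverse-proxy idea; the upstream service name "frontend", its port 3000, and the cache path are assumptions:

cat > nginx.conf <<'EOF'
events {}
http {
  # optional on-disk cache for content fetched from the frontend
  proxy_cache_path /var/cache/nginx keys_zone=frontend_cache:10m;
  server {
    listen 80;
    location / {
      proxy_pass http://frontend:3000;   # the frontend container, reached over the Docker/ECS link
      proxy_cache frontend_cache;
    }
  }
}
EOF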
I can't seem to get a proper production deployment of my 3-tier application with Docker on an ECS cluster. Is there any good example nginx.conf that I can refer to?
The link in my last answer contains a sample nginx.conf that should be helpful, as well as a sample task definition for deploying an application container and an nginx container, linked to each other, on Amazon ECS.
How do I auto-SSL my domain?
If you are on AWS, the best way to get SSL is to use the built-in SSL termination capabilities of the Application Load Balancer (ALB). AWS ECS integrates with ALB as a way to get web traffic to your containers. ALB also integrates with AWS Certificate Manager (https://aws.amazon.com/certificate-manager/). This service gives you a free SSL certificate which renews automatically. This way you don't have to worry about your SSL certificate ever expiring again, because it is automatically renewed and updated in your ALB.
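A rough sketch of those two steps from the CLI; the domain and the ARNs are placeholders for your own certificate, load balancer, and target group:

# 1) request a free certificate (ownership is then proven via a DNS validation record)
aws acm request-certificate --domain-name example.com --validation-method DNS

# 2) attach the issued certificate to an HTTPS listener on the ALB
aws elbv2 create-listener \
  --load-balancer-arn <alb-arn> \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn=<certificate-arn> \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>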
I am pretty new to AWS and want to build a simple example of an auto-scaling WordPress application with EC2 instances.
I understand how to create a load balancer, how to create Bitnami WordPress EC2 instances and an Auto Scaling group, and how to get it all running, but here is what I don't get and cannot find in any documentation:
Every EC2 WordPress instance that I create obviously has its own WordPress data and database. They are not synchronized. So if the load balancer sends traffic to EC2 A, the user will see a different application state than on EC2 B.
How do people set this up / solve this so they can add unlimited resources that all hold and serve the same application?
Running WordPress behind a load balancer (ELB) is a little tricky because, by default, WordPress stores data on the volumes of the individual EC2 instances.
A possible solution:
Use RDS to launch a managed MySQL database and connect WordPress to it (a sketch follows below).
Offload user uploads to S3 with the WordPress plugins amazon-web-services and amazon-s3-and-cloudfront.
But beware: you need to disable auto-update, the WordPress theme gallery, ... and everything else that changes files on a single EC2 instance.
Some time ago I wrote a blog post covering that topic: https://cloudonaut.io/wordpress-on-aws-you-are-holding-it-wrong/
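A minimal sketch of the RDS piece mentioned above; instance class, storage, and credentials are placeholders:

aws rds create-db-instance \
  --db-instance-identifier wordpress-db \
  --engine mysql \
  --db-instance-class db.t3.micro \
  --allocated-storage 20 \
  --master-username wpadmin \
  --master-user-password 'choose-a-real-password'
# Point DB_HOST in every instance's wp-config.php at the RDS endpoint instead of a local MySQL,
# so all instances behind the ELB share one database.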
Alternatives:
Use a distributed file system (e.g. GlusterFS) to store all WordPress files.
Use CloudFront (CDN) to cache incoming requests and run everything on a single EC2 instance.
There are official best practices and a blog post; check here:
https://blogs.aws.amazon.com/php/post/Tx1TRYG42UP11ET/WordPress-on-AWS-Whitepapers
I am trying to set up an Amazon server to host a dynamic website I'm currently creating. I bought the domain on GoDaddy.com, and I believe that what I've done so far has linked the domain to my Amazon account.
I followed this tutorial : http://www.mycowsworld.com/blog/2013/07/29/setting-up-a-godaddy-domain-name-with-amazon-web-services/
In short, this walked me through setting up Amazon S3 (Simple Storage Service) and Amazon Route 53. I then configured the DNS servers, and my website now launches properly on the domain.
I'm not sure on the next step from here, but I would like to set up:
-A database server
-Anything else that might be necessary to run a dynamic website.
I am very new to hosting websites, and semi-new to web development in general, so the more in depth the better.
Thanks a lot
You have two options on AWS: run an EC2 server and set up your application yourself, or continue to use AWS managed services like S3.
Flask apps can be hosted on Elastic Beanstalk, and your database can be hosted on RDS (Relational Database Service). Then the two can be integrated.
Otherwise, spin up your own t2.micro instance in EC2, log in via SSH, and set up the database server and application just as you have them locally. This server could also host the (currently S3-hosted) static files.
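A rough sketch of that manual route; the key file, address, and package names are placeholders and differ between Amazon Linux versions:

ssh -i my-key.pem ec2-user@<instance-public-ip>

# on the instance: install a database server and the app's runtime
sudo yum install -y python3 mariadb105-server   # package is mariadb-server on older Amazon Linux
sudo systemctl enable --now mariadb
pip3 install flask gunicorn                     # assuming a Flask app, as mentioned above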
I have no idea what your requirements are; personally I would start with setting up the EC2 instance and go from there, as integrating AWS services without knowing what you need is probably not the easiest first step.
Heroku might be another option. They host their services on AWS and give you an end-to-end solution for deploying and running your Python code without getting your hands dirty setting up servers.