Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
I'm using an image annotation tool (coco-annotator, based on Vue.js) locally, and I would like to run it on an AWS web server so I can access it from anywhere.
The source code contains some Docker files and can be run locally using
docker-compose up
Does someone know the high-level steps to run this application on an AWS web server?
AWS seems quite complicated, as it has a million options, so I'd like to know:
What "product" should I choose? ("EC2" (virtual machine)? "Elastic Beanstalk" (web application)?)
What pre-installation should I choose? ("Docker - single instance"?)
How do I tell AWS to launch the command for the coco-annotator? (Log in via SSH and run the command manually? Or is there some pre-configuration that enters the respective source folder automatically and runs docker-compose up on startup?)
Solution
Select the AWS EC2 service.
As the virtual machine, choose "t2.micro" (free-tier eligible), for example with Ubuntu 18.04.
Log in to your EC2 virtual machine instance via SSH and manually install the coco-annotator (or other software); note the local port that the server is running on.
Make the IP address of your EC2 instance permanent (by allocating an Elastic IP).
Make your EC2 instance accessible via the browser: add a security-group rule allowing inbound TCP access from anywhere.
sudo apt-get install nginx, then enter your domain in the default config file of nginx.
Use the AWS "Route 53" service to create a "hosted zone".
Register a free domain, e.g. at Freenom.
On AWS Route 53: "Create Record Set" -> Name: "www.yourWebdomain.com", Value: "yourAwsEc2IpAddress".
Note the nameservers provided by Amazon (of the form ns-*.awsdns-20.*) and enter them as custom nameservers on the config page of your domain provider (Freenom). Do not use URL forwarding!
Now the communication between your domain, nginx, and your AWS EC2 instance should be working.
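The nginx step above can be sketched as a reverse proxy in the default site config. This is a minimal sketch, assuming the coco-annotator listens on local port 5000; substitute the port you noted and your own domain:

```nginx
# /etc/nginx/sites-available/default -- minimal reverse-proxy sketch.
# Assumption: the app listens on local port 5000; adjust to the port you noted.
server {
    listen 80;
    server_name www.yourWebdomain.com;

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

After editing, run sudo nginx -t to check the syntax, then sudo systemctl reload nginx.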
This is a very loaded question, and you'd probably get a better response with a more direct question and an example. With that said, if your app is containerized you can use ECS or Elastic Beanstalk. If your application is stateless (you don't need local disk storage that persists between restarts of your application; you can still use a database or other services for storage), the easiest is probably ECS using Fargate tasks.
There are numerous blogs and tutorials online because, as you've already said, there are very many different types of applications, deployment, and configuration options. Start reading some blogs and the docs for using Docker Compose to deploy containers to AWS.
A place to get started might be here: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-cli-tutorial-fargate.html
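The ECS CLI tutorial linked above starts from a Docker Compose file roughly like the following sketch. The image name, region, and log-group name are placeholders; substitute your own:

```yaml
# docker-compose.yml -- minimal sketch of the shape the ECS CLI tutorial expects.
# Placeholders: image URI, region, and log group.
version: '3'
services:
  web:
    image: <account-id>.dkr.ecr.<region>.amazonaws.com/coco-annotator:latest
    ports:
      - "80:80"
    logging:
      driver: awslogs
      options:
        awslogs-group: coco-annotator
        awslogs-region: us-east-1
        awslogs-stream-prefix: web
```

ecs-cli then pairs this with its own ecs-params.yml file for the Fargate-specific settings (CPU, memory, networking).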
Edit:
I see you cross-posted here: https://github.com/jsbroks/coco-annotator/issues/231. The OP there said they used a VM; in AWS this is an EC2 instance. The getting-started guide https://github.com/jsbroks/coco-annotator/wiki/Getting-Started#dedicated-servervps-setup also says this requires a dedicated server or VPS. That would also point to EC2. If you want to use ECS or Beanstalk you need to deploy a container. I don't know if this app supports running in a container, and if you want to pursue that route, your best place to ask is probably in that project's community, not SO.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 months ago.
I have been programming a full-stack application consisting of a Node.js frontend, a .NET backend, and a SQL Server DB, and I have managed to fully dockerize the application using docker-compose. I have come a long way since I started this project, but I still need a bit more help to finalize it. I am now in the process of deploying my Docker containers to AWS (somehow), and I am having a bit of a problem on my end. Please bear in mind I am a beginner and this is quite complex to me.
So far this is the closest I have come to an actual solution to properly deploying all 3 parts of the app.
Created a security group w/ Inbound to all IPv4s and IPv6s, Outbound to all IPv4s
Created a load balancer listening on port 80 with default VPC
Created a key pair to SSH
Created a cluster with 3 instances (backend, frontend, db) default VPC, SG created, default role
Created an ECR repository for each image and pushed all my Docker images separately (3 repositories)
Created EC2 task def, no role, 512 mem, container with each ECR url, 512 mem, 0:80 mapping
(Unsure if necessary) Created a service to link the LB etc.
When I do this, I am able to run all 3 tasks at the same time with no issues, so it seems like progress to me. However, I am doing something wrong when it comes to the ports or IPs, since I am not able to access the public DNS or even SSH into any of the instances; it times out.
Question:
Have I made an error anywhere? Specifically in the ports or IPs; I am not sure where the mistake is.
Notes:
This is a simple project which I will have up for maybe 1-2 months, I do not plan on spending more than $5-$10. It is just a simple project with CRUD operations.
The end goal is simply to have everything up on AWS and running together, so I can perform CRUD on the DB, nothing long-term or complex.
P.S. I MUST use AWS.
The simplest way to achieve your goal, considering the amount you want to spend, would be to move your solution to EC2 as you described. What issues do you face doing so?
You may also explore the integration of Docker Compose and ECS.
Also check this out: https://aws.amazon.com/blogs/containers/deploy-applications-on-amazon-ecs-using-docker-compose/
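On the port question from the post: in an EC2-launch-type task definition, a host port of 0 asks ECS for a dynamic host port, so you cannot browse to the instance on port 80 directly; traffic has to come through the load balancer's target group, and the instance security group must allow that port range (plus port 22 if you want SSH). A sketch of a task definition that pins the host port instead; the image URI and names are placeholders:

```json
{
  "family": "frontend",
  "containerDefinitions": [
    {
      "name": "frontend",
      "image": "<account-id>.dkr.ecr.<region>.amazonaws.com/frontend:latest",
      "memory": 512,
      "essential": true,
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80,
          "protocol": "tcp"
        }
      ]
    }
  ]
}
```

With the host port pinned to 80, only one copy of the task fits per instance, but the instance's public DNS should answer directly on port 80 once the security group allows it.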
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
We are building a small micro service architecture which we would like to deploy to AWS.
The number of services is growing, so we need a solution that allows horizontal scaling.
What's the best way to build this on AWS? We don't have much experience with Docker; we used EC2-based setups in the past.
I'm thinking about something like:
Use ECR, create a private docker repository. We push release images there.
Use ECS to automatically deploy those images.
Is this correct? Or should we go for Kubernetes instead? Which one is better?
Our needs:
Automated deployments based on docker images
Deploy to test and prod environments
Prod cluster should be able to handle multiple instances of certain services with load balancing.
Thanks in advance for any advice!
AWS container service team member here. Agreed with others that answers may potentially be very skewed to personal opinions. If you come in with good AWS knowledge but no container knowledge I would suggest ECS/Fargate. Note that deploying on ECS would require a bit of CloudFormation mechanics because you need to deploy a number of resources (load balancers, IAM roles, etc) in addition to ECS tasks that embeds your containers. It could be daunting if not abstracted.
We have created a few tools that allow you to offload some of that boilerplate. In order of what I would suggest for your use case:
Copilot, which is a CLI tool that can prepare environments and deploy your app according to specific patterns. Have a look here.
Docker Compose integration with ECS. This is a new integration we built with Docker that allows you to start from a simple Docker Compose file and deploy to ECS/Fargate. See here.
CDK is a software development framework to define your AWS infrastructure as code. See here. There are also specific CDK ECS patterns if you want to go down that route.
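To give a feel for the Copilot option above, here is a sketch of a service manifest for a "Load Balanced Web Service". The service name, port, and counts are placeholders for illustration:

```yaml
# copilot/api/manifest.yml -- sketch of a Copilot Load Balanced Web Service manifest.
# Placeholders: service name, port, and task counts.
name: api
type: Load Balanced Web Service

image:
  build: ./Dockerfile   # built and pushed to ECR by Copilot
  port: 8080            # the port your container listens on

http:
  path: '/'             # route all traffic on the shared load balancer to this service

cpu: 256
memory: 512
count: 2                # two tasks behind the load balancer in prod

environments:
  test:
    count: 1            # scale the test environment down
```

Running copilot deploy then creates the load balancer, IAM roles, and ECS service for you, which covers the test/prod and load-balancing requirements listed in the question.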
I have a basic django/postgres app running locally, based on the Docker Django docs. It uses docker compose to run the containers locally.
I'd like to run this app on Amazon Web Services (AWS), and to deploy it using the command line, not the AWS console.
My Attempt
When I tried this, I ended up with:
this yml config for ecs-cli
these notes on how I deployed from the command line.
Note: I was trying to fire up the Python dev server in a janky way, hoping that would work before I added nginx. The cluster (RDS+server) would come up, but then the instances would die right away.
Issues I Then Failed to Solve
I realized over the course of this:
the setup needs another container for a web server (nginx) to run on AWS (like this blog post, but the tutorial uses the AWS Console, which I wanted to avoid)
ecs-cli uses a different syntax for its yml/json config than docker-compose, so you need some separate (though similar) code alongside your local docker-compose.yml (and I'm not sure if my file above was correct)
Question
So, what ecs-cli commands and config do I use to deploy the app, or am I going about things all wrong?
Feel free to say I'm doing it all wrong. I could also use Elastic Beanstalk - the tutorials on this don't seem to use docker/docker-compose, but seem easier overall (at least well documented).
I'd like to understand why any given approach is a good way to do this.
One alternative you may wish to consider in lieu of ECS, if you just want to get it up in the amazon cloud, is to make use of docker-machine using the amazonec2 driver.
When executing docker-compose, just ensure the remote Amazon host machine is ACTIVE, which can be checked with docker-machine ls.
One item you will have to revisit in the Amazon Mgmt Console is opening the applicable ports, such as port 80 and any other ports exposed in the compose file. Once the security group is in place for the VPC, you should be able to simply refer to the VPC ID on subsequent executions, bypassing any need to use the Mgmt Console to add the ports. You may wish to bump up the instance size from the default t2.micro to match the t2.medium specified in your notes.
If ECS orchestration is needed, then a task definition will need to be created containing the container definitions you require, as defined in your Docker Compose file. My recommendation would be to take advantage of the Mgmt Console to construct the definition, then grab the accompanying JSON definition which is made available, and store it in your source code repository for future executions on the command line, where it can be referenced when registering task definitions and executing tasks and services within a given cluster.
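On the config-syntax point raised in the question: the ecs-cli-specific settings live in a separate ecs-params.yml next to the compose file. A minimal sketch for a Fargate deployment; the subnet and security-group IDs are placeholders:

```yaml
# ecs-params.yml -- sketch of the extra file ecs-cli reads alongside docker-compose.yml.
# Placeholders: subnet and security-group IDs.
version: 1
task_definition:
  ecs_network_mode: awsvpc        # required for Fargate
  task_size:
    cpu_limit: 256
    mem_limit: 512
run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - subnet-0123456789abcdef0
        - subnet-0fedcba9876543210
      security_groups:
        - sg-0123456789abcdef0
      assign_public_ip: ENABLED   # needed to pull images without a NAT gateway
```

ecs-cli compose --file docker-compose.yml --ecs-params ecs-params.yml service up then deploys the stack, which is the flow the Fargate tutorial linked earlier walks through.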
Major newbie when it comes to Amazon EC2 servers, and web development in general.
At the moment I have a web app that is hosted on parse. Everything is done on the client side in the browser, and I want to change it to a client server model by writing a server in node.js.
I've looked into Amazon EC2; I've set up an instance and it looks good. My question, however, is:
Is there an easier way to update files on the instance? At the moment I'm pushing all the files from my computer to a GitHub repo, then pulling them onto the instance, which seems very long-winded. When using Parse, all I needed to type was parse deploy into the command line to update and deploy my application. Is there something like this for Amazon EC2?
Thank you
I typically install or enable FTP on my ec2 instances and then just use the ftp client of my choice to update files.
I've been following the official Amazon documentation on deploying to Elastic Beanstalk:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python.html
and the customization environment
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers.html#customize-containers-format
however, I am stuck. I do not want to use the built-in RDS database; I want to use MongoDB, but have my Django/Python application scale as a RESTful frontend, or rather an API endpoint, for my users.
Currently I am running one EC2 instance to test out my django application.
Some problems that I have with Elastic Beanstalk:
1. I cannot figure out how to run commands such as
pip install git+https://github.com/django-nonrel/django#nonrel-1.5
Since I cannot install the MongoDB driver for use by Django, I cannot run my MongoDB commands.
I was wondering if I am just skipping over some concepts or just not understanding how deploying on Beanstalk works. I can see that Beanstalk just launches EC2 instances, and I possibly need to write custom scripts or something; I don't know.
I've searched around, but I don't exactly know what to ask in regards to this. The top Google results are always Amazon documents, which are less than helpful for customization outside of their RDS environment. I know that Django traditionally uses RDS environments, but again, I don't want to use those, as they are not flexible enough for the web application I am writing.
You can create a custom AMI for your specific needs; the steps are outlined in the AWS documentation below. Basically, you would create a custom AMI with the packages needed to host your application and then update the Beanstalk config to use your custom AMI.
Using Custom AMIs
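As an alternative to a custom AMI, Elastic Beanstalk can also run commands at deploy time via an .ebextensions config file in your source bundle, which addresses the "how do I run pip install" problem directly. A minimal, unverified sketch; the pip command is the one from the question:

```yaml
# .ebextensions/01_packages.config -- sketch of running deploy-time commands
# instead of baking a custom AMI. The pip URL is taken from the question.
container_commands:
  01_install_django_nonrel:
    command: "pip install git+https://github.com/django-nonrel/django#nonrel-1.5"
```

container_commands run after the application code is extracted but before it is deployed, so packages installed here are available when Django starts.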