Closed. This question needs to be more focused. It is not currently accepting answers. Closed 6 months ago.
I have been building a full-stack application consisting of a NodeJS frontend, a .NET backend, and a SQL Server database, and I have managed to fully dockerize it using docker-compose. I have come a long way since I started this project, but I still need some help to finalize it. I am now trying to deploy my Docker containers to AWS and running into problems. Please bear in mind that I am a beginner and this is quite complex to me.
So far this is the closest I have come to an actual solution to properly deploying all 3 parts of the app.
Created a security group with inbound access open to all IPv4 and IPv6 addresses, and outbound access to all IPv4 addresses
Created a load balancer listening on port 80 in the default VPC
Created a key pair to SSH
Created a cluster with 3 instances (backend, frontend, db) in the default VPC, with the security group created above and the default role
Created an ECR repository per image and pushed all my Docker images separately, 3 ECRs (the push commands are sketched below for reference)
Created an EC2 task definition with no role, 512 MB memory, and a container for each ECR URL, each with 512 MB memory and a 0:80 port mapping
(Unsure if necessary) Created a service to link the load balancer, etc.
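For reference, pushing one of the images to its ECR repository looks roughly like this (the account ID, region, and repository name are placeholders):
# Authenticate Docker against the ECR registry
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# Tag and push the frontend image (repeat for backend and db)
docker tag frontend:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/frontend:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/frontend:latest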
When I do this, I am able to run all 3 tasks at the same time with no issues, so it seems like progress. However, I am doing something wrong with the ports or IPs, since I am not able to access the public DNS or even SSH into any of the instances; it times out.
Question:
Have I made an error anywhere, specifically in the ports or IPs? I am not sure where the mistake is.
Notes:
This is a simple project that I will keep up for maybe 1-2 months; I do not plan on spending more than $5-$10. It is just a simple project with CRUD operations.
The end goal is simply to have everything up on AWS and running together, so I can perform CRUD on the DB, nothing long-term or complex.
P.S. I MUST use AWS
Considering the amount you want to spend, the simplest way to achieve your goal would be to move your solution to EC2 as you described. What issues do you face doing so?
You may also explore the integration of Docker Compose and ECS.
Also check this out:
https://aws.amazon.com/blogs/containers/deploy-applications-on-amazon-ecs-using-docker-compose/
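A minimal sketch of that Docker Compose/ECS flow, assuming a recent Docker CLI with the ECS integration and AWS credentials already configured (the context name is illustrative):
# Create a Docker context backed by ECS/Fargate and switch to it
docker context create ecs myecscontext
docker context use myecscontext
# Deploy the existing docker-compose.yml as an ECS/Fargate stack
docker compose up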
Closed. This question is opinion-based. It is not currently accepting answers. Closed 1 year ago.
We are building a small microservice architecture which we would like to deploy to AWS.
The number of services is growing, so we need a solution that allows horizontal scaling.
What's the best way to build this on AWS? We don't have much experience with Docker; we have used EC2-based setups in the past.
I'm thinking about something like:
Use ECR, create a private docker repository. We push release images there.
Use ECS to automatically deploy those images.
Is this correct? Or should we go for Kubernetes instead? Which one is better?
Our needs:
Automated deployments based on docker images
Deploy to test and prod environments
Prod cluster should be able to handle multiple instances of certain services with load balancing.
Thanks in advance for any advice!
AWS container service team member here. Agreed with others that answers may be heavily skewed by personal opinion. If you come in with good AWS knowledge but no container knowledge, I would suggest ECS/Fargate. Note that deploying on ECS would require a bit of CloudFormation mechanics, because you need to deploy a number of resources (load balancers, IAM roles, etc.) in addition to the ECS tasks that embed your containers. It could be daunting if not abstracted.
We have created a few tools that allow you to offload some of that boilerplate. In order of what I would suggest for your use case:
Copilot, a CLI tool that can prepare environments and deploy your app according to specific patterns. Have a look here (a rough sketch of the flow is below this list).
Docker Compose integration with ECS. This is a new integration we built with Docker that allows you to start from a simple Docker Compose file and deploy to ECS/Fargate. See here.
CDK, a software development framework to define your AWS infrastructure as code. See here. These are the specific CDK ECS patterns if you want to go down that route.
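For the Copilot option, the happy path is roughly the following; the application, service, and environment names are just examples:
# Initialize an application and a load-balanced service from a local Dockerfile
copilot init --app myapp --name api --type "Load Balanced Web Service" --dockerfile ./Dockerfile
# Create a test environment and deploy the service into it
copilot env init --name test
copilot deploy --name api --env test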
Closed. This question is opinion-based. It is not currently accepting answers. Closed 3 years ago.
I'm using an image annotation tool (coco-annotator, based on Vue.js) locally and I would like to run it on an AWS web server, to be able to access it from anywhere.
The source code contains some Docker files, and the tool can be run locally using
docker-compose up
Does someone know what the high-level steps are to run this application on an AWS web server?
AWS seems quite complicated as it has a million options, so I'd like to know:
Which "product" should I choose? ("EC2" (virtual machine)? "Elastic Beanstalk" (web application)?)
Which pre-installation should I choose? ("Docker - single instance"?)
How do I tell AWS how to launch the command for the coco-annotator? (Log in via SSH and run the command manually? Or is there some pre-configuration that enters the respective source folder automatically and runs docker-compose up on startup?)
Solution
Select an AWS EC2 instance
As the virtual machine, choose "t2.micro" (free tier eligible), for example with Ubuntu 18.04
Log in to your EC2 instance via SSH and manually install the coco-annotator (or other software) -> note the local port the server is running on
Make the IP address of your EC2 instance permanent (e.g. by allocating an Elastic IP and associating it with the instance)
Make your EC2 instance accessible via the browser -> add an inbound security group rule for TCP on the relevant port, with access from anywhere
sudo apt-get install nginx -> enter your domain in the default config file of nginx (see the sketch below)
Use the AWS "Route 53" service to create a "hosted zone"
Register a free domain, e.g. at Freenom
On AWS Route 53: "Create Record Set" -> Name: "www.yourWebdomain.com", Value: "yourAwsEc2IpAddress"
Note the nameservers provided by Amazon (ns-*.awsdns-20.*) and enter them as custom nameservers on the config page of your domain provider (Freenom). Do not use URL forwarding!
Now the communication between nginx and your AWS EC2 instance should be working
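To make the nginx step concrete, a minimal reverse-proxy setup could look like the following; the local port 5000 and the domain are assumptions, so use the port you noted above and the domain you registered:
sudo apt-get install -y nginx
# Minimal reverse proxy: forward HTTP traffic for your domain to the local app port
sudo tee /etc/nginx/sites-available/default > /dev/null <<'EOF'
server {
    listen 80;
    server_name www.yourWebdomain.com;
    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
    }
}
EOF
# Check the config and reload nginx
sudo nginx -t && sudo systemctl restart nginx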
This is a very loaded question, and you'd probably get a better response with a more direct question and an example. That said, if your app is containerized you can use ECS or Elastic Beanstalk. If your application is stateless (you don't need local disk storage that persists between restarts of your application; you can still use a database or other services for storage), the easiest is probably ECS using Fargate tasks.
There are numerous blogs and tutorials online because, as you've already said, there are many different types of applications and many deployment and configuration options. Start reading some blogs and the docs for using Docker Compose to deploy containers to AWS.
A place to get started might be here:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-cli-tutorial-fargate.html
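That tutorial uses the ECS CLI; the overall flow looks roughly like this (cluster, config, and project names are placeholders, and Fargate deployments also need an ecs-params.yml with VPC/subnet details, which the tutorial covers):
# Point the ECS CLI at a Fargate cluster configuration
ecs-cli configure --cluster my-cluster --default-launch-type FARGATE --region us-east-1 --config-name my-config
# Create the cluster resources
ecs-cli up --cluster-config my-config
# Deploy the services defined in docker-compose.yml as Fargate tasks
ecs-cli compose --project-name my-app service up --cluster-config my-config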
Edit:
I see you cross-posted here: https://github.com/jsbroks/coco-annotator/issues/231. The OP there said they used a VM; in AWS this is an EC2 instance. The getting-started guide https://github.com/jsbroks/coco-annotator/wiki/Getting-Started#dedicated-servervps-setup also says this requires a dedicated server or VPS. That also points to EC2. If you want to use ECS or Beanstalk, you need to deploy a container. I don't know if this app supports running in a container, and if you want to pursue that route, your best place to ask is probably in that project's community, not SO.
Closed. This question is opinion-based. It is not currently accepting answers. Closed 6 years ago.
What is the difference between provisioning using AWS CloudFormation UserData vs. Ansible?
I know that Puppet, for example, enforces provisioning rules even when a change is made (it changes things back to reflect the manifest).
But are there more differences worth taking into consideration?
To clarify, "UserData" is part of an EC2 instance, not part of CloudFormation itself. EC2 instances can be launched with User Data, which can be used by the AMI to perform dynamic operations on startup. If CloudFormation is used to launch an EC2 instance, it can provide User Data to the EC2 instance by setting the UserData property on the AWS::EC2::Instance Resource.
Typically, user data is processed by Cloud-Init and formatted as a simple User-Data Script, which is just a shell script that gets invoked on the instance when it is first launched.
That said, 'Shell script vs. Ansible' is an apples-to-oranges comparison. Whether or not Ansible is the appropriate software for your use-case depends on whether you need to use the extra layers of abstraction built into Ansible versus a standard shell script to provision what's needed on your instance. Read the Ansible Documentation and decide for yourself.
It is worth mentioning that aside from the normal 'push' method of running Ansible to provision your instance via SSH, you can also run Ansible in an inverted, 'Ansible-pull' mode, using a User-Data Script to perform the initial 'bootstrap' installation on the EC2 instance.
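A minimal sketch of that bootstrap pattern, assuming an Ubuntu/Debian AMI; the repository URL and playbook name are placeholders:
#!/bin/bash
# EC2 User-Data Script: run once by Cloud-Init on first boot
# Install Ansible and git (package names depend on the AMI)
apt-get update -y
apt-get install -y ansible git
# Pull the playbook repository and apply it locally on this instance
ansible-pull -U https://github.com/your-org/your-playbooks.git local.yml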
The short answer is: use CloudFormation or Terraform.
Ansible is a configuration management tool for many different purposes. The most significant difference from many other tools is that it works in a push mode, so there are no agents on the remote servers polling for changes.
It is great when it comes to installing packages, creating files, and so on.
CloudFormation is designed to create AWS environments. This is good if you only use Amazon and nothing else.
Ansible can do the job, but I would recommend using a tool like CloudFormation or Terraform. The Ansible modules for this are OK, but tools like Terraform focus on creating environments and are much smarter at that job.
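If you do go the CloudFormation route, deploying a template from the CLI is a single command; the template file and stack name below are placeholders:
# Create or update a stack from a local template
aws cloudformation deploy --template-file environment.yml --stack-name my-environment --capabilities CAPABILITY_IAM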
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 6 years ago.
I have an AWS EC2 instance that I would like various relevant people to be able to stop and start. In a perfect world I would like a really simple way for a select handful of people to stop and start an EC2 instance without giving them too many permissions. If I could make it so they just click one button to do it, that would be perfect.
Starting/Stopping an Amazon EC2 instance can be done via the:
AWS Management Console
AWS Command-Line Interface (CLI)
AWS SDK for many popular programming languages
The important thing to realize is that users do not have to issue the stop/start command themselves! They can use an in-between system that makes the call for them.
For example, if you have an internal intranet, you could configure some code to start/stop instances when a user requests it via the website. The website would then issue the command to AWS (via the CLI or SDK), without the users themselves requiring any special access credentials (they just need access to your internal website).
This is similar to your "just click 1 button" idea, with the button being on your intranet.
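Whatever front end you put on it, the underlying call is the same. For example, via the CLI (the instance ID is a placeholder), and the in-between system only needs IAM permissions for ec2:StartInstances and ec2:StopInstances, optionally restricted to that one instance:
# Start or stop a specific instance
aws ec2 start-instances --instance-ids i-0123456789abcdef0
aws ec2 stop-instances --instance-ids i-0123456789abcdef0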
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 8 years ago.
We are getting ready to deploy a new app in the Amazon cloud, using EC2, RDS, and Elastic Load Balancers. RDS would be sharded. Looking at the difficulty of managing and monitoring anything beyond a few servers, one can see how quickly the task could become pretty crazy. Amazon's interfaces allow you to do all of this, but we would have to script it all ourselves.
I was wondering what others have done. There is RightScale for managed solutions. Has anyone found any other companies, or open-source frameworks, that do this kind of thing? We are looking at:
Monitoring EC2, load balancers, RDS.
Spinning up new instances of the above automatically on predefined load levels.
Sending alerts and taking resources offline automatically when thresholds are crossed.
Promoting new software/upgrades in PHP and MySQL.
Taking numbers of servers offline for maintenance/troubleshooting.
Any thoughts would be much appreciated.
The type of services you are looking for - automated provisioning, scaling in/out, and monitoring - is generally referred to as PaaS (Platform as a Service). The idea is that you submit your application to the PaaS system and it manages the complete life-cycle of your application.
There are several PaaS providers available that might fit your needs. There's a comparison available here: Looking for PaaS providers recommendations
You should consider your requirements carefully and see which provider is right for you in terms of:
Cloud Support: Do you need just EC2 or maybe additional clouds?
Language support: Some providers target specific coding frameworks and languages
Support
Pricing
Open/Closed source
Disclaimer: I work for GigaSpaces, developer of the Cloudify open-source PaaS Stack.
You could have a look at Scalr. They offer these services on their own platform, but you can also download the software they're using and set it up on your own.
After Amazon EC2 they started expanding into other cloud services as well, so you can run your Scalr-managed instances on practically all major cloud providers.
It is very feature rich, but so far I haven't tested it myself.
You could try Xervmon. They offer an integrated suite of cloud management tools to deploy, manage, and monitor Amazon AWS along with several other providers. They offer managed services as well.