AWS: need guidance to deploy my Django project

I have a Django web app that I am planning to deploy on AWS.
I am using Celery with RabbitMQ as the queue manager for my application.
I have read about the AWS services and I see two options:
1) AWS Elastic Beanstalk, or
2) Create a Linux EC2 instance and install PostgreSQL, Celery, RabbitMQ, etc. myself.
So which is the better option to use?

AWS EC2 is always a better option as it gives you complete access to the OS and direct control over the data storage. This will help you manage your application in a much more efficient way. Also, an EC2 instance can host not just a single application but as many applications as you require (depending on the capacity/instance type of the server). It also lets you tweak the web server/proxy configuration.
With Beanstalk you do not get the same level of control; you have to manage your application within the options the service exposes.
To summarise:
If you want complete control of your application - use EC2.
If you are looking for a managed service where not much control is required, you can opt for Beanstalk. Personally, I would like to have entire control over my application ;)
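Whichever option you pick, the Celery side of the stack is wired the same way: you point Celery at the RabbitMQ broker running on (or reachable from) your instance. A minimal sketch, assuming a hypothetical project called myproject and RabbitMQ on the same host (adjust the broker URL to wherever RabbitMQ actually lives):

# myproject/celery.py - minimal Celery wiring for a Django project
# (hypothetical project name; RabbitMQ assumed to run on the same EC2 box)
import os
from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

app = Celery(
    "myproject",
    broker="amqp://guest:guest@localhost:5672//",  # RabbitMQ broker URL
)
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()  # pick up tasks.py modules from installed Django apps

You would then run the worker next to your web process, e.g. celery -A myproject worker, typically supervised by systemd or supervisord on the EC2 instance.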

Related

Spring Boot/Cloud microservices on AWS

I have created a Spring Cloud microservices-based application with the Netflix APIs (Eureka, Config, Zuul, etc.). Can someone explain how to deploy it on AWS? I am very new to AWS and have to deploy a development instance of my application.
Do I need to integrate Docker before that, or can I go ahead without Docker as well?
As long as your application is self-contained and you have externalised your configuration, you should not have any issues.
Go through this link, which discusses what it takes to deploy an app to the cloud: Beyond 15 factor
Use AWS Elastic Beanstalk to deploy and manage your application. Dockerizing your app is not a prerequisite for deploying it to AWS.
If you use an EC2 instance then its configuration is no different from what you do on your local machine/server. It's just a virtual machine. No need to dockerize or anything like that. And since you're new to AWS, I'd suggest doing just that. Once you get your head around it, you can explore other options.
For example, AWS Beanstalk seems like a popular option. It provides a very secure and reliable configuration out of the box with no effort on your part. And yes, it does use Docker under the hood, but you won't need to deal with it directly unless you choose to, at least in most common cases. It supports a few different ways of deployment, which Amazon calls "Application Environments". See here for details. Just choose the one you like and follow the instructions. I'd like to warn you, though, that while Beanstalk is usually easier than EC2 to set up and use for a typical web application, your mileage may vary depending on your application's actual needs.
Amazon Elastic Container Service (ECS) / Elastic Kubernetes Service (EKS) is also a good option to look into.
These services run from Docker images of your application. Auto scaling and availability/cross-region replication can be taken care of by the cloud provider.
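As a rough illustration of what an ECS deployment involves (a sketch only, using boto3, the AWS SDK for Python; the cluster, image and service names below are made up), you register a task definition for your Docker image and run it as a service:

# Hypothetical sketch: register a task definition for a containerised app and
# run it as an ECS service. All names and the image URI are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

task_def = ecs.register_task_definition(
    family="spring-app",
    containerDefinitions=[{
        "name": "spring-app",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/spring-app:latest",
        "memory": 512,
        "portMappings": [{"containerPort": 8080, "hostPort": 80}],
        "essential": True,
    }],
)

ecs.create_service(
    cluster="dev-cluster",  # assumed to already exist
    serviceName="spring-app-service",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    desiredCount=1,
)

EKS works along the same lines, except the service and scaling pieces are described as Kubernetes manifests instead.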
Hope this helps.

Choosing the right AWS Services and software tools

I'm developing a prototype IoT application which does the following:
Receives/stores data from sensors.
Provides a web application with a web-based IDE for users to deploy simple JavaScript/Python scripts, which get executed in Docker containers.
Data from the sensors gets streamed to these containers.
User programs can use this data to do analytics, monitoring etc.
The logs of these programs are shown to the user in the web app.
Current Architecture and Services
Using one AWS EC2 instance. I chose EC2 because I was trying to figure out the architecture.
The stack is Node.js, RabbitMQ, Express, MySQL, MongoDB and Docker.
I'm not interested in using AWS IoT services like AWS IoT and Greengrass
I've ruled out Heroku since I'm using other AWS services.
Questions and Concerns
My goal is prototype development for a Beta release to a set of 50 users
(hopefully someone else will help/work on a production release)
As far as possible, I don't want to spend a lot of time migrating between services since developing the product is key. Should I stick with EC2 or move to Beanstalk?
If I stick with EC2, what is the best way to handle small-medium traffic? Use one large EC2 machine or many small micro instances?
What is a good way to manage containers? Is it worth it to use Swarm for container management? What if I have to use multiple instances?
I also have small scripts which hold status information about the sensors that is needed by the web app and other services. If I move to multiple instances, how can I make these scripts available to multiple machines?
The above question also applies to servers, message buses, databases, etc.
My goal is certainly not production release. I want to complete the product, show I have users who are interested and of course, show that the product works!
Any help in this regard will be really appreciated!
If you want to manage Docker containers with the least hassle on AWS, you can use the Amazon ECS service to deploy your containers, or else go with Beanstalk. Also, you don't need to use Swarm on AWS; ECS will do that job for you.
It's always better to scale out rather than scale up, using small to medium sized EC2 instances. However, the challenge you will face here is managing and scaling the underlying EC2 instances as well as your Docker containers. This can push you towards large EC2 instances, to keep EC2 scaling aside and focus on Docker scaling (which will add additional cost for you).
Another alternative you can use for the web application part is the AWS Lambda and API Gateway stack with the Serverless Framework, which needs the least operational overhead and comes with DevOps tooling.
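To give a feel for how small that can be, here is a sketch of an API Gateway-backed Lambda handler (the function and route parameter are invented for illustration, the Serverless Framework or a similar tool would wire the HTTP route to it, and an equivalent exists for the Node.js stack mentioned above):

# Hypothetical Lambda handler behind API Gateway (proxy integration).
import json

def handler(event, context):
    # 'event' carries the HTTP request when invoked via API Gateway
    sensor_id = (event.get("pathParameters") or {}).get("sensorId", "unknown")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"sensorId": sensor_id, "status": "ok"}),
    }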
You may keep your web app on Heroku and run your IoT server on AWS EC2 or AWS Lambda. Heroku runs on AWS itself, so this split setup will not affect performance. You can ease the inconvenience of "sitting on two chairs" by writing a Terraform script which provisions both the EC2 instance and the Heroku app and ties them together.
Alternatively, you can use the Dockhero add-on to run your IoT server in a Docker container alongside your Heroku app.
PS: I'm a Dockhero maintainer.

AWS Elastic Beanstalk - Using MongoDB instead of RDS in a Python and Django environment

I've been following the official Amazon documentation on deploying to Elastic Beanstalk:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python.html
and on customizing the environment:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers.html#customize-containers-format
However, I am stuck. I do not want to use the built-in RDS database; I want to use MongoDB, but have my Django/Python application scale as a RESTful frontend, or rather an API endpoint, for my users.
Currently I am running one EC2 instance to test out my django application.
Some problems that I have with Elastic Beanstalk:
1. I cannot figure out how to run commands such as
pip install git+https://github.com/django-nonrel/django#nonrel-1.5
Since I cannot install the MongoDB driver for use by Django, I cannot run my MongoDB commands.
I was wondering if I am just skipping over some concepts or just not understanding how deploying on Beanstalk works. I can see that Beanstalk just launches EC2 instances, and that I possibly need to write custom scripts or something, I don't know.
I've searched around, but I don't exactly know what to ask in regards to this. The top results on Google are always Amazon documents, which are less than helpful for customization outside of their RDS environment. I know that Django traditionally uses RDS-style environments, but again I don't want to use those, as they are not flexible enough for the web application I am writing.
You can create a custom AMI tailored to your specific needs; the steps are outlined in the AWS documentation below. Basically, you would create a custom AMI with the packages needed to host your application and then update the Beanstalk configuration to use your custom AMI.
Using Custom AMIs
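If you prefer to script that last step rather than clicking through the console, Beanstalk exposes the AMI as an environment option setting. A hedged boto3 sketch (the environment name and AMI id are placeholders):

# Hypothetical sketch: point an existing Elastic Beanstalk environment at a
# custom AMI that already has the MongoDB driver / django-nonrel packages baked in.
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

eb.update_environment(
    EnvironmentName="my-django-api-env",  # placeholder environment name
    OptionSettings=[{
        "Namespace": "aws:autoscaling:launchconfiguration",
        "OptionName": "ImageId",
        "Value": "ami-0123456789abcdef0",  # placeholder custom AMI id
    }],
)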

Efficient way to administer or manage auto-scaling instances in AWS

As a sysadmin, I'm looking for an efficient way, or best practices, for managing EC2 instances with auto-scaling.
How do you manage/automate the following scenarios? (Our environment is running with auto-scaling, Elastic Load Balancing and CloudWatch.)
patching the latest versions of the server's rpm packages for security reasons? like (yum update/upgrade)
making a configuration change to the Apache server, like a change to httpd.conf, and applying it to all instances in the auto-scaling group?
how do you deploy the latest code for your app to the servers with the least disruption in production?
how do you use Puppet or Chef to automate your admin tasks?
I would really appreciate it if you have anything to share on how you automate your administration tasks with AWS.
Check out Amazon OpsWorks, the new Chef-based DevOps tool for Amazon Web Services.
It gives you the ability to run custom Chef recipes on your instances in the different layers (Load Balancer, App servers, DB...), as well as to manage the deployment of your app from various source repositories (Git, Subversion..).
It supports auto-scaling based on load (like the auto-scaling that you are already using), as well as auto-scaling based on time, which is more complex to achieve with standard EC2 auto-scaling.
This is a relatively young service and not all functionality is available yet, but it might be useful for you.
patching the latest versions of the server's rpm packages for security reasons? like (yum update/upgrade)
You can use Puppet or Chef to create a cron job that takes care of this for you (in its most basic form, the cron job would download and/or install updates via a bash script). You may want to upgrade automatically, or simply notify an admin via email so you can evaluate the updates before applying them.
making a configuration change to the Apache server, like a change to httpd.conf, and applying it to all instances in the auto-scaling group?
I usually handle all of my configuration files through my Puppet manifests. You could set up each EC2 instance to pull updates from a Puppet server, then roll out changes on demand. Part of this process should be updating the AMI stored in your Auto Scaling group's launch configuration (this can be done with the Amazon command line tools or API).
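For the "update the AMI in your Auto Scaling group" part, the idea is to create a new launch configuration from the freshly baked AMI and switch the group over to it. A rough boto3 sketch with placeholder names:

# Hypothetical sketch: roll a new AMI into an existing Auto Scaling group by
# creating a fresh launch configuration and pointing the group at it.
import boto3

asg = boto3.client("autoscaling", region_name="us-east-1")

asg.create_launch_configuration(
    LaunchConfigurationName="web-lc-v2",  # new, versioned name (placeholder)
    ImageId="ami-0123456789abcdef0",      # the newly baked AMI (placeholder)
    InstanceType="t3.small",
    SecurityGroups=["sg-0123456789abcdef0"],
    KeyName="ops-key",
)

asg.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",       # existing group (placeholder)
    LaunchConfigurationName="web-lc-v2",
)

New instances launched by the group then come up from the updated AMI, and the existing instances can be cycled out gradually.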
how do you deploy the latest code for your app to the servers with the least disruption in production?
Test it in staging first! Also, a neat trick is to use versioned deployments: each deployment gets its own folder (/var/www/v1, /var/www/v2, etc.), and once you have verified the deployment was successful you simply update a symlink to point to the latest version (/var/www/current points to /var/www/v2).
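The symlink flip itself can be made atomic so that a request never sees a half-switched document root; a small sketch of the idea, using the example paths above:

# Sketch of an atomic "current" symlink switch for versioned deployments.
# /var/www/v1, /var/www/v2, ... are release folders; /var/www/current is live.
import os

def activate_release(release_dir, current_link="/var/www/current"):
    tmp_link = current_link + ".tmp"
    if os.path.lexists(tmp_link):
        os.remove(tmp_link)
    os.symlink(release_dir, tmp_link)   # build the new link next to the old one
    os.replace(tmp_link, current_link)  # atomically rename over the live symlink

# activate_release("/var/www/v2")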
OpsWorks handles all this sort of stuff for you so you can look into that if you don't want to do it all yourself.
how do you use Puppet or Chef to automate your admin tasks?
You can use Chef or Puppet to do all sorts of things, and anything they can't (or you don't know how to) do can be done via a bash/python script that you invoke from Chef or Puppet.
I normally do things like installing packages, building custom packages, setting permissions, downloading things, starting services, managing configuration files, setting up cron jobs, etc.
I would really appreciate it if you have anything to share on how you automate your administration tasks with AWS.
Look into CloudFormation. This can help you set up all your servers and related services (think EC2, ELBs, CloudWatch) through configuration files, thus helping you to automate your entire stack (not just the EC2 instances' operating system).
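To give a flavour of what "through configuration files" means: you describe the resources in a template and create the stack from it. A bare-bones, hypothetical boto3 sketch (the template here only declares a single EC2 instance):

# Hypothetical sketch: create a CloudFormation stack from a minimal template.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
                "InstanceType": "t3.micro",
            },
        },
    },
}

cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.create_stack(
    StackName="web-stack",
    TemplateBody=json.dumps(template),
)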

Drupal with Amazon Web Services?

I'm not sure if this is the right place to ask, but this is the only site I know of where I get my questions answered... anyway:
I want to install Drupal, but where should I host it? Can Amazon Web Services host such an application? Do I need to go somewhere else to host it? I do have an account with inmotionhosting, but I was thinking, if Amazon does the job, why not just use it? Any thoughts and opinions?
You can install Drupal on AWS EC2 if you have sysadmin experience. Otherwise you will need to use a managed platform, like Cloudways, for that. Configuring web servers like Apache and Nginx, caches like Varnish and Memcached, and other features on AWS is a little difficult. Many managed platforms have those features available out of the box, so you don't have to configure anything or go through the long process of installing the application on AWS yourself.
Amazon Web Services (AWS) will host Drupal no problem.
The service you're looking for is Amazon Elastic Compute Cloud (Amazon EC2). It's pretty much equivalent to a private server with which you can do almost whatever you want (Web hosting included). The downside is that you have to do all the setup yourself.
If you don't know how to install Apache or configure your own Linux machine, you'd probably be better off with managed hosting where they'll set everything up for you.
You can also just use AWS CloudFormation to set up your Drupal environment. It's a service that is part of AWS and will set up your stack for you. You may still need to know how to handle your config files, but at least you do not have to install the DB, Apache, etc. all manually.
http://aws.amazon.com/cloudformation/
Bitnami provides a free (Apache-licensed) pre-built Drupal image for AWS that you can launch easily. It is great for quickly testing something, but if you choose the right instance type for your expected load, it also works for production (disclaimer: I am a cofounder of Bitnami, though as I mentioned the image is open source).
Drupal can be deployed and hosted automatically on the Jelastic PaaS. You won't need to configure it from scratch. And if you wish to apply some custom settings during installation, you can also easily install it manually. Both variants are described in the guide.
As a result, you'll get automatic scaling, pay-per-use pricing, management via an intuitive UI, a wide choice of local service providers from different countries, and other options to run your Drupal installation effectively.