Amazon Elastic Compute Cloud - amazon-web-services

I have created an Ubuntu 32-bit instance and installed Python packages on it.
My code is running fine on it.
Now I need to create another instance exactly the same as this running instance, but my main concern is that the two instances should not share the database or MySQL.
Can I install a separate MySQL on each, or is there some other way out?

To launch additional EC2 instances based on one you've already created, go to the EC2 dashboard in your account, view your instances, select the one you want to clone, and from the ACTIONS menu select "Launch more like this." Your new server request will kick off, and you can change any parameters you want in the process (such as putting your new instance into a different AZ, etc.).
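If you prefer to script this rather than click through the console, a rough CLI equivalent is sketched below (the AMI, key pair, and security group IDs are placeholders for your own values):

    # Launch one more instance with the same basic launch configuration
    # (hypothetical IDs -- substitute your own AMI, key pair, and security group)
    aws ec2 run-instances \
        --image-id ami-0123456789abcdef0 \
        --instance-type t2.micro \
        --key-name my-key-pair \
        --security-group-ids sg-0123456789abcdef0 \
        --count 1

Note that "Launch more like this" copies the launch configuration, not the disk contents; to carry your installed packages over to the clone, create an AMI from the running instance first (aws ec2 create-image) and launch from that AMI instead.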
For running MySQL separately, you have a couple of good/easy options here:
If the MySQL engine is already installed on the first server (the one you're cloning), you can run a second engine on the new server, so each instance has its own MySQL -- entirely independent of the other.
Alternatively, you could spin up a MySQL-flavored RDS instance and simply run two databases on it, one for each of your two EC2 servers. That would take the MySQL overhead off of your EC2s, give you one place to manage your DBs, and would probably be less of a management hassle in the long run.
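For the RDS route, a minimal sketch of creating a MySQL-flavored instance from the CLI (identifier, credentials, and sizing below are illustrative placeholders):

    # Create a small MySQL RDS instance (placeholder name and credentials)
    aws rds create-db-instance \
        --db-instance-identifier shared-mysql \
        --db-instance-class db.t3.micro \
        --engine mysql \
        --master-username admin \
        --master-user-password 'change-me' \
        --allocated-storage 20

Each EC2 server then connects to the same endpoint but to its own database (e.g. CREATE DATABASE app1; and CREATE DATABASE app2;), so the data stays fully separate.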

Related

Is it possible to set up auto-scaling so that it always duplicates the most recent version of your main server?

I know that you can create an image of your server as-is and set up auto-scaling on that, but what if I then make changes to my original server? Do I have to make another snapshot of it and set up auto-scaling again?
There are two approaches for configuring a server:
Creating an Amazon Machine Image (AMI) with all software fully configured, or
Having the instance configure itself via a startup script triggered via User Data
A fully-configured AMI is fast to start up, whereas a configuration script can take several minutes to configure the instance before it is ready to accept traffic.
In general, it is not considered good practice to "make changes to my original server" because there is no concept of an "original server". All instances are considered equal. Instead, the configuration should be created and tested on development servers separate from any production servers, and then released by deploying new servers or by having an 'update' script intelligently run on existing servers. Some of these capabilities are provided by AWS CodeDeploy.
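As a sketch of the second approach: a User Data script is just a shell script that runs once, as root, on first boot, so an Auto Scaling group using it always comes up with the latest code (at the cost of a slower boot). The repo URL and install script below are hypothetical:

    #!/bin/bash
    # Runs once, as root, on first boot when supplied as EC2 User Data
    apt-get update -y
    apt-get install -y git nginx
    # hypothetical repo URL -- pull the latest application code at boot
    git clone https://example.com/myorg/myapp.git /opt/myapp
    /opt/myapp/install.sh   # hypothetical install/start script inside the repo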

Accessing a database inside an EC2 instance

Is it possible to create a database server (MySQL or PostgreSQL) inside an EC2 instance (running Windows 2016) and access it the way we access an RDS instance, or do I need a separate RDS instance for that purpose?
My plan was to have an EC2 instance serving some Windows applications to my (small) company, as well as an always-available database to store our reports.
Please let me know if I am on the wrong path.
Yes, you can install MySQL or PostgreSQL on an EC2 instance, just like you would for a server that was within your company.
You of course won't have all of the extra redundancy/backup features that RDS provides, unless you start adding all of that yourself (automated backups, master/slave configurations, read replicas, etc.) -- and if you do start adding all of those extra features, I would reconsider your decision not to use RDS.
I do this for some smaller, less mission-critical solutions I support, and generally have not had many issues; I still prefer RDS when possible, but it's not always an option for me.
You can also install and configure the database on Windows and access it from your app. The endpoint will be the Windows machine's IP and the port the service runs on. You will have to allow access to that port from your application in the instance's security group.
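For example, to open the default MySQL port to your application servers' security group, a minimal sketch (both group IDs are placeholders):

    # Allow instances in the app security group to reach MySQL on the DB instance
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0db0000000000000a \
        --protocol tcp \
        --port 3306 \
        --source-group sg-0app000000000000b

On Windows you may also need to open the same port in Windows Firewall on the instance itself.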

Build system when using an auto scaling group with ELB in AWS

I was using a free-tier AWS account in which I had one EC2 machine (Linux). I have a simple website with the backend running on Django on port 8000, and the frontend written in Angular and served over HTTP on port 80. I used nginx for HTTPS and for routing calls to the backend and frontend servers.
Now, for the backend build system, I did these 3 main steps (which I automated by running Jenkins on the same machine); a rough script version follows the list:
1) git pull (pull the latest code from the repo).
2) Run migrations (update my DB with any new tables).
3) Restart the Django server (I was using gunicorn).
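Roughly, the Jenkins job amounted to something like this (paths and service names are illustrative):

    #!/bin/bash
    set -e
    cd /srv/myapp                         # hypothetical checkout location
    git pull origin master                # 1) pull the latest code
    python manage.py migrate --noinput    # 2) apply any new migrations
    sudo systemctl restart gunicorn       # 3) restart the gunicorn service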
Now I have split my frontend and backend servers onto 2 different machines using auto-scaling groups, and I am using an ELB (AWS Elastic Load Balancer) to route the requests. I am done with the setup, but now I am having a problem with continuous deployment. The main thing is that the ELB uses auto-scaling groups, which in turn use an AMI.
Now, since AMIs are created once, my first question is how to automate this process and deploy my latest code to the already running AWS servers.
Second, if I want to run a few steps just once for all the servers, like my second step of updating the DB with new tables, how do I achieve that?
Third, if these steps need to run on a machine, do I need another EC2 instance to automate the process of creating the AMI, updating the auto-scaling groups with it, and then deploying the latest code to them?
So, basically, I want to know the best practices people follow for deploying the latest code to AWS machines that were created by auto-scaling groups from an AMI. I also use Bitbucket for code management.
First question: how to automate package-based deployment.
Instead of creating a new AMI for every release, create a baseline AMI which only changes when your new release requires OS changes, security patches, etc. Look into tools such as Packer to create AMIs automatically. To automate your code deployment when it changes, you can use a package-based deployment approach: create a package for every release (this should be part of your CI process) and store it in a repository such as Nexus, Artifactory, or even a simple S3 bucket.
When you deploy a new instance of your application, it should run some sort of script to pull and unpack/install that package on the instance. This is the basic concept; there are many tools that can help you achieve it, for example Chef or AWS CloudFormation.
So essentially, step 1 should pull the code, create the package, and store it in some repository available to your application servers; this can be done offline.
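As a sketch, the per-instance deploy step could be as simple as the following (the bucket, package name, and service name are placeholders):

    #!/bin/bash
    set -e
    # Pull the release package from an S3 artifact bucket (placeholder names)
    aws s3 cp s3://my-artifact-bucket/myapp-1.4.2.tar.gz /tmp/
    mkdir -p /srv/myapp
    tar -xzf /tmp/myapp-1.4.2.tar.gz -C /srv/myapp
    sudo systemctl restart gunicorn

Run this from User Data (or your configuration tool) and every instance the auto-scaling group launches installs the current release on boot.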
Second question: how to run other tasks, such as updating the database schema.
As mentioned above, this can also be part of your deployment automation. Whether you use Chef or even a simple bash script, it can update the database schema before unpacking the new code; how exactly depends on your database, how you manage it, and who orchestrates the deployment.
For example, you could have a Jenkins job that pulls the new schema and updates your database whenever you roll out a release.
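The key point is that schema changes should run once per release, not once per instance; for a Django app, that could be a dedicated Jenkins stage along these lines (a sketch, assuming the job has access to the shared database):

    # Run from Jenkins against the shared database -- once per release,
    # before the new application package is rolled out to the fleet
    python manage.py migrate --noinput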
Your third question can be solved by Packer: it can spin up an instance, create an AMI from it, and terminate the instance.
Read more about CI/CD and CI/CD-related tools.

Dealing with AWS Elastic Beanstalk Multi-container databases and persistent storage

I'm new to Elastic Beanstalk, EC2, and Docker, and have spent the last couple of weeks researching and playing around with them. I have a few questions that I'm finding difficult to find answers to elsewhere.
One thing I like is that I am able to run eb local run to boot a local environment of what will be running in production. This seems to work well until it comes to databases.
1) As far as I understand Elastic Beanstalk spawns instances running the containers inside, which could result in having multiple databases if Elastic Beanstalk spawns multiple instances? Is this correct?
2) Is it better to use AWS RDS in production and then have an external database container locally?
3) In terms of persisting data, I read that EBS can only mount to one EC2 instance, how do people handle storing user files, or do they have their application push to a service such as S3 directly?
I don't know if this is stated anywhere, but I am fairly sure AWS does not intend for you to use EB's multi-container setup to run databases or anything else that should run only once in your system. As their examples show, it is there to give you better control over what the front-end server will be.
If you want to run databases, or store files, you should either move to AWS ECS, where you can better control this, or use multiple EB environments (e.g. create a worker-tier, single-instance environment just for running the database).
One thing I like is that I am able to run eb local run to boot a local environment of what will be running in production. This seems to work well until it comes to databases.
I have not used eb local run; instead I use docker-compose, which lets me run a proper environment locally, including my databases. Yes, you may need to duplicate some information between the docker-compose file and the Dockerrun file, but once you set it up, you will see how powerful it is. Because you are still sharing the Dockerfiles, you can assume things will run in a similar enough way once deployed.
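As a sketch, a minimal docker-compose.yml along these lines gives you the app plus a throwaway local database (image names, ports, and credentials are illustrative, and DATABASE_URL is a hypothetical variable your app would read):

    version: "3"
    services:
      web:
        build: .
        ports:
          - "8000:8000"
        depends_on:
          - db
        environment:
          DATABASE_URL: mysql://app:secret@db/appdb   # hypothetical env var read by the app
      db:
        image: mysql:5.7
        environment:
          MYSQL_DATABASE: appdb
          MYSQL_USER: app
          MYSQL_PASSWORD: secret
          MYSQL_ROOT_PASSWORD: secret

In production you point the same variable at RDS instead, and the db service simply is not deployed.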
1) As far as I understand Elastic Beanstalk spawns instances running the containers inside, which could result in having multiple databases if Elastic Beanstalk spawns multiple instances? Is this correct?
Yes, I think that is correct. EB assumes you will use RDS or DynamoDB or something else that is already centralized and managed.
2) Is it better to use AWS RDS in production and then have an external database container locally?
Yes. And by the way, rather than having EB manage the creation of the database, I find it better practice to instantiate it manually, so that it stays persistent after you kill your EB environments.
3) In terms of persisting data, I read that EBS can only mount to one EC2 instance, how do people handle storing user files, or do they have their application push to a service such as S3 directly?
Yes, using S3 is the way to go for multiple reasons, but mostly because AWS manages it and it scales without you having to worry about it. In fact, you want your client to GET or even POST the files directly to S3 so your server does not have to do any work (note that the server may need to sign the URL, but that is about it).
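For example, the AWS CLI can produce a time-limited download URL for a private object; your server would do the equivalent with the SDK (the bucket and key here are placeholders):

    # Generate a URL that lets a client GET a private object for one hour
    aws s3 presign s3://my-user-files/reports/2023-01.pdf --expires-in 3600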
If you really have an issue with S3 (for whatever reason), then you could also (as with the database) create a second, single-instance EB environment with EBS to ensure you have exactly one instance. But compared to the S3 solution it won't scale very far, and it will in fact be much more expensive than using S3.

Same code for AWS and local application

I want to create a Java application using Amazon Web Services, and I also want the ability to run it as a local application, so there will be two versions: Amazon cloud and local. I don't know AWS yet, and I am worried there may be some AWS-specific API or database access that would prevent the app from running locally. I simply do not want to write two separate versions of the app, or at least want to write as little extra code as possible.
Is it possible?
In EC2, you can launch virtual servers (or instances) with root or administrator access. That means your EC2 instances are capable of running almost anything you can run locally.
There are no specific APIs to learn to run Java code on EC2. Just compile and package your code, upload it to your server (using scp/rsync/anything else you might be more used to), then run it with java -jar myapp.jar, after installing Java on the instance. You can also upload the source code directly into your instance and compile it there if you want. It really behaves like a "normal" server.
About database access, again, it works exactly as you would expect: just install your database server on the instance, say, MySQL, and connect to it normally (using JDBC for example). Also, note that there's a service called Relational Database Service (RDS), which simplifies the deployment and management of a database system: you don't have to install your database software, maintain it, upgrade, backup, etc, everything is done for you. You simply specify the name and password of the "master" user, and it gives you back a connection string. (and there's also a "micro" RDS instance which is included in the free tier so that you can start exploring for free!)
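Concretely, on an Ubuntu instance that is just the following (the RDS hostname below is a placeholder endpoint; treat this as a sketch):

    # Install MySQL locally on the EC2 instance...
    sudo apt-get update && sudo apt-get install -y mysql-server
    mysql -u root -p

    # ...or connect to an RDS instance instead -- only the hostname changes
    mysql -h mydb.abc123xyz.us-east-1.rds.amazonaws.com -u admin -p

Your Java code stays the same either way; the JDBC connection string simply points at localhost or at the RDS endpoint.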
Finally, if you don't want to launch and maintain a virtual server by yourself, you could use Elastic Beanstalk, which automates lots of things for you: using the web interface, you simply upload your ".war" file, and Elastic Beanstalk launches an instance for you, installs Java and Tomcat, deploys your application, and monitors it for you -- you get emails in your inbox if anything goes wrong. There are tons of other features included in Elastic Beanstalk, and it is all completely free (you just pay for the servers it launches -- and if you instruct it to launch at most a single t1.micro instance, which is included in the free tier, again you pay nothing!)