Let's say you have a big website, so you need to add an additional instance of an AWS Lightsail server. You take a snapshot and create a new instance from it. The instance hosts Plesk on Ubuntu and has a database and a WordPress site.
Now somebody registers on the website and the created user gets added to the database. Does the user also get added to the additional instance? If not, how can you add the functionality so that when an edit happens on one instance, the other instance also picks up the changes?
I'm just getting started with AWS and would like to find out.
If a user registers on Server 1, it would not reflect on Server 2. Similarly, any posts made on the first site would not show up on the second site. This is because the disk (which holds both the database and any files necessary for posts) is not shared between the two instances, so any changes made to one disk are only reflected on that one server.
Honestly, for hosting WordPress, your best bet is probably to use a dedicated WordPress host so you don't need to worry about scaling.
However, you can do what you're looking to do in AWS as well. You essentially need a shared database, which you can set up either on a third instance or by using Amazon's RDS service. If you have enough load that you need more than one server, I'd highly suggest using RDS instead of trying to run your own database server. Once you have a shared database, posts and users registered on one server will show up on the other. You'll also likely want a load balancer to distribute requests between the two servers (there is a service called "Lightsail Load Balancer" that's meant for this).
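To make "shared" concrete: you'd point WordPress's DB_HOST in wp-config.php at the same RDS endpoint on both instances, and any connection to that endpoint sees the same rows. A minimal Python sketch of that idea (the endpoint, credentials, and database name below are placeholders):

    import pymysql  # pip install pymysql

    # Hypothetical RDS endpoint; both Lightsail instances connect here
    # instead of to their own local MySQL server.
    conn = pymysql.connect(
        host="mydb.abc123.us-east-1.rds.amazonaws.com",
        user="wp_user", password="secret", database="wordpress")

    with conn.cursor() as cur:
        # A user registered through either instance lands in this one
        # table, so the other instance sees it immediately.
        cur.execute("SELECT user_login FROM wp_users "
                    "ORDER BY user_registered DESC LIMIT 5")
        print(cur.fetchall())
    conn.close()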
If you're dealing with a significant request load, you should take a look at AWS's reference architecture for WordPress: https://docs.aws.amazon.com/whitepapers/latest/best-practices-wordpress/reference-architecture.html
Happy Reading!
Related
Total noob question. I want to set up a website on the Google Cloud Platform with:
static IP/IP range (external API requirement)
simple front-end
average to low traffic with a maximum of a few thousand requests a day.
separate database instance.
I went through the documentation of the services offered by Google and Amazon. I'm not fully sure what the best way is to go about it. I understand that there is no single right answer.
A viable solution is:
Spin up an n1-standard instance on GCP (I prefer to use Debian).
Get a static IP, which is free as long as you don't leave it dangling (an unattached reserved IP is billed).
Depending upon your DB type, choose Cloud SQL for structured data or Cloud Datastore for unstructured data (see the connection sketch below).
Nginx is a viable option for the web server.
The rest is up to you. What kind of stack are you using to build your app? How are you going to deploy your code to the instance? You might later want to use Docker and Kubernetes (k8s) to get flexibility between cloud providers and scaling needs.
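As a rough sketch of the Cloud SQL option (the host, credentials, and database name are placeholders, and this assumes a MySQL-flavored Cloud SQL instance reachable from the VM):

    import pymysql  # pip install pymysql

    # Connect from the GCP instance to the Cloud SQL instance.
    # 10.0.0.5 stands in for the instance's IP; Cloud SQL can also be
    # reached through its public IP or the Cloud SQL proxy.
    conn = pymysql.connect(host="10.0.0.5", user="appuser",
                           password="secret", database="appdb")
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        print(cur.fetchone())
    conn.close()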
The easiest way of creating the website you want would be Google App Engine with the Datastore as the DB. However, it doesn't support static IPs; this is a deliberate design choice. Is the static IP absolutely mandatory?
App Engine does not currently provide a way to map static IP addresses to an application. In order to optimize the network path between an end user and an App Engine application, end users on different ISPs or geographic locations might use different IP addresses to access the same App Engine application. DNS might return different IP addresses to access App Engine over time or from different network locations.
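If the static IP turns out not to be a hard requirement, writing to Datastore is straightforward. A minimal sketch using the google-cloud-datastore Python client (the kind name and fields are made up for illustration):

    from google.cloud import datastore  # pip install google-cloud-datastore

    client = datastore.Client()

    # Store one entity of a hypothetical "User" kind.
    entity = datastore.Entity(key=client.key("User"))
    entity.update({"email": "alice@example.com", "active": True})
    client.put(entity)

    # Read entities back with a simple query.
    query = client.query(kind="User")
    print(list(query.fetch(limit=5)))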
I have a semi-popular Django website with postgresql backend where users share photos with one another (approx 3 are shared per minute).
The whole set up is hosted on two separate Azure VMs - one for the web application and one for the database. I use classic VMs, both are part of the same resource group, and map to the same DNS as well (i.e. they both live on xyz.cloudapp.net). I also use Azure blob storage for my images (but not for other static files like the CSS) - for this I've provisioned a storage account.
Since I'm heavily reliant on images and I want to speed up how fast my static content is displayed to my users, I want to get Azure CDN into the mix. I just provisioned one from the portal, making it part of the same resource group as my classic VMs.
Next, I'm trying to add a CDN endpoint. I need help in setting that up:
1) Can a CDN be used with classic VMs, or is it a feature solely for Resource Manager deployments?
2) Given 'yes' to the previous one, when one provisions a CDN endpoint, what should the origin type be? Should it be the cloud service I'm using (i.e. the one under which my VMs fall), OR should it be the Azure storage account which holds all my images? What about other static content (e.g. the CSS), which isn't hosted on Azure blobs?
3) What's the purpose of the optional origin path? Should I specify directories? What happens if I don't?
4) Will I be up and running right after the CDN endpoint is successfully provisioned? Or is there more configuration to come after this? I'm unsure what to expect, and I don't want to disrupt my live service.
Btw, going through the answer here still doesn't comprehensively answer my questions. Reason being:
1) I don't use an Azure web app; I've provisioned virtual machines and done my own setup on Ubuntu.
2) I'm unsure whether I'm supposed to create a new storage account for the CDN, as discussed in this question's answer.
3) Again, not being a web app, should I map the origin type to my blob service URL? The answer seems to say so; however, I also have the option of using my cloud service DNS instead. What happens in each case?
Sounds like you have two origins, a storage account and a VM.
What you need to do here is create two CDN endpoints: one for your pictures in the storage account, and one for the CSS on the VM.
Let's say I created myendpoint1.azureedge.net, using the VM as an origin and I also created myendpoint2.azureedge.net, using the storage account as an origin.
If I access myendpoint1.azureedge.net/Content/css/bootstrap.css, I should get the same content as xyz.cloudapp.net/Content/css/bootstrap.css.
If I access myendpoint2.azureedge.net/myPictureContainer/pic.jpg, I should get the same content as mystorageaccount.blob.core.windows.net/myPictureContainer/pic.jpg.
After all the validation is done, you change your HTML files to reference the CSS from myendpoint1.azureedge.net and the pictures from myendpoint2.azureedge.net, and then you deploy your website. There will be no interruption of the service.
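Since the site is Django, the usual place to make that switch is settings.py. A sketch reusing the hypothetical endpoint names from above:

    # settings.py -- endpoint names reuse the hypothetical examples above.
    # CSS/JS are pulled through the VM-origin endpoint, user images
    # through the storage-origin endpoint.
    STATIC_URL = "https://myendpoint1.azureedge.net/Content/"
    MEDIA_URL = "https://myendpoint2.azureedge.net/myPictureContainer/"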
Also, a CDN can be used with any kind of origin, so yes for classic VMs. The type of the origin doesn't matter; if the URL of your VM/storage is not in any of the dropdown lists, just use the custom origin and enter the correct URL.
I'm trying to determine the "best" way for a small company to keep web app EC2 instances in sync with current files while using autoscaling.
From my research, CloudFormation, Chef, Puppet, OpsWorks, and others seem like the tools to do so. All of them seem to have a decent learning curve, so I am hoping someone can point me in the right direction and I'll learn one.
The initial setup I am after is:
Route53
1x Load Balancer
2x EC2 (different AZ) - Apache/PHP
1x ElastiCache Redis
2x EC2 (different AZ) w/ MySQL
Email thru Google Apps
Customer File/Image Storage via S3
CloudFront for CDN
The only major challenge I can see is versioning/syncing the web/app servers. We're small now, so I could probably just manually update the EBS volumes or even use rsync, but I would rather automate it and be set up for autoscaling.
This is probably too broad of a question and may be closed, but let me give you a few thoughts.
Why not use RDS for MySQL?
You need to get into the mindset of making and promoting disk images. In the cloud world, you don't want to be rsyncing a bunch of files around from server to server. When you are ready to publish a revised set of code, just make an image from your staging environment, start new EC2 instances behind your ELB based on that image, and turn off the old instances. You may have a slightly different deployment sequence if you need to coordinate with DB schema changes, but that is a pretty straightforward approach.
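A rough sketch of that image-promotion flow with boto3 (all IDs and names below are placeholders, and error handling is omitted):

    import boto3  # pip install boto3

    ec2 = boto3.client("ec2")
    elb = boto3.client("elb")  # classic ELB, matching this setup

    # 1. Bake an image from the staging instance.
    image = ec2.create_image(InstanceId="i-0staging123", Name="webapp-v42")
    ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

    # 2. Launch replacement instances from that image.
    res = ec2.run_instances(ImageId=image["ImageId"],
                            InstanceType="t3.small",
                            MinCount=2, MaxCount=2)
    new_ids = [i["InstanceId"] for i in res["Instances"]]

    # 3. Put the new instances behind the load balancer; once they pass
    #    health checks, deregister and terminate the old ones.
    elb.register_instances_with_load_balancer(
        LoadBalancerName="web-elb",
        Instances=[{"InstanceId": i} for i in new_ids])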
You should still seek to automate some of your activities using tools such as those you mentioned. You don't need to do this all at once. Just figure out a manual part in your process that you want to automate and do it.
Here is my scenario.
We have an ELB setup with two reserved instances of EC2 acting as web server under it (Amazon Linux).
There are some rapidly changing files (pdf, xls, jpg, etc) on the web server which are consumed by the websites hosted on the EC2 instances. Code files are identical and we will be sure to update both the servers manually at the same time with new code as and when needed.
The main problem is the user uploaded content which is stored on the EC2 instance.
What is the best approach to make sure that the uploaded files are available on both the servers almost instantly ?
Many people have suggested the use of rsync or unison, but this would involve setting up a cron job. I am looking for something like FileSystemWatcher in C#, which is triggered ONLY when the contents of the specified folder are changed. Moreover, due to the ELB we are not sure which of the EC2 instances will actually be connected to the user when the files are uploaded.
To add to the above, we have one more staging server which pushes certain files to one of the EC2 web servers. We want these files replicated to the other instance too.
I was wondering whether S3 can solve the problem ? Will this setup be still good if we decide to enable auto scaling ?
I am confused at this stage. Please help.
S3 would be the choice for your case. That way, you don't have to sync files between EC2 instances. It is also probably the best choice if you need to enable auto scaling: you should not put any data on the EC2 instances themselves; keep them stateless so that you can easily auto scale.
To use S3, your application has to support it instead of writing directly to the local file system. This should be quite easy; there are libraries in every major language that can help you store files in S3.
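For example, with Python's boto3 (the bucket and key names here are placeholders):

    import boto3  # pip install boto3

    s3 = boto3.client("s3")

    # Instead of saving an upload to local disk, stream it straight to
    # the shared bucket; every instance behind the ELB sees it at once.
    with open("/tmp/report.pdf", "rb") as f:
        s3.upload_fileobj(f, "my-shared-uploads", "reports/report.pdf")

    # Hand out a time-limited URL so clients can fetch the file directly
    # from S3 rather than through the web servers.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-shared-uploads", "Key": "reports/report.pdf"},
        ExpiresIn=3600)
    print(url)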
The startup I'm working for is building a website, and we want to use AWS to run and host our site and the accompanying MySQL database. Apparently, when you terminate an AWS instance, any data stored on it is lost, so we would be keeping the database on an EBS volume. The thing I can't figure out, though, is how to interface things running on these two different platforms. How do I tell the web server where the database is?
Sorry if this is a really noob question. Still trying to grasp how this whole cloud service works.
If I am reading this correctly, your DB is on an EBS volume mounted on the same machine. If that is the case, you have to make sure you tell MySQL (in my.cnf) to point its datadir to the EBS directory.
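For example, assuming the EBS volume is mounted at /mnt/ebs (the mount point is an assumption), the relevant my.cnf stanza would look like:

    [mysqld]
    # Keep MySQL's data files on the EBS volume so they survive
    # instance termination.
    datadir = /mnt/ebs/mysql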
The rest is as usual: your host is localhost, plus your usual user credentials.
BTW, there is one more option from Amazon for the DB: RDS (http://aws.amazon.com/rds/), which provides lots of functionality and advantages. Take a look at it.