Load balancing question - ColdFusion

Just a quick question.
If I wanted to set up load balancing for my Railo / ColdFusion application with the fewest points of failure, which of the following two setups would be better, and why? Or would they be the same?
01) Set up 4 VPS instances, each with 1 GB RAM
OR
02) Set up 1 dedicated box with 4 GB RAM running 4 Railo / CF instances, each allocated 1 GB RAM
Thanks

If you have 1 dedicated box then you have ONE single point of failure.
Luis

You need to analyze the application(s) you intend to run on this cluster and evaluate whether there's a need to easily reallocate resources (CPU and RAM) to a particular server. This type of need arises when you want to direct all traffic from a certain place or type to specific server(s). Examples include search indexers, landing pages of marketing campaigns, and splitting traffic by region using IP address. In each case you might want to give the server receiving this traffic more resources, and since resources are finite, that comes at the expense of others in the cluster.
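To make that last example concrete, here is a minimal sketch (Python, with made-up network ranges and backend addresses) of the routing decision itself: traffic from certain IP ranges goes to one dedicated, better-resourced server, and everything else round-robins across the rest. In practice this decision would live in your load balancer (nginx, HAProxy, or a cloud LB) rather than in application code.

    import ipaddress
    import itertools

    # Hypothetical backends: one larger server reserved for a campaign or region,
    # plus a general-purpose pool of equally sized servers.
    CAMPAIGN_BACKEND = "10.0.0.10:8080"   # the server given extra RAM/CPU
    GENERAL_POOL = itertools.cycle(["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"])

    # Made-up CIDR blocks standing in for "traffic from a certain place or type".
    CAMPAIGN_NETWORKS = [ipaddress.ip_network("203.0.113.0/24"),
                         ipaddress.ip_network("198.51.100.0/24")]

    def pick_backend(client_ip: str) -> str:
        """Send special-case traffic to the dedicated server, round-robin the rest."""
        addr = ipaddress.ip_address(client_ip)
        if any(addr in net for net in CAMPAIGN_NETWORKS):
            return CAMPAIGN_BACKEND
        return next(GENERAL_POOL)

    print(pick_backend("203.0.113.42"))   # -> 10.0.0.10:8080
    print(pick_backend("192.0.2.7"))      # -> one of the general pool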


The zone 'projects/*******/zones/northamerica-northeast1-b' does not have enough resources available

I have been unable to restart my VM for 2 hours now; my services are down because of this error:
The zone 'projects/******/zones/northamerica-northeast1-b' does not have enough resources available to fulfill the request. Try a different zone, or try again later.
I can't rely on Google Cloud being down for hours because of resources. What should I do? I can't afford to change zones; it needs to be in Canada. I also can't afford to change the IP; it's behind a DNS record. I just need to restart my VM. My business is down...
What's the issue/solution?
Thank you
I'm glad to see that you solved your issue by trying a different machine type. I was about to suggest trying a different machine type and then checking whether it allowed you to restart your VM.
I also wanted to mention, in case it helps other users, that if trying a non-shared-core machine type or a VM from a different family doesn't help, you can try recreating your VM in a different zone of the same region (I've been using northamerica-northeast1-a without any issue so far).
However, if you want to prevent this from happening at all after a given restart, I recommend creating a reservation to make sure that these resources stay available to you and don't impact your workload/application.
Finally, I found a link that may interest you: Patterns for scalable apps. It discusses how it's best to deploy your app/workload across different zones so that it is more resilient through balancing, and you wouldn't need to change your DNS records every time you need to switch the VM serving the backend.
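As a stopgap while you wait for capacity (and before a reservation or multi-zone setup is in place), a simple retry loop can keep re-issuing the start request for you. This is only a sketch: it assumes the gcloud CLI is installed and authenticated, and the instance name and zone below are placeholders.

    import subprocess
    import time

    # Placeholder values -- replace with your own instance name and zone.
    INSTANCE = "my-vm"
    ZONE = "northamerica-northeast1-b"
    RETRY_DELAY_SECONDS = 300   # wait 5 minutes between attempts

    def try_start() -> bool:
        """Attempt to start the VM once; return True if gcloud reports success."""
        result = subprocess.run(
            ["gcloud", "compute", "instances", "start", INSTANCE, "--zone", ZONE],
            capture_output=True, text=True)
        if result.returncode != 0:
            print("start failed, will retry:", result.stderr.strip())
        return result.returncode == 0

    while not try_start():
        time.sleep(RETRY_DELAY_SECONDS)
    print("instance started")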

What is an instance and how do I convert this to $

You'll have to excuse my ignorance on this one...but honestly, I've had a hard time finding clarity on this. That being said, I'm looking for a non-technical answer...something in layman's terms!
Anyways, I've been playing around building a web app (first time, obviously) and I'm getting to the point where I've started looking into hosting services. A quick Google search and a few blogs later, I thought AWS would be a good place to start, since they give a free year-long trial. I don't care about speedy upstarts or other hosting services, so save your keystrokes on offering other services.
My question is based on the fact that AWS charges "Linux Usage per hour" and they also use this term "instance". Yeah...an "instance" is an "object", which is also above my head (probably the real source of the problem), but that was the extent of what I was able to learn via a Google search. That being said, I don't know how to translate the cost into a ballpark figure. Yes, I can probably use the trial to help monitor predictable costs, but I don't want to go through the effort of learning one hosting company's system just to find out it's not going to work in the end.
OK...so hopefully by now you see where I'm coming from. What is an "instance" and how do I use the "Linux Usage per hour" to estimate cost? Is an instance a server? For example, if I start NGINX, is that an instance? Is it just one instance running NGINX, or does every VPS represent an instance? If I have 100 people calling the server at once, can they fit on one instance? If I start another server, say Apache or Node, does that become another instance? If I connect to a database, is that an instance? Do instances start as needed? Yes, I know, that's more than one question...I'm just trying to express my confusion.
If I'm supposed to choose a pricing model from this list of "Linux Usage per hour" rates, I need to know what they mean by "Linux Usage". If it's based on an "instance", I need to know what that is. So please, in layman's terms, help clear this up. Maybe some examples or analogies, but no deep technical stuff.
This is more a side note, but I was reading this article and it said
For a client needing to run 800 virtual instances, the annual cost of
a private cloud came to below $400,000 vs. somewhere between $800,000
and $1.2 million for public cloud services.
Considering I don't know what an instance is, that kinda made me a bit nervous...WAAAAAAyyyyyy outta my price range! Yes, it's obviously a big company, but can you imagine "hitting the lottery" with an app everyone loves and then, before you know it, AWS hits you with a bill for $1,000,000? Or even worse, your security sucks and someone spawns millions of these "instances"...help allay my paranoia!!
Basically, an instance is a virtual machine, which looks very much like a server. As such it's running an operating system - e.g. Linux - which is capable of running many programs (aka 'processes' or sometimes 'services') at the same time.
To go through your questions (some of the explanations below are not technically accurate, but are hopefully more explanatory for it - if anything is obvious or already known, apologies - I'm trying not to assume any knowledge):
An instance is an object
This definition is coming up in your searches because 'instance' has many definitions in different situations. If you see 'instance' defined as an object, that's from the world of object-oriented programming languages - you define a class in your code (kind of like a 'template'), and then create instances of the class - kind of like real copies of the template.
Amazon borrowed the term to be analogous - because in the 'cloud' world, you can create an AMI (Amazon Machine Image - the template) and then create lots of instances that are copies or clones of that template.
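If a tiny, purely illustrative code example (Python) helps make the class/instance analogy concrete:

    # The class is the template...
    class Server:
        def __init__(self, name, ram_gb):
            self.name = name
            self.ram_gb = ram_gb

    # ...and each object created from it is an instance of that template.
    web1 = Server("web1", 1)   # one instance
    web2 = Server("web2", 1)   # another, independent instance

    # Likewise, an AMI is the template and each EC2 "instance" is a running copy of it.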
Is an instance a server?
In terms of what you can do with it, yes, it's a server.
(Technically it's a virtual server - Amazon runs multiple virtual servers on each physical server.)
how do I use the "Linux Usage per hour" to estimate cost?
Estimate how many hours per month you will have your instance running, multiply that by the cost per hour, and you will have your estimated cost per instance per month.
e.g. - one instance always turned on would be - 24 hrs * 31 days = 744 hours. At $0.013/hr (for a t2.micro) that would be 744 * $0.013 = $9.672/mth.
(And that's the reason the free tier gives you 750 hours of instance time per month.)
Instances come in different types and sizes and each size costs a different amount. If you are not sure what size you need, I'd start with the smallest until you discover you need more - which would be when your program starts running too slowly.
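If it's useful, the arithmetic above boils down to a one-line function. The $0.013/hr figure is just the t2.micro rate quoted in this answer, so check the current pricing page for real numbers:

    def monthly_cost(hourly_rate: float, hours_per_day: float = 24, days: int = 31) -> float:
        """Estimated cost of one instance: hours running per month times the hourly rate."""
        return hourly_rate * hours_per_day * days

    # One t2.micro left on all month at $0.013/hr (the rate quoted above):
    print(round(monthly_cost(0.013), 3))                     # 9.672
    # The same instance running only 8 hours a day:
    print(round(monthly_cost(0.013, hours_per_day=8), 3))    # 3.224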
For example if I start NGINX is that an instance?
Nginx is a program that runs as a daemon - in Linux terms, a program that runs in the background so it's always on. It will be one of the many programs running on the server (aka the instance).
If I have 100 people calling the server at once, can they fit on one instance?
It depends - on how big your instance is, and how efficient the program is that responds to their requests. If you are just getting started learning to program websites, I wouldn't worry about handling 100 people issuing requests to the server all at once just yet - walk before you run :) (Also, even when there are 100 people visiting your website, the odds that all of them issue a request at exactly the same time are low - usually they load a page and read it, and while they're reading it some of the other people are loading other pages, so it all spreads out and you might only have ~10 page requests actively being processed by your server at the same time.)
However, if you have 2,000 people on your site at the same time, you might be processing 200 page requests at once, so by then you do need to have put some thought into performance and scalability.
(Note: these numbers are arbitrary and depend entirely on the type of site and its traffic patterns.)
Generally, most websites pick a mid-level instance size, and then to handle more requests they 'scale out' - create lots of copies of that instance, and allow each instance to handle a portion of the traffic.
If I start another server, say Apache or Node, does that become another instance?
The language to use here would be 'start another service, say Apache or Node' - they are other programs, and your instance will be perfectly fine running nginx, Apache and Node all at the same time. Each will consume some of the resources (e.g. memory and CPU), though, and the more activity they are doing, the faster you will run out of resources and need to get a bigger instance size.
So - no, they don't automatically become another instance. The language is confusing because sometimes people don't distinguish between the 'server' (aka the instance) and the service (aka the program), and will say 'the Apache server' and 'the Apache service' interchangeably.
If I connect to a database, is that an instance?
Your instance, as a fully capable server, could run a database service on it at the same time as the other services - e.g. you could install and run mysql on your instance.
There is another option, though - if you use the AWS RDS product, then you will be starting an RDS instance. An RDS instance is different from an EC2 instance (what we've been talking about so far) in that RDS instances are specialised to just run the database service and nothing else, but EC2 instances are general servers that you can do pretty much anything on.
It's usually recommended to use RDS, but if you are trying to save money and aren't serving many users, there's nothing particularly wrong with installing mysql on your instance yourself (especially while you're learning how it works) and then moving your data to an RDS instance when you want to support more load or traffic.
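For reference, here is a hedged sketch of creating that separate RDS instance with the boto3 library; the region, identifier, credentials, and sizes are placeholders you would replace with your own:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")   # pick your region

    # Placeholder names and credentials -- substitute your own.
    rds.create_db_instance(
        DBInstanceIdentifier="my-first-db",
        DBInstanceClass="db.t2.micro",
        Engine="mysql",
        MasterUsername="admin",
        MasterUserPassword="change-me-please",
        AllocatedStorage=20,   # GiB
    )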
Do instances start as needed?
Not by default, no - you have to manually start and stop them.
However, there are options other than manually starting and stopping. Amazon provides a lot of APIs, so you could write a program that connects to the API and automatically starts and stops your instance(s) based on rules you build into your program.
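For example, a minimal sketch of that idea using the boto3 library (the region and instance ID below are made up):

    from datetime import datetime

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")   # pick your region
    INSTANCE_ID = "i-0123456789abcdef0"                   # hypothetical instance ID

    # A rule you might build into your own program: only run the instance
    # during working hours so you aren't paying for idle time overnight.
    if 8 <= datetime.now().hour < 18:
        ec2.start_instances(InstanceIds=[INSTANCE_ID])
    else:
        ec2.stop_instances(InstanceIds=[INSTANCE_ID])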
Also, Amazon offers a product called "Auto Scaling groups" which allows you to have a related group of instances and for Amazon to automatically start and stop them according to rules that you configure in that product. These rules can be 'scheduled actions' - start/stop at certain times of day - or they can be reactive - e.g. when the average CPU usage is > 50% for more than 5 minutes, start another instance.
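If it helps to see what such rules look like in code, here is a hedged boto3 sketch; the group name is a placeholder, and the target-tracking policy (keep average CPU around 50%) is the closest simple equivalent of the "CPU > 50% for 5 minutes" rule described above:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Keep the group's average CPU around 50% by adding/removing instances as needed.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="my-web-asg",   # placeholder group name
        PolicyName="cpu-target-50",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization",
            },
            "TargetValue": 50.0,
        },
    )

    # A 'scheduled action' example: make sure two instances are running every
    # weekday morning at 08:00 UTC.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="my-web-asg",
        ScheduledActionName="weekday-morning-capacity",
        Recurrence="0 8 * * 1-5",   # cron syntax, UTC
        DesiredCapacity=2,
    )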
This is more a side note, but I was reading this article and it said
For a client needing to run 800 virtual instances, the annual cost of
a private cloud came to below $400,000 vs. somewhere between $800,000
and $1.2 million for public cloud services.
The 'free tier' gives you a t2.micro sized instance (1 vCPU, 1 GiB RAM) which you could leave turned on permanently for free during that free year.
Even after your free tier expires, that same instance would cost you $9.67/mth, and you have the option to downgrade to a t2.nano (0.5 GiB RAM) which would only cost ~$4/mth - but 0.5 GiB RAM isn't much these days, so it may not be enough for you.
A t2.micro should be more than enough to learn how to build websites on. If you are fortunate enough to build a site that is popular enough that you are getting more requests than that server can handle, then you will have to decide if you can generate revenue from that popularity sufficient to cover the cost, but by then you'll have more of a sense of how efficient your program is, and what instance size (and/or how many instances) you'll need.
Yes, it's obviously a big company, but can you imagine "hitting the
lottery" with an app everyone loves then before you know it, AWS hits
you with a bill of $1,000,000
AWS protects you from yourself here a bit - they have limits which generally restrict you from running more than 20 instances at a time - unless you ask for permission. So, by default, your instance won't go multiplying like rabbits on its own - unless you set it up to. And even if you have set it up to, it won't be able to grow beyond 20 instances unless you have asked Amazon to let you. So the worst case is 20 x $9.67/mth = ~$193/mth.
But - that's just the instance cost. Amazon charges you for lots of things, including data traffic in and out, RDS instance costs, and if you start using other services such as S3 buckets and/or elastic load balancers, they all attract their own costs.
But hopefully, if you hit the lottery with an app everyone loves, you've worked out how to convert that love into dollars and cents so you can pay for all those instances you're going to need :)

Can I improve performance of my GCE small instance?

I'm using cloud VPS instances to host very small private game servers. On Amazon EC2, I get good performance on their micro instance (1 vCPU [single hyperthread on a 2.5GHz Intel Xeon], 1GB memory).
I want to use Google Compute Engine though, because I'm more comfortable with their UX and billing. I'm testing out their small instance (1 vCPU [single hyperthread on a 2.6GHz Intel Xeon], 1.7GB memory).
The issue is that even when I configure near-identical instances with the same game using the same settings, the AWS EC2 instances perform much better than the GCE ones. To give you an idea, while the game isn't Minecraft, I'll use that as an example. On the AWS EC2 instances, subsequent world chunks load perfectly fine as players approach the edge of a chunk. On the GCE instances, even on more powerful machine types, chunks fail to load after players travel a certain distance, and they must disconnect from and log back into the server to continue playing.
I can provide more information if necessary, but I'm not sure what is relevant. Any advice would be appreciated.
Diagnostic protocols to evaluate this scenario may be more complex than you want to deal with. My first thought is that this shared-core machine type might have some limitations in consistency. Here are a couple of strategies:
1) Try backing into the smaller instance from a larger one. Since you only pay for 10 minutes at a time, you could see whether the performance is better on higher-level machines. If you have consistent performance problems no matter what the size of the box, then I'm guessing it's something to do with the nature of your application and the nature of their virtualization technology.
2) Try measuring the consistency of the performance. I get that it is unacceptable, but is it unacceptable based on how long it's been running? The nature of the workload? Time of day? If the performance is sometimes good but sometimes bad, then it's probably once again related to the type of your workload and their virtualization strategy.
Something Amazon is famous for is consistency. They work very hard to manage the consistency of the performance; it shouldn't spike up or down.
My best guess here, without all the details, is that you are using a very small disk. GCE throttles disk performance based on its size. You have two options: attach a larger disk or use PD-SSD.
See here for details on GCE Disk Performance - https://cloud.google.com/compute/docs/disks
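For what it's worth, a rough sketch of the second option - creating a pd-ssd disk and attaching it - driven from Python via the gcloud CLI. The disk name, size, zone, and instance name are placeholders, and the new disk still has to be formatted and mounted inside the VM:

    import subprocess

    ZONE = "us-central1-a"        # placeholder zone
    INSTANCE = "game-server-1"    # placeholder instance name

    # Create a 200 GB pd-ssd disk (larger disks get higher throughput/IOPS caps).
    subprocess.run(["gcloud", "compute", "disks", "create", "game-data",
                    "--size", "200GB", "--type", "pd-ssd", "--zone", ZONE], check=True)

    # Attach it to the existing instance.
    subprocess.run(["gcloud", "compute", "instances", "attach-disk", INSTANCE,
                    "--disk", "game-data", "--zone", ZONE], check=True)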
Please post back if this helps.
Anthony F. Voellm (aka Tony the #p3rfguy)
Google Cloud Performance Team

What configurations need to be set for a LAMP server for heavy traffic?

I was contracted to make a Groupon-clone website for my client. It was done in PHP with MySQL and I plan to host it on an Amazon EC2 server. My client warned me that he will be email-blasting about 10k customers, so my site needs to be able to handle the surge of clicks from those emails. I have two questions:
1) Which Amazon server instance should I choose? Right now I am on a Small instance; I wonder if I should upgrade it to a Large instance for the week of the email blast?
2) What configurations need to be set for a LAMP server? For example, do the Amazon server, Apache, PHP, or MySQL have maximum-connection limits that I should adjust?
Thanks
Technically, putting the static pages, the PHP and the DB on the same instance isn't the best route to take if you want a highly scalable system. That said, if the budget is low and high availability isn't a problem, then you may get away with it in practice.
One option, as you say, is to re-launch the server on a larger instance size for the period you expect heavy traffic. Often this works well enough. Your problem is that you don't know the exact shape of the traffic that will come. You will get a certain percentage who are at their computers when the email arrives and go straight to the site. The rest will trickle in over time. Having your client send the email whilst the majority of the users are in bed would help you somewhat, if that's possible, by avoiding the surge.
If we take the case of, say, 2,000 users hitting your site in 10 minutes, I doubt a site that hasn't been optimised would cope; there's very likely to be a silly bottleneck in there. The DB is often the problem; a good-sized in-memory cache often helps.
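To illustrate the in-memory cache point (shown here as a Python sketch for brevity - the same cache-aside idea applies to a PHP site using APCu, Memcached, or Redis):

    import time

    # Toy in-memory cache illustrating the cache-aside pattern: check the cache
    # first, fall back to the database only on a miss, and reuse the result for
    # a short time so a burst of identical requests doesn't hammer MySQL.
    _cache = {}
    TTL_SECONDS = 60

    def get_deal(deal_id, fetch_from_db):
        entry = _cache.get(deal_id)
        if entry and time.time() - entry["at"] < TTL_SECONDS:
            return entry["value"]                     # cache hit
        value = fetch_from_db(deal_id)                # cache miss -> query the DB
        _cache[deal_id] = {"value": value, "at": time.time()}
        return value

In production you'd want a shared cache such as Memcached or Redis rather than a per-process dictionary, so every web server process benefits from the same cached entries.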
All this said, there are a number of architectural designs and features provided by the likes of Amazon and GAE that enable you, with a correctly designed back-end, to worry very little about scalability; it is handled for you for the most part.
If you split the database away from the web server, you would be able to put the web server instances behind an elastic load balancer and have that scale instances by demand. There also exist standard patterns for scaling databases, though there isn't any particular feature to help you with that, apart from database instances.
You might want to try Amazon Mechanical Turk, which is basically lots of people who'll perform often-trivial tasks (like navigating to a web page, clicking on this, etc.) for a usually very small fee. It's not a bad way to simulate real traffic.
That said, you'd probably have to repeat this several times, so you're better off with a load-testing tool. And remember, you can't load test a time-sliced instance with another time-sliced instance...
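If you do want to roll a quick load test yourself, a minimal sketch along these lines can at least surface obvious bottlenecks. The URL and concurrency numbers are placeholders, dedicated tools (ab, JMeter, siege) will give better reporting, and - per the caveat above - run it from a machine with consistent performance:

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://example.com/deals"   # placeholder URL
    CONCURRENCY = 50
    REQUESTS = 500

    def fetch(_):
        """Issue one request and return how long it took."""
        start = time.time()
        with urlopen(URL, timeout=30) as resp:
            resp.read()
        return time.time() - start

    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        timings = sorted(pool.map(fetch, range(REQUESTS)))

    print("requests:", len(timings))
    print("median response time: %.3fs" % timings[len(timings) // 2])
    print("95th percentile:      %.3fs" % timings[int(len(timings) * 0.95)])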

Migrate hosted LAMP site to AWS

Is there an easy way to migrate a hosted LAMP site to Amazon Web Services? I have hobby sites and sites for family members where we're spending far too much per month compared to what we would be paying on AWS.
Typical el cheapo example of what I'd like to move over to AWS:
GoDaddy domain
site hosted at 1&1 or MochaHost
a handful of PHP files within a certain directory structure
a small MySQL database
.htaccess file for URL rewriting and the like
The tutorials I've found online necessitate PuTTY, Linux commands, etc. While these aren't the most cumbersome hurdles imaginable, it seems overly complicated. What's the easiest way to do this?
The ideal solution would be something like what you do to set up a web host: point GoDaddy to it, upload files, import database, done. (Bonus points for phpMyAdmin being already installed but certainly not necessary.)
It would seem the Amazon AWS Marketplace now has a solution for your problem:
https://aws.amazon.com/marketplace/pp/B0078UIFF2/ref=gtw_msl_title/182-2227858-3810327?ie=UTF8&pf_rd_r=1RMV12H8SJEKSDPC569Y&pf_rd_m=A33KC2ESLMUT5Y&pf_rd_t=101&pf_rd_i=awsmp-gateway-1&pf_rd_p=1362852262&pf_rd_s=right-3
Or from their own site
http://www.turnkeylinux.org/lampstack
A full LAMP stack including PHPMyAdmin with no setup required.
As for your site and database migration itself (which should require no more than file copies and a database backup/restore) the only way to make this less cumbersome is to have someone else do it for you...
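To give a sense of how small that migration really is, here is a hedged sketch of the whole thing driven from Python. The hostnames, usernames, database name, and paths are placeholders, and it assumes SSH key access to both hosts plus MySQL credentials stored in ~/.my.cnf on each, so nothing below prompts for a password:

    import subprocess

    OLD_HOST = "user@old-shared-host.example.com"    # placeholder
    NEW_HOST = "ec2-user@my-ec2-box.example.com"     # placeholder
    DB_NAME = "mysite"

    # 1) Pull the PHP files (including .htaccess) down, then push them to the new web root.
    subprocess.run(["rsync", "-az", f"{OLD_HOST}:public_html/", "site_backup/"], check=True)
    subprocess.run(["rsync", "-az", "site_backup/", f"{NEW_HOST}:/var/www/html/"], check=True)

    # 2) Dump the database on the old host and fetch the dump.
    subprocess.run(["ssh", OLD_HOST, f"mysqldump {DB_NAME} > /tmp/{DB_NAME}.sql"], check=True)
    subprocess.run(["rsync", "-az", f"{OLD_HOST}:/tmp/{DB_NAME}.sql", "."], check=True)

    # 3) Copy the dump up and load it (the empty database must already exist there).
    subprocess.run(["rsync", "-az", f"{DB_NAME}.sql", f"{NEW_HOST}:/tmp/"], check=True)
    subprocess.run(["ssh", NEW_HOST, f"mysql {DB_NAME} < /tmp/{DB_NAME}.sql"], check=True)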
Dinah,
As a Web Development company I've experienced an unreal number of hosting companies. I've also been very closely involved with investigating cloud hosting solutions for sites in the LAMP and Windows stacks.
You've quoted GoDaddy, 1And1 and Mochahost for micro-sized Linux sites so I'm guessing you're using a benchmark of $2 - $4 per month, per site. It sounds like you have a "few" sites (5ish?) and need at least one database.
I've yet to see any tool that will move more than the most basic (i.e. file only, no db) websites into Cloud hosting. As most people are suggesting, there isn't much you can do to avoid the initial environment setup. (You should factor your time in too. If you spend 10 hours doing this, you could bill clients 10 x $hourly-rate and have just bought the hosting for your friends and family.)
When you look at AWS (or anyone) remember these things:
Compute cycles are only where it starts. When you buy hosting from traditional ISPs they are selling you cycles, disk space AND database hosting. Their default levels for allowed cycles, database size and traffic are also typically much higher before you are stopped or charged for "overage", or over-usage.
Factor in the cost of your 1 database, and consider how likely it will be that you need more. The database hosting charges can increase Cloud costs very quickly.
While you are likely going to need few CCs (compute cycles) for your basic sites, the free tier hosting maximums are still pretty low. Anticipate breaking past the free hosting and being charged monthly.
Disk space is also billed. Factor in your costs of CCs, DB and HDD by using their pricing estimator: http://calculator.s3.amazonaws.com/calc5.html
If your friends and family want to have access to the system, they won't get it unless you use a hosting company that allows "white labeling" and provides a way to split your main account into smaller mini-hosting accounts. They can even be set up to give self-admin and direct billing options if you went with a host like www.rackspace.com. The problem is you don't sound like you want to bill anyone, and their minimum account is likely way too big for your needs.
Remember that GoDaddy (and others) frequently give away a year of hosting with even simple domain registrations. Before I got my own servers I used to take HUGE advantage of these. I've probably been given like 40+ free hosting accounts, etc. in my lifetime as a client. (I still register a ton of domain through them. I also resell their hosting.)
If you aren't already, consider the use of CMS systems that support portaling (one instance, many websites under different domains). While I personally prefer DotNetNuke I'm sure that one of its LAMP stack competitors can do the same for you. This will keep you using only one database and simplify your needs further.
I hope this helps you make a well-educated choice. I think it'll be a fine line between benefits and costs. Only knowing the exact size of every site, every database and the typical traffic would allow this to be determined in advance. Database count and traffic will be your main "enemies". Optimize files to reduce both your disk-space needs AND your traffic levels in terms of data transferred.
Best of luck.
Actually, it depends on your server architecture whether you want to migrate the whole of your LAMP stack to Amazon EC2,
or use different Amazon Web Services for different server components, like Amazon S3 for storage and Amazon RDS for the MySQL database, and so on.
In case you are going with LAMP on EC2: this tutorial will at least give you a head start.
Either way, you still have to go through the essential steps of setting up the AMI and installing LAMP over SSH.