How India Dedicated Servers Could Help Your Business Grow Rapidly

Hosting several "in development"-sites on AWS

I've been trying to wrap my head around the best solution for hosting development sites for our company lately.
To be completely frank, I'm new to AWS and its architecture, so more than anything I just want to know whether I should keep learning about it or find another, more suitable solution.
Right now we have a dedicated server which hosts our own website, our intranet, and a lot of websites we've developed for clients.
Our own site and the intranet aren't an issue; however, I'm not quite sure about the websites we've produced for our clients.
There are about 100 of them right now. These sites are only used pre-launch so our clients can populate them with content. As soon as the content is done, we host the website somewhere else. The copy that remains on our development server is no longer used at all, but we keep it around in case the client wants a new template or function, so we can demo it there before sending it to production.
This means the development sites have almost zero traffic, with perhaps at most 5 or so people adding content to them at any given time (5 people for all 100 sites, not 5 per site).
These sites need to be available at all times and should always feel snappy.
These are not static sites, they all require a database connection.
Is AWS (EC2, any other kind of instance, Lightsail?) a valid solution for hosting these sites? Or should I just downgrade our current dedicated server to a VPS and only worry about hosting our main site on AWS?
I'll put this in an answer because it's too long for a comment, but it's just advice.
If you move those sites to AWS you're likely to end up paying (significantly) more than you do now. You can use the Simple Monthly Calculator to get an idea.
To clarify, AWS is cost-effective for certain workloads. It is cost-effective because it can scale automatically when needed, so you don't have to provision for peak traffic all the time, and because it's easy to work with, so it takes fewer people and you don't have to pay for a big ops team. It is cost-effective for small teams that want to run production workloads with little operational overhead, up to big teams that are not yet big enough to build their own cloud.
Your sites are development sites that just sit there and see very little activity, which means they are probably below the threshold where AWS becomes cost-effective.
You should clarify why you want to move. If the reason is that you want as close to 100% uptime as possible, then AWS is a good choice. But it will cost you, both in the bill paid to Amazon and in the time spent learning to set up such infrastructure. If cost is a primary concern, you might want to think it over.
That said, if your requirements for the next year or more are predictable enough and you have someone who knows what they are doing in AWS, there are ways to lower the cost, so it might be worth it. But without further detail it's hard for anyone to give you a definitive answer.
However. You also asked if you should keep learning AWS. Yes. Yes, you should. If not AWS, one of the other major clouds. Cloud and serverless[1] are the future of much of this industry. For some that is very much the present. Up to you if you start with those dev sites or something else.
[1] "Serverless" is as misleading a name as NoSQL. It doesn't mean no servers.
Edit:
You can find a list of EC2 (Elastic Compute Cloud) instance types here. That covers CPU and RAM. Realistically, the cheapest instance is about $8 per month. You also need storage, which is called EBS (Elastic Block Store). There are multiple types of that too; you probably want GP2 (General Purpose SSD).
I assume you also have one or more databases behind those sites. You can either set up the database(s) on EC2 instance(s) or use RDS (Relational Database Service). Again, there are multiple choices there. You probably don't want Multi-AZ for dev. In short, Multi-AZ means two RDS instances so that if one crashes the other takes over, but it's also double the price. You pay for storage there too.
And, depending on how you set things up, you might pay for traffic. You pay for traffic between availability zones, but if you put everything in the same zone, that traffic is free.
Storage and traffic are pretty cheap though.
This is only the most basic of the basics. As I said, it can get complicated. It's probably worth it, but if you don't know AWS you might end up paying more than you should. Take it slow and keep reading.
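To make that arithmetic concrete, here's a rough back-of-envelope sketch. All the prices in it are illustrative assumptions (on-demand rates vary by region and change over time), so treat the output as a ballpark, not a quote:

```python
# Rough monthly cost sketch for a single small dev server on AWS.
# All prices are illustrative assumptions; check the AWS calculator
# for current rates in your region.

HOURS_PER_MONTH = 730

ec2_hourly = 0.0116          # assumed rate for a small burstable instance (~$8/month)
ebs_gp2_per_gb_month = 0.10  # assumed GP2 (General Purpose SSD) rate
rds_hourly = 0.017           # assumed rate for a small single-AZ RDS instance

ebs_gb = 30                  # assumed root volume size
rds_storage_gb = 20          # assumed database storage

ec2_cost = ec2_hourly * HOURS_PER_MONTH
ebs_cost = ebs_gp2_per_gb_month * ebs_gb
rds_cost = rds_hourly * HOURS_PER_MONTH + ebs_gp2_per_gb_month * rds_storage_gb

total = ec2_cost + ebs_cost + rds_cost
print(f"EC2:   ${ec2_cost:6.2f}/month")
print(f"EBS:   ${ebs_cost:6.2f}/month")
print(f"RDS:   ${rds_cost:6.2f}/month")
print(f"Total: ${total:6.2f}/month (before data transfer)")
```

Multiply something like that by the number of servers you actually need, and compare it against what the dedicated server costs you today.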

AWS web application - rough estimate

I'm trying to make a rough estimate of how much it would cost per hour to run a web application on AWS. I understand that this depends on the type of web application, network capacity, throughput, etc. Very roughly, how many concurrent sessions can a medium or large server manage? Let's say the number of clients at any given time is 8,000: roughly, what would that cost?
Thank you!
Nobody can answer your question as asked. A rough estimate could be anywhere between 10 and 10k USD. What I suggest is to use the AWS calculator to work out your own estimate. Add some EC2 instances and data transfer out, and you should see some kind of estimate.
https://calculator.s3.amazonaws.com/index.html
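If you want to turn "8,000 clients at any time" into numbers you can feed into the calculator, something like the following back-of-envelope sketch can help. Every input here (requests per session, per-instance throughput, hourly rate) is an assumption you'd need to replace with measurements from your own application:

```python
# Back-of-envelope sizing: convert concurrent sessions into an instance count.
# All inputs are assumptions; replace them with measured values.
import math

concurrent_sessions = 8000
requests_per_session_per_sec = 0.2   # assume one request every 5 seconds per user
requests_per_instance_per_sec = 200  # assume what one app server can sustain
instance_hourly_cost = 0.10          # assumed on-demand rate for a mid-size instance

total_rps = concurrent_sessions * requests_per_session_per_sec
instances_needed = math.ceil(total_rps / requests_per_instance_per_sec)

hourly_cost = instances_needed * instance_hourly_cost
print(f"~{total_rps:.0f} requests/second -> {instances_needed} instance(s)")
print(f"Estimated compute cost: ${hourly_cost:.2f}/hour (excluding DB, storage, transfer)")
```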

Concurrent connections and hosting a website "at home"

I have searched for an already-answered question on this topic, but I couldn't find what I was looking for.
My question is simple and straightforward: I have a blog on a .com domain which runs WordPress and is currently hosted with a company in my country. The plan currently supports only 30 concurrent connections. I'm fairly familiar with these terms, but if traffic on my website gets very high, I'm considering buying servers and hosting it at home instead of paying for a more expensive hosting plan. If I do so, what do I need? For example: how many concurrent connections can one server (PC) handle? How many servers, and how powerful, would I need for, say, 1 million daily unique visits?
Depending on your website's optimization, caching, and hardware requirements, any moderate PC today can handle 1 million daily visits easily.
1 million requests per day is roughly 11.5 requests per second, which really isn't that much, assuming your SQL isn't badly unoptimized.
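As a quick sanity check on that number, and because traffic is never spread evenly over the day, here's a tiny sketch of the arithmetic; the peak factor is an assumption you should adjust to your own traffic pattern:

```python
# Sanity check: 1 million daily visits expressed as requests per second.
daily_visits = 1_000_000
seconds_per_day = 24 * 60 * 60

avg_rps = daily_visits / seconds_per_day
peak_factor = 5                      # assumption: peak traffic ~5x the daily average
peak_rps = avg_rps * peak_factor

print(f"Average: {avg_rps:.1f} req/s")   # ~11.6 req/s
print(f"Peak:    {peak_rps:.1f} req/s (assuming a {peak_factor}x peak factor)")
```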
I would suggest that you start by hosting the site on whatever hardware you already have, monitor real statistics, and see for yourself what kind of hardware you might need. This experiment will tell you whether you need to optimize more, and whether you need a faster CPU or more RAM.
You also need to consider a few important things for the long run, like power bills and backups.
Some draft hardware requirements:
CPU: any Core i3 2xxx or Core i3 3xxx, or faster.
RAM: RAM is really cheap these days; go for 8 GB or 16 GB. I don't see one site needing that much, but extra RAM will help with disk I/O read caching.
Storage: all storage devices should be in RAID 1 or another RAID level that provides some redundancy, plus daily/weekly backups. Consider getting SSDs from a reputable brand (Intel or Samsung) with a 5-year warranty.
Power backup: get a UPS.
This might be overkill (depending on how much optimization you do), but it should give you a nice and relatively fast server.

What configurations need to be set for a LAMP server for heavy traffic?

I was contracted to build a Groupon-clone website for my client. It was done in PHP with MySQL, and I plan to host it on an Amazon EC2 server. My client warned me that he will be email-blasting about 10k customers, so my site needs to be able to handle the surge of clicks from those emails. I have two questions:
1) Which Amazon instance type should I choose? Right now I am on a Small instance; I wonder if I should upgrade to a Large instance for the week of the email blast.
2) What configuration needs to be set for a LAMP server? For example, do the Amazon server, Apache, PHP, or MySQL have maximum-connection limits that I should adjust?
Thanks
Technically, putting the static pages, the PHP, and the DB on the same instance isn't the best route to take if you want a highly scalable system. That said, if the budget is low and high availability isn't a concern, then you may get away with it in practice.
One option, as you say, is to re-launch the server on a larger instance size for the period you expect heavy traffic. Often this works well enough. Your problem is that you don't know the exact shape of the traffic that will come. A certain percentage of recipients will be at their computers when the email arrives and will go straight to the site; the rest will trickle in over time. Having your client send the email while the majority of the users are in bed, if that's possible, would help you somewhat by avoiding the surge.
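If you go the resize route, the change can be scripted. Here's a minimal sketch using boto3 (the AWS SDK for Python); the instance ID and target type are placeholders, and note that changing the type requires an EBS-backed instance and a brief stop/start, i.e. some downtime:

```python
# Minimal sketch: temporarily resize an EBS-backed EC2 instance with boto3.
# The instance ID and target type below are placeholders (assumptions).
import boto3

INSTANCE_ID = "i-0123456789abcdef0"   # placeholder instance ID
TARGET_TYPE = "m5.large"              # placeholder larger instance type

ec2 = boto3.client("ec2")

# The instance type can only be changed while the instance is stopped.
ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

ec2.modify_instance_attribute(InstanceId=INSTANCE_ID,
                              InstanceType={"Value": TARGET_TYPE})

ec2.start_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])
print(f"{INSTANCE_ID} is now running as {TARGET_TYPE}")
```

Run the same script with the original type after the blast week to switch back down.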
If we take the case of, say, 2,000 users hitting your site in 10 minutes, I doubt a site that hasn't been optimised would cope; there's very likely to be a silly bottleneck in there somewhere. The DB is often the problem, and a good-sized in-memory cache often helps.
This all said, there are a number of architectural designs and features provided by the likes of Amazon and GAE that, with a correctly designed back end, let you worry very little about scalability; it is handled for you for the most part.
If you split the database away from the web server, you can put the web server instances behind an Elastic Load Balancer and have it scale instances with demand. There are also standard patterns for scaling databases, though there isn't any particular feature to help you with that apart from database instances.
You might want to try Amazon Mechanical Turk, which is basically lots of people who'll perform often-trivial tasks (like navigating to a web page and clicking on something) for a usually very small fee. It's not a bad way to simulate real traffic.
That said, you'd probably have to repeat this several times, so you're better off with a load-testing tool. And remember, you can't load test a time-slicing instance with another time-slicing instance...
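Purely to illustrate the idea, here's a minimal concurrent-request sketch in Python; the URL and concurrency numbers are placeholders, and for a real test you'd want a dedicated tool (ab, JMeter, Locust or similar) run from outside the target's own network:

```python
# Minimal load-test sketch: fire concurrent requests and report latency.
# The URL, concurrency, and request count are placeholders (assumptions).
import time
import concurrent.futures
import urllib.request

URL = "https://example.com/"   # placeholder target
CONCURRENCY = 50               # placeholder number of simultaneous clients
REQUESTS = 500                 # placeholder total request count

def fetch(_):
    start = time.time()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
        return True, time.time() - start
    except Exception:
        return False, time.time() - start

with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(fetch, range(REQUESTS)))

latencies = sorted(t for ok, t in results if ok)
errors = sum(1 for ok, _ in results if not ok)
print(f"requests: {len(results)}, errors: {errors}")
if latencies:
    print(f"median latency: {latencies[len(latencies) // 2] * 1000:.0f} ms")
    print(f"p95 latency:    {latencies[int(len(latencies) * 0.95)] * 1000:.0f} ms")
```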

Good Service Tier Dev & Design: What are the common bad practices in communications tier development?

I am currently researching best practices (at a reasonably high level) for application design for highly maintainable systems that offer minimal friction to change. By "communications tier" I mean web services, service buses, and general network transmission technologies.
From your experience, what have you found to be the common mistakes and bad practices in communications tier development, and what measures have you taken, put in place, or can recommend to make the communications tier a better place to be from a developer's perspective?
An example answer might address: what are the most common causes of poorly scalable and extensible communications tiers, and what measures can be taken (in design or refactoring) to cure them?
I am looking for war stories here and some real world advice that I can build into publicly available guidance documents and samples.
The problem is in the question itself - the "communication tier".
Communication should not be a tier by itself; at most you can think of it as a layer. It should not be physically separate.
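To make the distinction concrete, here's a small illustrative sketch (all names are hypothetical): the communication code lives as a thin layer inside the application, hidden behind a narrow interface, rather than being deployed as a physically separate tier:

```python
# Illustrative sketch (hypothetical names): communication as a layer, not a tier.
# The application depends on a narrow interface; HTTP, a service bus, or an
# in-process call are interchangeable implementations behind it.
from typing import Protocol
import json
import urllib.request


class OrderGateway(Protocol):
    def submit(self, order: dict) -> str: ...


class HttpOrderGateway:
    """Communication layer: one small adapter, swappable and testable."""
    def __init__(self, base_url: str) -> None:
        self.base_url = base_url

    def submit(self, order: dict) -> str:
        req = urllib.request.Request(
            f"{self.base_url}/orders",
            data=json.dumps(order).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["id"]


class InMemoryOrderGateway:
    """Same interface without any network; useful for tests and local runs."""
    def submit(self, order: dict) -> str:
        return "local-order-1"


def place_order(gateway: OrderGateway, order: dict) -> str:
    # Business logic never knows how the message travels.
    return gateway.submit(order)
```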
Hope that helps.