Lately I keep seeing this buzzword phrase in job descriptions, listed among the skills required of an applicant (knowledge of):
"Horizontally scalable RESTful services"...
What exactly is it? I can't find anything on Google that really explains the notion.
I expect that horizontal scaling means adding more servers to handle more load, rather than adding more memory and CPUs to a single server, which is vertical scaling.
So, you could have a Docker container that runs your REST service, which should be stateless. There are many ways to scale in production.
On each connection you could then create a new container, and once it has finished its work you remove it, so every connection has its own server.
If you are running something lightweight like Node.js, you could get away with this, but if you are using a heavier web server then you may want to look at something like Auto Scaling on AWS: as the load on each container goes up, create a new one so you don't overload any particular server.
You don't have to use Docker, but it wouldn't hurt for you to learn about it.
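To make the "stateless" part concrete, here's a minimal sketch of such a service in Python, assuming Flask (the framework choice and the route are my own illustration, not anything from the job ads). Because no per-user state lives in process memory, any copy of this container can answer any request, which is exactly what lets you scale horizontally behind a load balancer.

```python
# A minimal stateless REST service sketch, assuming Flask (pip install flask).
# "Stateless" means no per-user data is kept in process memory between
# requests, so any replica can serve any request.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/items/<int:item_id>")
def get_item(item_id):
    # In a real service this would come from a shared backing store
    # (database, cache), never from a per-instance in-memory dict.
    return jsonify({"id": item_id, "served_by": request.host})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```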
Running multiple instances of an application on multiple machines behind a load balancer is what we traditionally call web farming.
If your API is stateless, then you can put it straight behind a load balancer without Docker.
I am trying to build a web application with Kubernetes and Amazon Web Services.
I believe there are many different approaches but I would like to ask for your opinions!
To make it simple, the app would be a simple web page that displays information. A user logs in and, based on filters, gets specific information.
My question is about the architecture inside k8s:
Can I run my whole application as a single Pod? That way, cluster and node scalability would make the app available to each user by allocating one pod to each of them. Is that good practice?
By following that logic, every element of my app would be a container. For instance, in a simple setup: one container for the front end of the app, one container for data access/management, one container for backend/auth, etc.
So one user would "consume" one pod, whose containers talk to each other to produce the data the user requested. k8s would create a pod for every user, scaling the number of nodes up/down, etc...
But then, except for the data itself, everything would be dockerized and stored on ECR (Elastic Container Registry), right? And so no need for any S3/EBS/EFS, in my opinion.
I am quite new at AWS and k8s, so please feel free to give honest opinions :) Feedback, good or bad, is always good to take.
Thanks in advance!
I'd recommend a layout where:
Any container can perform its unit of work for any user
One container per pod
Each component of your application has its own (stateless) deployment
It probably won't work well to try to put your entire application into a single multi-container pod. This means the entire application would need to fit on a single node; some applications are larger than that, and even if it fits, it can lead to trouble with scheduling. It also means that, if you update the image for any single container, you need to delete and recreate all of the containers, which could be more disruptive than you want.
Trying to create a pod per user also will present some practical problems. You need to figure out how to route inbound requests to a particular user's pod, and keep requests within that user's set of containers; Kubernetes doesn't have any sort of native support for this. A per-user pod will also keep using resources even if it's overnight or the weekend for that user and they're not using the application. You would also need something with access to the Kubernetes API to create and destroy resources as new users joined your platform.
In an AWS-specific context you might consider using RDS (hosted PostgreSQL/MySQL) or S3 (object storage) for data storage (and again, one database or S3 bucket shared across all customers). ECR is useful, but only as a place to store your Docker images; that is, your built code, not any of the persisted data or running containers.
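To illustrate the "one stateless deployment per component" layout, here's a rough sketch using the official kubernetes Python client; the component name, ECR image, and replica count are hypothetical, and in practice you'd more likely write this as a YAML manifest:

```python
# Sketch: one stateless Deployment per application component, created via the
# kubernetes Python client (pip install kubernetes). All names are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster

container = client.V1Container(
    name="frontend",  # hypothetical component; repeat for backend, auth, etc.
    image="123456789012.dkr.ecr.us-east-1.amazonaws.com/frontend:latest",  # hypothetical ECR image
    ports=[client.V1ContainerPort(container_port=8080)],
)
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="frontend"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # scaled by load, not one pod per user
        selector=client.V1LabelSelector(match_labels={"app": "frontend"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "frontend"}),
            spec=client.V1PodSpec(containers=[container]),  # one container per pod
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```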
This is my first attempt at looking into cloud hosting and I'm feeling like a complete idiot. I have always had my own dedicated server, which I would remote into and install/manage everything on myself. So this cloud thing is completely new for me. I just can't seem to grasp basic things... like how I would get Tomcat and PostgreSQL installed in a way that they could talk to each other, or get my domain and SSL cert on there, etc.
If I could just get a feel for where I should start, then I could probably calculate my costs and jump into the free trial where hopefully things will click for me.
Here are my basic, high-level requirements...
My web app running in Tomcat over HTTPS
Let's say approximately 1,000 page views per day
PostgreSQL supporting my web app.
Let's say approximately 10GB database storage
Throughout the day, a fairly steady stream of inbound SFTP data (~ 100MB per day)
The processing load on the app server side should be fairly light. The heavy lifting will be on the DB side, sorting through and processing lots of data.
I'm having trouble figuring out which options I would install and calculating costs. If someone could help me get started by saying something like "You would start with a std-xyz-med server, install ABC located here at http://blahblah, then install XYZ located at http://XYZ.... etc.. etc. You can expect to pay somewhere around $100-$200 per month"....
Thoughts?
I would be eternally grateful. It seems like they should have some free sales support channel to ask someone at Google about this, but I don't see it.
Thank You!
I'll try to give you some tips on where to start looking. I will be referring to a few Google Cloud products along the way.
If you want to stick to your old ways, you can always spin up an instance on Compute Engine and set it up the same way you did before; these are just regular virtual machines. For some use cases this is completely valid.
You can split different components of your stack to different products:
For example, if your app is fine with PostgreSQL, you can spin up a fully managed instance in Cloud SQL, which might make it easier to manage backups or have several apps access the same DB.
Alternatively, have a look at the different DB offerings to see if any of them matches your needed workload better. Perhaps have a look at BigQuery?
If you want to turn your app into a microservice, which is then easier to autoscale and is more fault tolerant, have a look at App Engine. That way you don't need to manage a virtual machine. The docs here will lead you through some easy to follow examples on how to set up SSL.
For the services to talk to each other, refer to docs of the individual components. It's usually very simple.
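As one concrete example of components talking to each other, here's a sketch of an app connecting to Cloud SQL (PostgreSQL) over the unix socket that App Engine exposes under /cloudsql/; the instance connection name and credentials below are placeholders, not real values:

```python
# Sketch: connecting to Cloud SQL for PostgreSQL from App Engine, assuming
# psycopg2. The /cloudsql/<project>:<region>:<instance> socket path is the
# documented pattern; all names below are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="/cloudsql/my-project:us-central1:my-instance",  # hypothetical instance
    dbname="appdb",      # hypothetical database
    user="appuser",      # hypothetical user
    password="secret",   # in practice, read from env vars or Secret Manager
)
with conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone())
```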
With pricing, try https://cloud.google.com/products/calculator/
Things like BigQuery have different pricing models - you don't pay for server uptime, but for amounts of data stored & processed with your queries.
I have a web app running on PHP, MySQL, and Apache on a virtual Windows server. I want to redesign it so it is scalable (for fun, so I can learn new things) on AWS.
I can see how to setup an EC2 and dump it all in there but I want to make it scalable and take advantage of all the cool features on AWS.
I've tried googling but just can't find a simple guide (note: I have no command-line experience with Linux).
Can anyone direct me to detailed resources that can lead me through the steps and teach me? Or alternatively, summarise the steps in an answer so I can research based on what you say.
Thanks
AWS is growing and changing all the time, so there aren't a lot of books to help. Amazon offers training that's excellent. I took their three day class on Architecting with AWS that seems to be just what you're looking for.
Of course, not everyone can afford to spend the travel time and money to attend a class. The AWS re:Invent conference in November 2012 had a lot of sessions related to what you want, and most (maybe all) of the sessions have videos available online for free. Building Web Scale Applications With AWS is probably relevant (slides and video available), as is Dissecting an Internet-Scale Application (slides and video available).
A great way to understand these options better is by fiddling with your existing application on AWS. It will be easy to just move it to an EC2 instance in AWS, then start taking more advantage of what's available. The first thing I'd do is get rid of the MySQL server on your own machine and use one offered with RDS. Once that's stable, create one or more read replicas in RDS, and change your application to read from them for most operations, reading from the main (writable) database only when you need completely current results.
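To make that read/write split concrete, here's a sketch assuming PyMySQL and hypothetical RDS endpoints; the idea is simply that stale-tolerant reads hit a replica while writes (and reads needing completely current data) hit the primary:

```python
# Sketch of a read/write split against RDS MySQL, assuming PyMySQL
# (pip install pymysql). Endpoints and credentials are hypothetical.
import pymysql

primary = pymysql.connect(host="mydb.xyz.us-east-1.rds.amazonaws.com",          # hypothetical primary
                          user="app", password="secret", database="appdb")
replica = pymysql.connect(host="mydb-replica.xyz.us-east-1.rds.amazonaws.com",  # hypothetical replica
                          user="app", password="secret", database="appdb")

def list_products():
    with replica.cursor() as cur:   # replica is fine for stale-tolerant reads
        cur.execute("SELECT id, name FROM products")
        return cur.fetchall()

def place_order(product_id):
    with primary.cursor() as cur:   # writes always go to the primary
        cur.execute("INSERT INTO orders (product_id) VALUES (%s)", (product_id,))
    primary.commit()
```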
Does your application keep any data on the web server, other than in the database? If so, get rid of all local storage by moving that data off the EC2 instance. Some of it might go to the database, some (like big files) might be suitable for S3. DynamoDB is a good place for things like session data.
All of the above reduces the load on the web server to just your application code, which helps with scalability. And now that you keep no state on the web server, you can use ELB and Auto-scaling to automatically run multiple web servers (and even automatically launch more as needed) to handle greater load.
Does the application have any long running, intensive operations that you now perform on demand from a web request? Consider not performing the operation when asked, but instead queueing the request using SQS, and just telling the user you'll get to it. Now have long running processes (or cron jobs or scheduled tasks) check the queue regularly, run the requested operation, and email the result (using SES) back to the user. To really scale up, you can move those jobs off your web server to dedicated machines, and again use auto-scaling if needed.
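Here's a sketch of that queue-and-email pattern with boto3; the queue URL, addresses, and the run_long_operation stub are placeholders for your own application code:

```python
# Sketch: web tier enqueues work on SQS; a worker drains the queue and
# emails results via SES. Queue URL and addresses are hypothetical.
import json
import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # hypothetical
sqs = boto3.client("sqs")
ses = boto3.client("ses")

def run_long_operation(params):
    # Placeholder for the intensive work you'd otherwise do inline.
    return {"processed": params}

def enqueue_job(user_email, params):
    # Called from the web request: respond immediately, do the work later.
    sqs.send_message(QueueUrl=QUEUE_URL,
                     MessageBody=json.dumps({"email": user_email, "params": params}))

def worker_loop():
    # Run this on a dedicated machine, cron job, or scheduled task.
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                                   MaxNumberOfMessages=1, WaitTimeSeconds=20)
        for msg in resp.get("Messages", []):
            job = json.loads(msg["Body"])
            result = run_long_operation(job["params"])
            ses.send_email(
                Source="noreply@example.com",  # must be an SES-verified identity
                Destination={"ToAddresses": [job["email"]]},
                Message={"Subject": {"Data": "Your report is ready"},
                         "Body": {"Text": {"Data": str(result)}}},
            )
            sqs.delete_message(QueueUrl=QUEUE_URL,
                               ReceiptHandle=msg["ReceiptHandle"])
```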
Do you need bigger machines, or could you perhaps live with smaller ones? CloudWatch metrics can show you how much IO, memory, and CPU are used over time. You can use provisioned IOPS with EC2 or RDS instances to improve performance (at a cost) as needed, and use different-sized instances for more memory or CPU.
All this AWS setup and configuration can be done with the AWS web console, or command-line tools, or SDKs available in many languages (Python's boto library is great). After learning the basics, look into CloudFormation to automate it better (I've written a couple of posts about that so far).
That's a bit of the 10,000 foot high view of one approach. You'll need to discover the details of each AWS service when you try to use them. AWS has good documentation about all of them.
Depending on how you look at it, this is more of a comment than it is an answer, but it was too long to write as a comment.
What you're asking for really can't be answered on SO--it's a huge, complex question. You're basically asking, "How do I design a highly scalable, durable application that can be deployed on a cloud-based platform?" The answer depends largely on:
The specifics of your application--what does it do and how does it work?
Your tolerance for downtime balanced against your budget
Your present development and deployment workflow
The resources/skill sets you have on-staff to support the application
What your launch time frame looks like.
I run a software consulting company that specializes in consulting on Amazon Web Services architecture. About 80% of our business is investigating and answering these questions for our clients. It's a multi-week long project each time.
However, to get you pointed in the right direction, I'd recommend that you look at Elastic Beanstalk. It's a PaaS-like service that abstracts away the underlying AWS resources, making AWS easier to use for developers who don't have a lot of sysadmin experience. Think of it as "training wheels" for designing an autoscaling application on AWS.
I was contracted to make a Groupon-clone website for my client. It was done in PHP with MySQL, and I plan to host it on an Amazon EC2 server. My client warned me that he will be email-blasting about 10k customers, so my site needs to be able to handle the surge of clicks from those emails. I have two questions:
1) Which Amazon server instance should I choose? Right now I am on a Small instance, I wonder if I should upgrade it to a Large instance for the week of the email blast?
2) What are the configurations that need to be set for a LAMP server. For example, does Amazon server, Apache, PHP, or MySQL have a maximum-connections limit that I should adjust?
Thanks
Technically, putting the static pages, the PHP, and the DB on the same instance isn't the best route to take if you want a highly scalable system. That said, if the budget is low and high availability isn't a problem, then you may get away with it in practice.
One option, as you say, is to re-launch the server on a larger instance size for the period you expect heavy traffic. Often this works well enough. Your problem is that you don't know the exact shape of the traffic that will come. A certain percentage of users will be at their computers when the email arrives and will go straight to the site; the rest will trickle in over time. Having your client send the email whilst the majority of the users are in bed would help you somewhat, if that's possible, by avoiding the surge.
If we take the case of, say, 2,000 users hitting your site in 10 minutes, I doubt a site that hasn't been optimised would cope; there's very likely to be a silly bottleneck in there. The DB is often the problem; a good-sized in-memory cache often helps.
All this said, there are a number of architectural designs and features provided by the likes of Amazon and GAE that enable you, with a correctly designed back-end, to worry very little about scalability; it is handled for you for the most part.
If you split the database away from the web server, you would be able to put the web server instances behind an elastic load balancer and have that scale instances by demand. There also exist standard patterns for scaling databases, though there isn't any particular feature to help you with that, apart from database instances.
You might want to try Amazon Mechanical Turk, which is basically lots of people who will perform often-trivial tasks (like navigating to a web page, clicking on something, etc.) for a usually very small fee. It's not a bad way to simulate real traffic.
That said, you'd probably have to repeat this several times, so you're better off with a load testing tool. And remember, you can't load test a time-slicing instance with another time-slicing instance...
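If you go the load-testing-tool route, here's a minimal sketch using Locust (one of several such tools; the /deals path is made up for illustration). Run it from a machine outside AWS so you aren't benchmarking one time-sliced instance with another:

```python
# Minimal Locust load test (pip install locust). Start with:
#   locust -f loadtest.py --host=https://yoursite.example
from locust import HttpUser, task, between

class EmailBlastUser(HttpUser):
    wait_time = between(1, 5)  # seconds between clicks, like a real reader

    @task(3)
    def landing_page(self):
        self.client.get("/")       # weighted 3:1 in favour of the landing page

    @task(1)
    def deals_page(self):
        self.client.get("/deals")  # hypothetical path from the email blast
```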
At my day job we have load balanced web servers which talk to load balanced app servers via web services (and lately WCF). At any given time, we have 4-6 different teams that have the ability to add new web sites or services or consume existing services. We probably have about 20-30 different web applications and corresponding services.
Unfortunately, given that we have no centralized control over this due to competing priorities, org structures, project timelines, financial buckets, etc., it is quite a mess. We have a variety of services that are reused, but a bunch that are specific to a front-end.
Ideally we would have better control over this situation, and we are trying to get control over it, but that is taking a while. One thing we would like to do is find out more about all of the inter-relationships between the web sites and the app servers.
I have used Reflector to find dependencies among assemblies, but would like to be able to see the traffic patterns between services.
What are the options for trying to map out web service relationships? For the most part, we are mainly talking about internal services (web to app, app to app, batch to app, etc.). Off the top of my head, I can think of two ways to approach it:
Analyze assemblies for any web references. The drawback here is that not everything is a web reference and I'm not sure how WCF connections are listed. However, this would at least be a start for finding 80% of the connections. Does anyone know of any tools that can do that analysis? Like I said, I've used Reflector for assembly references but can't find anything for web references.
Possibly tap into IIS and passively monitor the traffic coming in and out, and somehow figure out what is being called and from where. We are looking at enterprise tools that could help, but it would be a while before they are implemented (and they cost a lot). But is there anything out there that could help out quickly and cheaply? One tool in particular (AmberPoint) can tap into IIS on the servers, monitor inbound and outbound traffic, add a little special sauce, and begin to build a map of the traffic. Very nice, but costs a bundle.
I know, I know, how the heck did you get into this mess in the first place? Beats me, just trying to help us get control of it and get out of it.
Thanks,
Matt
The easiest way is to look through the logs, but if they don't include the referrer, then you may also want to monitor what is going out from your web server to the app server. You can use tools like Wireshark or Microsoft Network Monitor to see this traffic.
The other "solution" (and I use this term loosely) is to bind a specific web server to an app server, then run through a bundle of requests and see what they hit on the app server. You could probably do this in a test environment to lessen the effects on the users of the site.
You need a service registry (UDDI??)... If you had a means to catalog these services and their consumers, it would make this job of dependency discovery a lot easier. That is not an easy solution, though. It takes time and documentation to get a catalog in place.
I think the quickest solution would be to query your IIS logs and find source URLs which originate from your own servers. You would at least be able to track down which servers your consumers are coming from.
Also, if you already have some kind of authentication mechanism in place, you could trace who is using a particular service based on login.
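As a rough sketch of the IIS-log idea above, here's a small Python script that scans W3C-format IIS logs for requests whose client IP is one of your own servers; the IP set, log filename, and field positions depend on your environment and your configured #Fields line, so treat them all as assumptions to adjust:

```python
# Sketch: map internal service calls from IIS W3C logs. The IP set and the
# field positions (c-ip, cs-uri-stem) are assumptions for illustration.
INTERNAL_IPS = {"10.0.0.11", "10.0.0.12"}  # hypothetical server addresses

def internal_calls(log_path):
    calls = {}
    with open(log_path) as log:
        for line in log:
            if line.startswith("#"):   # skip W3C header/comment lines
                continue
            fields = line.split()
            client_ip, uri = fields[8], fields[4]  # adjust to your #Fields order
            if client_ip in INTERNAL_IPS:
                calls[(client_ip, uri)] = calls.get((client_ip, uri), 0) + 1
    return calls

for (ip, uri), count in sorted(internal_calls("u_ex230101.log").items()):
    print(f"{ip} -> {uri}: {count}")
```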
You are right about AmberPoint. There are other tools that catalog the service traffic and provide reports showing what is happening to your services. Systinet, SOA Software, and Actional also have products similar to AmberPoint, but AmberPoint has a freeware version, I believe.