I'm new to Cloud Foundry and started wondering which CF components are critical
to keep applications running
to allow applications to receive and send back traffic
to persist the application logs
As far as I understand:
to keep applications running: cell
to allow applications to receive and send back traffic: router
to persist the application logs: loggregator
to keep applications running: cell
Technically just the Cell, but there are a couple of things to consider. Traffic can't get to your Cells if your Gorouters and external load balancers are not working correctly. In addition, if your apps crash or need to be restarted, that can only happen if Cloud Controller and the rest of the Diego components are running.
Tangentially related, apps also tend to use services. If those are down or not working, then your apps will be too. Just something else to consider.
to allow applications to receive and send back traffic: router
Most deployments have an external load balancer which balances traffic across your Gorouters. This would also need to be up to receive traffic.
to persist the application logs: loggregator
Technically there is no persistence of logs in Loggregator. It temporarily buffers logs in memory, but does not store them. You would need to have Loggregator running and sending the logs somewhere else, such as a syslog service that actually stores and persists them.
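Purely as an illustration of what "somewhere else" could be, here is a toy Python receiver that accepts syslog-style messages over UDP and appends them to a file. The port and filename are arbitrary placeholders; in a real deployment you would point a syslog drain at an actual log management system rather than a script like this.

```python
import socketserver

# Toy syslog drain target: listens for syslog-style messages over UDP and
# appends them to a local file. Illustrative only; a real setup would use a
# proper log management/storage system.
class SyslogHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # For UDP servers, self.request is (data, socket).
        data = self.request[0].decode("utf-8", errors="replace").strip()
        with open("app-logs.txt", "a") as log_file:
            log_file.write(data + "\n")

if __name__ == "__main__":
    # 5140 is an arbitrary unprivileged port chosen for this sketch.
    with socketserver.UDPServer(("0.0.0.0", 5140), SyslogHandler) as server:
        server.serve_forever()
```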
These two pages provide additional details:
https://docs.pivotal.io/pivotalcf/2-6/concepts/high-availability.html
https://docs.pivotal.io/pivotalcf/2-6/concepts/maintaining-high-availability.html
Hope that helps!
I couldn't find anything in the documentation, but I'm still asking to make sure I didn't miss it. I want all connections from different clients with the same value for a certain request parameter to end up on the same upstream host. With ELB sticky sessions, you can have the same client connect to the same host, but there are no guarantees across different clients.
This is possible with Envoy proxy, see: https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/load_balancing/load_balancers#ring-hash
We already use ELB, so if the above is possible with ELB, then we can avoid introducing another layer in between with Envoy.
UPDATE:
Use-case - in a multi-tenant cloud solution, we want all clients from a given customer account to connect to the same upstream host.
Unfortunately, this is not possible with an ALB.
An Application Load Balancer controls all the logic over which host receives the traffic, with features such as sticky sessions and pattern-based routing.
If there is no workaround, you could look at a Classic Load Balancer, which has support for the application setting the sticky session cookie name and value.
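If you want to script that, here is a rough boto3 sketch of creating an application-controlled stickiness policy on a Classic Load Balancer. The load balancer name, policy name, cookie name and listener port are placeholders, not anything taken from your setup.

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# Stick sessions based on a cookie that the application itself sets
# (for example, one your app derives from the customer account).
elb.create_app_cookie_stickiness_policy(
    LoadBalancerName="my-classic-lb",          # placeholder
    PolicyName="customer-account-stickiness",  # placeholder
    CookieName="CUSTOMER_ACCOUNT",             # placeholder
)

# Attach the policy to the listener on port 80.
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName="my-classic-lb",
    LoadBalancerPort=80,
    PolicyNames=["customer-account-stickiness"],
)
```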
As a best practice, your application should ideally be stateless; is it possible to look at rearchitecting your app instead of trying to work around this? Some suggestions I would have are:
Using DynamoDB to store any session-based data, moving away from disk-based sessions (if that's what your application does); see the sketch below.
Any disk-based files that need to persist could be shared between all hosts using either EFS for your Linux-based hosts or FSx on Windows.
Medium/long-term persistent files could be migrated to S3; any assets that rarely change could be stored there, and your application could then use S3 rather than local disk.
It's important to remember that, as stated above, you should keep your application as stateless as you can. Assume that your EC2 instances could fail; preparing for this will make it easier to recover.
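For the DynamoDB suggestion, a minimal boto3 sketch of what central session storage could look like, assuming a hypothetical table named sessions with a partition key session_id (the names and TTL are illustrative only):

```python
import time
import boto3

# Hypothetical table and key names, for illustration only.
dynamodb = boto3.resource("dynamodb")
sessions = dynamodb.Table("sessions")

def save_session(session_id, data, ttl_seconds=3600):
    # Store the session centrally so any instance behind the load balancer
    # can read it; 'expires_at' can drive a DynamoDB TTL policy.
    sessions.put_item(
        Item={
            "session_id": session_id,
            "data": data,
            "expires_at": int(time.time()) + ttl_seconds,
        }
    )

def load_session(session_id):
    response = sessions.get_item(Key={"session_id": session_id})
    return response.get("Item")
```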
I have a simple server now (some Xeon CPU hosted somewhere) running Apache/PHP/MySQL (no Docker, but it's a possibility), and I'm expecting some heavy traffic that I need my server to handle.
Currently the server can handle about 100 users at once; I need it to handle possibly a couple of thousand.
What would be the easiest and fastest solution to move my app to some scalable hosting?
I have no experience with AWS or anything like that.
I was reading about AWS and similar, but I'm mostly confused and not sure what I should choose.
The basic choice is:
Scale vertically by using a bigger computer. However, you will eventually hit a limit and you will have a single point of failure (one server!), or
Scale horizontally by adding more servers and spreading the traffic across the servers. This has the added advantage of handling failure because, if one server fails, the others can continue serving traffic.
A benefit of doing horizontal scaling in the cloud is the ability to add/remove servers based on workload. When things are busy, add more servers. When things are quiet, remove servers. This also allows you to lower costs when things are quiet (which is not possible on-premises when you own your own equipment).
The architecture involves putting multiple servers behind a Load Balancer:
Traffic comes into a Load Balancer
The Load Balancer sends the request to a server (often based upon some measure of how "busy" each server is)
The server processes the request and sends a response back to the Load Balancer
The Load Balancer sends the response to the original requester
AWS has several Load Balancers available, which vary by need. If you are simply sending traffic to a single application that is installed on all servers, a Network Load Balancer should be sufficient. For situations where different parts of the application are on different servers (e.g. mobile interface vs web interface), you could use an Application Load Balancer.
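To make that concrete, here is a rough boto3 sketch of creating an Application Load Balancer, a target group and an HTTP listener. All of the names, subnet IDs, security group ID and VPC ID below are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder IDs; use your own subnets, security group and VPC.
alb = elbv2.create_load_balancer(
    Name="my-web-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroups=["sg-0123456789abcdef0"],
    Type="application",
)["LoadBalancers"][0]

# Target group that your EC2 instances (or Auto Scaling group) register into.
target_group = elbv2.create_target_group(
    Name="my-web-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)["TargetGroups"][0]

# Listener that forwards incoming HTTP traffic to the target group.
elbv2.create_listener(
    LoadBalancerArn=alb["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": target_group["TargetGroupArn"]}],
)
```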
AWS also assists with horizontal scaling by providing the Amazon EC2 Auto Scaling service. This allows you to specify details of the servers to launch (disk image, instance type, network settings) and Auto Scaling can then automatically launch new servers when required and terminate ones that aren't required. (Note that they launch and terminate, not start and stop.)
You can further define scaling policies that tell Auto Scaling when to launch/terminate instances by measuring metrics such as CPU Utilization. This way, the number of servers can approximately match the volume of traffic.
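As an illustration, a boto3 sketch of creating an Auto Scaling group from a launch template and attaching a target-tracking policy that keeps average CPU near 50%. The group name, launch template name, subnets and target group ARN are all placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Assumes a launch template named "my-web-server" already describes the AMI,
# instance type and network settings; all names/IDs here are placeholders.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="my-web-asg",
    LaunchTemplate={"LaunchTemplateName": "my-web-server", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
    TargetGroupARNs=["arn:aws:elasticloadbalancing:..."],  # the load balancer's target group
)

# Target-tracking policy: Auto Scaling adds or removes instances to keep
# average CPU utilization across the group near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-web-asg",
    PolicyName="keep-cpu-near-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)
```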
It should be mentioned that if you have a database, it should be stored separately from the application servers so that it does not get terminated. You could use the Amazon Relational Database Service (RDS) to run a database for you, or you could run one on a separate Amazon EC2 instance.
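For example, a minimal boto3 sketch of creating a small MySQL instance on RDS. The identifier, instance class, storage size and credentials are placeholders; in practice you would also configure the VPC, subnet group and backups, and keep the password in something like AWS Secrets Manager.

```python
import boto3

rds = boto3.client("rds")

# Placeholder identifier and credentials; pick your own and keep the
# password out of source control.
rds.create_db_instance(
    DBInstanceIdentifier="my-app-db",
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,  # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
)
```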
If you want to find out more about any of the above technologies, there are plenty of talks on YouTube or blog posts that can explain and demonstrate their use.
I've just finished setting up my site on a free Amazon Web Services EC2 Ubuntu server.
I'm not very knowledgeable about deployment, and I'm not 100% clear on what Nginx or Gunicorn even are, but I'm following a tutorial to launch a Django project.
While doing things the exact same way, with no errors, I have noticed that sometimes I will go to my site and get 'refused to connect' or 'taking too long to respond.'
One of my previous projects had no issue, one of them never loaded the page, and the last one I did gave me this problem, which was cured by rebooting the server.
I've rebooted the server several times as well as deactivated and reactivated the venv (as a classmate suggested), but it isn't working. I noticed that last night my terminal just kept taking forever to load, and the Amazon Web Services site was being slow as well.
Is this just Amazon's fault? Is there anything I can do?
You are spinning up your server, so you are responsible for managing it.
There are a couple of things you need to check. The service may not be listening on the expected port or IP address, or the inbound and outbound security group rules might not be configured correctly.
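If the security group turns out to be the problem, here is a hedged boto3 sketch of opening inbound HTTP (and SSH from a specific range) on it. The group ID and CIDR ranges are placeholders, and you should only open ports that are actually meant to be reachable.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder security group ID; find yours in the EC2 console or via
# describe_instances(). Note that 0.0.0.0/0 exposes the port to the internet.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},  # ideally restrict SSH to your own IP range
    ],
)
```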
Amazon is not responsible for anything you do with their resources; it is a company that provides resources to simplify your business.
You can read the AWS SLA here:
https://aws.amazon.com/s3/sla/
I'm building a Sails app that is using socket.io, and I see that Sails offers a method for using multiple servers via Redis:
http://sailsjs.org/documentation/concepts/realtime/multi-server-environments
Since I will be placing the app on AWS, preferably with an ELB (Elastic Load Balancer) and an Auto Scaling group with multiple EC2 instances, I was wondering how I can handle this so it doesn't need a separate Redis instance?
Maybe we can use AWS ElastiCache? If so, how would this be done?
Now that AWS has released the new ALB (Application Load Balancer), which supports WebSockets, could this be used to help simplify things?
Thanks in advance
Updates for use cases in the application:
Allow end users to update data dynamically from their own dashboard and display analytics/stats in real time to an administrator.
Application status to change based on specific timings, e.g. at a given start date/time the app allows users to update data.
Regarding your first question, you don't want to run Redis on the same servers that Sails is running on, especially if you are using AutoScaling. The Redis server needs to be a separate server that won't disappear if your environment experiences a "scale-in" event. So Redis is going to have to be on a separate "server" somewhere.
ElastiCache is just separate EC2 instances, running Redis, where AWS handles most of the management for you, to the point that you can't even SSH into the instance. It's similar to how RDS works. ElastiCache will certainly work for your scenario. You might also want to look at the third-party service RedisLabs, which also manages Redis instances on AWS for you.
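To sketch how the ElastiCache side might be provisioned with boto3 (the cluster ID, node type, subnet group and security group below are placeholders), you would create a Redis cluster and then read its endpoint, which is the host/port you would point your Sails Redis socket configuration at:

```python
import boto3

elasticache = boto3.client("elasticache")

# Placeholder names/IDs; the subnet group and security group must allow
# access from the EC2 instances in your Auto Scaling group.
elasticache.create_cache_cluster(
    CacheClusterId="sails-sockets-redis",
    Engine="redis",
    CacheNodeType="cache.t3.micro",
    NumCacheNodes=1,
    CacheSubnetGroupName="my-cache-subnets",
    SecurityGroupIds=["sg-0123456789abcdef0"],
)

# Once the cluster is "available", read its endpoint and use it as the
# Redis host/port in your Sails socket configuration.
cluster = elasticache.describe_cache_clusters(
    CacheClusterId="sails-sockets-redis",
    ShowCacheNodeInfo=True,
)["CacheClusters"][0]
endpoint = cluster["CacheNodes"][0]["Endpoint"]
print(endpoint["Address"], endpoint["Port"])
```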
Regarding your second question, an Application Load Balancer will have no bearing on your Redis usage. It will, however, bring actual support for WebSockets, which it sounds like you are using. So yes, you should be using an ALB instead of an ELB.
I have created an LBCookieStickinessPolicy for my ELB.
But I can't seem to find in any AWS documentation a command that retrieves the instances that are currently 'stuck' (I mean, the actual instance that the ELB is sending load to now).
I can only find the commands that create the policy itself (create-lb-cookie-stickiness-policy & create-app-cookie-stickiness-policy)... Any ideas?
Sticky sessions mean that a single user's web browser gets stuck to a single server instance (unless the server goes down or the user clears cookies). The ELB still distributes load across all the servers attached to it. The ELB would distribute multiple users across multiple server instances.
So there is no way to see what you are looking for because the ELB is always using all instances. Now if you just had a single user on your website, you could look at the server logs of each web server to determine which server that user is "stuck" to. In general you would need to look at the web server logs to see which servers are currently receiving traffic.