It seems that session replication in ColdFusion versions earlier than 9 was considered unsuitable for high-scale apps; the standard approach instead was round-robin load balancing with sticky sessions.
Is this still the case for CF9, or has session replication been improved?
I've used session replication on high scale apps with no problem. We have 2-4 instances of ColdFusion on a single server, then multiple physical servers. On top of that, we used sticky sessions to keep sessions on a single instance using round-robin on the load balancers.
If an instance died, the session rolled over to another instance on the same physical server and the user was redirected to that instance without noticing. If the physical server died, the load balancer would connect them to another physical server, where they would most likely have to log in again.
Now, we had some tricks up our sleeves that let us recreate a user session across physical servers too, but that required SiteMinder to manage the overall authentication situation.
The only issue with session replication prior to ColdFusion 9 was that any objects (CFCs) that were stored in session could not be replicated across instances. CF 9 fixed all that.
This question is for the infrastructure pros; I hope one of you sees it.
I'm currently using a setup with one EC2 instance behind a Classic Load Balancer on AWS, running a WebSocket server based on Express. I always planned to scale my application, so I started it behind a LB.
Now it's time to start up another instance, but I have a major problem: my WebSocket server leaves a program running on the server, even when the user has left the website, and shows the program log to the user when he comes back.
Of course, if the user connects to another instance through the load balancer, he will not be able to access a program running on a different instance. So the only solution is to always connect a user to the same EC2 instance.
I searched a lot, but I didn't find anything related besides sticky sessions based on cookies. The problem with that solution is that it expires after some time, and I want my user to be able to access the program log again no matter how long it has been since his last visit.
So my question is: is there a way to stick a user's connection to the same EC2 instance using an AWS Classic Load Balancer?
Ideally, new users would follow the standard algorithm and be connected to the least-used instance, while returning users would go to the same EC2 instance on every new connection. Is that possible?
Otherwise I won't be able to scale my application, because the main purpose of this server is to connect this running program to a specific user.
I don't think you can customize a CLB for that. But ALB recently introduced application-based cookie stickiness:
Application Load Balancer (ALB) now supports Application-based cookie stickiness. This new feature helps customers ensure that clients connect to the same load balancer target for the duration of their session using application cookies. This enables customers to achieve a consistent client-server experience with greater controls such as the flexibility to set custom cookie names and criteria for client-target stickiness within a target group.
So if you can migrate from CLB to ALB, application-level cookies could be a solution to your issue.
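To make the ALB option concrete, here is a minimal sketch of the target-group attributes that turn on application-based cookie stickiness. The cookie name is an assumption for illustration, and the boto3 call is shown commented out with a placeholder ARN:

```python
# Sketch: target-group attributes enabling application-based cookie
# stickiness on an ALB. The cookie name "MYAPPCOOKIE" is a made-up
# example; use whatever cookie your application already sets.

def app_cookie_stickiness_attributes(cookie_name: str, duration_seconds: int = 86400):
    """Attributes for boto3's elbv2.modify_target_group_attributes()."""
    return [
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "app_cookie"},
        {"Key": "stickiness.app_cookie.cookie_name", "Value": cookie_name},
        {"Key": "stickiness.app_cookie.duration_seconds", "Value": str(duration_seconds)},
    ]

# Applying them would look roughly like this (placeholder ARN):
# import boto3
# boto3.client("elbv2").modify_target_group_attributes(
#     TargetGroupArn="arn:aws:elasticloadbalancing:...",
#     Attributes=app_cookie_stickiness_attributes("MYAPPCOOKIE"),
# )
```

Note that `stickiness.app_cookie.duration_seconds` caps how long a target can stay stuck, so check that the maximum fits your "come back any time" requirement.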
I am having difficulty understanding the nomenclature for this scenario:
Say I have one web server, Server A, in an ALB target group, and users hitting that server.
I would like to take that server offline, and replace it with Server B, without too much interruption to the existing user sessions.
So, I would plan to add Server B to the target group, and hopefully route all NEW sessions to Server B. All existing sessions (and no new sessions) would continue to hit Server A. I could then decide an appropriate time to remove Server A, once old user activity has slowed or ceased on Server A.
It doesn't seem that deregistration is meant for this purpose, and I don't see sticky-session settings that apply only to NEW sessions.
What would be the best approach for this scenario?
All sessions would have to be sticky, not just NEW sessions. In your description, the old sessions are "stuck" to the old server, and the new sessions are "stuck" to the new server. The closest you can get with ALB settings is to enable sticky sessions, and set an appropriate Deregistration Delay setting.
To have more control over this switchover, you will need other AWS services besides the ALB, such as AWS Global Accelerator.
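The sticky-sessions-plus-deregistration-delay combination above boils down to two target-group attributes. A minimal sketch (the values are illustrative, and the boto3 call is shown commented out with a placeholder ARN):

```python
# Sketch: stickiness keeps existing users pinned to Server A, and the
# deregistration delay gives in-flight requests time to finish once
# Server A is deregistered. The durations below are illustrative.

def drain_friendly_attributes(stickiness_seconds: int = 86400, drain_seconds: int = 300):
    """Attributes for boto3's elbv2.modify_target_group_attributes()."""
    return [
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": str(stickiness_seconds)},
        {"Key": "deregistration_delay.timeout_seconds", "Value": str(drain_seconds)},
    ]

# import boto3
# boto3.client("elbv2").modify_target_group_attributes(
#     TargetGroupArn="arn:aws:elasticloadbalancing:...",  # placeholder
#     Attributes=drain_friendly_attributes(),
# )
```

Once Server B is registered and Server A is deregistered, new sessions go to B while A finishes its drain window.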
I couldn't find anything in the documentation, but I'm still writing to make sure I didn't miss it. I want all connections from different clients with the same value for a certain request parameter to end up on the same upstream host. With ELB sticky sessions, the same client connects to the same host, but there are no guarantees across different clients.
This is possible with Envoy proxy, see: https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/load_balancing/load_balancers#ring-hash
We already use ELB so if the above is possible with ELB then we can avoid introducing another layer in between with envoy.
UPDATE:
Use-case - in a multi-tenant cloud solution, we want all clients from a given customer account to connect to the same upstream host.
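For context, the ring-hash idea linked above can be sketched in a few lines: hash a chosen key (here, a customer account ID) onto a ring of virtual nodes, so every client sending the same key lands on the same upstream host. The host addresses are hypothetical:

```python
# Sketch: a minimal consistent-hash ring, the idea behind Envoy's
# ring-hash load balancer. All clients with the same key (e.g. the same
# customer account ID) map to the same upstream host.
import bisect
import hashlib

class HashRing:
    def __init__(self, hosts, replicas=100):
        self.ring = []  # sorted list of (ring point, host)
        for host in hosts:
            for i in range(replicas):  # virtual nodes smooth the distribution
                self.ring.append((self._hash(f"{host}#{i}"), host))
        self.ring.sort()
        self.points = [point for point, _ in self.ring]

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def get_host(self, key: str) -> str:
        """Walk clockwise from the key's hash to the next ring point."""
        idx = bisect.bisect(self.points, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["10.0.0.1", "10.0.0.2", "10.0.0.3"])  # hypothetical hosts
# Every request carrying account_id=42 resolves to the same host:
assert ring.get_host("account-42") == ring.get_host("account-42")
```

The virtual-node trick also means that adding or removing a host only remaps a fraction of the keys, unlike a plain `hash(key) % n`.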
Unfortunately, this is not possible with an ALB.
An Application Load Balancer controls all the logic over which host receives the traffic, with features such as sticky sessions and pattern-based routing.
If there is no workaround, you could look at a Classic Load Balancer, which lets the application set the sticky-session cookie name and value.
As a best practice, your application should ideally be stateless; is it possible to re-architect your app instead of working around this? Some suggestions:
Use DynamoDB to store any session-based data, moving away from disk-based sessions (if that's what your application uses).
Any disk-based files that need to persist could be shared between all hosts, using EFS for Linux-based hosts or FSx for Windows.
Medium/long-term persistent files could be migrated to S3; assets that rarely change could be stored there, so your application reads from S3 rather than local disk.
It's important to remember, as stated above, to keep your application as stateless as you can. Assume that your EC2 instances can fail; preparing for this makes recovery much easier.
I have heard about two approaches to store user session in Amazon AWS. One approach is to use cookies stickiness with Load Balancer and the other is to store user session to ElastiCache. What are the advantages and disadvantages if I want to use the EC2 Load Balancer as well as ElastiCache? Where should I store the user session?
AWS LB stickiness is something else; you cannot store data in it, as it is controlled by the underlying AWS service. The load balancer uses a special cookie to track the instance for each request to each listener. When the load balancer receives a request, it first checks whether this cookie is present. If so, the request is sent to the instance specified in the cookie; if not, the load balancer chooses an instance based on the existing load-balancing algorithm.
You can use the sticky session feature (also known as session affinity), which enables the load balancer to bind a user's session to a specific instance. This ensures that all requests from the user during the session are sent to the same instance.
LB sticky sessions just route subsequent requests from the same user to the same EC2 instance, which helps applications such as those using WebSockets.
lb-sticky-sessions
So if you are looking for a way to manage and store sensitive data that should be available across multiple nodes, you need distributed session management using Redis or Memcached. If your use case is just to stick subsequent requests to the same EC2 instance, LB stickiness is enough.
There are many ways of managing user sessions in web applications, ranging from cookies-only to distributed key/value databases, including server-local caching. Storing session data in the web server responding to a given request may seem convenient, as accessing the data incurs no network latency. The main drawback is that requests have to be routed carefully so that each user interacts with one server and one server only. Another drawback is that once a server goes down, all the session data is gone as well. A distributed, in-memory key/value database can solve both issues by paying the small price of a tiny network latency. Storing all the session data in cookies is good enough most of the time; if you plan to store sensitive data, then using server-side sessions is preferable.
building-fast-session-caching-with-amazon-elasticache-for-redis
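The server-side approach described above can be sketched in a few lines. In production the store would be a `redis.Redis` client pointed at an ElastiCache endpoint; to keep the example self-contained, a tiny in-memory stand-in exposes the same `get`/`setex` methods (the endpoint name in the comment is a placeholder):

```python
# Sketch: server-side sessions in a shared store, so any instance behind
# the load balancer can serve any user.
import json
import uuid

class FakeRedis:
    """Minimal stand-in for redis.Redis (get/setex only; TTL ignored)."""
    def __init__(self):
        self._data = {}
    def setex(self, name, time, value):
        self._data[name] = value
    def get(self, name):
        return self._data.get(name)

SESSION_TTL = 30 * 60  # seconds

def create_session(store, user_data: dict) -> str:
    """Persist session data under a random ID and return the ID."""
    session_id = uuid.uuid4().hex
    store.setex(f"session:{session_id}", SESSION_TTL, json.dumps(user_data))
    return session_id

def load_session(store, session_id: str):
    """Fetch session data by ID; None if missing or expired."""
    raw = store.get(f"session:{session_id}")
    return json.loads(raw) if raw else None

store = FakeRedis()  # e.g. redis.Redis(host="my-cache.example.amazonaws.com")
sid = create_session(store, {"user": "alice"})
assert load_session(store, sid) == {"user": "alice"}
```

The browser only carries the opaque session ID in a cookie; the data itself lives in the shared store, so no LB stickiness is required.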
I'm currently scaling up from 1x EC2 server to:
1xLoad Balancer
2xEC2 servers
I have quite a lot of customers, each running our service on their own domain.
We have a web front end and an admin interface, and we use a lot of caching. When something is changed in the admin part, the server calls e.g. customer.net/cacheutil.ashx?f=delete&obj=objectname to remove the object across domains.
With the new setup, I don't know how to do this with multiple servers while ensuring that the cached objects are deleted on both servers (or more, if we choose to launch more).
I think it is a bit much to require our customers to point e.g. "web1.customer.net", "web2.customer.net", and "customer.net" at three different DNS CNAMEs, since they are not that IT-experienced.
How does anyone else do this?
When scaling horizontally, it is recommended to keep your web servers stateless. That is, do not store data on a specific server. Instead, store the information in a database or cache that can be accessed by all servers. (eg DynamoDB, ElastiCache)
Alternatively, use the Sticky Sessions feature of the Elastic Load Balancing service, which uses a cookie to always redirect a user's connection back to the same server.
See documentation: Configure Sticky Sessions for Your Load Balancer
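The shared-cache suggestion above removes the need to call each server's cacheutil endpoint at all: when every instance reads through to one shared store, a single delete is visible everywhere. A minimal sketch, with a plain dict standing in for ElastiCache and all names made up:

```python
# Sketch: cache lives in one shared store (e.g. ElastiCache) instead of
# in each web server's local memory, so one delete invalidates the
# object for every instance. A dict stands in for the shared store.

shared_cache = {}  # stands in for a Redis/Memcached node all servers can reach

class WebServer:
    """Each instance reads through to the shared cache, not local memory."""
    def __init__(self, name, cache):
        self.name = name
        self.cache = cache
    def get_object(self, key):
        return self.cache.get(key)

def invalidate(cache, key):
    """One delete, observed by every instance on its next read."""
    cache.pop(key, None)

web1 = WebServer("web1", shared_cache)
web2 = WebServer("web2", shared_cache)
shared_cache["objectname"] = "cached page fragment"
invalidate(shared_cache, "objectname")  # the admin interface deletes once
assert web1.get_object("objectname") is None
assert web2.get_object("objectname") is None
```

This also means launching a third or fourth server needs no extra DNS entries or per-server cache calls from the customers' side.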