Is there a way to persist an ELB stickiness session even if the instance it's connected to fails? - amazon-web-services

Just curious if this is possible or how you would accomplish this.
Regardless of whether I use duration-based or application-based stickiness, when the instance a user is connected to fails, their session gets reset because they have to connect to a new server.
Is there a way to not have this happen? To have that session persist even if the instance they are connected to dies? I'm also using SSL with a cert, if that changes things.

The only way to accomplish that is to persist your session state in some storage service: it could be a database table, S3, a caching service, a NoSQL table, etc.
Here are some approaches:
Session state inside your database
Saving session state inside the database is common in lightweight web frameworks like Django. That way you can add as many front servers as you like without having to worry about session replication and other difficult stuff. You don’t tie yourself to a certain web server and you get persistence and all other features databases provide for free. As far as I can tell, this works rather nicely for small to medium size websites.
The problem is the usual: The database server may become your bottleneck. In that case your best bet may be to take a suitcase full of money to Oracle or IBM and buy yourself a database cluster.
Reference: Saving Session Data in Web Applications
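In Django, for example, database-backed sessions are the stock configuration; a minimal sketch of the relevant settings:

```python
# settings.py -- store sessions in the database instead of on the web server.
# 'django.contrib.sessions' must be in INSTALLED_APPS; running `manage.py migrate`
# creates the django_session table that holds the data.
INSTALLED_APPS = [
    # ...
    "django.contrib.sessions",
]

MIDDLEWARE = [
    # ...
    "django.contrib.sessions.middleware.SessionMiddleware",
]

# Database-backed sessions: any front server that can reach the DB can
# resume any user's session, so losing an instance loses nothing.
SESSION_ENGINE = "django.contrib.sessions.backends.db"
```

With this in place you can add or remove front servers freely; the session lives with the data, not the machine.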
Session state inside a caching service
Amazon ElastiCache offers fully managed Redis and Memcached: seamlessly deploy, operate, and scale popular open-source-compatible in-memory data stores. Build data-intensive apps or improve the performance of your existing apps by retrieving data from high-throughput, low-latency in-memory data stores.
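A minimal sketch of session storage on Redis with redis-py (the endpoint is a placeholder for your ElastiCache node):

```python
import json
import redis

# Placeholder endpoint -- substitute your ElastiCache node's address.
r = redis.Redis(host="my-sessions.abc123.0001.use1.cache.amazonaws.com", port=6379)

def save_session(session_id, data, ttl_seconds=1800):
    # SETEX stores the session and lets Redis expire it automatically,
    # which doubles as your session timeout.
    r.setex("session:" + session_id, ttl_seconds, json.dumps(data))

def load_session(session_id):
    raw = r.get("session:" + session_id)
    return json.loads(raw) if raw else None
```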
DynamoDB
Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model, reliable performance, and automatic scaling of throughput capacity make it a good fit for session storage.
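A minimal sketch of the same idea on DynamoDB with boto3 (the table name is hypothetical; it assumes a table with partition key session_id and the TTL feature enabled on expires_at):

```python
import time
import boto3

# Hypothetical table "sessions" with partition key "session_id"; with TTL
# enabled on "expires_at", DynamoDB deletes stale sessions for you.
table = boto3.resource("dynamodb").Table("sessions")

def save_session(session_id, data, ttl_seconds=1800):
    table.put_item(Item={
        "session_id": session_id,
        "data": data,
        "expires_at": int(time.time()) + ttl_seconds,
    })

def load_session(session_id):
    item = table.get_item(Key={"session_id": session_id}).get("Item")
    return item["data"] if item else None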
Regardless of the approach you use, middleware must be deployed along with your app to manage the stored session state.
Middleware: could be either a third-party solution or your own.
Resources
AWS Session Management
Amazon ElastiCache
Amazon DynamoDB
Middleware for session management (Google results)

Related

Is there an easy way to understand the difference between AWS Elasticache and RDS?

I'm learning AWS and I'm kind of confused about ElastiCache & RDS. I read the article in this link, but I'm still confused; can someone explain a little bit? Many thanks.
This is a general question about storage technologies: "how does a cache differ from a database?"
A cache is not (typically) a persistent data store. Its data is ephemeral. The purpose of the cache is to increase the perceived performance of an actual database, sitting behind the cache. The database stores the actual data persistently, and is the authoritative source of data. The cache sits in front of the database and tries to improve the performance of your application by detecting queries that it already knows the answer to and serving up cached results directly to your application, to save having to go to the database.
Of course, the cache will get out of date over time and so you need a process for expiring data from the cache when it becomes inaccurate, thus causing the next query for that piece of data to go to the actual database, and that new data can be cached until it expires.
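That read-miss/expire cycle is the classic cache-aside pattern. A minimal sketch, assuming a local Redis as the cache and a stubbed-out database query:

```python
import json
import redis

cache = redis.Redis(host="localhost", port=6379)

def fetch_from_database(user_id):
    # Stand-in for a real RDS/SQL query -- the authoritative source of data.
    return {"id": user_id, "name": "example"}

def get_user(user_id, ttl_seconds=300):
    key = "user:%d" % user_id
    cached = cache.get(key)
    if cached is not None:                 # cache hit: skip the database entirely
        return json.loads(cached)
    user = fetch_from_database(user_id)    # cache miss: go to the database...
    cache.setex(key, ttl_seconds, json.dumps(user))  # ...and cache with an expiry
    return user
```

The TTL on setex is the expiry process described above: after ttl_seconds the key vanishes and the next read repopulates it from the database.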
RDS stands for Relational Database Service. If you need managed instances of relational databases like Oracle, Microsoft SQL Server, MySQL, MariaDB, or PostgreSQL, then you need to use RDS.
ElastiCache, however, is a caching DB as a service. It supports two popular engines: Memcached and Redis.
DynamoDB is a NoSQL DB as a service.
The use cases for RDS and ElastiCache are very different.
Use RDS when:
you need to persist data
you need ACID compliance
you require an OLTP DB engine
Use an in-memory distributed cache such as ElastiCache when you need to:
reduce latency
offload DB pressure
handle transient data

Architecture Questions to Autoscale Moodle on Google Cloud Platform

We're setting up Moodle as our LMS and we're designing it to autoscale.
Here are the current stack specifications:
-Moodle Application (App + Data) baked into an image and launched into a Managed Instance Group
-Cloud SQL for database (MySQL 5.7 connected through Cloud SQL Proxy)
-Cloud Load Balancer - HTTPS load balancing with the managed instance group as backend + session affinity turned on
Questions:
Do I still need Redis/Memcached for my session? Or is the load balancer session affinity enough?
I'm thinking of using Cloud Filestore for the data folder. Is this recommended vs. another Compute Engine instance?
I'm more concerned about the session cache and content cache for future user increases. What would you recommend adding into the mix? Any advice on the CI/CD would also be helpful.
So, I can't properly answer these questions without more information about your use case. Anyway, here's my best :)
How bad do you consider it to be to force some users to re-login when a machine is taken down from the managed instance group? Related to this, how spiky do you foresee your traffic will be? How many users can a machine serve before the autoscaler kicks in and machines are added to or removed from the pool (i.e., how dynamic do you think your app will need to be)? By answering these questions you should get an idea. Also, why not use Datastore/Firestore for user sessions? The few tens of milliseconds of latency shouldn't compromise the snappy feeling of your app.
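A hedged sketch of that last suggestion with the google-cloud-firestore client (the collection name is made up):

```python
from google.cloud import firestore

db = firestore.Client()  # uses the project's default credentials

def save_session(session_id, data):
    # One document per session; any instance in the managed group can read it,
    # so a user's session survives their VM being scaled away.
    db.collection("sessions").document(session_id).set(data)

def load_session(session_id):
    snap = db.collection("sessions").document(session_id).get()
    return snap.to_dict() if snap.exists else None
```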
Cloud Filestore uses NFS and you might hit some of the NFS idiosyncrasies. Will you be OK dealing with that? Also, what is an acceptable latency? How big are the blobs of data you will be saving? If they are small enough, you are very latency-sensitive, and you want atomicity in the read/write operations, you can go for Cloud Bigtable. If latency is not that critical, Google Cloud Storage can do it for you, but you also lose atomicity.
Google Cloud CDN seems to be what you want, granted that you can set up the headers correctly. It is a managed service, so it has all the goodies without you lifting a finger, and it's cheap compared to serving stuff from your application/Google Cloud Storage/...
Cloud Build seems the easy option, unless you want to support more advanced stuff that is not yet supported.
Please provide more details so I can edit and focus my answer.
There is a study on the autoscaling: using Redis Memorystore showed large network bandwidth from the cache server compared to a Compute Engine instance with Redis installed.
moodle autoscaling on google cloud platform
Regarding the Moodle data folder, the study shows that a Compute Engine instance with NFS should have enough performance compared to Filestore, which is much more expensive, as the speed also depends on the disk size.
I used this topology for the implementation:
Autoscale Topology Moodle on GCP

Issue with Governance/Configuration Registry and User Store databases being SPOF's

If the Governance/Configuration Registry and User Store databases that hook into my WSO2 Identity Server go down, will that bring down my entire cluster, since each IS node wouldn't be able to share/replicate data with the others now that the Registry and User Store are unavailable? How would I load-balance the Registry and User Store so that the unavailability of these databases doesn't impair the operations of my entire cluster?
Relevant links:
https://docs.wso2.com/display/CLUSTER44x/Setting+up+the+Database
https://docs.wso2.com/display/CLUSTER44x/Governance+Registry+Deployment+Patterns#GovernanceRegistryDeploymentPatterns-MinimumdistributedHAsetup(withSSO)
High availability at the data persistence layer is usually addressed by the database you selected. One example is MySQL master/slave replication.
There is also JDBC-level, client-side load balancing, which is supported by almost all major JDBC drivers (MySQL Connector/J, for example, accepts a failover host list such as jdbc:mysql://primary-host,replica-host/db).
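The failover pattern itself is language-agnostic; a minimal client-side sketch in Python with mysql-connector-python (the hosts are placeholders, primary first):

```python
import mysql.connector
from mysql.connector import Error

# Placeholder hosts -- primary first, then replicas, in preference order.
DB_HOSTS = ["registry-primary.example.com", "registry-replica.example.com"]

def connect(user, password, database):
    last_error = None
    for host in DB_HOSTS:
        try:
            # First host that answers wins; a dead primary just moves us on.
            return mysql.connector.connect(
                host=host, user=user, password=password, database=database
            )
        except Error as exc:
            last_error = exc
    raise last_error  # every host is down: surface the real failure
```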

State management in amazon web services

How is state managed between sessions? I know that in Azure, client-specific states are stored in SQL Azure. I'm wondering if this is done similarly in AWS?
Do the various instances of your application all access a DB somewhere where the state is stored? Is state management much different depending on which technologies you are using?
At a 'homework' level, Amazon Web Services is loosely comprised of two different sets of things:
infrastructure services (EC2, EBS), which you manage yourself
higher-level services (S3, DynamoDB, ELB), which Amazon manages for you
When you upload a file to S3, it is stored across a number of machines in a number of different data centers, and Amazon is responsible for finding and returning the file when you request it (as well as making sure it doesn't get erased by a machine failure.)
With something built on top of one of the infrastructure services, such as an application running on EC2, you are on your own as to how you store and synchronize state:
One server, state in memory (bad)
Load balancing with no state handling (very bad!)
Load balancing with sticky sessions (sensible, but not enough by itself; if that server falls out of the pool, the other servers have no idea of who you are)
Load balancing with servers with a common state server
How do you store state? Traditionally a database (possibly Amazon RDS) with a memory cache (such as Elasticache - Amazon's managed memcached-compatible cache). Amazon's new DynamoDB service is a good fit for this use, as a fast, redundant, key-value store.
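To make the "common state server" option concrete, here is a minimal sketch: a tiny Flask app that keeps no state locally, so any instance behind the load balancer can serve any request (the endpoint and cookie name are illustrative):

```python
import json
import uuid

import redis
from flask import Flask, request, make_response

app = Flask(__name__)
# Shared store (e.g. an ElastiCache endpoint); every web server points here,
# so losing a web server loses no session state.
store = redis.Redis(host="my-cache.example.com", port=6379)

@app.route("/visit")
def visit():
    sid = request.cookies.get("sid") or uuid.uuid4().hex
    raw = store.get("session:" + sid)
    session = json.loads(raw) if raw else {"visits": 0}
    session["visits"] += 1
    store.setex("session:" + sid, 1800, json.dumps(session))  # 30-minute timeout
    resp = make_response("visit number %d" % session["visits"])
    resp.set_cookie("sid", sid)
    return resp
```

With this shape, sticky sessions become an optimization (better cache locality) rather than a requirement.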

need some guidance on usage of Amazon AWS

Every once in a while I read/hear about AWS, and now I've tried reading the docs.
But such docs seem to be written for people who already know which AWS service they need and are only searching for how it can be used.
So, to understand AWS better, I'll sketch a hypothetical web application with a few questions.
The app's purpose is to modify content like videos or images. So a user has some kind of web interface where he can upload his files and do some settings, and a server grabs the file and modifies it (e.g. re-encoding). The service also extracts the audio track of a video and tries to index the spoken words so the customer can search within his videos. (Well, it's just hypothetical.)
So my questions:
Given my own domain 'oneofmydomains.com', is it possible to host the complete web interface on AWS? I thought about using GWT to create the interface and just delivering the JS/images via AWS, but which one, Simple Storage (S3)? What about some kind of index.html; is an EC2 instance needed to host a web server which has to run 24/7, causing costs?
Now the user has the interface with a login form. Is it possible to manage logins with an AWS service? Here I also think about an EC2 instance hosting a database, but it would also cause costs, and I'm not sure if there is a better way.
The user has logged in and uploads a file. Which storage solution could be used to save the customer's original and modified content?
Now the user wants to browse the status of his uploads. This means I need some kind of ACL, so that the customer only sees his own files. Do I need to use a database (e.g. on EC2) for this, or does Amazon provide some kind of ACL, so the GWT web interface will be secure without any EC2?
The customer's files are re-encoded and the audio track is indexed. So he wants to search for a video. Which service could be used to create and maintain the index for each customer?
Hope someone can give a few answers so I understand better how one could use AWS.
thx!
Amazon AWS offers a whole ecosystem of services which should cover all aspects of a given architecture, from hosting to data storage, or messaging, etc. Whether they're the best fit for purpose will have to be decided on a case by case basis. Seeing as your question is quite broad I'll just cover some of the basics of what AWS has to offer and what the different types of services are for:
EC2 (Elastic Compute Cloud)
Amazon's cloud solution, which is basically the same as older virtual machine technology, but the 'cloud' offers additional bells and whistles such as automated provisioning, scaling, billing, etc.
you pay for what you use (by the hour); the basic instance (single CPU, 1.7GB RAM) would probably cost you just under $3 a day if you run it 24/7 (on a Windows instance, that is)
there are a number of different OSes to choose from, including Linux and Windows; Linux instances are cheaper to run without the license cost associated with Windows
once you've set up the server the way you want, including any server updates/patches, you can create your own AMI (Amazon Machine Image) which you can then use to bring up another identical instance
however, if all your HTML is baked into the image it'll make updates difficult, so the normal approach is to include a service (a Windows service, for instance) which will pull the latest deployment package from a storage service (see S3 later) and update the site at start-up and at intervals (see the sketch after this list)
there's the Elastic Load Balancer (which has its own cost but only one is needed in most cases) which you can put in front of all your web servers
there's also the CloudWatch (again, extra cost) service which you can enable on a per-instance basis to help you monitor the CPU, network in/out, etc. of your running instance
you can set up AutoScalers which can automatically bring up or terminate instances based on some metric, e.g. terminate 1 instance at a time if average CPU utilization is less than 50% for 5 mins, bring up 1 instance at a time if average CPU goes beyond 70% for 5 mins
you can use the instances as web servers, use them to run a DB, or a Memcache cluster, etc. choice is yours
typically, I wouldn't recommend having Amazon instances talk to a DB outside of Amazon because the round trip is much longer; the usual approach is to use SimpleDB (see below) as the database
the Amazon SDK contains enough classes to help you write your own custom monitoring/scaling service if you ever need to, but the AWS console allows you to do most of your configuration anyway
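As an illustration of the "pull the latest deployment package at start-up" pattern mentioned above, a minimal boto3 sketch (the bucket and key are hypothetical):

```python
import zipfile
import boto3

# Hypothetical bucket/key -- a deployment package your build process uploads.
BUCKET, KEY = "my-deploys", "site/latest.zip"

def pull_latest_site(dest_dir="/var/www/site"):
    s3 = boto3.client("s3")
    s3.download_file(BUCKET, KEY, "/tmp/site.zip")   # fetch the package...
    with zipfile.ZipFile("/tmp/site.zip") as zf:
        zf.extractall(dest_dir)                      # ...and unpack over the doc root
```

Run this from the instance's start-up script and on a timer, and the baked AMI never goes stale.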
SimpleDB
Amazon's non-relational, key-value data store, compared to a traditional database you tend to pay a penalty on per query performance but get high scalability without having to do any extra work.
you pay for usage, i.e. how much work it takes to execute your query
extremely scalable by default: Amazon scales up SimpleDB instances based on traffic without you having to do anything (and without giving you any control, for that matter)
data are partitioned into 'domains' (equivalent to a table in a normal SQL DB)
data are non-relational; if you need a relational model then check out Amazon RDS (I don't have any experience with it, so I'm not the best person to comment on it)
you can still execute SQL-like queries against the database, usually through some plugin or tool; Amazon doesn't provide a front end for this at the moment
be aware of 'eventual consistency': data are duplicated on multiple instances after Amazon scales up your database, and synchronization is not guaranteed when you do an update, so it's possible (though highly unlikely) to update some data, read it back straight away, and get the old data back
there are 'Consistent Read' and 'Conditional Update' mechanisms available to guard against the eventual consistency problem; if you're developing in .NET, I suggest using the SimpleSavant client to talk to SimpleDB
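SimpleDB tooling is largely legacy now, but the same two guards exist under the same names in DynamoDB; a hedged sketch of what they look like there with boto3 (the table and attribute names are made up):

```python
import boto3

table = boto3.resource("dynamodb").Table("items")  # hypothetical table

# Consistent Read: pay a little extra latency/throughput for the latest value.
item = table.get_item(Key={"id": "42"}, ConsistentRead=True).get("Item")

# Conditional Update: only bump the version if nobody changed it under us;
# raises ConditionalCheckFailedException otherwise.
table.update_item(
    Key={"id": "42"},
    UpdateExpression="SET version = :new",
    ConditionExpression="version = :old",
    ExpressionAttributeValues={":new": 2, ":old": 1},
)
```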
S3 (Simple Storage Service)
Amazon's storage service, again, extremely scalable, and safe too - when you save a file on S3 it's replicated across multiple nodes so you get some DR ability straight away.
you pay for storage and data transfer
files are stored against a key
you create 'buckets' to hold your files, and each bucket has a unique URL (unique across all of Amazon S3, and therefore across accounts)
CloudBerry S3 Explorer is the best UI client I've used in Windows
using the Amazon SDK you can write your own repository layer which utilizes S3 (see the sketch below)
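For instance, a minimal sketch of such a repository layer with boto3 (the bucket name is a placeholder):

```python
import boto3

class S3Repository:
    """Tiny repository layer: files stored against a key, as described above."""

    def __init__(self, bucket="my-app-files"):  # placeholder bucket name
        self.s3 = boto3.client("s3")
        self.bucket = bucket

    def save(self, key, data):
        # Body takes bytes; S3 replicates the object across nodes for you.
        self.s3.put_object(Bucket=self.bucket, Key=key, Body=data)

    def load(self, key):
        return self.s3.get_object(Bucket=self.bucket, Key=key)["Body"].read()
```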
Sorry if this is a bit long-winded, but those are the 3 most popular web services that Amazon provides and they should cover all the requirements you've mentioned. We've been using Amazon AWS for some time now; there are still some kinks and bugs, but it's generally moving forward and pretty stable.
One downside to using something like AWS is being vendor locked-in. Whilst you could run your services outside of Amazon in your own datacenter, or move files out of S3 (at a cost, though), getting out of SimpleDB will likely represent the bulk of the work during a migration.