Replace HTTP sessions in the cloud using a Redis cache - amazon-web-services

We have HTTP sessions in an on-premises application that we want to migrate to the cloud. We have been directed to use a Redis cache implementation in the cloud to replace HTTP sessions.
Do we store user-specific (HTTP session) data in Redis? Is there a more elegant way to handle this scenario?
Thanks in advance.

Assuming you're talking about a legacy ASP.NET app, you can set Redis (Azure Cache for Redis) as your session state provider.
Here's a link about it:
https://learn.microsoft.com/en-us/azure/azure-cache-for-redis/cache-aspnet-session-state-provider

Yes, it is possible, and Redis is a natural fit for this kind of requirement. It is a super-fast in-memory key/value store, which maps directly onto session semantics (get/set). Most modern frameworks ship with built-in Redis session support, and even for a legacy app the integration is usually easy (there are likely libraries that do it for you). At its simplest, you can build a session store out of commands such as SET, GET, EXPIRE, EXISTS, and DEL.
If the session value is just a string, use the string type; if you have JSON-like structured values, a hash is a better fit. Both support EXPIRE, so sessions are not stored forever and memory stays under control.
I am not familiar with the Azure side, but AWS has the ElastiCache service, which supports Redis. Another option is installing Redis yourself on an EC2 instance, or on-premises.
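To make the first paragraph concrete, here is a minimal sketch of a hand-rolled session store on top of redis-py. The key prefix, TTL, and hash fields are illustrative assumptions, not any framework's convention:

```python
# Minimal sketch of a Redis-backed session store using redis-py.
# Key prefix, TTL and field names are illustrative assumptions.
import json
import uuid
from typing import Optional

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

SESSION_TTL_SECONDS = 1800  # assumed 30-minute idle timeout


def create_session(user_id: str, data: dict) -> str:
    """Create a session and return its id."""
    session_id = uuid.uuid4().hex
    key = f"session:{session_id}"
    # Store the session as a hash so individual fields can be read or updated.
    r.hset(key, mapping={"user_id": user_id, "data": json.dumps(data)})
    r.expire(key, SESSION_TTL_SECONDS)  # let Redis evict idle sessions
    return session_id


def get_session(session_id: str) -> Optional[dict]:
    key = f"session:{session_id}"
    if not r.exists(key):
        return None
    r.expire(key, SESSION_TTL_SECONDS)  # sliding expiration on each access
    return r.hgetall(key)


def destroy_session(session_id: str) -> None:
    r.delete(f"session:{session_id}")
```

In a managed setup you would simply point the `host` at your ElastiCache (or Azure Cache for Redis) endpoint instead of localhost.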

Related

Do I need to use a caching technology like memcached or redis?

I am new to web development in general.
I am developing a social media website (very much like Twitter) using Django REST Framework as the backend and React as the frontend. I am going to deploy the app to Heroku.
Now, I just heard about these things called Memcached and Redis. What is the use case here? Should I use them, or are they just for high-traffic websites?
A cache here generally means an in-memory cache, which stores data primarily in memory (like Memcached and Redis) and provides a faster path for data access under heavy traffic.
Cache-database consistency is a perennial issue, because you now have multiple data sources. There are good patterns that reduce the problem, but the two are never perfectly in sync.
So look at your read/write traffic: if the database can handle it with no performance issues, you don't need a cache yet (most production databases also have internal caching, like MySQL or DynamoDB). If the database cannot handle your traffic, that's when a cache is worth considering.
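If you do reach that point, the usual pattern is cache-aside (lazy loading). Here is a minimal sketch using Django's cache framework, which can be backed by Redis or Memcached; the Post model and key format are assumptions for illustration:

```python
# Cache-aside sketch with Django's cache framework (Redis or Memcached backend).
# The Post model and key names are assumed for illustration.
from django.core.cache import cache

from myapp.models import Post  # hypothetical model


def get_post(post_id: int) -> Post:
    key = f"post:{post_id}"
    post = cache.get(key)                  # 1. try the cache first
    if post is None:                       # 2. cache miss: hit the database
        post = Post.objects.get(pk=post_id)
        cache.set(key, post, timeout=300)  # 3. populate the cache for 5 minutes
    return post


def update_post(post: Post) -> None:
    post.save()                            # write to the database...
    cache.delete(f"post:{post.pk}")        # ...and invalidate the stale cache entry
```

The nice part is that your view code only changes at the read path; if the cache is cold or unavailable, you fall back to the database.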

How to configure WSO2 identity server to avoid single point of failure?

My company wants to set up a WSO2 Identity Server cluster on 3 machines such that if one machine fails, the cluster still works.
All the WSO2 documentation shows clustering with a shared user store and database, but does not mention how to avoid a single point of failure.
As per my understanding, the only way to do this is to set up an external LDAP cluster as the user store and an external database cluster, but that would be complex and hard to manage.
Can we configure WSO2's embedded LDAP to replicate and sync with the other nodes' embedded LDAP?
Is there any other way to avoid a single point of failure in WSO2?
No, you can't use the embedded LDAP.
You should avoid using the embedded LDAP in production at all costs. It will eventually get corrupted under concurrent requests and as the data grows, and you will not be able to recover it. It is there for testing purposes only.
If you want to avoid single points of failure in the DB or LDAP layer, you should cluster the DB and LDAP as instructed by their respective vendors, and point the WSO2 servers at the common load-balancer URL.

Design service on GCP

In Google Cloud Platform, I want to write an application that takes an HTTP request, calls a chain of APIs, and then renders one of many templates based on the responses, populating it with the data returned by those APIs.
What is the best way to design this on GCP, considering the points below?
1. The application will receive huge traffic.
2. Some APIs will return dynamic URLs that the template needs.
I was thinking of writing it in Java and running it on Kubernetes, which would handle the traffic. But what should be the choice of database?
The data is mostly key/value pairs and should be highly available; if it goes down, some backup should be available.
Yes, Kubernetes is one option. Something else you may want to consider for handling huge app traffic is Google App Engine (GAE); since you mentioned Java development, you can use the GAE standard environment, which is easy to build on and deploy to, and which runs reliably even under heavy load (fully managed).
For the database, you may want to consider Cloud Datastore, since based on your description it is a good fit for the application's needs (a NoSQL database that automatically handles sharding and replication). You can also use the storage-options decision diagram in the GCP documentation to pick the best storage option.
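As a rough idea of how the key/value access could look, here is a small sketch using the google-cloud-datastore Python client (shown in Python for brevity; the Java client follows the same entity/key model). The "Template" kind and its properties are assumptions for illustration:

```python
# Sketch of key/value-style access with Cloud Datastore via google-cloud-datastore.
# The "Template" kind and its properties are assumed for illustration.
from google.cloud import datastore

client = datastore.Client()


def save_template(template_id: str, url: str, body: str) -> None:
    key = client.key("Template", template_id)
    entity = datastore.Entity(key=key)
    entity.update({"url": url, "body": body})
    client.put(entity)  # Datastore handles replication and sharding for you


def load_template(template_id: str):
    # Returns an Entity (dict-like) or None if the key does not exist.
    return client.get(client.key("Template", template_id))
```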

Redis -- how does it improve performance?

I'm relatively new to the world of web development and have only recently learned about memory hierarchies in computer systems. I recently came across Redis and am itching to try it out in a small web app. But before I do, I was wondering: how is Redis going to improve performance?
From what I've read so far, Redis is an "in-memory" data store. Does that mean that whenever a user requests data from the server, instead of fetching it from the database (given that the Redis data store is already populated with the needed data), the request can be fulfilled by accessing the data directly from the server's memory? To be specific, if I have a web app whose back-end server is hosted on AWS and whose database is hosted on MLAB, then whenever a user requests data, instead of the server forwarding the query to MLAB, can it now fetch the data directly without going to MLAB? Also, by in-memory, does that mean the data is stored in the RAM of my AWS server?
Finally, how is this different from a cache?
Thank you so much!!
Well, Redis is used as a cache; the difference from most traditional caches is that you get other useful structures such as hashes, sets, lists, TTLs on keys, HyperLogLogs and so on, not just key:value pairs.
What you describe about Redis is right, but take into account that if you want to serve data from Redis instead of your MLAB database, you have to design some process to keep Redis updated with every change that happens in the database. Every query from your application can then read from Redis, but you also need a way to propagate database changes: if your web app is the only writer to the DB, update both the DB and Redis on each write; otherwise, you need a job or script that detects changes in the DB and updates Redis accordingly.
AWS also provides Redis as a service through ElastiCache (https://aws.amazon.com/elasticache/?nc1=h_ls), so the AWS instance where your application runs doesn't hold the cached data in its own RAM; it talks to the ElastiCache endpoint, which can live on another machine.
Finally, although Redis keeps its data in memory, it can write a dump file (RDB snapshots) so data survives a crash, and it also offers an append-only persistence mode (AOF).
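Going back to the synchronization point above, here is a rough sketch of the write path, assuming your web app is the only writer to the database; db.save_user, the ElastiCache hostname, and the key format are illustrative assumptions:

```python
# Rough sketch of keeping Redis updated on every DB write, assuming the web app
# is the only writer. db.save_user, the hostname and the key format are assumed.
import json

import redis

r = redis.Redis(host="my-elasticache-endpoint.example", port=6379)


def update_user_profile(db, user: dict) -> None:
    db.save_user(user)                     # 1. write to the source of truth (e.g. MLAB)
    key = f"user:{user['id']}"
    # 2. refresh (or simply delete) the cached copy so reads don't go stale
    r.set(key, json.dumps(user), ex=3600)  # keep the cached copy for an hour
```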

Using memcached or Redis on aws-elasticache

I am working on an application on AWS and I am using AWS ElastiCache for caching.
I am confused between using Memcached or Redis.
I read about the Redis 3.0.2 update and how it is now equivalent to Memcached.
https://groups.google.com/forum/#!msg/redis-db/dO0bFyD_THQ/Uoo2GjIx6qgJ
But I read on the Amazon AWS FAQ page that Amazon ElastiCache does not support 3.0.2. They currently support Redis 2.6.13, 2.8.6 and 2.8.19.
http://aws.amazon.com/elasticache/faqs/ (Date June 10, 2015)
I have read the AWS whitepapers on ElastiCache, but they do not specify which version of Redis their suggestions apply to.
How should I decide between Memcached and Redis for an application I may create? What points should one keep in mind before choosing Redis or Memcached? Should I assume that Amazon will update the Redis version soon and go ahead with Redis?
P.S. I am a novice developer.
It actually depends upon the use case.
Select Memcached if you have these requirements:
You want the simplest model possible.
You need to run large nodes with multiple cores or threads.
You need the ability to scale out/in, adding and removing nodes as demand on your system increases and decreases.
You want to partition your data across multiple shards.
You need to cache objects, such as a database.
Select Redis if you have these requirements:
You need complex data types, such as strings, hashes, lists, and sets.
You need to sort or rank in-memory data-sets (see the sorted-set sketch after this list).
You want persistence of your key store.
You want to replicate your data from the primary to one or more read replicas for read intensive applications.
You need automatic failover if your primary node fails.
You want publish and subscribe (pub/sub) capabilities, to inform clients about events on the server.
You want backup and restore capabilities.
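For instance, the sort/rank point maps directly onto Redis sorted sets; here is a tiny redis-py sketch (the leaderboard key and scores are made up):

```python
# Small sketch of ranking an in-memory data set with a Redis sorted set.
# The "leaderboard" key and the scores are made-up examples.
import redis

r = redis.Redis(decode_responses=True)

# Add or update player scores; ZADD keeps the set ordered by score.
r.zadd("leaderboard", {"alice": 120, "bob": 95, "carol": 150})

# Top 3 players, highest score first.
top = r.zrevrange("leaderboard", 0, 2, withscores=True)
print(top)  # e.g. [('carol', 150.0), ('alice', 120.0), ('bob', 95.0)]
```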
Here is an interesting whitepaper from AWS: https://d0.awsstatic.com/whitepapers/performance-at-scale-with-amazon-elasticache.pdf
The main discussion comparing the two is the Stack Overflow question "Memcached vs. Redis?".
Both AWS and Azure will surely upgrade to newer Redis versions in the future, but when and how they roll them out depends only on them. Meanwhile, you could install Redis 3.0.2 yourself, but consider whether you really need Redis 3, whose main addition is cluster support. If you don't need the cluster, you can go with 2.8 on ElastiCache.