I'm currently developing a messaging application as a learning project, and I'm using Redis as a cache, together with WebSockets, to push real-time messages.
Then this question popped into my mind:
Is it possible to use Redis alone to run a whole service (a messaging application, for example)?
NOTE: This implies removing any other form of database (we're only keeping strings).
I know you can set up Redis to be persistent, but is that enough? Is it robust enough? Would it be a safe move, or totally insane?
What are your thoughts? I'd really like to know, and if you think it's possible, I'll give it a shot.
Thanks!
A few companies use Redis as their only or primary database, so it is definitely not insane.
You can develop and run a full service using Redis as a backend, as long as you understand and accept the tradeoffs it implies.
By this I mean:
that you can use a Redis server as a high-performance database as long as your whole dataset can reside in memory. This may mean reducing the size of your data, or choosing not to store some of it, when it can be computed by your app on read access or re-imported from another source;
that if you can't store all of your data in the memory of a single server, you can use a Redis cluster, but this limits the available Redis features (see the implemented subset described in the cluster specification);
that you have to think about the potential data loss when a server crashes, and decide whether it is acceptable. It may be OK to lose some data if the process that produced it is robust and will recreate it when the database restarts (for example, when the data stored in Redis comes from an import process that can resume from the last imported item). You can also run several Redis instances with different persistence configurations, as sketched below: one that writes to disk each time a key is modified, avoiding data loss but at much lower performance; and another for non-critical data, which is written to disk every couple of seconds.
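A minimal sketch of that two-instance setup, assuming redis-py and two local instances on hypothetical ports 6379 and 6380; the same settings can of course live in each instance's redis.conf instead:

    import redis

    # critical data: append-only file, fsync on every write (safest, slowest)
    critical = redis.Redis(port=6379)
    critical.config_set('appendonly', 'yes')
    critical.config_set('appendfsync', 'always')

    # non-critical data: fsync roughly once per second is enough
    volatile = redis.Redis(port=6380)
    volatile.config_set('appendonly', 'yes')
    volatile.config_set('appendfsync', 'everysec')

    critical.set('message:42', 'hello')       # survives a crash
    volatile.set('presence:alice', 'online')  # may lose the last second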
Redis can be used to store structured data, not only strings, using hashes. Each time you would create an index in a relational model, you have to create a corresponding data structure in Redis. For example, if you want to store Person objects, you create a HASH for each of them to hold its properties, including a unique ID. If you want to be able to get people by city, you create a SET for each city, and you insert the ID of each newly created Person into the corresponding SET; you can then retrieve the list of persons in a given city. It's just an example; you have to define the model and data structures according to your application.
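A minimal sketch of that model, assuming redis-py and hypothetical key names:

    import redis

    r = redis.Redis(decode_responses=True)

    def create_person(person_id, name, city):
        # one HASH per person, acting as the record
        r.hset(f'person:{person_id}', mapping={'id': person_id, 'name': name, 'city': city})
        # one SET per city, acting as the secondary index
        r.sadd(f'city:{city}:persons', person_id)

    def persons_in_city(city):
        ids = r.smembers(f'city:{city}:persons')
        return [r.hgetall(f'person:{i}') for i in ids]

    create_person('1', 'Alice', 'Paris')
    create_person('2', 'Bob', 'Paris')
    print(persons_in_city('Paris'))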
Related
I have a question about when it makes sense to use a Redis cluster versus standalone Redis.
Suppose one has a real-time gaming application that allows multiple instances of the game, and we wish to implement a real-time leaderboard for each instance. (Games are created by communities of users.)
Suppose at any time we have, say, 100 simultaneous matches running.
Based on the use cases outlined here:
https://d0.awsstatic.com/whitepapers/performance-at-scale-with-amazon-elasticache.pdf
https://redislabs.com/solutions/use-cases/leaderboards/
https://aws.amazon.com/blogs/database/building-a-real-time-gaming-leaderboard-with-amazon-elasticache-for-redis/
We can implement each leaderboard as a Sorted Set in memory.
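For concreteness, a sketch of one such leaderboard with redis-py (match_id and the key scheme are hypothetical):

    import redis

    r = redis.Redis()

    def record_score(match_id, player, score):
        # one sorted set per match; the score is the sort key
        r.zadd(f'leaderboard:{match_id}', {player: score})

    def top_ten(match_id):
        # highest scores first
        return r.zrevrange(f'leaderboard:{match_id}', 0, 9, withscores=True)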
Now I would like to implement some sort of persistence, where leaderboard state is saved as a snapshot at the end of each game. Thus each of these independent Sorted Sets would be saved as a snapshot file.
I have a question about design choices:
Would a Redis cluster make sense for this scenario? Or would it make more sense to have standalone Redis instances and create a new database for each game?
As far as I know, there is only a single database 0 for a whole Redis cluster (https://redis.io/topics/cluster-spec).
In that case, how would snapshotting the dataset for each leaderboard at a different time work?
From what I can see, a Redis cluster only makes sense for large-scale monolithic applications and may not be the best approach for the scenario described above. Is that the case?
Or, if one goes with AWS ElastiCache for Redis in cluster mode, can I configure snapshotting for individual datasets?
You are correct, clustering is a way of scaling out to handle really high request loads and store tons of data.
It really doesn't sound like you need to bother with a cluster.
I'd be very surprised if a standalone Redis setup became your bottleneck before you reach several tens of thousands of simultaneous players.
If you are unsure, you can mock up some simulated load and see what it can handle. My guess is that you are better off focusing on the other complexities of your game until you start reaching serious usage, which is a good problem to have. :)
You might however want to consider having one or two replica instances, which is a different thing.
Secondly, cluster or not, why do you want to use snapshots (SAVE or BGSAVE) to persist your scoreboard?
If you want individual snapshots per game, and it's only a few keys per game, why not just have your application read those keys and persist them to a traditional DB when needed? You can, for example, use MULTI, DUMP and RESTORE to achieve something very similar to snapshotting, but on the specific keys you want.
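A minimal sketch of that per-key approach with redis-py; DUMP returns an opaque serialized blob that you can store anywhere (here, a hypothetical file per match):

    import redis

    r = redis.Redis()

    def snapshot_leaderboard(match_id):
        # DUMP serializes a single key in Redis' internal format
        blob = r.dump(f'leaderboard:{match_id}')
        with open(f'snapshot-{match_id}.bin', 'wb') as f:
            f.write(blob)

    def restore_leaderboard(match_id):
        with open(f'snapshot-{match_id}.bin', 'rb') as f:
            blob = f.read()
        # RESTORE recreates the key from the dumped payload (ttl=0 means no expiry)
        r.restore(f'leaderboard:{match_id}', 0, blob, replace=True)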
It doesn't sound like using multiple databases is warranted for this.
Multiple databases on clustered Redis is only supported in the Enterprise version, so not on ElastiCache. But the above mentioned approach should work just fine.
I am figuring out my options for storing hierarchical data (parent-child relationships).
Since a tree is a graph, and a forest of trees is also technically a graph, a graph database seems to fit the bill much better than an RDBMS, especially since I am concerned with optimizing both read and write operations.
Optimizing writes means that changes in the hierarchy require minimal write operations.
Optimizing reads means that materializing the full path to a particular node consumes minimal read operations.
My use case is:
A tree per user. Should I store and use one graph across the user space or one graph per user?
Path queries starting at any node and back to root of tree for a user.
Child nodes store links to parent nodes
Since all of my resources are in AWS, being able to use the Titan DynamoDB backend seems ideal.
My real problem is in understanding how to scale and manage Titan though.
Do I need a Gremlin Server instance? In other words, do I need to stand up an EC2 instance running Gremlin Server in order to do anything with Titan, or can I use the Java Titan API to work with graph data directly?
Do I need to explicitly shard the data? In other words, do I need to stand up more Gremlin Servers as usage, data volume and operation counts grow? When the number of servers scales out, do I need to use consistent hashing across those servers from the client in order to perform operations?
Do I need to set up an Elasticsearch cluster to be able to start traversals from any node? Or is using vertices to represent objects and edges to represent parent relationships sufficient at this point? I can guarantee that vertex IDs are unique across the user space; I can also decorate each vertex with the unique user ID. In that case, do I need Elasticsearch? My hope is that Elasticsearch is for free-form or more complex search-type queries, not for exact queries!
As the number of front-ends increases, can each front-end open the graph (for a single graph across the user space)? If it's a graph per user, then since front-ends have no affinity, the same graph may be opened by each front-end; is that OK?
I wasn't able to find much documentation on any of this. Thank you!
I will try to answer your questions one by one:
Both solutions are possible; it depends very much on your application whether to choose Gremlin Server or a custom data access layer with its own queries against the underlying data stores. Although I would prefer a custom data access layer, it is possible to serve all Gremlin query requirements through Gremlin Server.
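If you do go the Gremlin Server route, the path-to-root query from the question might look like the sketch below with the gremlinpython client. Note the assumptions: gremlinpython requires a server speaking TinkerPop 3.2+, which is newer than what Titan 1.0 ships with, and the address, labels and IDs are hypothetical, so treat this purely as an illustration of the traversal shape.

    from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
    from gremlin_python.process.anonymous_traversal import traversal
    from gremlin_python.process.graph_traversal import __

    # connect to a running Gremlin Server (address is hypothetical)
    g = traversal().withRemote(
        DriverRemoteConnection('ws://localhost:8182/gremlin', 'g'))

    # follow 'parent' edges upward from a node until no parent remains,
    # returning the whole path back to the root
    path = (g.V().has('node', 'nodeId', 'n123')
             .repeat(__.out('parent'))
             .until(__.not_(__.outE('parent')))
             .path()
             .next())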
Gremlin Server is just an interface between your application and the data store, and due to its caching mechanism it is memory-intensive. The data itself can live on different machines, for example a cluster of DynamoDB nodes. It depends on the number of concurrent users, but I think vertical scaling is more than enough for most applications. If you are going to use Titan in a highly concurrent environment, beyond the resources of a single machine, you will probably have to run several Gremlin Servers on different machines and handle the load balancing yourself. The catch is that you have to route requests so that similar queries hit the same Gremlin Server, for cache efficiency.
Yes, an indexing backend is only needed for queries more complicated than simple retrievals. A secondary index backend like Solr, Elasticsearch or Lucene is useful if you want conditional search or text search by similarity, because an indexer like Lucene provides a reverse index structure that supports such searches. If you are going to search for all parents/children having "foo" in their names, you will need an indexing backend. If you are going to search for all parents/children with age less than 40, you will need an indexing backend too.
More information about indexing backends can be found at these links:
http://s3.thinkaurelius.com/docs/titan/1.0.0/indexes.html
http://s3.thinkaurelius.com/docs/titan/1.0.0/index-parameters.html
It is highly recommended to limit the number of open graph instances to one for the entire application. Titan uses caching mechanisms that reward a single graph instance per application, for the sake of performance. Also, uncommitted data is only visible within a single graph instance and transaction, so if you want a real-time application, it is best to use a single graph instance and a single transaction scope. However, using more than one graph instance across the application for read-only transactions is not wrong, just less efficient.
You can find lots of information about the Titan graph database at the following links:
Main Titan documentation: http://s3.thinkaurelius.com/docs/titan/1.0.0/
An old but really useful document about how Titan works: https://github.com/elffersj/delftswa-aurelius-titan/tree/master/SA-doc
We have an iOS app that talks to a Django server via a REST API. Most of the data consists of rather large Item objects that involve a few related models and render into a single flat dictionary, and this data changes rarely.
We've found that querying this is not a problem for Postgres, but generating the JSON responses takes a noticeable amount of time. On the other hand, item collections vary per user.
I thought about a rendering system where we build the dictionary for each Item object and save it into Redis as a JSON string. This way we can serve the API directly from Redis (e.g. HMGET with the IDs of the items in the user's library), which is fast, and it is relatively easy to regenerate the "rendered instances": basically just a couple of post_save signals.
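A minimal sketch of that idea, assuming redis-py plus a hypothetical Item model and render_item() serializer:

    import json
    import redis
    from django.db.models.signals import post_save
    from django.dispatch import receiver

    r = redis.Redis()

    @receiver(post_save, sender=Item)  # Item is the model being rendered
    def rerender_item(sender, instance, **kwargs):
        # keep the pre-rendered JSON in one hash, one field per item ID
        r.hset('rendered:items', instance.pk, json.dumps(render_item(instance)))

    def items_for(item_ids):
        # serve the API straight from Redis with a single HMGET
        blobs = r.hmget('rendered:items', item_ids)
        return [json.loads(b) for b in blobs if b is not None]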
I wonder how good this design is. Are there any major flaws in it? Maybe there's a better way to do this?
Sure, we do the same at our firm, using Redis to store not JSON but large XML strings that are generated from backend databases for RESTful requests, and it saves lots of network hops and overhead.
A few things to keep in mind if this is the first time you're using Redis...
Dedicated Redis Server
Redis is single-threaded and should be deployed on a dedicated server with sufficient CPU power. Don't make the mistake of deploying it on your app or database server.
High Availability
Set up Redis with Master/Slave replication for high availability. I know there's been lots of progress with Redis cluster, so you may want to check on that too for HA.
Cache Hit/Miss
When checking Redis for a cache "hit", if the connection is dead or any exception occurs, don't fail the request; just fall back to the database. Caching should always be best effort, since the database can always be used as a last resort.
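A minimal sketch of that best-effort pattern, assuming redis-py and a hypothetical fetch_from_db() fallback:

    import json
    import redis

    r = redis.Redis()

    def get_rendered_item(item_id):
        try:
            cached = r.hget('rendered:items', item_id)
            if cached is not None:
                return json.loads(cached)
        except redis.RedisError:
            pass  # any Redis failure counts as a cache miss
        item = fetch_from_db(item_id)  # hypothetical database fallback
        try:
            r.hset('rendered:items', item_id, json.dumps(item))
        except redis.RedisError:
            pass  # best effort: never fail the request because of the cache
        return item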
I am currently in the initial design phase of my first app.
In my app there will be individual sessions containing 1-5 users.
I need to be able to keep track of each user's GPS location and be able to push and pull it to and from each of the users. Each user will have the most recently reported location of every other user in the session.
There will be other calculations done on the data set, but those will be client-side; the server should only need to handle pushing and pulling user locations (and usernames).
I'm predicting that, due to the nature of the app, 90% of sessions will not last more than 2 hours, with the possibility of the server ending sessions older than 24-48 hours (once real-world testing of the app begins, I will have a better idea of how long sessions should last).
I was thinking of using Django to build an API, and of storing all the data in the program itself rather than in a database, as this should be faster, and I don't think persisting the data is necessary given its short lifetime.
Is this a good starting point? Is there anything I should be thinking about or considering? I'm completely new to designing backend software.
While performance might not even be an issue in the beginning, there are some things you can do once you hit a certain load:
Keep all your session data in one model, even if that means denormalizing your database a bit (putting redundant information into it). That way you only need a single read from the database and no expensive JOINs (see the sketch after this list).
Use the Django caching framework (https://docs.djangoproject.com/en/dev/topics/cache/) to cache views, so multiple reads of the same data don't have to hit the database.
Before you start optimizing, profile your code to see where your performance bottlenecks really are. Sometimes you'll be surprised which operations are expensive, and which aren't.
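A minimal sketch of the first two points, with a hypothetical SessionLocation model and a short cache timeout:

    from django.db import models
    from django.http import JsonResponse
    from django.views.decorators.cache import cache_page

    class SessionLocation(models.Model):
        # denormalized: one row per user per session, readable without JOINs
        session_id = models.CharField(max_length=64, db_index=True)
        username = models.CharField(max_length=150)
        lat = models.FloatField()
        lon = models.FloatField()
        reported_at = models.DateTimeField(auto_now=True)

    @cache_page(5)  # repeated reads within 5 seconds are served from the cache
    def session_locations(request, session_id):
        rows = SessionLocation.objects.filter(session_id=session_id)
        return JsonResponse({row.username: [row.lat, row.lon] for row in rows})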
I have two Django servers, each with its own database, and I want to exchange some specific objects between them over HTTP.
Actually, I planned to create some views that generate XML output on one side, to be imported on the other side. Is there a nicer way?
Is there a reason this needs to happen through http?
If you just want to read data from one server to be used on the other, you could create a simple API that returns a representation of the object you queried for (in xml/json or whatever other format you wanted).
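A minimal sketch of such an API view, where Item is a stand-in for whatever model you need to share; the other server fetches this URL and deserializes the response:

    from django.http import JsonResponse

    def export_item(request, pk):
        # return whichever fields the other side needs
        item = Item.objects.get(pk=pk)
        return JsonResponse({'pk': item.pk, 'name': item.name})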
If there is going to be a decent amount of processing going on, or slow communication, and you don't need it to happen real time (in the request/response cycle), you could look at a message queue. Something like RabbitMQ for instance.
If you want both servers to have direct access to both databases, you could try to take advantage of Django's multiple database support.
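For the multiple-database option, a sketch of the settings and query syntax; the 'remote' alias, the connection details and MyModel are all hypothetical:

    # settings.py: register the other server's database under a second alias
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql',
            'NAME': 'local_db',
        },
        'remote': {
            'ENGINE': 'django.db.backends.postgresql',
            'NAME': 'other_db',
            'HOST': 'other-server.example.com',
            'USER': 'django',
            'PASSWORD': 'secret',
        },
    }

    # anywhere in your code: query the other database explicitly
    objects = MyModel.objects.using('remote').filter(active=True)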
If it's more of a one-off copy of data, just write a small (non-Django) script to do it.