I've set up an AWS instance with Cassandra on it, and also set up an Auto Scaling group to spin up another 4-8 instances depending on alarms. But how does Cassandra know when auto scaling kicks in? How does it know which other nodes to connect to? Do I need to configure something in Cassandra for it to discover the new nodes?
When I run nodetool, the auto-scaled nodes don't show up...
[root@ip-10-205-119-104 bin]# sh nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 127.0.0.1 107.12 MB 256 ? a50294ac-2150-4d9e-9dd2-0a56906e9531 rack1
Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
The best option for auto-discovery in Cassandra is seed nodes: 'anchor' nodes that are supposed to always be there when a new one shows up, and that can be queried for the cluster's node list whenever it is needed.
So you deliver every node with a list of seed nodes in its config file (including the seeds themselves), and once it comes up, it gets the node list from a seed. This, of course, requires the seed nodes to be static and always running (and, for redundancy, you must have more than just one seed node). Cassandra also requires the seeds to be listed by IP (to avoid problems with DNS).
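For reference, this is roughly what the seed list looks like in cassandra.yaml (the addresses below are placeholders, not anything from your setup):

seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          # Static, comma-separated seed IPs; every node (seeds included) gets the same list.
          - seeds: "10.0.0.10,10.0.0.11,10.0.0.12"

A freshly started instance contacts one of those seeds and learns the rest of the ring through gossip; it still has to bootstrap and stream its share of the data before it shows up as UN in nodetool status.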
Nonetheless, I don't think auto-scaling Cassandra would be a good thing. Cassandra partitions its data (rows) across nodes, and every time you add or remove a node it needs to repartition and redistribute rows, which, depending on how big your data is, takes quite a long time (and may demand other administrative actions, like repairs). Even if you have enough replicas to afford a sudden node loss (which is what WILL occur with auto-scaling), it's messy. First, because Cassandra won't automatically decommission nodes: the cluster will know a node is unavailable, but it just waits for it to come back and tries to keep the cluster as healthy as possible (including a hinted-handoff mechanism that saves the writes destined for the unavailable node on other nodes for some period).
So you would need to watch your nodes and manage those ups and downs from the outside. And you may not even have time to decommission one node and get everything (your data) back in place before another node comes up, and goes down again; all of that could really mess your cluster up completely.
Maybe there are people out there doing this, but according to my knowledge and experience with Cassandra, it is not as simple and magical to auto-scale as a web application, and you would probably end up losing data and having a very inconsistent and unstable system.
Another issue with auto scaling is that there is no instant gratification: you cannot really see the benefit of the new node until the cluster rebalances, and that can take a long time depending on your cluster.
While the rebalance is in progress, you end up putting additional load on the original nodes, which defeats the purpose of adding capacity.
I have a lot of worker nodes in my Akka cluster, and their instability causes a "down all when unstable" decision; but they don't have the SBR's role.
Why isn't the "down all when unstable" decision taken based on the SBR's role?
To solve this problem, should I have distinct clusters or use a Multi-DC cluster?
The primary constraint a split-brain resolver has to meet is that every node in the cluster reaches the same decision about which nodes need to be downed (including downing themselves). In the presence of different decisions being made, the guarantees of Cluster Sharding and Cluster Singleton no longer apply: there may be two incarnations of the same sharded entity or the singleton might not be a singleton.
Because there's latency inherent in disseminating reachability observations around the cluster, the less time that has elapsed since a change in reachability observations, the more likely it is that some node in the cluster disagrees with our node about which nodes are reachable. That disagreement opens the door for that node to make a different SBR decision than the one our node would make. The only strategy the SBR has which guarantees that every node makes the same decision, even if there's disagreement about membership or reachability, is down-all.
Accordingly, the SBR delays making a decision until enough time has passed since the last cluster membership or reachability change. In a particularly unstable cluster, if too much time passes without ever achieving that stability, the SBR falls back to the down-all strategy, which does not take cluster roles into account.
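For orientation, these settings sit next to each other in the configuration; a hedged sketch (the values are illustrative, and defaults may differ by Akka version):

akka.cluster {
  # Open-source SBR shipped with Akka 2.6+
  downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
  split-brain-resolver {
    active-strategy = keep-majority   # example strategy; pick what fits your deployment
    stable-after = 20s                # membership/reachability must be unchanged this long before acting
    down-all-when-unstable = on       # fall back to downing everything if stability is never reached
  }
}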
If you're not using Cluster Sharding or Cluster Singleton (and haven't implemented something with similar constraints...), you might be able to get away with disabling this fallback to down-all. For instance, if every bit of distributed state in your system forms a CRDT, you might be able to get away with it (if you know what a CRDT is, you know; if you don't, that almost certainly means not all of the distributed state in your system is a CRDT). The configuration setting is
akka.cluster.split-brain-resolver.down-all-when-unstable = off
Think very carefully about this in the context of your application. I would suspect that at least 99.9% of Akka clusters out there would violate correctness guarantees with this setting.
From your question about distinct clusters or Multi-DC, I take it you are spreading your cluster across multiple datacenters. In that case, note that inter-datacenter networking is typically less reliable than intra-datacenter networking. That means you basically have three options:
have separate clusters for each datacenter and use "something(s) else" to coordinate between them
use a Multi-DC cluster, which takes some account of the difference between inter- and intra-datacenter networking (e.g. while it's possible for nodes A and B in one datacenter to disagree on the reachability of node C in that same datacenter, it's highly likely that A and B will agree on whether node D in a different datacenter is reachable or not)
configure the failure detector for the reliability of the inter-datacenter link (this effectively treats even nodes in the same rack, or even running on the same physical host or VM, as if they were in separate datacenters). This will mean being very slow to declare that a node has crashed (and giving that node a lot of time to say "no, I'm not dead, I was just being quiet/sleepy/etc."). For some applications, this might be a viable strategy; a sketch of the relevant settings follows this list.
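For illustration, a hedged sketch of the failure-detector knobs the third option is about (the values are examples for a flaky link, not a recommendation):

akka.cluster.failure-detector {
  threshold = 12.0                  # higher threshold = slower to suspect a node
  acceptable-heartbeat-pause = 10s  # tolerate longer heartbeat pauses before marking a node unreachable
}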
Which of those three is the right option? I think completely separate clusters communicating and coordinating over some separate channel(s), with that coordination modeled in the domain, is often useful (for instance, you might be able to balance traffic to the datacenters in such a way that it's highly unlikely your west coast datacenter would need to know what's happening on the east coast). Multi-DC might allow for more consistency than separate clusters. It's probably unlikely that your application requirements are such that multiple DCs within a vanilla single cluster will work well.
My use case is as follows:
We have about 500 servers running in an autoscaling EC2 cluster that need to access the same configuration data (laid out in a key/value fashion) several million times per second.
The configuration data isn't very large (1 or 2 GBs) and doesn't change much (a few dozen updates/deletes/inserts per minute during peak time).
Latency is critical for us, so the data needs to be replicated and kept in memory on every single instance running our application.
Eventual consistency is fine. However, we need to make sure that every update will be propagated at some point (knowing that the servers can be shut down at any time).
The update propagation across the servers should be reliable and easy to set up (we can't have static IPs for our servers, and we don't want to go the route of "faking" multicast on AWS, etc.).
Here are the solutions we've explored in the past:
Using regular Java maps and our custom-built system to propagate updates across the cluster (obviously, it doesn't scale that well).
Using EhCache and its replication feature. But setting it up on EC2 is very painful and somewhat unreliable.
Here are the solutions we're thinking of trying out:
Apache Ignite (https://ignite.apache.org/) with a REPLICATED strategy.
Hazelcast's Replicated Map feature. (http://docs.hazelcast.org/docs/latest/manual/html-single/index.html#replicated-map)
Apache Geode on every application node. (http://geode.apache.org/)
I would like to know whether each of those solutions would work for our use case and, if so, what issues I'm likely to face with each of them.
Here is what I found so far:
Hazelcast's Replicated Map is somewhat recent and still a bit unreliable (async updates can be lost when scaling down)
It seems like Geode became "stable" fairly recently (even though it has supposedly been in development since the early 2000s)
Ignite looks like it could be a good fit, but I'm not too sure how their S3-discovery-based system will work out if we keep adding/removing nodes regularly.
Thanks!
Geode should work for your use case. You should be able to use a Geode Replicated region on each node. You can choose to do synchronous OR asynchronous replication. In case of failures, the replicated region gets an initial copy of the data from an existing member in the system, while making sure that no in-flight operations are lost.
In terms of configuration, you will have to start a couple of member discovery processes (Geode locators) and point each member to these locators. (We recommend that you start one locator per AZ and use 3 AZs to protect against network partitioning.)
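As a rough sketch of what that looks like with Geode's Java API (the locator addresses and region name are made up for illustration):

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;

public class GeodeReplicatedExample {
    public static void main(String[] args) {
        // Each member points at the locators for discovery (placeholder addresses, one per AZ).
        Cache cache = new CacheFactory()
                .set("locators", "10.0.1.10[10334],10.0.2.10[10334],10.0.3.10[10334]")
                .create();

        // REPLICATE keeps a full copy of the region's data on every member that hosts it.
        Region<String, String> config = cache
                .<String, String>createRegionFactory(RegionShortcut.REPLICATE)
                .create("config");

        config.put("feature.x.enabled", "true");
        System.out.println(config.get("feature.x.enabled")); // served from the local copy
    }
}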
Geode/GemFire has been stable for a while; it has been powering low-latency, high-scalability requirements for reservation systems at the Indian and Chinese railways, among other users, for a very long time.
Disclosure: I am a committer on Geode.
Ignite provides native AWS integration for discovery over S3 storage: https://apacheignite-mix.readme.io/docs/amazon-aws. It solves the main issue: you don't need to change the configuration when instances are restarted. In a nutshell, any node that successfully joins the topology writes its coordinates to a bucket (and removes them when it fails or leaves). When you start a new node, it reads this bucket and connects to one of the listed addresses.
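A minimal sketch of that S3-based discovery with Ignite's Java API (the bucket name and credentials are placeholders; the S3 IP finder comes from the ignite-aws module):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder;
import com.amazonaws.auth.BasicAWSCredentials;

public class IgniteS3DiscoveryExample {
    public static void main(String[] args) {
        // Nodes register their addresses in an S3 bucket instead of a static IP list.
        TcpDiscoveryS3IpFinder ipFinder = new TcpDiscoveryS3IpFinder();
        ipFinder.setBucketName("my-ignite-discovery-bucket");                 // placeholder bucket
        ipFinder.setAwsCredentials(new BasicAWSCredentials("KEY", "SECRET")); // placeholder credentials

        TcpDiscoverySpi discovery = new TcpDiscoverySpi();
        discovery.setIpFinder(ipFinder);

        // REPLICATED mode keeps a full copy of the cache on every server node.
        CacheConfiguration<String, Double> cacheCfg = new CacheConfiguration<>("config");
        cacheCfg.setCacheMode(CacheMode.REPLICATED);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discovery);
        cfg.setCacheConfiguration(cacheCfg);

        Ignite ignite = Ignition.start(cfg);
        IgniteCache<String, Double> cache = ignite.getOrCreateCache(cacheCfg);
        cache.put("rate.limit", 0.75);
    }
}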
Hazelcast's Replicated Map will not work for your use case. Note that it is a map that is replicated across all of the cluster's nodes, not on the client nodes/servers. Also, as you said, it is not fully reliable yet.
Here is the Hazelcast solution:
Create a Hazelcast cluster with a set of nodes depending on the size of the data.
Create a distributed map (IMap) and tweak the count and eviction configurations based on the size/number of key/value pairs. The data gets partitioned across all the nodes.
Set up the backup count based on how critical the data is and how much time it takes to pull the data from the actual source (DB/files). Distributed maps have 1 backup by default.
On the client side, set up a NearCache and attach it to the distributed map. This NearCache will hold the key/value pairs on the local/client side itself, so the get operations complete in milliseconds.
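A rough sketch of the client-side setup described above, using Hazelcast's Java API (the map name and tuning values are illustrative; the imports assume Hazelcast 3.x):

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.config.InMemoryFormat;
import com.hazelcast.config.NearCacheConfig;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class HazelcastNearCacheExample {
    public static void main(String[] args) {
        // The NearCache keeps recently read entries on the client itself,
        // so repeated gets for the same key don't go over the network.
        NearCacheConfig nearCacheConfig = new NearCacheConfig();
        nearCacheConfig.setName("config");
        nearCacheConfig.setInMemoryFormat(InMemoryFormat.OBJECT);
        nearCacheConfig.setInvalidateOnChange(true); // receive invalidations when the cluster copy changes
        nearCacheConfig.setTimeToLiveSeconds(300);   // example TTL; tune to how often your data changes

        ClientConfig clientConfig = new ClientConfig();
        clientConfig.addNearCacheConfig(nearCacheConfig);

        HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);

        // The IMap itself is partitioned (with backups) across the cluster members.
        IMap<String, Double> config = client.getMap("config");
        config.put("rate.limit", 0.75);
        System.out.println(config.get("rate.limit")); // after the first read, served from the NearCache
    }
}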
Things to consider with NearCache solution:
The first get operation will be slower, as it has to go over the network to fetch the data from the cluster.
Cache invalidation is not fully reliable, since there is a delay in synchronization with the cluster, so you may end up reading stale data. Again, this is the case with all cache solutions.
It is the client's responsibility to set up the timeout and invalidation of NearCache entries so that future pulls get fresh data from the cluster. This depends on how often the data gets refreshed or a value is replaced for a key.
I intend to set up a Spark cluster on EC2. How many resources does the Spark master instance actually need? Since the master is not involved in processing any of the tasks, can it be the smallest EC2 instance?
This obviously depends on what kinds of jobs you're planning to run, how big the cluster is, etc., so in that sense the advice to simply try different configurations is good. However, in my purely personal experience the driver instance should be at least at the level of the slave instances. This is mainly due to two reasons.
First of all, there are times when you need the result of the job in a single place. Maybe you just don't want to spend time combining files, maybe you need the results in some specific order which would be hard to achieve in a distributed way, etc., but this means the driver should be able to hold all the data (since rdd.collect gathers the results to the driver instance).
Second of all, many of the shuffle-based operations seem to require a lot of memory from the driver. I'm not exactly sure about the details of why this happens (if anyone knows, please do share), but I can't count the number of times I've seen reduceByKey causing an out-of-memory error on the driver.
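To make the collect/shuffle point concrete, a small sketch with Spark's Java API (the data and app name are arbitrary; the driver heap itself is sized with spark-submit's --driver-memory option):

import java.util.Arrays;
import java.util.List;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class DriverMemoryExample {
    public static void main(String[] args) {
        // The master URL is supplied by spark-submit.
        SparkConf conf = new SparkConf().setAppName("driver-memory-demo");
        JavaSparkContext sc = new JavaSparkContext(conf);

        JavaRDD<Integer> numbers = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5, 6));

        // reduceByKey shuffles data between executors ...
        JavaPairRDD<Integer, Integer> counts = numbers
                .mapToPair(n -> new Tuple2<>(n % 2, 1))
                .reduceByKey((a, b) -> a + b);

        // ... and collect() materializes the entire result on the driver,
        // which is why the driver needs enough memory to hold it.
        List<Tuple2<Integer, Integer>> result = counts.collect();
        System.out.println(result);

        sc.stop();
    }
}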
Edit: I have assumed you were using Spark's spark-ec2 script, which I believe installs the NameNode on the master instance. If the NameNode is not installed on the master instance, however, my answer has no validity, as correctly pointed out by @DemetriKots in the comments.
Although the master instance is not involved in data processing, it plays a major role in managing the workload and resource allocation, e.g. (all info is taken from the sources):
NameNode
The NameNode is the centerpiece of an HDFS file system. It keeps the directory tree of all files in the file system, and tracks where across the cluster the file data is kept. It does not store the data of these files itself.
Client applications talk to the NameNode whenever they wish to locate a file, or when they want to add/copy/move/delete a file. The NameNode responds to successful requests by returning a list of relevant DataNode servers where the data lives.
This Hortonworks document (look for "Hardware Recommendations for Hadoop" in the left index) specifies some recommendations for the master instance in a Hadoop cluster. While it might not be adequate for the slave instances (due to Spark's memory usage), I would say it can be useful for sizing the master instance of a Spark cluster.
This question has a conceptual and practical parts.
Conceptually, I'd like to know whether using the autoscaling functionality is equivalent to simply increasing the compute power by a factor of the number of added instances.
Practically... how does this work? I have one running instance, its database sitting on an LVM composed of multiple EBS volumes, and similarly for all website data. Judging from the load on the instance, I either need to upgrade to a more powerful instance or introduce this autoscaling. Is each new instance a copy of the running server? If so, how is the database (etc.) kept consistent?
I've read through the AWS documentation and still haven't got the picture. I could set up one autoscaling group, which would probably clear up my doubts, but I am very leery of doing this with a production server.
Any nudges in the right direction would be welcome.
Normally, if you have a solution that uses a database and several machines, the database is typically not on any of those machines but is instead hosted separately, with each worker machine pointing to the same database; if you are on the AWS platform already, then DynamoDB or RDS are both good options for this.
In theory, for some applications, upgrading the size of the single machine will give you the same power as adding several smaller machines, but increasing the size of the single machine, while usually the easiest thing to do at first, should not be considered autoscaling and has its own drawbacks. Here are some things to consider:
Using multiple machines instead of one big one gives you some fault tolerance. One or more machines can go down, and if your solution is properly designed, new machines will spin up to replace them.
Increasing the size of a single-machine solution means you are probably paying too much. If you size that single machine big enough to handle peak workloads, then at other times (maybe most of the time) you are paying for a bigger machine than you need. If you set up your autoscaling solution properly, more machines come online in response to increasing demand and then terminate when that demand decreases; you only pay for the power you need when you need it.
When your solution is designed in this manner, you need to think of all the worker machines as ephemeral, i.e. likely to disappear at any time, so you need to build your solution differently. Besides using a hosted database (like DynamoDB or AWS RDS), you also should not store any data on the machines in your auto-scaling group that doesn't also live somewhere else. For example, if part of your app allows users to upload images, you don't store them on the instances; you store them in S3. The same applies to any other new data that comes in.
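For instance, a hedged sketch with the AWS SDK for Java v1 (the bucket and key names are made up): the upload handler pushes the file straight to S3 rather than leaving it on the instance's disk.

import java.io.File;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class ImageUploadExample {
    public static void main(String[] args) {
        // Credentials and region come from the instance profile/environment,
        // so nothing machine-specific is baked into the AMI.
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Store the upload in S3 (durable and shared) instead of on the instance,
        // which the auto scaling group may terminate at any time.
        s3.putObject("my-app-user-uploads", "images/avatar-123.png",
                new File("/tmp/avatar-123.png"));
    }
}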
You need to be able to figuratively 'pull the plug' at any instant on any of the machines in your ASG without losing data.
Ultimately a properly set up auto-scaling solution will likely serve you better, but without doubt it is simpler to just 'buy a bigger machine', and the extra money you spend running that bigger machine may be more than offset by the time and effort you don't have to spend re-architecting your solution to run properly in an autoscaling environment. The unique requirements of your solution will ultimately decide which approach is better.
I'm looking for a key-value store that will be used to share some state between multiple hosts.
Goal:
- Achieve high availability for a limited set of data that needs to be accessible on every host/node
Requirements:
put/get/incr/decr operations
simple numeric data - int/float values, nothing more, no JSON, blobs and so on
full copy of dataset on every node or automated failure tolerance
automatic adding/removing of hosts with no need to reconfigure application
small dataset - only a few megabytes of shared data
node traffic is load balanced with user-to-node sticking, so only one node at a time will change the data related to the users stuck to it. This will only change on node failure, but the constraint of one master for a given set of keys will be kept; so: many readers, one master for its own small dataset
multiple small VM instances will be used, so it should be lightweight in terms of required memory
automated operation - configure once and forget
I've looked at Riak and CouchDB, but they seem too complicated and too heavy.
Any suggestions?
After more research I'm heading toward Hazelcast; it provides a memcache-like interface and it's easy to configure a simple cluster with automated failover.
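For the put/get/incr/decr style of access, a rough sketch with Hazelcast's Java API (names are illustrative; getAtomicLong as shown is where the API lives in Hazelcast 3.x):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IAtomicLong;
import com.hazelcast.core.IMap;

public class SharedStateExample {
    public static void main(String[] args) {
        // Members discover each other according to the join configuration
        // (a TCP/IP member list, multicast, or the hazelcast-aws discovery plugin on EC2).
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Simple numeric values shared across every node (put/get).
        IMap<String, Double> values = hz.getMap("shared-values");
        values.put("conversion.rate", 0.42);
        Double rate = values.get("conversion.rate");

        // Cluster-wide counters for the incr/decr operations.
        IAtomicLong counter = hz.getAtomicLong("active-sessions");
        counter.incrementAndGet();
        counter.decrementAndGet();

        System.out.println(rate + " / " + counter.get());
    }
}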