Rebalance Akka Cluster if One of the Shards Is Not Resolving

Intermittently we are receiving the following errors:
2022-05-25 08:32:30,691 ERROR app=abc a.c.s.DDataShardCoordinator - The ShardCoordinator was unable to update a distributed state within ‘updating-state-timeout’: 2000 millis (retrying). Perhaps the ShardRegion has not started on all active nodes yet? event=ShardRegionRegistered(Actor[akka://application#10.52.174.4:25520/system/sharding/abcapp#-1665332307])
2022-05-25 08:32:31,348 WARN app=abc a.c.s.ShardRegion - abcapp: Trying to register to coordinator at [ActorSelection[Anchor(akka://application#10.52.103.132:25520/), Path(/system/sharding/abcappCoordinator/singleton/coordinator)]], but no acknowledgement. Total [22] buffered messages. [Coordinator [Member(address = akka://application#10.52.103.132:25520, status = Up)] is reachable.]
When we check the cluster members using /cluster/members, we see "10.52.174.4:25520" reported as:
{
  "node": "akka://application#10.52.252.4:25520",
  "nodeUid": "7353086881718190138",
  "roles": [
    "dc-default"
  ],
  "status": "Up"
},
This says the node is healthy, but the problem resolves when we remove this node from the cluster using
/cluster/members/{address} (a leave operation to remove 10.52.252.4 from the cluster; once it is removed, the cluster creates a new pod and rebalances).
We need help understanding the best way to handle this error.
Thanks

You can of course implement an external control plane to parse logs and take a node exhibiting this error out of the cluster.
That said, it's better to understand what's happening here. The ShardCoordinator runs on the oldest node in the cluster, and needs to ensure that there's agreement on things like which nodes own which shards. It accomplishes this by requiring that updates be acknowledged by a majority of nodes in the cluster. If a state update isn't acknowledged, then further updates to the state (e.g. rebalances) are delayed.
I said "majority", but because in clusters where there's substantial node turnover relative to the size of the cluster simple majorities can lead to data loss, it becomes more complex. Consider a cluster of 3 nodes, N1, N2, N3. N1 (the ShardCoordinator) updates state and considers it successful when it and N3 have updated state. N1 is dropped from the cluster and replaced by N4; N2 becomes the shard coordinator (being the next oldest node) and requests state from itself and the other nodes; N4 responds first. The result becomes that the state update N1 made is lost. So two other settings come into play:
akka.cluster.sharding.coordinator-state.write-majority-plus (default 3), which is added to the majority write requirement (rounding down)
akka.cluster.sharding.distributed-data.majority-min-cap (default 5), which requires that the majority plus the added nodes be at least this many
If the computed majority is greater than the number of nodes, the majority becomes all nodes. So in a cluster with fewer than 9 nodes with the defaults these become effectively all nodes (and the actual timeout when updating is a quarter of the configured timeout, to allow for three retries).
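To make that concrete, here is a rough sketch of the acknowledgement count implied by the rule above (a simplification for illustration, not Akka's actual implementation):

def required_acks(cluster_size, write_majority_plus=3, majority_min_cap=5):
    # simple majority, but never below majority-min-cap
    majority = max(majority_min_cap, cluster_size // 2 + 1)
    # add write-majority-plus, capped at the cluster size ("all nodes")
    return min(majority + write_majority_plus, cluster_size)

for n in (3, 5, 8, 9, 12):
    print(n, required_acks(n))   # with the defaults, every cluster smaller than 9 nodes requires all nodes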
You don't say what your cluster size is, but if running in a cluster with fewer than 9 nodes, it can be a good idea to increase the akka.cluster.sharding.updating-state-timeout from the default 5 seconds to allow for the increased consistency level. Decreasing write-majority-plus and majority-min-cap can be an option, if you're willing to take the risks of violating cluster sharding's guarantees (e.g. multiple instances of the same entity running and potentially destroying their persistent state). Increasing the cluster size can also be helpful, paradoxically, if the reason other nodes are slow to respond is overload.
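For example, if you decide to raise the timeout, a minimal application.conf sketch could look like the following (the 10 s value is illustrative, not a recommendation for your specific workload):

akka.cluster.sharding {
    # allow more time for coordinator state updates to reach the required number of nodes
    updating-state-timeout = 10 s

    # lowering these relaxes the consistency requirement at the cost of weakening
    # cluster sharding's guarantees; only change them if you accept that risk
    # coordinator-state.write-majority-plus = 3
    # distributed-data.majority-min-cap = 5
}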


Erlang - How is the creation integer (a part of a distributed pid representation) actually created?

In a distributed Erlang system, pids can have two different representations: i) internal; ii) external.
The internal representation has the following shape: <A.B.C>
The external representation, used for instance when a message has to travel across different nodes, is composed of the following elements: <node_id, ID, serial, creation> according to the official documentation.
Here node_id is the name of the node, ID and serial identify the process on node_id, and creation is an integer used to distinguish the node from past (crashed) versions of itself.
What I could not find is how the creation integer is created by the VM.
In a small experiment on my PC, I saw that if I create and kill the same node several times, the counter always increases by 1. Creating the same node on different machines yields different creation integers, but they have some structural similarities, for instance:
machine 1 -> creation integer = 1647595383
machine 2 -> creation integer = 1647596018
Do any of you have any knowledge about how this integer is created? If so could you please explain it to me and possibly reference some (more or less) official documentation?
The creation is sent as part of the response to node registration in epmd; see the EPMD protocol documentation for details.
If you have a custom erl_epmd module, you can also provide your own way of creating the creation value.
The original creation is the local time at which the node with that name is first registered; after that it is bumped once each time the name is re-registered.
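As a toy illustration of the behaviour described above (this models only the description given here; it is not the actual epmd implementation):

import time

class ToyEpmd:
    """Toy model of how epmd hands out creation values, per the description above."""
    def __init__(self):
        self.creations = {}

    def register(self, node_name):
        if node_name not in self.creations:
            # first registration of this name: start from the local time
            self.creations[node_name] = int(time.time())
        else:
            # re-registration of the same name: bump by one
            self.creations[node_name] += 1
        return self.creations[node_name]

epmd = ToyEpmd()
print(epmd.register("foo@machine1"))   # e.g. 1647595383
print(epmd.register("foo@machine1"))   # previous value + 1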

How does the CAP Theorem apply to HDFS?

I just started reading about Hadoop and came across the CAP Theorem. Can you please throw some light on which two components of CAP would be applicable to an HDFS system?
Argument for Consistency
The document very clearly says:
"The consistency model of a Hadoop FileSystem is one-copy-update-semantics; that of a traditional local POSIX filesystem."
(One-copy update semantics means that all processes accessing or updating a given file see the file contents as if only a single copy of the file existed.)
Moving forward, the document says:
"Create. Once the close() operation on an output stream writing a newly created file has completed, in-cluster operations querying the file metadata and contents MUST immediately see the file and its data."
"Update. Once the close() operation on an output stream writing a newly created file has completed, in-cluster operations querying the file metadata and contents MUST immediately see the new data."
"Delete. once a delete() operation on a path other than “/” has completed successfully, it MUST NOT be visible or accessible. Specifically, listStatus(), open(), rename() and append() operations MUST fail."
The above-mentioned characteristics point towards the presence of "Consistency" in HDFS.
Source: https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/filesystem/introduction.html
Argument for Partition Tolerance
HDFS provides High Availability for both Name Nodes and Data Nodes.
Source: https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html
Argument for Lack of Availability
It is very clearly mentioned in the documentation (under the section "Operations and failures"):
"The time to complete an operation is undefined and may depend on the implementation and on the state of the system."
This indicates that the "Availability" in the context of CAP is missing in HDFS.
Source: https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/filesystem/introduction.html
Given the above-mentioned arguments, I believe HDFS supports "Consistency and Partition Tolerance" and not "Availability" in the context of the CAP theorem.
C – Consistency (All nodes see the data in homogeneous form i.e. every node has the same knowledge of data at any instant of time)
A – Availability (A guarantee that every request receives a response which may be processed or failed)
P – Partition Tolerance (The system continues to operate even if a message is lost or part of the system fails)
Talking about Hadoop, it supports the Availability and Partition Tolerance properties. The Consistency property is not supported because only the namenode has the information about where the replicas are placed; this information is not available to each and every node of the cluster.

What determines AWS Redis' usable memory? (OOM issue)

I am using AWS Redis for a project and ran into an Out of Memory (OOM) issue. In investigating the issue, I discovered a couple parameters that affect the amount of usable memory, but the math doesn't seem to work out for my case. Am I missing any variables?
I'm using:
3 shards, 3 nodes per shard
cache.t2.micro instance type
default.redis4.0.cluster.on cache parameter group
The ElastiCache website says cache.t2.micro has 0.555 GiB = 0.555 * 2^30 B = 595,926,712 B memory.
default.redis4.0.cluster.on parameter group has maxmemory = 581,959,680 (just under the instance memory) and reserved-memory-percent = 25%. 581,959,680 B * 0.75 = 436,469,760 B available.
Now, looking at the BytesUsedForCache metric in CloudWatch when I ran out of memory, I see nodes around 457M, 437M, 397M, 393M bytes. It shouldn't be possible for a node to be above the 436M bytes calculated above!
What am I missing? Is there something else that determines how much memory is usable?
I remember reading this somewhere but cannot find it right now: I believe BytesUsedForCache is the sum of RAM and swap used by Redis to store data/buffers.
ElastiCache's docs suggest that swap should not go higher than 300 MB.
I would suggest checking the swap metric at that time.
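A quick sketch of the arithmetic from the question, extended with the swap hypothesis above (the 300 MB swap figure is the bound quoted above, not a measured value):

# figures from the question
instance_memory = int(0.555 * 2**30)                   # cache.t2.micro: 595,926,712 bytes
maxmemory = 581_959_680                                # default.redis4.0.cluster.on
reserved_fraction = 0.25                               # reserved-memory-percent = 25%
usable_ram = int(maxmemory * (1 - reserved_fraction))  # 436,469,760 bytes

swap_allowance = 300 * 1024 * 1024                     # upper bound quoted above

# if BytesUsedForCache counts RAM plus swap, a reported value above usable_ram
# (such as the observed 457M) is possible without maxmemory being exceeded in RAM
print(usable_ram, usable_ram + swap_allowance)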

Aerospike error: All batch queues are full

I am running an Aerospike cluster in Google Cloud. Following the recommendation on this post, I updated to the latest version (3.11.1.1) and re-created all servers. In fact, this change caused my 5 servers to operate at a much lower CPU load (it was around 75% before; now it is around 20%).
Because of this low load, I decided to reduce the cluster size to 4 servers. When I did this, my application started to receive the following error:
All batch queues are full
I found this discussion about the topic, which recommends changing the parameters batch-index-threads and batch-max-unused-buffers with the command
asadm -e "asinfo -v 'set-config:context=service;batch-index-threads=NEW_VALUE'"
I tried many combinations of values for these parameters (batch-index-threads with 2, 4, 8, 16) and none of them solved the problem. I keep receiving the All batch queues are full error.
Here is the relevant information from my aerospike.conf:
service {
    user root
    group root
    paxos-single-replica-limit 1 # Number of nodes where the replica count is automatically reduced to 1.
    paxos-recovery-policy auto-reset-master
    pidfile /var/run/aerospike/asd.pid
    service-threads 32
    transaction-queues 32
    transaction-threads-per-queue 4
    batch-index-threads 40
    proto-fd-max 15000
    batch-max-requests 30000
    replication-fire-and-forget true
}
I use 300GB SSD disks on these servers.
A quick note which may or may not pertain to you:
A common mistake we have seen in the past is that developers decide to use 'batch get' as a general purpose 'get' for single and multiple record requests. The single record get will perform better for single record requests.
It's possible that you are being constrained by the network between the clients and servers. Reducing from 5 to 4 nodes reduced the aggregate pipe. In addition, removing a node will start cluster migrations which adds additional network load.
I would look at the batch-max-buffer-per-queue config parameter.
Maximum number of 128KB response buffers allowed in each batch index queue. If all batch index queues are full, new batch requests are rejected.
In conjunction with raising this value from the default of 255, you will also want to raise batch-max-unused-buffers to at least batch-index-threads x batch-max-buffer-per-queue + 1. If you do not, new buffers will be created and destroyed constantly, because the number of free (unused) buffers allowed is smaller than the number you are using; the moment a batch response is served, the system will try to trim the buffers back down to the max unused number. You will see this reflected in the batch_index_created_buffers metric constantly rising.
Be aware that you need to have enough DRAM for this. For example if you raise the batch-max-buffer-per-queue to 320 you will consume
40 (`batch-index-threads`) x 320 (`batch-max-buffer-per-queue`) x 128K = 1600MB
For the sake of performance, batch-max-unused-buffers should be set to 13000, which will have a max memory consumption of 1625MB (1.59GB) per node.
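A small sketch of the sizing arithmetic above, using the example values from this answer (40 batch index threads, 320 buffers per queue, 128KB buffers):

KB = 1024
MB = 1024 * KB

batch_index_threads = 40
batch_max_buffer_per_queue = 320      # raised from the default of 255
buffer_size = 128 * KB                # each batch response buffer is 128KB

# worst-case memory held by in-use batch response buffers
in_use_memory = batch_index_threads * batch_max_buffer_per_queue * buffer_size
print(in_use_memory / MB)             # 1600.0 MB

# keep at least threads * buffers-per-queue + 1 unused buffers to avoid churn
min_unused = batch_index_threads * batch_max_buffer_per_queue + 1    # 12801
batch_max_unused_buffers = 13000      # rounded up, as suggested above
print(batch_max_unused_buffers * buffer_size / MB)                   # 1625.0 MB per node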

AWS EMR Parallel Mappers?

I am trying to determine how many nodes I need for my EMR cluster. As part of best practices the recommendations are:
(Total Mappers needed for your job + Time taken to process) / (per instance capacity + desired time) as outlined here: http://www.slideshare.net/AmazonWebServices/amazon-elastic-mapreduce-deep-dive-and-best-practices-bdt404-aws-reinvent-2013, page 89.
The question is how to determine how many parallel mappers each instance will support, since AWS doesn't publish this: https://aws.amazon.com/emr/pricing/
Sorry if I missed something obvious.
Wayne
To determine the number of parallel mappers, you will need to check the EMR documentation called Task Configuration, where EMR has a predefined set of configurations for every instance type that determines the number of mappers/reducers.
http://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-hadoop-task-config.html
For example: let's say you have 5 m1.xlarge core nodes. According to the default mapred-site.xml configuration values for that instance type from the EMR docs, we have:
mapreduce.map.memory.mb = 768
yarn.nodemanager.resource.memory-mb = 12288
yarn.scheduler.maximum-allocation-mb = 12288 (same as above)
You can simply divide the latter by the former to get the maximum number of mappers supported by one m1.xlarge node: 12288 / 768 = 16.
So, for the 5-node cluster, a maximum of 16 * 5 = 80 mappers can run in parallel (considering a map-only job). The same logic applies to the maximum number of parallel reducers (30). You can do similar math for a combination of mappers and reducers.
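A small sketch of that arithmetic (the memory values are the m1.xlarge defaults quoted above; substitute the values for your own instance type):

# per-node YARN memory and per-map-container memory, from the EMR task configuration docs
yarn_nodemanager_resource_memory_mb = 12288
mapreduce_map_memory_mb = 768

core_nodes = 5

# maximum map containers that fit on one node, then across the cluster
mappers_per_node = yarn_nodemanager_resource_memory_mb // mapreduce_map_memory_mb   # 16
max_parallel_mappers = mappers_per_node * core_nodes                                # 80
print(mappers_per_node, max_parallel_mappers)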
So, if you want to run more mappers in parallel, you can either re-size the cluster or reduce mapreduce.map.memory.mb (and its heap setting mapreduce.map.java.opts) on every node and restart the NodeManagers for the change to take effect.
To understand what the above mapred-site.xml properties mean and why you need to do these calculations, you can refer to:
https://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-common/yarn-default.xml
Note: The above calculations and statements hold if EMR stays in its default configuration, using the YARN capacity scheduler with the DefaultResourceCalculator. If, for example, you configure your capacity scheduler to use the DominantResourceCalculator, it will consider vCPUs as well as memory on every node (not just memory) to decide on the number of parallel mappers.