Corda node shuts down with the following error

[ERROR] 10:20:24+0530 [main] internal.NodeStartupLogging.invoke - Exception during node startup: Node uses parameters with hash: 1DE6AD5577CA71D9307FC244B59418B09DACCF5F0C14B96C7D31E099A6C362C8 but network map is advertising: 4038BC9139ECE1C7B8E9E3F2029FFF5FB9FD0A9303F477EABFBBF98D134A2CF6. Please update node to use correct network parameters file. [errorCode=10znq16, moreInformationAt=https://errors.corda.net/OS/4.0/10znq16]

The node's network parameters file is outdated. How was the node built? If in dev mode using the network bootstrapper, redeploying the node will regenerate the correct files. If using a network map server, simply fetch a fresh network parameters file from the network map service.
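For the dev-mode case, a minimal sketch of the redeploy (the bootstrapper jar name/version and the nodes directory are assumptions; adjust to your layout):
# remove the stale parameters file from each node directory, then re-run
# the bootstrapper so it regenerates a consistent network-parameters file
rm nodes/*/network-parameters
java -jar corda-tools-network-bootstrapper-4.0.jar --dir nodes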

Bootstrap IP to internal address conversion

I am new to Kafka and I set up an instance in AWS. It runs well.
Then I created another AWS instance and ran this code:
[screenshot of the consumer code]
It can print out the messages that I published to Kafka.
If I run the same code on the Kafka server itself, I also get the messages.
However, if I run the same code on my own laptop, I can't get anything.
I thought it might be my code, so I used Kafka's own console client on my laptop:
bin/kafka-console-consumer.sh --topic test22 --bootstrap-server 34.215.180.111:9092
Now I got an error:
[2021-05-11 16:21:32,252] WARN [Consumer clientId=consumer-console-consumer-94326-1, groupId=console-consumer-94326] Error connecting to node ip-172-31-29-222.us-west-2.compute.internal:9092 (id: 0 rack: null) (org.apache.kafka.clients.NetworkClient)
The hostname ip-172-31-29-222.us-west-2.compute.internal is actually the AWS instance's internal address:
[screenshot of the instance's internal address in the AWS console]
Then I thought it might be an Amazon issue, so I repeated the whole process on Google Cloud and got the same result:
[2021-05-11 17:15:34,840] WARN [Consumer clientId=consumer-console-consumer-2377-1, groupId=console-consumer-2377] Error connecting to node instance-1.us-central1-a.c.seventh-seeker-267203.internal:9092 (id: 0 rack: null) (org.apache.kafka.clients.NetworkClient)
These internal addresses cannot be reached from external machines at all.
Can anybody help? Thanks!
The logs are showing you the advertised.listeners of the brokers. To connect from outside, you'll need to modify that property so that the brokers advertise addresses your clients can resolve and reach:
https://www.confluent.io/blog/kafka-listeners-explained/
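For example, on the broker you could bind to all interfaces but advertise the instance's public address. A sketch of the relevant server.properties lines (the IP is the one from the question and the port is the default; adjust both to your setup):
# listen on all interfaces inside the instance
listeners=PLAINTEXT://0.0.0.0:9092
# advertise an address that external clients can actually resolve and reach
advertised.listeners=PLAINTEXT://34.215.180.111:9092
After changing the properties, restart the broker, and make sure the security group or firewall opens port 9092 to your client.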

HdfsRpcException: Failed to invoke RPC call "getFsStats" on server

I've installed a single-node Hadoop cluster on an EC2 instance. I then stored some test data on HDFS, and I'm trying to load that HDFS data into SAP Vora. I'm using SAP Vora 2.0 for this project.
To create the table and load the data into Vora, this is the query I'm running:
drop table if exists dims;
CREATE TABLE dims(teamid int, team string)
USING com.sap.spark.engines.relational
OPTIONS (
  hdfsnamenode "namenode.example.com:50070",
  files "/path/to/file.csv",
  storagebackend "hdfs");
When I run the above query, I get this error message:
com.sap.vora.jdbc.VoraException: HL(9): Runtime error.
(could not handle api call, failure reason : execution of scheduler plan failed:
found error: :-1, CException, Code: 10021 : Runtime category : an std::exception wrapped.
Next level: v2 HDFS Plugin: Exception at opening
hdfs://namenode.example.com:50070/path/to/file.csv:
HdfsRpcException: Failed to invoke RPC call "getFsStats" on server
"namenode.example.com:50070" for node id 20
with error code 0, status ERROR_STATUS
Hadoop and Vora are running on different nodes.
You should specify the HDFS NameNode RPC port, which is typically 8020; 50070 is the port of the web UI. See e.g. "Default Namenode port of HDFS is 50070. But I have come across at some places 8020 or 9000".
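With that change, the statement from the question would look like this (assuming your NameNode listens on the default RPC port 8020):
drop table if exists dims;
CREATE TABLE dims(teamid int, team string)
USING com.sap.spark.engines.relational
OPTIONS (
  hdfsnamenode "namenode.example.com:8020",
  files "/path/to/file.csv",
  storagebackend "hdfs");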

Hazelcast cluster over AWS using Docker

Hi, I am trying to configure a Hazelcast cluster on AWS.
I am running Hazelcast in a Docker container and using --net=host to use the host's network configuration.
When I look at the Hazelcast logs, I see:
[172.17.0.1]:5701 [herald] [3.8] Established socket connection between /[node2]:5701 and /[node1]:47357
04:24:22.595 [hz._hzInstance_1_herald.IO.thread-out-0] DEBUG c.h.n.t.SocketWriterInitializerImpl - [172.17.0.1]:5701 [herald] [3.8] Initializing SocketWriter WriteHandler with Cluster Protocol
04:24:22.595 [hz._hzInstance_1_herald.IO.thread-in-0] WARN c.h.nio.tcp.TcpIpConnectionManager - [172.17.0.1]:5701 [herald] [3.8] Wrong bind request from [172.17.0.1]:5701! This node is not requested endpoint: [node2]:5701
04:24:22.595 [hz._hzInstance_1_herald.IO.thread-in-0] INFO c.hazelcast.nio.tcp.TcpIpConnection - [172.17.0.1]:5701 [herald] [3.8] Connection[id=40, /[node2]:5701->/[node1]:47357, endpoint=null, alive=false, type=MEMBER] closed. Reason: Wrong bind request from [172.17.0.1]:5701! This node is not requested endpoint: [node2]:5701
I can see the error saying the bind request comes from 172.17.0.1 to node1, and node1 is not accepting this request.
final Config config = new Config();
config.setGroupConfig(clientConfig().getGroupConfig());
final NetworkConfig networkConfig = new NetworkConfig();
final JoinConfig joinConfig = new JoinConfig();
final TcpIpConfig tcpIpConfig = new TcpIpConfig();
// disable multicast; member discovery goes through the AWS plugin instead
final MulticastConfig multicastConfig = new MulticastConfig();
multicastConfig.setEnabled(false);
final AwsConfig awsConfig = new AwsConfig();
awsConfig.setEnabled(true);
// awsConfig.setSecurityGroupName("xxxx");
awsConfig.setRegion("xxxx");
awsConfig.setIamRole("xxxx");
// discover cluster members by EC2 tag
awsConfig.setTagKey("type");
awsConfig.setTagValue("xxxx");
awsConfig.setConnectionTimeoutSeconds(120);
joinConfig.setAwsConfig(awsConfig);
joinConfig.setMulticastConfig(multicastConfig);
joinConfig.setTcpIpConfig(tcpIpConfig);
networkConfig.setJoin(joinConfig);
// bind only to the host interface
final InterfacesConfig interfaceConfig = networkConfig.getInterfaces();
interfaceConfig.setEnabled(true).addInterface("172.29.238.71");
config.setNetworkConfig(networkConfig);
Above is the code that configures the AwsConfig.
Please help me resolve this issue.
Thanks!
You are experiencing an issue (#11795) in the default Hazelcast bind address selection mechanism.
There are several workarounds available:
Workaround 1: System property
You can set the bind address by providing the correct IP address via the hazelcast.local.localAddress system property:
java -Dhazelcast.local.localAddress=[yourCorrectIpGoesHere]
or
System.setProperty("hazelcast.local.localAddress", "[yourCorrectIpGoesHere]")
Read the details in the System properties chapter of the Hazelcast Reference Manual.
Workaround 2: Hazelcast Network configuration
Hazelcast Network configuration allows you to specify which IP addresses can be used to bind the server.
Declarative in hazelcast.xml:
<hazelcast>
  ...
  <network>
    ...
    <interfaces enabled="true">
      <interface>10.3.16.*</interface>
      <interface>10.3.10.4-18</interface>
      <interface>192.168.1.3</interface>
    </interfaces>
  </network>
  ...
</hazelcast>
Programmatic:
Config config = new Config();
NetworkConfig network = config.getNetworkConfig();
InterfacesConfig interfaceConfig = network.getInterfaces();
interfaceConfig.setEnabled(true).addInterface("192.168.1.3");
HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance(config);
Read the details in the Interfaces section of the Hazelcast Reference Manual.
Update:
With the earlier steps you are able to set a proper bind address, for instance the local one returned by ip addr show. Nevertheless, this can be insufficient if you run Hazelcast in an environment where the local IP and the public IP differ (clouds, Docker).
Next Step: Configure public address
This step is necessary in environments where cluster nodes don't see each other under the reported local address of the other node. You have to set the public address, i.e. the one which other nodes are able to reach (optionally with a port specified).
networkConfig.setPublicAddress("172.29.238.71");
// or, if a non-default Hazelcast port is used, e.g. 9991
networkConfig.setPublicAddress("172.29.238.71:9991");

akka cluster slave node not joining seed node

I am working with the Akka distributed worker template available from Typesafe. I am using it to write a backend job which pulls data from Siebel via SOAP calls and inserts it into MongoDB. This job is supposed to run once a week for a few hours.
Based on the cluster usage and other documentation on the Akka website, I imported akka-cluster.jar and configured the application configuration file with seed nodes (akka.cluster.seed-nodes). But when I start the first node (the master node) with the configuration I mentioned (seed nodes etc.), I get errors on the server console saying it failed to join the seed node, which is expected (as it is the first node and there is nothing to join yet). I then start the second node with akka.cluster.seed-nodes configured with the IP address and port of the process where the master node is running. Once again I get errors on the server console.
What I do next is take the first join address of the master actor from the master node's logs and set it dynamically in the slave node's code (constructing an Address object and passing it to the actors on the slave node). THIS WORKS! But if I take the same join address and configure it in akka.cluster.seed-nodes in the application configuration, it throws errors and the slave doesn't join the cluster.
So I have the following questions:
1. How do I configure akka.cluster.seed-nodes in the application configuration? I could never make it work.
2. Is there any way to pre-configure the seed nodes in the configuration? From my experiments, the configuration appears to be dynamic, i.e. I have to take the join address of the actor on the master node from the logs and configure the slave's seed-nodes setting with that address.
I've had similar problems, which turned out to be a mismatch between the actor system name in the seed-nodes configuration and the actual actor system name created in my code.
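For example, a minimal sketch of the two pieces that must match (the system name ClusterSystem, host, and port are placeholders, and the akka.tcp protocol assumes the classic remoting used by that template):
application.conf:
akka.cluster.seed-nodes = [
  "akka.tcp://ClusterSystem@10.0.0.1:2551"
]
In code, the first argument to ActorSystem.create must be exactly the name embedded in those addresses:
ActorSystem system = ActorSystem.create("ClusterSystem", ConfigFactory.load());
If the configuration says akka.tcp://Workers@... while the code creates "ClusterSystem", the join fails with errors like the ones described in the question.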

How to run data node block scanner on data node in a cluster from a remote machine

By default, the DataNode runs the block scanner every 504 hours; this is the default value of dfs.datanode.scan.period.hours. One way to run the block scanner is to change dfs.datanode.scan.period.hours in hdfs-site.xml, but is there any other way? Is it possible to trigger the block scanner on a DataNode either through a command or programmatically?
Currently there is no way to issue a command from the command line to trigger the block scanner on a DataNode. As you mentioned in your question, the only way to control the frequency of the block scanner process is by changing the value of the dfs.datanode.scan.period.hours setting in your hdfs-site.xml file.
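For reference, a sketch of the hdfs-site.xml entry (504 hours, i.e. three weeks, is the default; lower the value and restart the DataNode to make scans run more often):
<property>
  <name>dfs.datanode.scan.period.hours</name>
  <value>504</value>
</property>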
It's possible. Remove the cursor file.
See https://javagc.leponceau.org/2017/10/hdfs-tricks.html
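A sketch of that approach (the data directory /hadoop/dfs/data and the cursor file name are assumptions based on the usual DataNode layout; stop the DataNode first):
# stop the DataNode, delete the scanner cursor so the scan restarts from
# the beginning, then start the DataNode again
find /hadoop/dfs/data -name "scanner.cursor" -delete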