PutHDFS processor fails to write to Kerberized and TLS/SSL-enabled HDFS

I get the error below in my NiFi logs when I try to write a file to HDFS. Writing a file from within the Hadoop cluster works fine, but from NiFi it fails with the following message:
2020-08-10 13:41:59,519 INFO [NiFi Web Server-32056] o.a.n.c.s.StandardProcessScheduler Starting LogMessage[id=b4bc6d2c-0173-1000-0000-00002905a41b]
2020-08-10 13:41:59,519 INFO [NiFi Web Server-32056] o.a.n.controller.StandardProcessorNode Starting LogMessage[id=b4bc6d2c-0173-1000-0000-00002905a41b]
2020-08-10 13:41:59,519 INFO [NiFi Web Server-32056] o.a.n.c.s.StandardProcessScheduler Starting LogMessage[id=b4bd264b-0173-1000-0000-000018f91304]
2020-08-10 13:41:59,519 INFO [NiFi Web Server-32056] o.a.n.controller.StandardProcessorNode Starting LogMessage[id=b4bd264b-0173-1000-0000-000018f91304]
2020-08-10 13:41:59,519 INFO [NiFi Web Server-32056] o.a.n.c.s.StandardProcessScheduler Starting GetFile[id=b4d14ae8-0173-1000-ffff-ffffe680a6a0]
2020-08-10 13:41:59,519 INFO [NiFi Web Server-32056] o.a.n.controller.StandardProcessorNode Starting GetFile[id=b4d14ae8-0173-1000-ffff-ffffe680a6a0]
2020-08-10 13:41:59,519 INFO [NiFi Web Server-32056] o.a.n.c.s.StandardProcessScheduler Starting PutHDFS[id=4d34342b-2901-125d-917f-567e466964c8]
2020-08-10 13:41:59,519 INFO [NiFi Web Server-32056] o.a.n.controller.StandardProcessorNode Starting PutHDFS[id=4d34342b-2901-125d-917f-567e466964c8]
2020-08-10 13:41:59,519 INFO [Timer-Driven Process Thread-6] o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled GetFile[id=b4d14ae8-0173-1000-ffff-ffffe680a6a0] to run with 1 threads
2020-08-10 13:41:59,519 INFO [Timer-Driven Process Thread-2] o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled LogMessage[id=b4bc6d2c-0173-1000-0000-00002905a41b] to run with 1 threads
2020-08-10 13:41:59,519 INFO [Timer-Driven Process Thread-5] o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled LogMessage[id=b4bd264b-0173-1000-0000-000018f91304] to run with 1 threads
2020-08-10 13:41:59,543 INFO [Timer-Driven Process Thread-10] o.a.hadoop.security.UserGroupInformation Login successful for user abc@UX.xyzCORP.NET using keytab file /home/abc/confFiles/abc.keytab
2020-08-10 13:41:59,544 INFO [Timer-Driven Process Thread-10] o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled PutHDFS[id=4d34342b-2901-125d-917f-567e466964c8] to run with 1 threads
2020-08-10 13:41:59,595 INFO [Thread-9481] o.a.h.h.p.d.sasl.SaslDataTransferClient SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-08-10 13:41:59,599 INFO [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Exception in createBlockOutputStream blk_1075334640_1594409
java.io.EOFException: null
at java.io.DataInputStream.readByte(DataInputStream.java:267)
at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
at org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.readFieldsLegacy(BlockTokenIdentifier.java:240)
at org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.readFields(BlockTokenIdentifier.java:221)
at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:200)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:530)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:342)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:276)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:245)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:203)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:193)
at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1731)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1679)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:716)
2020-08-10 13:41:59,599 WARN [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Abandoning BP-1824237254-0.00.64.55-1545405130172:blk_1075334640_1594409
2020-08-10 13:41:59,601 WARN [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Excluding datanode DatanodeInfoWithStorage[0.00.64.57:50010,DS-d6f56418-6e18-4317-a8ec-4a5b15757728,DISK]
2020-08-10 13:41:59,605 INFO [Thread-9481] o.a.h.h.p.d.sasl.SaslDataTransferClient SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-08-10 13:41:59,606 INFO [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Exception in createBlockOutputStream blk_1075334641_1594410
java.io.EOFException: null
at java.io.DataInputStream.readByte(DataInputStream.java:267)
at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
at org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.readFieldsLegacy(BlockTokenIdentifier.java:240)
at org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.readFields(BlockTokenIdentifier.java:221)
at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:200)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:530)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:342)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:276)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:245)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:203)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:193)
at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1731)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1679)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:716)
2020-08-10 13:41:59,606 WARN [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Abandoning BP-1824237254-0.00.64.55-1545405130172:blk_1075334641_1594410
2020-08-10 13:41:59,608 WARN [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Excluding datanode DatanodeInfoWithStorage[0.00.64.56:50010,DS-286b28e8-d035-4b8c-a2dd-aabb08666234,DISK]
2020-08-10 13:41:59,612 INFO [Thread-9481] o.a.h.h.p.d.sasl.SaslDataTransferClient SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-08-10 13:41:59,612 INFO [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Exception in createBlockOutputStream blk_1075334642_1594411
java.io.EOFException: null
at java.io.DataInputStream.readByte(DataInputStream.java:267)
at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
at org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.readFieldsLegacy(BlockTokenIdentifier.java:240)
at org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.readFields(BlockTokenIdentifier.java:221)
at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:200)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:530)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:342)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:276)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:245)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:203)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:193)
at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1731)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1679)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:716)
2020-08-10 13:41:59,612 WARN [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Abandoning BP-1824237254-0.00.64.55-1545405130172:blk_1075334642_1594411
2020-08-10 13:41:59,614 WARN [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Excluding datanode DatanodeInfoWithStorage[0.00.64.58:50010,DS-53536364-33f4-40d6-85c2-508abf7ff023,DISK]
2020-08-10 13:41:59,618 INFO [Thread-9481] o.a.h.h.p.d.sasl.SaslDataTransferClient SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-08-10 13:41:59,619 INFO [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Exception in createBlockOutputStream blk_1075334643_1594412
java.io.EOFException: null
at java.io.DataInputStream.readByte(DataInputStream.java:267)
at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
at org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.readFieldsLegacy(BlockTokenIdentifier.java:240)
at org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.readFields(BlockTokenIdentifier.java:221)
at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:200)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:530)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:342)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:276)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:245)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:203)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:193)
at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1731)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1679)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:716)
2020-08-10 13:41:59,619 WARN [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Abandoning BP-1824237254-0.00.64.55-1545405130172:blk_1075334643_1594412
2020-08-10 13:41:59,621 WARN [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Excluding datanode DatanodeInfoWithStorage[0.00.64.84:50010,DS-abba7d97-925a-4299-af86-b58fef9aaa12,DISK]
2020-08-10 13:41:59,621 WARN [Thread-9481] org.apache.hadoop.hdfs.DataStreamer DataStreamer Exception
java.io.IOException: Unable to create new block.
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1694)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:716)
2020-08-10 13:41:59,621 WARN [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Could not get block locations. Source file "/user/abc/puthdfs_test/.test.txt" - Aborting...block==null
2020-08-10 13:41:59,626 ERROR [Timer-Driven Process Thread-2] o.apache.nifi.processors.hadoop.PutHDFS PutHDFS[id=4d34342b-2901-125d-917f-567e466964c8] Failed to write to HDFS due to org.apache.nifi.processor.exception.ProcessException: IOException thrown from PutHDFS[id=4d34342b-2901-125d-917f-567e466964c8]: java.io.IOException: Could not get block locations. Source file "/user/abc/puthdfs_test/.test.txt" - Aborting...block==null: org.apache.nifi.processor.exception.ProcessException: IOException thrown from PutHDFS[id=4d34342b-2901-125d-917f-567e466964c8]: java.io.IOException: Could not get block locations. Source file "/user/abc/puthdfs_test/.test.txt" - Aborting...block==null
org.apache.nifi.processor.exception.ProcessException: IOException thrown from PutHDFS[id=4d34342b-2901-125d-917f-567e466964c8]: java.io.IOException: Could not get block locations. Source file "/user/abc/puthdfs_test/.test.txt" - Aborting...block==null
Any help would be highly appreciated. I have ruled out HDFS itself as the cause.
Thanks
David
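A note on what "TLS/SSL enabled HDFS" usually implies on the client side: the hdfs-site.xml handed to PutHDFS via its Hadoop Configuration Resources property has to carry data-transfer protection settings that match the cluster. A minimal sketch (the property names are standard HDFS ones; the values here are assumptions, not taken from this cluster):
<!-- hdfs-site.xml fragment used by PutHDFS; values are illustrative assumptions -->
<property>
    <name>dfs.encrypt.data.transfer</name>
    <value>true</value>
</property>
<property>
    <name>dfs.data.transfer.protection</name>
    <value>privacy</value>
</property>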

Related

How to send an email in AWS MWAA (Apache Airflow) using EmailOperator

I am working with AWS MWAA (Apache Airflow). I want to send an email from MWAA upon completion of my pipeline. I have set the following configuration.
Now when I run my DAG using an EmailOperator, it gives me an error.
File "/usr/lib64/python3.7/socket.py", line 707, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
File "/usr/lib64/python3.7/socket.py", line 752, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
[2022-05-19, 11:11:21 UTC] {{local_task_job.py:154}} INFO - Task exited with return code 1
[2022-05-19, 11:11:21 UTC] {{local_task_job.py:264}} INFO - 0 downstream tasks scheduled from follow-on schedule check
Then I changed my configuration to
It now gives the following error
File "/usr/lib64/python3.7/smtplib.py", line 642, in auth
raise SMTPAuthenticationError(code, resp)
smtplib.SMTPAuthenticationError: (530, b'Must issue a STARTTLS command first')
[2022-05-19, 12:22:39 UTC] {{local_task_job.py:154}} INFO - Task exited with return code 1
[2022-05-19, 12:22:39 UTC] {{local_task_job.py:264}} INFO - 0 downstream tasks scheduled from follow-on schedule check
Can you please tell me where I am going wrong, or how I should configure this to send an email to a particular email address from any domain?
Your smtp host variable is an email address and not a host.
It should be smtp.gmail.com not smtp@gmail.com
You've hopefully also changed your password as you have shared it publicly in that screenshot and anyone could use it now.
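For illustration, the MWAA configuration options for an SMTP relay would look roughly like this; the keys are the standard Airflow [smtp] options surfaced through MWAA, while the values, including the Gmail host and STARTTLS on port 587 (suggested by the "Must issue a STARTTLS command first" error above), are assumptions:
smtp.smtp_host = smtp.gmail.com
smtp.smtp_port = 587
smtp.smtp_starttls = True
smtp.smtp_ssl = False
smtp.smtp_mail_from = your-address@gmail.com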

Kafka and zookeeper won't connect in a stable fashion on AWS ECS

I'm trying to run Kafka and ZooKeeper containers on AWS ECS, but I'm struggling to get them into a stable state.
Zookeeper - I'm using the official zookeeper:3.5 image and having ECS service discovery give it an address at zk0.ecs.demo.com
Kafka - I'm using the latest wurstmeister/kafka image and having ECS service discovery give it an address at kafka1.ecs.demo.com,
configuring the KAFKA_ZOOKEEPER_CONNECT env var to look at zk0.ecs.demo.com, with KAFKA_ADVERTISED_HOST_NAME as kafka1.ecs.demo.com:9092
ZooKeeper starts up, and seems to do so in a stable way; Kafka then starts up, and once it resolves zk0.ecs.demo.com as a host it connects, causing some errors in the ZooKeeper logs.
They seem to stay connected until about a minute later, when ZooKeeper dies; Kafka then can't find it at the old IP address and dies as well, and this all repeats in a circular fashion.
The networking is wide open with all traffic allowed, so I'm not sure this is the problem.
I've tried running them on t2.large machines with 2048 memory, so I'm not sure it's a resources problem.
I've also tried:
- running ZK as a cluster of 3
- running with the ports both mapped to the hosts and unmapped
- using the ECS EC2 instance IP address instead of the private DNS name
I can't seem to get it to work.
I've consolidated the Terraform script I'm using into a single file if anyone wants to try to recreate it (you will need an SSH key at ~/.ssh/id_rsa.pub): https://gist.github.com/chestercodes/480dda384229adbd75fefea8fdf483d4
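For reference, the wiring described above would look roughly like this in the Kafka container's ECS task definition (a sketch reconstructed from the description, not taken from the actual Terraform; note that KAFKA_ADVERTISED_HOST_NAME takes a bare hostname, with the port going into KAFKA_ADVERTISED_PORT):
{
    "name": "kafka",
    "image": "wurstmeister/kafka",
    "environment": [
        { "name": "KAFKA_ZOOKEEPER_CONNECT", "value": "zk0.ecs.demo.com:2181" },
        { "name": "KAFKA_ADVERTISED_HOST_NAME", "value": "kafka1.ecs.demo.com" },
        { "name": "KAFKA_ADVERTISED_PORT", "value": "9092" }
    ]
}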
ZK logs
15:53:03 ZooKeeper JMX enabled by default
15:53:04 Using config: /conf/zoo.cfg
53:04,044 [myid:] - INFO [main:QuorumPeerConfig@117] - Reading configuration from: /conf/zoo.cfg
53:04,051 [myid:] - INFO [main:QuorumPeerConfig@317] - clientPort is not set
53:04,051 [myid:] - INFO [main:QuorumPeerConfig@331] - secureClientPort is not set
53:04,058 [myid:] - WARN [main:QuorumPeerConfig@590] - No server failure will be tolerated. You need at least 3 servers.
53:04,072 INFO [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
53:04,072 INFO [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0
53:04,072 INFO [main:DatadirCleanupManager@101] - Purge task is not scheduled.
53:04,073 INFO [main:ManagedUtil@46] - Log4j found with jmx enabled.
53:04,085 INFO [main:QuorumPeerMain@138] - Starting quorum peer
53:04,104 INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 4 worker threads, and 64 kB direct buffers.
53:04,112 INFO [main:NIOServerCnxnFactory@686] - binding to port /0.0.0.0:2181
53:04,149 INFO [main:Log@186] - Logging initialized @753ms
53:04,204 WARN [main:ContextHandler@1339] - o.e.j.s.ServletContextHandler@8bd1b6a{/,null,null} contextPath ends with /*
53:04,204 WARN [main:ContextHandler@1350] - Empty contextPath
53:04,225 INFO [main:QuorumPeer@1349] - Local sessions disabled
53:04,225 INFO [main:QuorumPeer@1360] - Local session upgrading disabled
53:04,226 INFO [main:QuorumPeer@1327] - tickTime set to 2000
53:04,227 INFO [main:QuorumPeer@1371] - minSessionTimeout set to 4000
53:04,227 INFO [main:QuorumPeer@1382] - maxSessionTimeout set to 40000
53:04,227 INFO [main:QuorumPeer@1397] - initLimit set to 5
53:04,243 INFO [main:FileTxnSnapLog@320] - Snapshotting: 0x0 to /data/version-2/snapshot.0
53:04,246 INFO [main:QuorumPeer@798] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
53:04,249 INFO [main:QuorumPeer@813] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
53:04,256 INFO [main:Server@327] - jetty-9.2.18.v20160721
53:04,289 INFO [main:ContextHandler@744] - Started o.e.j.s.ServletContextHandler@8bd1b6a{/,null,AVAILABLE}
53:04,311 INFO [main:AbstractConnector@266] - Started ServerConnector@14ab98c3{HTTP/1.1}{0.0.0.0:8080}
53:04,312 INFO [main:Server@379] - Started @921ms
53:04,312 INFO [main:JettyAdminServer@112] - Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands
53:04,319 INFO [QuorumPeerListener:QuorumCnxManager$Listener@636] - My election bind port: localhost/127.0.0.1:3888
53:04,323 INFO [QuorumPeer[myid=1](plain=/0.0.0.0:2181)(secure=disabled):QuorumPeer@1055] - LOOKING
53:04,324 INFO [QuorumPeer[myid=1](plain=/0.0.0.0:2181)(secure=disabled):FastLeaderElection@894] - New election. My id = 1, proposed zxid=0x0
53:04,326 INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
53:04,527 INFO [...)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id1,name1=replica.1,name2=LeaderElection]
53:04,528 INFO [...)(secure=disabled):QuorumPeer@1143] - LEADING
53:04,531 INFO [...)(secure=disabled):Leader@63] - TCP NoDelay set to: true
53:04,531 INFO [...)(secure=disabled):Leader@83] - zookeeper.leader.maxConcurrentSnapshots = 10
53:04,532 INFO [...)(secure=disabled):Leader@85] - zookeeper.leader.maxConcurrentSnapshotTimeout = 5
53:04,538 INFO [...)(secure=disabled):Environment@109] - Server environment:zookeeper.version=3.5.3-beta-8ce24f9e675cbefffb8f21a47e06b42864475a60, built on 04/03/2017 16:19 GMT
53:04,539 INFO [...)(secure=disabled):Environment@109] - Server environment:host.name=ip-10-0-1-117.eu-west-1.compute.internal
53:04,539 INFO [...] - Server environment:java.version=1.8.0_151
53:04,539 INFO [...] - Server environment:java.vendor=Oracle Corporation
53:04,539 INFO [...] - Server environment:java.home=/usr/lib/jvm/java-1.8-openjdk/jre
53:04,539 INFO [...] - Server environment:java.class.path=/zookeeper-3.5.3-beta/bin/../build/classes:/zookeeper-3.5.3-beta/bin/../build/lib/*.jar:/zookeeper-3.5.3-beta/bin/../lib/slf4j-log4j12-1.7.5.jar:/zookeeper-3.5.3-beta/bin/../lib/slf4j-api-1.7.5.jar:/zookeeper-3.5.3-beta/bin/../lib/netty-3.10.5.Fin
53:04,539 INFO [...] - Server environment:java.library.path=/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64:/usr/lib/jvm/java-1.8-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
53:04,539 INFO [...] - Server environment:java.io.tmpdir=/tmp
53:04,539 INFO [...] - Server environment:java.compiler=<NA>
53:04,539 INFO [...] - Server environment:os.name=Linux
53:04,540 INFO [...] - Server environment:os.arch=amd64
53:04,540 INFO [...] - Server environment:os.version=4.9.85-38.58.amzn1.x86_64
53:04,540 INFO [...] - Server environment:user.name=zookeeper
53:04,540 INFO [...] - Server environment:user.home=/home/zookeeper
53:04,540 INFO [...] - Server environment:user.dir=/zookeeper-3.5.3-beta
53:04,540 INFO [...] - Server environment:os.memory.free=53MB
53:04,540 INFO [...] - Server environment:os.memory.max=889MB
53:04,540 INFO [...] - Server environment:os.memory.total=59MB
53:04,541 INFO [...:ZooKeeperServer@907] - minSessionTimeout set to 4000
53:04,541 INFO [...:ZooKeeperServer@916] - maxSessionTimeout set to 40000
53:04,542 INFO [...:ZooKeeperServer@159] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /datalog/version-2 snapdir /data/version-2
53:04,543 INFO [...:Leader@414] - LEADING - LEADER ELECTION TOOK - 15 MS
53:04,547 INFO [...:FileTxnSnapLog@320] - Snapshotting: 0x0 to /data/version-2/snapshot.0
53:04,564 INFO [...:Leader@1258] - Have quorum of supporters, sids: [ [1],[1] ]; starting up and setting last processed zxid: 0x100000000
53:04,578 INFO [...:CommitProcessor@255] - Configuring CommitProcessor with 2 worker threads.
53:04,587 INFO [...:ContainerManager@64] - Using checkIntervalMs=60000 maxPerMinute=10000
54:21,619 INFO [NIOServerCxnFactory.AcceptThread:/0.0.0.0:2181:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /10.0.1.91:44016
54:21,627 INFO [NIOWorkerThread-1:ZooKeeperServer@1013] - Client attempting to establish new session at /10.0.1.91:44016
54:21,630 INFO [SyncThread:1:FileTxnLog@204] - Creating new log file: log.100000001
54:21,638 INFO [CommitProcWorkThread-1:ZooKeeperServer@727] - Established session 0x1000001141b0000 with negotiated timeout 6000 for client /10.0.1.91:44016
54:21,708 INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@880] - Got user-level KeeperException when processing sessionid:0x1000001141b0000 type:create cxid:0x2 zxid:0x100000003 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers
54:21,725 INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@880] - Got user-level KeeperException when processing sessionid:0x1000001141b0000 type:create cxid:0x6 zxid:0x100000007 txntype:-1 reqpath:n/a Error Path:/config Error:KeeperErrorCode = NoNode for /config
54:21,733 INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@880] - Got user-level KeeperException when processing sessionid:0x1000001141b0000 type:create cxid:0x9 zxid:0x10000000a txntype:-1 reqpath:n/a Error Path:/admin Error:KeeperErrorCode = NoNode for /admin
54:22,012 INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@880] - Got user-level KeeperException when processing sessionid:0x1000001141b0000 type:create cxid:0x15 zxid:0x100000015 txntype:-1 reqpath:n/a Error Path:/cluster Error:KeeperErrorCode = NoNode for /cluster
54:22,818 INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@880] - Got user-level KeeperException when processing sessionid:0x1000001141b0000 type:setData cxid:0x22 zxid:0x10000001b txntype:-1 reqpath:n/a Error Path:/controller_epoch Error:KeeperErrorCode = NoNode for /controller_epoch
54:23,031 INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@880] - Got user-level KeeperException when processing sessionid:0x1000001141b0000 type:delete cxid:0x32 zxid:0x10000001e txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
Kafka logs
15:54:22 log.roll.hours = 168
15:54:22 log.roll.jitter.hours = 0
15:54:22 log.roll.jitter.ms = null
15:54:22 log.roll.ms = null
15:54:22 log.segment.bytes = 1073741824
15:54:22 log.segment.delete.delay.ms = 60000
15:54:22 max.connections.per.ip = 2147483647
15:54:22 max.connections.per.ip.overrides =
15:54:22 max.incremental.fetch.session.cache.slots = 1000
15:54:22 message.max.bytes = 1000012
15:54:22 metric.reporters = []
15:54:22 metrics.num.samples = 2
15:54:22 metrics.recording.level = INFO
15:54:22 metrics.sample.window.ms = 30000
15:54:22 min.insync.replicas = 1
15:54:22 num.io.threads = 8
15:54:22 num.network.threads = 3
15:54:22 num.partitions = 1
15:54:22 num.recovery.threads.per.data.dir = 1
15:54:22 num.replica.alter.log.dirs.threads = null
15:54:22 num.replica.fetchers = 1
15:54:22 offset.metadata.max.bytes = 4096
15:54:22 offsets.commit.required.acks = -1
15:54:22 offsets.commit.timeout.ms = 5000
15:54:22 offsets.load.buffer.size = 5242880
15:54:22 offsets.retention.check.interval.ms = 600000
15:54:22 offsets.retention.minutes = 1440
15:54:22 offsets.topic.compression.codec = 0
15:54:22 offsets.topic.num.partitions = 50
15:54:22 offsets.topic.replication.factor = 1
15:54:22 offsets.topic.segment.bytes = 104857600
15:54:22 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
15:54:22 password.encoder.iterations = 4096
15:54:22 password.encoder.key.length = 128
15:54:22 password.encoder.keyfactory.algorithm = null
15:54:22 password.encoder.old.secret = null
15:54:22 password.encoder.secret = null
15:54:22 port = 9092
15:54:22 principal.builder.class = null
15:54:22 producer.purgatory.purge.interval.requests = 1000
15:54:22 queued.max.request.bytes = -1
15:54:22 queued.max.requests = 500
15:54:22 quota.consumer.default = 9223372036854775807
15:54:22 quota.producer.default = 9223372036854775807
15:54:22 quota.window.num = 11
15:54:22 quota.window.size.seconds = 1
15:54:22 replica.fetch.backoff.ms = 1000
15:54:22 replica.fetch.max.bytes = 1048576
15:54:22 replica.fetch.min.bytes = 1
15:54:22 replica.fetch.response.max.bytes = 10485760
15:54:22 replica.fetch.wait.max.ms = 500
15:54:22 replica.high.watermark.checkpoint.interval.ms = 5000
15:54:22 replica.lag.time.max.ms = 10000
15:54:22 replica.socket.receive.buffer.bytes = 65536
15:54:22 replica.socket.timeout.ms = 30000
15:54:22 replication.quota.window.num = 11
15:54:22 replication.quota.window.size.seconds = 1
15:54:22 request.timeout.ms = 30000
15:54:22 reserved.broker.max.id = 1000
15:54:22 sasl.enabled.mechanisms = [GSSAPI]
15:54:22 sasl.jaas.config = null
15:54:22 sasl.kerberos.kinit.cmd = /usr/bin/kinit
15:54:22 sasl.kerberos.min.time.before.relogin = 60000
15:54:22 sasl.kerberos.principal.to.local.rules = [DEFAULT]
15:54:22 sasl.kerberos.service.name = null
15:54:22 sasl.kerberos.ticket.renew.jitter = 0.05
15:54:22 sasl.kerberos.ticket.renew.window.factor = 0.8
15:54:22 sasl.mechanism.inter.broker.protocol = GSSAPI
15:54:22 security.inter.broker.protocol = PLAINTEXT
15:54:22 socket.receive.buffer.bytes = 102400
15:54:22 socket.request.max.bytes = 104857600
15:54:22 socket.send.buffer.bytes = 102400
15:54:22 ssl.cipher.suites = []
15:54:22 ssl.client.auth = none
15:54:22 ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
15:54:22 ssl.endpoint.identification.algorithm = null
15:54:22 ssl.key.password = null
15:54:22 ssl.keymanager.algorithm = SunX509
15:54:22 ssl.keystore.location = null
15:54:22 ssl.keystore.password = null
15:54:22 ssl.keystore.type = JKS
15:54:22 ssl.protocol = TLS
15:54:22 ssl.provider = null
15:54:22 ssl.secure.random.implementation = null
15:54:22 ssl.trustmanager.algorithm = PKIX
15:54:22 ssl.truststore.location = null
15:54:22 ssl.truststore.password = null
15:54:22 ssl.truststore.type = JKS
15:54:22 transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
15:54:22 transaction.max.timeout.ms = 900000
15:54:22 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
15:54:22 transaction.state.log.load.buffer.size = 5242880
15:54:22 transaction.state.log.min.isr = 1
15:54:22 transaction.state.log.num.partitions = 50
15:54:22 transaction.state.log.replication.factor = 1
15:54:22 transaction.state.log.segment.bytes = 104857600
15:54:22 transactional.id.expiration.ms = 604800000
15:54:22 unclean.leader.election.enable = false
15:54:22 zookeeper.connect = zk0.ecs.demo.com:2181
15:54:22 zookeeper.connection.timeout.ms = 6000
15:54:22 zookeeper.max.in.flight.requests = 10
15:54:22 zookeeper.session.timeout.ms = 6000
15:54:22 zookeeper.set.acl = false
15:54:22 zookeeper.sync.time.ms = 2000
15:54:22 (kafka.server.KafkaConfig)
15:54:22 [2018-05-04 15:54:22,162] INFO [ThrottledRequestReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
15:54:22 [2018-05-04 15:54:22,162] INFO [ThrottledRequestReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
15:54:22 [2018-05-04 15:54:22,163] INFO [ThrottledRequestReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
15:54:22 [2018-05-04 15:54:22,198] INFO Log directory '/kafka/kafka-logs-ip-10-0-1-91.eu-west-1.compute.internal' not found, creating it. (kafka.log.LogManager)
15:54:22 [2018-05-04 15:54:22,209] INFO Loading logs. (kafka.log.LogManager)
15:54:22 [2018-05-04 15:54:22,221] INFO Logs loading complete in 12 ms. (kafka.log.LogManager)
15:54:22 [2018-05-04 15:54:22,234] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
15:54:22 [2018-05-04 15:54:22,239] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
15:54:22 [2018-05-04 15:54:22,611] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
15:54:22 [2018-05-04 15:54:22,642] INFO [SocketServer brokerId=1001] Started 1 acceptor threads (kafka.network.SocketServer)
15:54:22 [2018-05-04 15:54:22,669] INFO [ExpirationReaper-1001-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
15:54:22 [2018-05-04 15:54:22,674] INFO [ExpirationReaper-1001-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
15:54:22 [2018-05-04 15:54:22,674] INFO [ExpirationReaper-1001-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
15:54:22 [2018-05-04 15:54:22,692] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
15:54:22 [2018-05-04 15:54:22,713] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
15:54:22 [2018-05-04 15:54:22,717] INFO Result of znode creation at /brokers/ids/1001 is: OK (kafka.zk.KafkaZkClient)
15:54:22 [2018-05-04 15:54:22,718] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(kafka1.ecs.demo.com:9092,9092,ListenerName(PLAINTEXT),PLAINTEXT)) (kafka.zk.KafkaZkClient)
15:54:22 [2018-05-04 15:54:22,720] WARN No meta.properties file under dir /kafka/kafka-logs-ip-10-0-1-91.eu-west-1.compute.internal/meta.properties (kafka.server.BrokerMetadataCheckpoint)
15:54:22 [2018-05-04 15:54:22,789] INFO [ExpirationReaper-1001-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
15:54:22 [2018-05-04 15:54:22,789] INFO [ExpirationReaper-1001-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
15:54:22 [2018-05-04 15:54:22,801] INFO Creating /controller (is it secure? false) (kafka.zk.KafkaZkClient)
15:54:22 [2018-05-04 15:54:22,801] INFO [ExpirationReaper-1001-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
15:54:22 [2018-05-04 15:54:22,813] INFO Result of znode creation at /controller is: OK (kafka.zk.KafkaZkClient)
15:54:22 [2018-05-04 15:54:22,828] INFO [GroupCoordinator 1001]: Starting up. (kafka.coordinator.group.GroupCoordinator)
15:54:22 [2018-05-04 15:54:22,829] INFO [GroupCoordinator 1001]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
15:54:22 [2018-05-04 15:54:22,838] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 8 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
15:54:22 [2018-05-04 15:54:22,859] INFO [ProducerId Manager 1001]: Acquired new producerId block (brokerId:1001,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
15:54:22 [2018-05-04 15:54:22,929] INFO [TransactionCoordinator id=1001] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
15:54:22 [2018-05-04 15:54:22,936] INFO [TransactionCoordinator id=1001] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
15:54:22 [2018-05-04 15:54:22,937] INFO [Transaction Marker Channel Manager 1001]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
15:54:23 [2018-05-04 15:54:23,047] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
15:54:23 [2018-05-04 15:54:23,070] INFO Kafka version : 1.1.0 (org.apache.kafka.common.utils.AppInfoParser)
15:54:23 [2018-05-04 15:54:23,070] INFO Kafka commitId : fdcf75ea326b8e07 (org.apache.kafka.common.utils.AppInfoParser)
15:54:23 [2018-05-04 15:54:23,071] INFO [KafkaServer id=1001] started (kafka.server.KafkaServer)
15:56:37 [2018-05-04 15:56:37,770] INFO Unable to read additional data from server sessionid 0x1000001141b0000, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
15:56:39 [2018-05-04 15:56:39,772] INFO Opening socket connection to server ip-10-0-1-117.eu-west-1.compute.internal/10.0.1.117:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
15:56:40 [2018-05-04 15:56:40,798] WARN Session 0x1000001141b0000 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
15:56:40 java.net.ConnectException: Connection refused
15:56:40 at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
15:56:40 at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
15:56:40 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
15:56:40 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
15:56:42 [2018-05-04 15:56:42,475] INFO Opening socket connection to server ip-10-0-1-117.eu-west-1.compute.internal/10.0.1.117:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
15:56:48 [2018-05-04 15:56:48,482] WARN Client session timed out, have not heard from server in 7583ms for sessionid 0x1000001141b0000 (org.apache.zookeeper.ClientCnxn)
15:56:48 [2018-05-04 15:56:48,482] INFO Client session timed out, have not heard from server in 7583ms for sessionid 0x1000001141b0000, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
15:56:50 [2018-05-04 15:56:50,254] INFO Opening socket connection to server ip-10-0-1-117.eu-west-1.compute.internal/10.0.1.117:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
15:56:56 [2018-05-04 15:56:56,257] WARN Client session timed out, have not heard from server in 7674ms for sessionid 0x1000001141b0000 (org.apache.zookeeper.ClientCnxn)
15:56:56 [2018-05-04 15:56:56,258] INFO Client session timed out, have not heard from server in 7674ms for sessionid 0x1000001141b0000, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
15:56:57 [2018-05-04 15:56:57,591] INFO Opening socket connection to server ip-10-0-1-117.eu-west-1.compute.internal/10.0.1.117:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
15:57:03 [2018-05-04 15:57:03,596] WARN Client session timed out, have not heard from server in 7238ms for sessionid 0x1000001141b0000 (org.apache.zookeeper.ClientCnxn)
15:57:03 [2018-05-04 15:57:03,596] INFO Client session timed out, have not heard from server in 7238ms for sessionid 0x1000001141b0000, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
15:57:04 [2018-05-04 15:57:04,890] INFO Opening socket connection to server ip-10-0-1-117.eu-west-1.compute.internal/10.0.1.117:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
15:57:10 [2018-05-04 15:57:10,894] WARN Client session timed out, have not heard from server in 7198ms for sessionid 0x1000001141b0000 (org.apache.zookeeper.ClientCnxn)
15:57:10 [2018-05-04 15:57:10,894] INFO Client session timed out, have not heard from server in 7198ms for sessionid 0x1000001141b0000, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
15:57:12 [2018-05-04 15:57:12,388] INFO Opening socket connection to server ip-10-0-1-117.eu-west-1.compute.internal/10.0.1.117:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
15:57:18 [2018-05-04 15:57:18,392] WARN Client session timed out, have not heard from server in 7398ms for sessionid 0x1000001141b0000 (org.apache.zookeeper.ClientCnxn)
15:57:18 [2018-05-04 15:57:18,392] INFO Client session timed out, have not heard from server in 7398ms for sessionid 0x1000001141b0000, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
15:57:20 [2018-05-04 15:57:20,162] INFO Opening socket connection to server ip-10-0-1-117.eu-west-1.compute.internal/10.0.1.117:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
15:57:24 [2018-05-04 15:57:24,254] WARN Session 0x1000001141b0000 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
15:57:24 java.net.NoRouteToHostException: Host is unreachable
15:57:24 at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
15:57:24 at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
15:57:24 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
15:57:24 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
15:57:25 [2018-05-04 15:57:25,482] INFO Opening socket connection to server ip-10-0-1-117.eu-west-1.compute.internal/10.0.1.117:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
15:57:50 [2018-05-04 15:57:50,783] INFO Opening socket connection to server ip-10-0-1-117.eu-west-1.compute.internal/10.0.1.117:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
15:57:51 [2018-05-04 15:57:51,902] WARN Session 0x1000001141b0000 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
15:57:51 java.net.NoRouteToHostException: Host is unreachable
15:57:51 at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
15:57:51 at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
15:57:51 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
15:57:51 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
15:57:52 [2018-05-04 15:57:52,203] INFO Terminating process due to signal SIGTERM (kafka.Kafka$)
15:57:52 [2018-05-04 15:57:52,205] INFO [KafkaServer id=1001] shutting down (kafka.server.KafkaServer)
15:57:52 [2018-05-04 15:57:52,207] INFO [KafkaServer id=1001] Starting controlled shutdown (kafka.server.KafkaServer)
15:57:53 [2018-05-04 15:57:53,962] INFO Opening socket connection to server ip-10-0-1-117.eu-west-1.compute.internal/10.0.1.117:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
15:57:54 [2018-05-04 15:57:54,974] WARN Session 0x1000001141b0000 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
15:57:54 java.net.NoRouteToHostException: Host is unreachable
15:57:54 at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
15:57:54 at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
15:57:54 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
15:57:54 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
15:57:55 [2018-05-04 15:57:55,076] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
15:57:56 [2018-05-04 15:57:56,978] INFO Opening socket connection to server ip-10-0-1-117.eu-west-1.compute.internal/10.0.1.117:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
15:57:58 [2018-05-04 15:57:58,046] WARN Session 0x1000001141b0000 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)

Issues while configuring the API Subscription workflow in WSO2 BPS

So, I have my WSO2 BPS 3.6.0 configured to support SSL and a custom hostname, i.e. mydomain.domain.com:9445 etc., and I'm trying to implement the API Subscription Workflow by following this documentation.
Now I've performed the following steps:
set the offset of WSO2 BPS to 2; it is running fine on port 9445
edited the wsa:Address tag in both SubscriptionService.epr and SubscriptionCallbackService.epr located in API-M_HOME/business-processes/epr,
as the BPS server had a customized hostname instead of localhost (not sure if performing this step was right; a sketch of the edited EPR follows this list)
SubscriptionService.epr
SubscriptionCallBackService.epr
copy-pasted the epr folder from API-M_HOME/business-processes/epr to BPS_HOME/repository/conf/epr
Added the required BPEL package and human task accordingly
Navigated to the carbon console from APIM and edited workflow-extensions.xml; here's how it looks
set the TaskCoordinationEnabled tag of b4p-coordination-config.xml, located in BPS_Home\repository\conf, to true
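As flagged above, a sketch of what SubscriptionService.epr roughly contains after the edit (the service path is an assumption, not copied from the actual file):
<wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
    <wsa:Address>https://mydomain.domain.com:9445/services/SubscriptionService</wsa:Address>
</wsa:EndpointReference>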
Consider OTHER required configurations:
At API Manager End:
site.json file located at APIM_Home\repository\deployment\server\jaggeryapps\admin\site\conf
{
    "theme": {
        "base": "wso2",
        "subtheme": "modern"
    },
    "context": "/admin",
    "request_url": "READ_FROM_REQUEST",
    "tasksPerPage": 10,
    "allowedPermission": "/permission/admin/manage/apim_admin",
    "workflows": {
        "workFlowServerURL": "https://mydomain.domain.com:9445/services/"
    },
    "ssoConfiguration": {
        "enabled": "false",
        "issuer": "API_WORKFLOW_ADMIN",
        "identityProviderURL": "https://localhost:9443/samlsso",
        "keyStorePassword": "",
        "identityAlias": "",
        "keyStoreName": "",
        "verifyAssertionValidityPeriod": "true",
        "audienceRestrictionsEnabled": "true",
        "responseSigningEnabled": "true",
        "assertionSigningEnabled": "true",
        "assertionEncryptionEnabled": "false",
        "idpInit": "false",
        "idpInitSSOURL": "https://localhost:9443/samlsso?spEntityID=API_WORKFLOW_ADMIN",
        "externalLogoutPage": "https://localhost:9443/samlsso?slo=true"
    },
    "reverseProxy": {
        "enabled": false,
        // values true, false, "auto" - will look for X-Forwarded-* headers
        "host": "sample.proxydomain.com",
        // If the reverse proxy does not have a domain name, use its IP
        "context": ""
        //"regContext":"" // Use only if a different path is used for the registry
    }
}
the workflow configuration in api-manager.xml
At BPS end:
carbon.xml
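The carbon.xml changes on the BPS side come down to the custom hostname and the port offset mentioned in the steps above; roughly (a sketch showing only the relevant elements, not the poster's actual file):
<HostName>mydomain.domain.com</HostName>
<MgtHostName>mydomain.domain.com</MgtHostName>
<Ports>
    <Offset>2</Offset>
</Ports>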
Issue: Now whenever a user navigates to the APIM Store and subscribes to any API, the subscription request is listed in the APIM admin console. When I select APPROVE from the provided drop-down and click the COMPLETE button, the record vanishes. However, this is the error that I see in WSO2's CMD windows:
APIM's cmd window
[2017-11-09 00:13:17,022] INFO - TimeoutHandler This engine will expire all callbacks after GLOBAL_TIMEOUT: 120 seconds, irrespective of the timeout action, after the specified or optional timeout
[2017-11-09 00:13:17,164] ERROR - TargetHandler I/O error: Host name verification failed for host : localhost
javax.net.ssl.SSLException: Host name verification failed for host : localhost
at org.apache.synapse.transport.http.conn.ClientSSLSetupHandler.verify(ClientSSLSetupHandler.java:171)
at org.apache.http.nio.reactor.ssl.SSLIOSession.doHandshake(SSLIOSession.java:308)
at org.apache.http.nio.reactor.ssl.SSLIOSession.isAppInputReady(SSLIOSession.java:410)
at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:119)
at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:159)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:338)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:316)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:277)
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:105)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:586)
at java.lang.Thread.run(Thread.java:745)
[2017-11-09 00:13:17,188] WARN - EndpointContext Endpoint : AnonymousEndpoint with address https://localhost:9443/store/site/blocks/workflow/workflow-listener/ajax/workflow-listener.jag will be marked SUSPENDED as it failed
[2017-11-09 00:13:17,193] WARN - EndpointContext Suspending endpoint : AnonymousEndpoint with address https://localhost:9443/store/site/blocks/workflow/workflow-listener/ajax/workflow-listener.jag - current suspend duration is : 30000ms - Next retry after : Thu Nov 09 00:13:47 EST 2017
[2017-11-09 00:13:17,201] INFO - LogMediator STATUS = Executing default 'fault' sequence, ERROR_CODE = 101500, ERROR_MESSAGE = Error in Sender
[2017-11-09 00:14:17,238] INFO - SourceHandler Writer null when calling informWriterError
[2017-11-09 00:14:17,238] WARN - SourceHandler Connection time out after request is read: http-incoming-1 Socket Timeout : 60000 Remote Address : /10.10.30.130:49249
[2017-11-09 00:14:24,671] ERROR - AxisEngine The endpoint reference (EPR) for the Operation not found is /services/WorkflowCallbackService and the WSA Action = null. If this EPR was previously reachable, please contact the server administrator.
org.apache.axis2.AxisFault: The endpoint reference (EPR) for the Operation not found is /services/WorkflowCallbackService and the WSA Action = null. If this EPR was previously reachable, please contact the server administrator.
at org.apache.axis2.engine.DispatchPhase.checkPostConditions(DispatchPhase.java:102)
at org.apache.axis2.engine.Phase.invoke(Phase.java:329)
at org.apache.axis2.engine.AxisEngine.invoke(AxisEngine.java:261)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:167)
at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:325)
at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:158)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2017-11-09 00:14:24,673] ERROR - ServerWorker Error processing GET request for : /services/WorkflowCallbackService
org.apache.axis2.AxisFault: The endpoint reference (EPR) for the Operation not found is /services/WorkflowCallbackService and the WSA Action = null. If this EPR was previously reachable, please contact the server administrator.
at org.apache.axis2.engine.DispatchPhase.checkPostConditions(DispatchPhase.java:102)
at org.apache.axis2.engine.Phase.invoke(Phase.java:329)
at org.apache.axis2.engine.AxisEngine.invoke(AxisEngine.java:261)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:167)
at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:325)
at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:158)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
BPS's cmd window:
[2017-11-09 00:14:16,738] ERROR {org.wso2.carbon.bpel.core.ode.integration.PartnerService} - Error sending message to Axis2 for ODE mex {PartnerRoleMex#hqejbhcnphrcr2a32g83oh [PID {http://workflow.subscription.apimgt.carbon.wso2.org}SubscriptionApprovalWorkFlowProcess-1] calling org.apache.ode.bpel.epr.WSAEndpoint@705fc38f.resumeEvent(...) Status REQUEST}
org.apache.axis2.AxisFault: Read timed out
at org.apache.axis2.AxisFault.makeFault(AxisFault.java:430)
at org.apache.axis2.transport.http.HTTPSender.sendViaPost(HTTPSender.java:199)
at org.apache.axis2.transport.http.HTTPSender.send(HTTPSender.java:77)
at org.apache.axis2.transport.http.CommonsHTTPTransportSender.writeMessageWithCommons(CommonsHTTPTransportSender.java:451)
at org.apache.axis2.transport.http.CommonsHTTPTransportSender.invoke(CommonsHTTPTransportSender.java:278)
at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442)
at org.apache.axis2.description.OutOnlyAxisOperationClient.executeImpl(OutOnlyAxisOperation.java:297)
at org.apache.axis2.client.OperationClient.execute(OperationClient.java:149)
at org.wso2.carbon.bpel.core.ode.integration.utils.AxisServiceUtils.invokeService(AxisServiceUtils.java:323)
at org.wso2.carbon.bpel.core.ode.integration.PartnerService.invoke(PartnerService.java:333)
at org.wso2.carbon.bpel.core.ode.integration.BPELMessageExchangeContextImpl.invokePartner(BPELMessageExchangeContextImpl.java:43)
at org.apache.ode.bpel.engine.BpelRuntimeContextImpl.invoke(BpelRuntimeContextImpl.java:897)
at org.apache.ode.bpel.runtime.INVOKE.run(INVOKE.java:130)
at sun.reflect.GeneratedMethodAccessor54.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.ode.jacob.vpu.JacobVPU$JacobThreadImpl.run(JacobVPU.java:451)
at org.apache.ode.jacob.vpu.JacobVPU.execute(JacobVPU.java:139)
at org.apache.ode.bpel.engine.BpelRuntimeContextImpl.execute(BpelRuntimeContextImpl.java:1002)
at org.apache.ode.bpel.engine.PartnerLinkMyRoleImpl.invokeInstance(PartnerLinkMyRoleImpl.java:250)
at org.apache.ode.bpel.engine.BpelProcess$1.invoke(BpelProcess.java:288)
at org.apache.ode.bpel.engine.BpelProcess.invokeProcess(BpelProcess.java:224)
at org.apache.ode.bpel.engine.BpelProcess.invokeProcess(BpelProcess.java:279)
at org.apache.ode.bpel.engine.BpelProcess.handleJobDetails(BpelProcess.java:434)
at org.apache.ode.bpel.engine.BpelEngineImpl.onScheduledJob(BpelEngineImpl.java:558)
at org.apache.ode.bpel.engine.BpelServerImpl.onScheduledJob(BpelServerImpl.java:467)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob$1.call(SimpleScheduler.java:633)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob$1.call(SimpleScheduler.java:627)
at org.apache.ode.scheduler.simple.SimpleScheduler.execTransaction(SimpleScheduler.java:298)
at org.apache.ode.scheduler.simple.SimpleScheduler.execTransaction(SimpleScheduler.java:253)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob.call(SimpleScheduler.java:627)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob.call(SimpleScheduler.java:611)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:150)
at java.net.SocketInputStream.read(SocketInputStream.java:121)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:961)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:918)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
at org.apache.commons.httpclient.HttpParser.readRawLine(HttpParser.java:78)
at org.apache.commons.httpclient.HttpParser.readLine(HttpParser.java:106)
at org.apache.commons.httpclient.HttpConnection.readLine(HttpConnection.java:1116)
at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$HttpConnectionAdapter.readLine(MultiThreadedHttpConnectionManager.java:1413)
at org.apache.commons.httpclient.HttpMethodBase.readStatusLine(HttpMethodBase.java:1973)
at org.apache.commons.httpclient.HttpMethodBase.readResponse(HttpMethodBase.java:1735)
at org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:1098)
at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:398)
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at org.apache.axis2.transport.http.AbstractHTTPSender.executeMethod(AbstractHTTPSender.java:659)
at org.apache.axis2.transport.http.HTTPSender.sendViaPost(HTTPSender.java:195)
... 34 more
What could be the issue here? Any ideas? Do let me know. Thanks.
Note that the BPS workflow for API STATE CHANGE works just fine with the same configurations.
Please note that you are making HTTPS calls with specific domain names:
Host name verification failed for host : localhost at org.apache.synapse.transport.http.conn.ClientSSLSetupHandler.verify(ClientSSLSetupHandler.java:171) at org.apache.http.nio.reactor.ssl.SSLIOSession.doHandshake(SSLIOSession.java:308)
The certificate provided is CN=localhost, so indeed the host verification fails.
What you can do about it:
the simplest way is switching to HTTP when on a secure network (behind a firewall, VPN, ...)
update the SSL certificates of BPS and APIM to match their hostnames; they also have to trust each other's certificate (or the certificate issuer)
disable SSL hostname validation in axis2.xml (I do not recommend it; good for DEV, VERY BAD for PROD) - set <parameter name="HostnameVerifier">AllowAll</parameter>
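For that last option, the parameter goes on the HTTPS transport sender in axis2.xml; roughly (a sketch; the sender class shown is the one used by the Pass-Through transport and may differ between products):
<transportSender name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLSender">
    <!-- existing parameters unchanged -->
    <parameter name="HostnameVerifier">AllowAll</parameter>
</transportSender>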

Citrus REST client-server test fails with mvn clean verify

I have created the following Citrus test cases to test a basic connection between a REST client and server:
@Test
@CitrusTest
fun httpActionTest() {
    variable("username", "user")
    variable("password", "password")
    http().client("httpClient")
        .send()
        .post("/api/authenticate")
        .messageType(MessageType.JSON)
        .contentType("application/json")
        .payload("{ \"username\": \"\${username}\", \"password\": \"\${password}\"}")
    http().client("httpClient")
        .receive()
        .response(HttpStatus.OK)
        .validate("$.token", "asasasasas")
}

@CitrusTest
fun httpServerActionTest() {
    http().server("httpServer")
        .receive()
        .post("/api/authenticate")
        .payload("{ \"username\": \"\${username}\", \"password\": \"\${password}\"}")
        .contentType("application/json")
        .accept("application/json")
        .extractFromPayload("username", "username")
        .extractFromPayload("password", "password")
        .validate("$.username", "user")
        .validate("$.password", "pass")
    http().server("httpServer")
        .send()
        .response(HttpStatus.OK)
        .payload("{\"token\": \"lsdkfjkh8sdfg98zsd\"}")
        .version("HTTP/1.1")
        .contentType("application/json")
}
I have defined the server and client endpoints in citrus-context.xml as follows:
<citrus-http:client id="httpClient"
    request-url="http://localhost:8080"
    request-method="GET"
    content-type="application/json"
    charset="UTF-8"
    timeout="60000"/>
<citrus-http:server id="httpServer"
    port="8080"
    auto-start="true"
    resource-base="src/test/resources"/>
While executing via IntelliJ, the following logs are observed:
INFO: Loading XML bean definitions from URL [file:/home/jass/intersales/jk-magento/magento2-auth-service/target/test-classes/citrus-context.xml]
[main] INFO org.eclipse.jetty.util.log - Logging initialized @9851ms to org.eclipse.jetty.util.log.Slf4jLog
[main] INFO org.eclipse.jetty.server.Server - jetty-9.4.6.v20170531
[main] INFO org.eclipse.jetty.server.handler.ContextHandler.ROOT - Initializing Spring FrameworkServlet 'httpServer-servlet'
Oct 23, 2017 8:49:45 AM com.consol.citrus.http.servlet.CitrusDispatcherServlet initServletBean
INFO: FrameworkServlet 'httpServer-servlet': initialization started
Oct 23, 2017 8:49:45 AM org.springframework.web.context.support.XmlWebApplicationContext prepareRefresh
INFO: Refreshing WebApplicationContext for namespace 'httpServer-servlet-servlet': startup date [Mon Oct 23 08:49:45 CEST 2017]; root of context hierarchy
Oct 23, 2017 8:49:45 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
INFO: Loading XML bean definitions from class path resource [com/consol/citrus/http/citrus-servlet-context.xml]
Oct 23, 2017 8:49:46 AM org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping register
...
INFO: Looking for #ControllerAdvice: WebApplicationContext for namespace 'httpServer-servlet-servlet': startup date [Mon Oct 23 08:49:45 CEST 2017]; root of context hierarchy
Oct 23, 2017 8:49:47 AM com.consol.citrus.http.servlet.CitrusDispatcherServlet initServletBean
INFO: FrameworkServlet 'httpServer-servlet': initialization completed in 1570 ms
[main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler#1bb1fde8{/,file:///home/jass/intersales/jk-magento/magento2-auth-service/src/test/resources/,AVAILABLE}
[main] INFO org.eclipse.jetty.server.AbstractConnector - Started ServerConnector#1286528d{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
[main] INFO org.eclipse.jetty.server.Server - Started #12166ms
[main] INFO com.consol.citrus.http.server.HttpServer - Started server: httpServer
[main] INFO com.consol.citrus.Citrus -
[main] INFO com.consol.citrus.Citrus - ------------------------------------------------------------------------
[main] INFO com.consol.citrus.Citrus - .__ __
[main] INFO com.consol.citrus.Citrus - ____ |__|/ |________ __ __ ______
[main] INFO com.consol.citrus.Citrus - _/ ___\| \ __\_ __ \ | \/ ___/
[main] INFO com.consol.citrus.Citrus - \ \___| || | | | \/ | /\___ \
[main] INFO com.consol.citrus.Citrus - \___ >__||__| |__| |____//____ >
[main] INFO com.consol.citrus.Citrus - \/ \/
[main] INFO com.consol.citrus.Citrus -
[main] INFO com.consol.citrus.Citrus - C I T R U S T E S T S 2.7.2
[main] INFO com.consol.citrus.Citrus -
[main] INFO com.consol.citrus.Citrus - ------------------------------------------------------------------------
[main] INFO com.consol.citrus.Citrus -
[main] INFO com.consol.citrus.Citrus -
[main] INFO com.consol.citrus.Citrus - BEFORE TEST SUITE: SUCCESS
[main] INFO com.consol.citrus.Citrus - ------------------------------------------------------------------------
[main] INFO com.consol.citrus.Citrus -
[main] INFO com.consol.citrus.actions.EchoAction - Today is: 23.10.2017
[main] INFO com.consol.citrus.Citrus -
[main] INFO com.consol.citrus.Citrus - TEST SUCCESS VerticleCitrusTest.echoToday (de.intersales.qbus2)
[main] INFO com.consol.citrus.Citrus - ------------------------------------------------------------------------
[main] INFO com.consol.citrus.Citrus -
[qtp191568263-12] INFO com.consol.citrus.channel.ChannelSyncProducer - Message was sent to channel: 'httpServer.inbound'
[qtp191568263-12] WARN com.consol.citrus.channel.ChannelEndpointAdapter - Reply timed out after 1000ms. Did not receive reply message on reply channel
[main] INFO com.consol.citrus.http.client.HttpClient - HTTP message was sent to endpoint: 'http://localhost:8080/magento2/authenticate'
[main] INFO com.consol.citrus.validation.xml.DomXmlMessageValidator - XML message validation successful: All values OK
[main] INFO com.consol.citrus.validation.DefaultMessageHeaderValidator - Message header validation successful: All values OK
[main] INFO com.consol.citrus.Citrus -
[main] INFO com.consol.citrus.Citrus - TEST SUCCESS VerticleCitrusTest.httpActionTest (de.intersales.qbus2)
[main] INFO com.consol.citrus.Citrus - ------------------------------------------------------------------------
[main] INFO com.consol.citrus.Citrus -
[main] INFO com.consol.citrus.Citrus -
[main] INFO com.consol.citrus.Citrus - ------------------------------------------------------------------------
[main] INFO com.consol.citrus.Citrus -
[main] INFO com.consol.citrus.Citrus -
[main] INFO com.consol.citrus.Citrus - AFTER TEST SUITE: SUCCESS
[main] INFO com.consol.citrus.Citrus - ------------------------------------------------------------------------
[main] INFO com.consol.citrus.Citrus -
[main] INFO com.consol.citrus.Citrus - ------------------------------------------------------------------------
[main] INFO com.consol.citrus.Citrus -
[main] INFO com.consol.citrus.Citrus - CITRUS TEST RESULTS
[main] INFO com.consol.citrus.Citrus -
[main] INFO com.consol.citrus.Citrus - VerticleCitrusTest.echoToday ................................... SUCCESS
[main] INFO com.consol.citrus.Citrus - VerticleCitrusTest.httpActionTest .............................. SUCCESS
[main] INFO com.consol.citrus.Citrus -
[main] INFO com.consol.citrus.Citrus - TOTAL: 2
[main] INFO com.consol.citrus.Citrus - FAILED: 0 (0.0%)
[main] INFO com.consol.citrus.Citrus - SUCCESS: 2 (100.0%)
[main] INFO com.consol.citrus.Citrus -
[main] INFO com.consol.citrus.Citrus - ------------------------------------------------------------------------
[main] INFO com.consol.citrus.report.HtmlReporter - Generated HTML test report
But when executing via mvn clean verify, it fails with the following error:
[main] ERROR com.consol.citrus.Citrus - TEST FAILED VerticleCitrusTest.httpActionTest <de.intersales.qbus2> Nested exception is:
org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'httpClient' available
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeanDefinition(DefaultListableBeanFactory.java:687)
...
Any suggestions or help is greatly appreciated.
EDIT: The following is my project structure:
[Placement of resources]: https://i.stack.imgur.com/aVabX.png
I see multiple issues in your code and setup. First of all, httpServerActionTest() is missing the @Test annotation. Unless it is put at class level, this annotation needs to be repeated on each method in your test class.
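A minimal sketch of that fix, with the method body elided:

@Test
@CitrusTest
fun httpServerActionTest() {
    // ... receive/send server actions exactly as in the original test ...
}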
Secondly, the overall test structure does not make much sense to me. In httpActionTest() you send a client request to the server, while in httpServerActionTest() you receive that very same request as a server and validate its contents with Citrus. Your test is both client and server at the same time, which feels wrong to me! In particular, this setup can never work, because HTTP is a synchronous protocol by nature: httpActionTest() cannot succeed unless httpServerActionTest() is running at the same time, so you will get timeout exceptions on the client side. It would only work if both methods were executed in parallel with each other.
Regarding the Maven failure: citrux-context.xml is misspelled (citrux vs. citrus). Also it seems to me that the file is not properly added to the Maven project as a resource. Did you keep the default Maven directory layout?
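For reference, with the default layout Maven picks up test resources from src/test/resources automatically, so the correctly spelled file would be expected at src/test/resources/citrus-context.xml. A non-standard location would need an explicit entry in pom.xml; a sketch, using a hypothetical directory name:

<build>
    <testResources>
        <testResource>
            <!-- hypothetical non-standard location; adjust to your project -->
            <directory>src/citrus/resources</directory>
        </testResource>
    </testResources>
</build>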
Once again, the purpose of the complete test setup is not clear to me.

Pig: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected

Installation details:
Pig Version: 0.16
Hadoop: 2.7.3
pig -h gives me results as expected.
I have tried ant clean jar-all -Dhadoopversion=23, but it didn't help.
My Hadoop installation folder is : /usr/local/bin/hadoop-2.7.3/
bashrc file:
export PIG_HOME="/usr/local/bin/pig/pig-0.16.0"
export PIG_CONF_DIR="$PIG_HOME/conf"
export PIG_CLASSPATH="/usr/local/bin/hadoop-2.7.3/etc/hadoop/"
export PATH=$PATH:$PIG_HOME/bin
export CLASSPATH=$CLASSPATH:/usr/local/bin/pig/lib/*:.
Program:
log = LOAD '/home/dhaval/Desktop/excite-small.log' AS (user:chararray,
time:long, query:chararray);
grpd = GROUP log BY user;
cntd = FOREACH grpd GENERATE group, COUNT(log);
DUMP cntd;
Error:
2017-04-20 23:38:39,761 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-04-20 23:38:39,831 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: GROUP_BY
2017-04-20 23:38:39,897 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2017-04-20 23:38:39,898 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2017-04-20 23:38:39,926 [main] INFO org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, ConstantCalculator, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, PartitionFilterOptimizer, PredicatePushdownOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
2017-04-20 23:38:39,995 [main] INFO org.apache.pig.impl.util.SpillableMemoryManager - Selected heap (PS Old Gen) of size 699400192 to monitor. collectionUsageThreshold = 489580128, usageThreshold = 489580128
2017-04-20 23:38:40,063 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2017-04-20 23:38:40,078 [main] INFO org.apache.pig.backend.hadoop.executionengine.util.CombinerOptimizerUtil - Choosing to move algebraic foreach to combiner
2017-04-20 23:38:40,107 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2017-04-20 23:38:40,107 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2017-04-20 23:38:40,139 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2017-04-20 23:38:40,140 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2017-04-20 23:38:40,148 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - session.id is deprecated. Instead, use dfs.metrics.session-id
2017-04-20 23:38:40,149 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Initializing JVM Metrics with processName=JobTracker, sessionId=
2017-04-20 23:38:40,174 [main] WARN org.apache.pig.backend.hadoop20.PigJobControl - falling back to default JobControl (not using hadoop 0.20 ?)
java.lang.NoSuchFieldException: runnerState
at java.lang.Class.getDeclaredField(Class.java:2070)
at org.apache.pig.backend.hadoop20.PigJobControl.<clinit>(PigJobControl.java:51)
at org.apache.pig.backend.hadoop.executionengine.shims.HadoopShims.newJobControl(HadoopShims.java:109)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:314)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:196)
at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:308)
at org.apache.pig.PigServer.launchPlan(PigServer.java:1474)
at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1459)
at org.apache.pig.PigServer.storeEx(PigServer.java:1118)
at org.apache.pig.PigServer.store(PigServer.java:1081)
at org.apache.pig.PigServer.openIterator(PigServer.java:994)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:747)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:376)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:231)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:206)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:66)
at org.apache.pig.Main.run(Main.java:564)
at org.apache.pig.Main.main(Main.java:176)
2017-04-20 23:38:40,177 [main] INFO org.apache.pig.tools.pigstats.mapreduce.MRScriptState - Pig script settings are added to the job
2017-04-20 23:38:40,183 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.reduce.markreset.buffer.percent is deprecated. Instead, use mapreduce.reduce.markreset.buffer.percent
2017-04-20 23:38:40,183 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2017-04-20 23:38:40,183 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.output.compress is deprecated. Instead, use mapreduce.output.fileoutputformat.compress
2017-04-20 23:38:40,184 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Reduce phase detected, estimating # of required reducers.
2017-04-20 23:38:40,185 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Using reducer estimator: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator
2017-04-20 23:38:40,190 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator - BytesPerReducer=1000000000 maxReducers=999 totalInputFileSize=208348
2017-04-20 23:38:40,190 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting Parallelism to 1
2017-04-20 23:38:40,190 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
2017-04-20 23:38:40,201 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
2017-04-20 23:38:40,207 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Key [pig.schematuple] is false, will not generate code.
2017-04-20 23:38:40,207 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Starting process to move generated code to distributed cacche
2017-04-20 23:38:40,207 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Distributed cache not supported or needed in local mode. Setting key [pig.schematuple.local.dir] with code temp directory: /tmp/1492745920207-0
2017-04-20 23:38:40,285 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
2017-04-20 23:38:40,285 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker.http.address is deprecated. Instead, use mapreduce.jobtracker.http.address
2017-04-20 23:38:40,294 [JobControl] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2017-04-20 23:38:40,302 [JobControl] ERROR org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl - Error while trying to run jobs.
java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.setupUdfEnvAndStores(PigOutputFormat.java:243)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.checkOutputSpecs(PigOutputFormat.java:191)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:458)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:343)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
at org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.run(JobControl.java:240)
at org.apache.pig.backend.hadoop20.PigJobControl.run(PigJobControl.java:121)
at java.lang.Thread.run(Thread.java:745)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:276)
2017-04-20 23:38:40,302 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2017-04-20 23:38:40,309 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
2017-04-20 23:38:40,309 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job null has failed! Stop running all dependent jobs
2017-04-20 23:38:40,309 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2017-04-20 23:38:40,310 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Could not write to log file: /log/path :/log/path (No such file or directory)
2017-04-20 23:38:40,310 [main] ERROR org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Unexpected System Error Occured: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.setupUdfEnvAndStores(PigOutputFormat.java:243)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.checkOutputSpecs(PigOutputFormat.java:191)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:458)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:343)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
at org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.run(JobControl.java:240)
at org.apache.pig.backend.hadoop20.PigJobControl.run(PigJobControl.java:121)
at java.lang.Thread.run(Thread.java:745)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:276)
2017-04-20 23:38:40,311 [main] ERROR org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil - 1 map reduce job(s) failed!
2017-04-20 23:38:40,313 [main] INFO org.apache.pig.tools.pigstats.mapreduce.SimplePigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.5.1 0.16.0 dhaval 2017-04-20 23:38:40 2017-04-20 23:38:40 GROUP_BY
Failed!
Failed Jobs:
JobId Alias Feature Message Outputs
N/A cntd,grpd,log GROUP_BY,COMBINER Message: Unexpected System Error Occured: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.setupUdfEnvAndStores(PigOutputFormat.java:243)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.checkOutputSpecs(PigOutputFormat.java:191)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:458)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:343)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
at org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.run(JobControl.java:240)
at org.apache.pig.backend.hadoop20.PigJobControl.run(PigJobControl.java:121)
at java.lang.Thread.run(Thread.java:745)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:276)
file:/tmp/temp1942265384/tmp-1728388493,
Input(s):
Failed to read data from "/home/dhaval/Desktop/excite-small.log"
Output(s):
Failed to produce result in "file:/tmp/temp1942265384/tmp-1728388493"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
null
2017-04-20 23:38:40,314 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2017-04-20 23:38:40,317 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias cntd
2017-04-20 23:38:40,317 [main] WARN org.apache.pig.tools.grunt.Grunt - Could not write to log file: /log/path :/log/path (No such file or directory)
2017-04-20 23:38:40,317 [main] ERROR org.apache.pig.tools.grunt.Grunt - org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias cntd
at org.apache.pig.PigServer.openIterator(PigServer.java:1019)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:747)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:376)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:231)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:206)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:66)
at org.apache.pig.Main.run(Main.java:564)
at org.apache.pig.Main.main(Main.java:176)
Caused by: java.io.IOException: Job terminated with anomalous status FAILED
at org.apache.pig.PigServer.openIterator(PigServer.java:1011)
... 7 more
The problem is a compatibility issue between Pig 0.16 and Hadoop 2.x: the Pig 0.16 binaries here are running against Hadoop 1.x shims (note the org.apache.pig.backend.hadoop20 classes in your stack trace), and they expect org.apache.hadoop.mapreduce.JobContext to be a class, while in Hadoop 2.x it became an interface, hence the IncompatibleClassChangeError.
You can check which Pig version is compatible with Hadoop 2.7.3 here:
https://pig.apache.org/releases.html#19+June%2C+2017%3A+release+0.17.0+available
That means you should use Pig 0.17, and only with Hadoop versions starting from 2.7.3.
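For example, after unpacking Pig 0.17.0 the PIG_HOME export from the question would change along these lines (a sketch; the install path simply mirrors your existing layout and is an assumption):

export PIG_HOME="/usr/local/bin/pig/pig-0.17.0"  # assumed unpack location for Pig 0.17.0
export PATH=$PATH:$PIG_HOME/bin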
I know it's an old question, but someone facing the same issue may benefit from this.