Flink HA standalone cluster failed - Akka

Two machines, 203 and 204; each machine runs both a jobmanager and a taskmanager.
masters file:
hz203:9081
hz204:9081
slaves file:
hz203
hz204
flink-conf.yaml:
jobmanager.rpc.port: 6123
rest.port: 9081
blob.server.port: 6124
query.server.port: 6125
web.tmpdir: /home/ctu/flink/deploy/webTmp
web.log.path: /home/ctu/flink/deploy/log
taskmanager.tmp.dirs: /home/ctu/flink/deploy/taskManagerTmp
high-availability: zookeeper
high-availability.storageDir: file:///home/ctu/flink/deploy/HA
high-availability.zookeeper.quorum: 10.0.1.79:2181
high-availability.zookeeper.path.root: /flink
high-availability.cluster-id: /flink
Running ./start-cluster.sh prints:
Starting HA cluster with 2 masters.
Starting standalonesession daemon on host hz203.
Starting standalonesession daemon on host hz204.
Starting taskexecutor daemon on host hz203.
Starting taskexecutor daemon on host hz204.
The logs show:
2018-12-20 20:44:03,843 INFO org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionService - Starting ZooKeeperLeaderElectionService ZooKeeperLeaderElectionService{leaderPath='/leader/rest_server_lock'}.
2018-12-20 20:44:03,864 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - Web frontend listening at http://127.0.0.1:9081.
2018-12-20 20:44:03,875 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for org.apache.flink.runtime.resourcemanager.StandaloneResourceManager at akka://flink/user/resourcemanager .
2018-12-20 20:44:03,989 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for org.apache.flink.runtime.dispatcher.StandaloneDispatcher at akka://flink/user/dispatcher .
2018-12-20 20:44:03,999 INFO org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionService - Starting ZooKeeperLeaderElectionService ZooKeeperLeaderElectionService{leaderPath='/leader/resource_manager_lock'}.
2018-12-20 20:44:04,008 INFO org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalService - Starting ZooKeeperLeaderRetrievalService /leader/resource_manager_lock.
2018-12-20 20:44:04,009 INFO org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionService - Starting ZooKeeperLeaderElectionService ZooKeeperLeaderElectionService{leaderPath='/leader/dispatcher_lock'}.
2018-12-20 20:44:04,010 INFO org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalService - Starting ZooKeeperLeaderRetrievalService /leader/dispatcher_lock.
2018-12-20 20:44:04,206 WARN akka.remote.transport.netty.NettyTransport - Remote connection to [null] failed with java.net.ConnectException: Connection refused: /127.0.0.1:43012
2018-12-20 20:44:04,221 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink#127.0.0.1:43012] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink#127.0.0.1:43012]] Caused by: [Connection refused: /127.0.0.1:43012]
2018-12-20 20:44:04,301 WARN akka.remote.transport.netty.NettyTransport - Remote connection to [null] failed with java.net.ConnectException: Connection refused: /127.0.0.1:43012
2018-12-20 20:44:04,301 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink#127.0.0.1:43012] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink#127.0.0.1:43012]] Caused by: [Connection refused: /127.0.0.1:43012]
2018-12-20 20:44:04,378 WARN akka.remote.transport.netty.NettyTransport - Remote connection to [null] failed with java.net.ConnectException: Connection refused: /127.0.0.1:43012
2018-12-20 20:44:04,378 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink#127.0.0.1:43012] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink#127.0.0.1:43012]] Caused by: [Connection refused: /127.0.0.1:43012]
2018-12-20 20:44:04,451 WARN akka.remote.transport.netty.NettyTransport - Remote connection to [null] failed with java.net.ConnectException: Connection refused: /127.0.0.1:43012
2018-12-20 20:44:04,451 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink#127.0.0.1:43012] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink#127.0.0.1:43012]] Caused by: [Connection refused: /127.0.0.1:43012]
2018-12-20 20:44:04,520 WARN akka.remote.transport.netty.NettyTransport - Remote connection to [null] failed with java.net.ConnectException: Connection refused: /127.0.0.1:43012
Question:
`akka.tcp://flink#127.0.0.1:33567/user/resourcemanager` --- why does this show 127.0.0.1 instead of the jobmanager IP listed in the `masters` config file?

The problem is a bug that we fixed in version 1.6.1. In 1.6.0 we did not respect the --host command line option in the method ClusterEntrypoint#loadConfiguration, as you can see when comparing it with the code of version 1.6.1.
Thus, upgrading to the latest 1.6.x version should fix the problem. In general, I would always recommend upgrading to the latest bug-fix version of a release if possible.
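For illustration only, here is a rough Java sketch of what "respecting the --host option" means; it is not the actual Flink source, and configDir / hostFromStartScript are placeholder names:
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.GlobalConfiguration;
import org.apache.flink.configuration.JobManagerOptions;

class HostOverrideSketch {
    // Illustrative only. In 1.6.1 the host passed via --host by the start scripts
    // (the entries of the masters file, e.g. "hz203") overrides jobmanager.rpc.address
    // before the RPC/REST endpoints start; in 1.6.0 this override was missing, so the
    // endpoints fell back to advertising the loopback address seen in the logs above.
    static Configuration loadWithHost(String configDir, String hostFromStartScript) {
        Configuration config = GlobalConfiguration.loadConfiguration(configDir);
        if (hostFromStartScript != null) {
            config.setString(JobManagerOptions.ADDRESS, hostFromStartScript);
        }
        return config;
    }
}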

Related

Istio 1.9 integration with virtual machine (AWS EC2): generated host file is empty

I have installed MySQL in a VM and want my EKS cluster (with Istio 1.9 installed) to talk to it. I am following https://istio.io/latest/docs/setup/install/virtual-machine/, but the host file generated in that step comes out empty.
I proceeded with the empty host file anyway; when I start the istio service on the VM with the command below, I get errors.
> sudo systemctl start istio
Tailing /var/log/istio/istio.log shows:
2021-03-22T18:44:02.332421Z info Proxy role ips=[10.8.1.179 fe80::dc:36ff:fed3:9eea] type=sidecar id=ip-10-8-1-179.vm domain=vm.svc.cluster.local
2021-03-22T18:44:02.332429Z info JWT policy is third-party-jwt
2021-03-22T18:44:02.332438Z info Pilot SAN: [istiod.istio-system.svc]
2021-03-22T18:44:02.332443Z info CA Endpoint istiod.istio-system.svc:15012, provider Citadel
2021-03-22T18:44:02.332997Z info Using CA istiod.istio-system.svc:15012 cert with certs: /etc/certs/root-cert.pem
2021-03-22T18:44:02.333093Z info citadelclient Citadel client using custom root cert: istiod.istio-system.svc:15012
2021-03-22T18:44:02.410934Z info ads All caches have been synced up in 82.7974ms, marking server ready
2021-03-22T18:44:02.411247Z info sds SDS server for workload certificates started, listening on "./etc/istio/proxy/SDS"
2021-03-22T18:44:02.424855Z info sds Start SDS grpc server
2021-03-22T18:44:02.425044Z info xdsproxy Initializing with upstream address "istiod.istio-system.svc:15012" and cluster "Kubernetes"
2021-03-22T18:44:02.425341Z info Starting proxy agent
2021-03-22T18:44:02.425483Z info dns Starting local udp DNS server at localhost:15053
2021-03-22T18:44:02.427627Z info dns Starting local tcp DNS server at localhost:15053
2021-03-22T18:44:02.427683Z info Opening status port 15020
2021-03-22T18:44:02.432407Z info Received new config, creating new Envoy epoch 0
2021-03-22T18:44:02.433999Z info Epoch 0 starting
2021-03-22T18:44:02.690764Z warn ca ca request failed, starting attempt 1 in 91.93939ms
2021-03-22T18:44:02.693579Z info Envoy command: [-c etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster istio-proxy --service-node sidecar~10.8.1.179~ip-10-8-1-179.vm~vm.svc.cluster.local --local-address-ip-version v4 --bootstrap-version 3 --log-format %Y-%m-%dT%T.%fZ %l envoy %n %v -l warning --component-log-level misc:error --concurrency 2]
2021-03-22T18:44:02.782817Z warn ca ca request failed, starting attempt 2 in 195.226287ms
2021-03-22T18:44:02.978344Z warn ca ca request failed, starting attempt 3 in 414.326774ms
2021-03-22T18:44:03.392946Z warn ca ca request failed, starting attempt 4 in 857.998629ms
2021-03-22T18:44:04.251227Z warn sds failed to warm certificate: failed to generate workload certificate: create certificate: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp: lookup istiod.istio-system.svc on 10.8.0.2:53: no such host"
2021-03-22T18:44:04.849207Z warn ca ca request failed, starting attempt 1 in 91.182413ms
2021-03-22T18:44:04.940652Z warn ca ca request failed, starting attempt 2 in 207.680983ms
2021-03-22T18:44:05.148598Z warn ca ca request failed, starting attempt 3 in 384.121814ms
2021-03-22T18:44:05.533019Z warn ca ca request failed, starting attempt 4 in 787.704352ms
2021-03-22T18:44:06.321042Z warn sds failed to warm certificate: failed to generate workload certificate: create certificate: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp: lookup istiod.istio-system.svc on 10.8.0.2:53: no such host"
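The repeated "no such host" failures mean the VM cannot resolve the in-cluster name istiod.istio-system.svc. As a quick check, a minimal Java sketch (assuming the lookup should succeed once the generated host file is populated and applied on the VM) that reproduces the lookup the agent is attempting:
import java.net.InetAddress;

public class ResolveIstiod {
    // Reproduces the DNS lookup that fails in the log above. On a correctly
    // configured VM, the host file generated during the VM setup is expected to
    // map istiod.istio-system.svc to a reachable address (typically the
    // east-west gateway), so this lookup should return an IP rather than fail.
    public static void main(String[] args) throws Exception {
        System.out.println(InetAddress.getByName("istiod.istio-system.svc"));
    }
}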

Grunt - Mapreduce Mode: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 8 time(s); retry policy is RetryUpToMaximum

I'm running Apache Pig 0.17.0 in mapreduce mode to simply dump a few lines of text data from a file on HDFS (Hadoop 2.7.2).
When I execute the dump command, execution is very slow, but it does complete. I see some failures during execution, shown below:
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
[main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Running jobs are [job_1589604570386_0002]
[main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 50% complete
[main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Running jobs are [job_1589604570386_0002]
[main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
[main] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
[main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
[main] WARN org.apache.pig.tools.pigstats.mapreduce.MRJobStats - Failed to get map task report
java.io.IOException: java.net.ConnectException: Call From localhost/127.0.0.1 to 0.0.0.0:10020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:343)
at org.apache.hadoop.mapred.ClientServiceDelegate.getJobStatus(ClientServiceDelegate.java:428)
at org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:572)
at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:184)
at org.apache.pig.tools.pigstats.mapreduce.MRJobStats.getTaskReports(MRJobStats.java:528)
at org.apache.pig.tools.pigstats.mapreduce.MRJobStats.addMapReduceStatistics(MRJobStats.java:355)
at org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil.addSuccessJobStats(MRPigStatsUtil.java:232)
at org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil.accumulateStats(MRPigStatsUtil.java:164)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:379)
at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:290)
at org.apache.pig.PigServer.launchPlan(PigServer.java:1475)
at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1460)
at org.apache.pig.PigServer.storeEx(PigServer.java:1119)
at org.apache.pig.PigServer.store(PigServer.java:1082)
at org.apache.pig.PigServer.openIterator(PigServer.java:995)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:782)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:383)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:230)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:66)
at org.apache.pig.Main.run(Main.java:564)
at org.apache.pig.Main.main(Main.java:175)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Is there a way to speed up the mapreduce job?
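The job itself succeeds; the delay comes from the repeated connection attempts to the job history server (0.0.0.0:10020 is the default for mapreduce.jobhistory.address when it is not configured, and the log shows each status call retrying up to 10 times with a 1-second sleep). A minimal sketch, assuming the history server is expected to run locally, that reproduces the connection Pig attempts after the job finishes:
import java.net.InetSocketAddress;
import java.net.Socket;

public class HistoryServerProbe {
    // "Connection refused" here means no JobHistoryServer is listening on port 10020,
    // which is what triggers the retry loop (and the slowness) seen in the Pig log.
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("localhost", 10020), 2_000);
            System.out.println("Job history server is reachable on port 10020");
        }
    }
}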

Selenium grid Kubernetes Could not start a new session. Possible causes are invalid address of the remote server or browser start-up failure

We have a Selenium hub deployed on a Kubernetes cluster on AWS and use ingress-traefik to expose the service.
We also have a Selenium Chrome node registered to this hub on Kubernetes.
On the grid console page I can see the Chrome node attached to the hub.
But when I trigger my automation suite through Jenkins, I get the error message below:
org.openqa.selenium.remote.UnreachableBrowserException: Could not start a new session. Possible causes are invalid address of the remote server or browser start-up failure.
Build info: version: '3.141.59', revision: 'e82be7d358', time: '2018-11-14T08:17:03'
System info: host: 'xxxxx', ip: 'x.x.x.x', os.name: 'Linux', os.arch: 'amd64', os.version: '4.14.165-103.209.amzn1.x86_64', java.version: '1.8.0_221'
Driver info: driver.version: RemoteWebDriver
at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:573)
at org.openqa.selenium.remote.RemoteWebDriver.startSession(RemoteWebDriver.java:213)
at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:131)
at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:144)
at stepDefns.SetUp.setUpBrowser(SetUp.java:145)
at stepDefns.OrderSpecTabSteps.user_sets_the_browser_to_and_version(OrderSpecTabSteps.java:25)
at ✽.Given user sets the browser to "chrome" and version "69"(/data/jenkins_home/workspace/FPSAutomation/src/test/java/features/NonRes.feature:4)
Caused by: java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
In the logs I can see Caused by: java.net.SocketTimeoutException: connect timed out.
In my Java code I am using the node URL below, which is HTTPS:
String nodeURL = "https://<hostname>/wd/hub";
ChromeOptions remoteOptions = new ChromeOptions();
driver=new RemoteWebDriver(new URL(nodeURL), remoteOptions);
Please let me know how to resolve this issue.
Thanks in Advance.
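Since the root cause in the stack trace is a plain connect timeout, it can help to first confirm that the ingress-exposed hub URL is reachable from the machine running the tests. A minimal sketch, reusing the same <hostname> placeholder as the snippet above and assuming the usual /status suffix; any HTTP response (even an error status) proves connectivity, while a SocketTimeoutException reproduces the failure:
import java.net.HttpURLConnection;
import java.net.URL;

public class HubReachabilityCheck {
    public static void main(String[] args) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("https://<hostname>/wd/hub/status").openConnection();
        conn.setConnectTimeout(5_000);  // fail fast instead of hanging like the test run
        conn.setReadTimeout(5_000);
        System.out.println("Hub responded with HTTP " + conn.getResponseCode());
    }
}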

WSO2 APIM Analytics server setup issue

In order to set up the Analytics server for API Manager, I followed exactly the steps specified in the WSO2 documentation below:
https://docs.wso2.com/display/AM220/Configuring+APIM+Analytics
However, I am facing the issues below while running the API-M and API-M Analytics instances.
WSO2 API-M:
[2018-05-08 02:52:05,378] ERROR - DataEndpointConnectionWorker Error while trying to connect to the endpoint. Cannot borrow client for ssl://localhost:7712
org.wso2.carbon.databridge.agent.exception.DataEndpointAuthenticationException: Cannot borrow client for ssl://localhost:7712
at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.connect(DataEndpointConnectionWorker.java:99)
at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.run(DataEndpointConnectionWorker.java:42)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: org.wso2.carbon.databridge.agent.exception.DataEndpointSecurityException: Error while trying to connect to ssl://localhost:7712
at org.wso2.carbon.databridge.agent.endpoint.thrift.ThriftSecureClientPoolFactory.createClient(ThriftSecureClientPoolFactory.java:81)
at org.wso2.carbon.databridge.agent.client.AbstractClientPoolFactory.makeObject(AbstractClientPoolFactory.java:39)
at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1212)
at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.connect(DataEndpointConnectionWorker.java:91)
... 7 more
Caused by: org.apache.thrift.transport.TTransportException: Could not connect to localhost on port 7712
at org.apache.thrift.transport.TSSLTransportFactory.createClient(TSSLTransportFactory.java:237)
at org.apache.thrift.transport.TSSLTransportFactory.getClientSocket(TSSLTransportFactory.java:169)
at org.wso2.carbon.databridge.agent.endpoint.thrift.ThriftSecureClientPoolFactory.createClient(ThriftSecureClientPoolFactory.java:63)
... 10 more
Caused by: java.net.ConnectException: Connection refused: connect
at java.net.DualStackPlainSocketImpl.connect0(Native Method)
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:79)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:618)
at sun.security.ssl.SSLSocketImpl.(SSLSocketImpl.java:407)
at sun.security.ssl.SSLSocketFactoryImpl.createSocket(SSLSocketFactoryImpl.java:88)
at org.apache.thrift.transport.TSSLTransportFactory.createClient(TSSLTransportFactory.java:233)
... 12 more
[2018-05-08 02:52:34,371] WARN - DataEndpointGroup No receiver is reachable at reconnection, will try to reconnect every 30 sec
[2018-05-08 02:52:35,378] ERROR - DataEndpointConnectionWorker Error while trying to connect to ssl://localhost:7712
org.wso2.carbon.databridge.agent.exception.DataEndpointSecurityException: Error while trying to connect to ssl://localhost:7712
at org.wso2.carbon.databridge.agent.endpoint.thrift.ThriftSecureClientPoolFactory.createClient(ThriftSecureClientPoolFactory.java:81)
at org.wso2.carbon.databridge.agent.client.AbstractClientPoolFactory.makeObject(AbstractClientPoolFactory.java:39)
at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1212)
WSO2 API-Analytic:
JAVA_HOME environment variable is set to C:\Program Files\Java\jdk1.7.0_25
CARBON_HOME environment variable is set to C:\WSO2AM~4\WSO2AM~1\bin..
Loading spark environment variables
[2018-05-08 02:36:02,932] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Starting WSO2 Carbon...
[2018-05-08 02:36:02,936] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Operating System : Windows Server 2008 R2 6.1, amd64
[2018-05-08 02:36:02,936] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Java Home : C:\Program Files\Java\jdk1.7.0_25\jre
[2018-05-08 02:36:02,936] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Java Version : 1.7.0_25
[2018-05-08 02:36:02,937] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Java VM : Java HotSpot(TM) 64-Bit Server VM 23.25-b01,Oracle Corporation
[2018-05-08 02:36:02,937] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Carbon Home : C:\WSO2AM~4\WSO2AM~1\bin..
[2018-05-08 02:36:02,937] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Java Temp Dir : C:\WSO2AM~4\WSO2AM~1\bin..\tmp
[2018-05-08 02:36:02,937] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - User : wsoadm_dev_svc, en-US, America/Los_Angeles
[2018-05-08 02:36:03,570] INFO {org.wso2.carbon.event.output.adapter.kafka.internal.ds.KafkaEventAdapterServiceDS} - Successfully deployed the Kafka output event adapto
[2018-05-08 02:36:03,887] INFO {org.wso2.carbon.event.template.manager.core.internal.ds.TemplateDeployerServiceTrackerDS} - Successfully deployed the execution manager
[2018-05-08 02:36:15,048] INFO {org.wso2.carbon.databridge.receiver.binary.internal.BinaryDataReceiver} - Started Binary SSL Transport on port : 9712
[2018-05-08 02:36:15,050] INFO {org.wso2.carbon.databridge.receiver.binary.internal.BinaryDataReceiver} - Started Binary TCP Transport on port : 9612
[2018-05-08 02:36:15,207] INFO {org.wso2.carbon.databridge.core.internal.DataBridgeDS} - Successfully deployed Agent Server
[2018-05-08 02:36:15,388] INFO {org.wso2.carbon.registry.core.jdbc.EmbeddedRegistryService} - Configured Registry in 64ms
[2018-05-08 02:36:15,912] INFO {org.wso2.carbon.registry.core.internal.RegistryCoreServiceComponent} - Registry Mode : READ-WRITE
[2018-05-08 02:36:22,854] INFO {org.wso2.carbon.metrics.impl.util.JmxReporterBuilder} - Creating JMX reporter for Metrics with domain 'org.wso2.carbon.metrics'
[2018-05-08 02:36:22,861] INFO {org.wso2.carbon.metrics.impl.util.JDBCReporterBuilder} - Creating JDBC reporter for Metrics with source 'SOADEVV001', data source 'jdbc/
[2018-05-08 02:36:22,862] INFO {org.wso2.carbon.metrics.impl.reporter.AbstractReporter} - Started JMX reporter for Metrics
[2018-05-08 02:36:22,870] INFO {org.wso2.carbon.metrics.impl.reporter.AbstractReporter} - Started JDBC reporter for Metrics
[2018-05-08 02:36:26,352] INFO {org.wso2.carbon.registry.indexing.solr.SolrClient} - Default Embedded Solr Server Initialized
[2018-05-08 02:36:27,524] INFO {org.wso2.carbon.user.core.internal.UserStoreMgtDSComponent} - Carbon UserStoreMgtDSComponent activated successfully.
[2018-05-08 02:37:32,134] WARN {org.wso2.carbon.core.init.CarbonServerManager} - Carbon initialization is delayed due to the following unsatisfied items:
[2018-05-08 02:37:32,498] WARN {org.wso2.carbon.core.init.CarbonServerManager} - Waiting for required OSGi Service: org.apache.axis2.engine.AxisObserver
[2018-05-08 02:38:32,135] WARN {org.wso2.carbon.core.init.CarbonServerManager} - Carbon initialization is delayed due to the following unsatisfied items:
Note: I am using the same machine for both the API-M and the API-M Analytics instance.
For API-M the port offset is set to 0 and for the Analytics instance it is 1.
I am using the default H2 DB for both.
Any help would be appreciated.

How to remotely start an Akka actor: akka-in-action\chapter-remoting

Following the Akka documentation, I can start two actors (front-end and back-end) on the same machine, and they can talk to each other. However, when I tried to deploy the back-end actor to another machine (Linux), I hit an error when starting remoting:
============
Multiple main classes detected, select one to run:
[1] com.goticks.BackendMain
[2] com.goticks.BackendRemoteDeployMain
[3] com.goticks.FrontendMain
[4] com.goticks.FrontendRemoteDeployMain
[5] com.goticks.FrontendRemoteDeployWatchMain
[6] com.goticks.SingleNodeMain
Enter number: 2
[info] Running com.goticks.BackendRemoteDeployMain
INFO [Slf4jLogger]: Slf4jLogger started
INFO [Remoting]: Starting remoting
ERROR [NettyTransport]: failed to bind to /192.168.1.9:2551, shutting down Netty transport
192.168.1.9 is another machine.
In backend.conf:
remote {
  enabled-transports = ["akka.remote.netty.tcp"]
  netty.tcp {
    #hostname = "0.0.0.0"
    hostname = "192.168.1.9"
    port = 2551
  }
}
I have one basic question: when deploying and starting a remote actor on a remote JVM, do we need login credentials for the remote machine?
Thanks,
You don't need user login information. I think port 2551 is already in use on hostname 192.168.1.9; are you sure you haven't used it before?
I had the same problem: I accidentally forgot to close the program already running on the same port, and when I ran the program a second time I got Exception in thread "main" org.jboss.netty.channel.ChannelException: Failed to bind to: /192.168.3.216:2552.
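A small sketch that can narrow this down: try to bind the same host/port that backend.conf specifies. "Address already in use" matches the already-running-program explanation above, while "Cannot assign requested address" means 192.168.1.9 is not an address of the machine the process is running on (a socket can only be bound to an IP that is local to that machine):
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class PortBindCheck {
    public static void main(String[] args) throws Exception {
        try (ServerSocket socket = new ServerSocket()) {
            // Same host/port as akka.remote.netty.tcp in backend.conf
            socket.bind(new InetSocketAddress("192.168.1.9", 2551));
            System.out.println("Bind succeeded -- the address and port are free on this host");
        }
    }
}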
Just to add more information regarding my previous question:
Multiple main classes detected, select one to run:
[1] com.goticks.BackendMain
[2] com.goticks.BackendRemoteDeployMain
[3] com.goticks.FrontendMain
[4] com.goticks.FrontendRemoteDeployMain
[5] com.goticks.FrontendRemoteDeployWatchMain
[6] com.goticks.SingleNodeMain
Enter number: 2
[info] Running com.goticks.BackendRemoteDeployMain
[DEBUG] [04/18/2016 15:54:11.554] [run-main-0] [EventStream(akka://backend)] logger log1-Logging$DefaultLogger started
[DEBUG] [04/18/2016 15:54:11.555] [run-main-0] [EventStream(akka://backend)] Default Loggers started
[INFO] [04/18/2016 15:54:11.591] [run-main-0] [akka.remote.Remoting] Starting remoting
[ERROR] [04/18/2016 15:54:11.748] [backend-akka.remote.default-remote-dispatcher-5] [NettyTransport(akka://backend)] failed to bind to /192.168.1.9:2551, shutting down Netty transport
[ERROR] [04/18/2016 15:54:11.757] [run-main-0] [akka.remote.Remoting] Remoting error: [Startup failed] [
akka.remote.RemoteTransportException: Startup failed
at akka.remote.Remoting.akka$remote$Remoting$$notifyError(Remoting.scala:136)
at akka.remote.Remoting.start(Remoting.scala:201)
at akka.remote.RemoteActorRefProvider.init(RemoteActorRefProvider.scala:184)
at akka.actor.ActorSystemImpl.liftedTree2$1(ActorSystem.scala:663)
at akka.actor.ActorSystemImpl._start$lzycompute(ActorSystem.scala:660)
at akka.actor.ActorSystemImpl._start(ActorSystem.scala:660)
at akka.actor.ActorSystemImpl.start(ActorSystem.scala:676)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:143)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:120)
at com.goticks.BackendRemoteDeployMain$.delayedEndpoint$com$goticks$BackendRemoteDeployMain$1(BackendRemoteDeployMain.scala:9)
at com.goticks.BackendRemoteDeployMain$delayedInit$body.apply(BackendRemoteDeployMain.scala:6)
at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
at scala.App$class.main(App.scala:76)
at com.goticks.BackendRemoteDeployMain$.main(BackendRemoteDeployMain.scala:6)
at com.goticks.BackendRemoteDeployMain.main(BackendRemoteDeployMain.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at sbt.Run.invokeMain(Run.scala:67)
at sbt.Run.run0(Run.scala:61)
at sbt.Run.sbt$Run$$execute$1(Run.scala:51)
at sbt.Run$$anonfun$run$1.apply$mcV$sp(Run.scala:55)
at sbt.Run$$anonfun$run$1.apply(Run.scala:55)
at sbt.Run$$anonfun$run$1.apply(Run.scala:55)
at sbt.Logger$$anon$4.apply(Logger.scala:85)
at sbt.TrapExit$App.run(TrapExit.scala:248)
at java.lang.Thread.run(Unknown Source)
Caused by: org.jboss.netty.channel.ChannelException: Failed to bind to: /192.168.1.9:2551
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:410)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:406)