I am connecting to AWS ElastiCache Redis via Redisson from my Amazon EC2 instance. After many Redis requests I get the following error, which halts my program execution. The problem doesn't occur for the first few requests, but it eventually happens after a large number of them.
2018-10-11 11:02:38,363 ERROR org.redisson.client.handler.CommandsQueue - Exception occured. Channel: [id: 0x46c06a6a, L:0.0.0.0/0.0.0.0:49308 ! R:redis-pa-qc-001.redis-pa-qc.yzmnbg.use1.cache.amazonaws.com/10.0.24.226:6379]
io.netty.handler.codec.DecoderException: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:459)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1412)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:943)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:141)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1429)
at sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:535)
at sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:813)
at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:781)
at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:624)
at io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:292)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1248)
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1159)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1194)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
... 16 common frames omitted
Caused by: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
at sun.security.validator.PKIXValidator.<init>(PKIXValidator.java:90)
at sun.security.validator.Validator.getInstance(Validator.java:179)
at sun.security.ssl.X509TrustManagerImpl.getValidator(X509TrustManagerImpl.java:312)
at sun.security.ssl.X509TrustManagerImpl.checkTrustedInit(X509TrustManagerImpl.java:171)
at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:239)
at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1493)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:979)
at sun.security.ssl.Handshaker$1.run(Handshaker.java:919)
at sun.security.ssl.Handshaker$1.run(Handshaker.java:916)
at java.security.AccessController.doPrivileged(Native Method)
at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1369)
at io.netty.handler.ssl.SslHandler.runDelegatedTasks(SslHandler.java:1408)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1316)
... 20 common frames omitted
Caused by: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
at java.security.cert.PKIXParameters.setTrustAnchors(PKIXParameters.java:200)
at java.security.cert.PKIXParameters.<init>(PKIXParameters.java:120)
at java.security.cert.PKIXBuilderParameters.<init>(PKIXBuilderParameters.java:104)
at sun.security.validator.PKIXValidator.<init>(PKIXValidator.java:88)
The error basically means the JVM cannot find the trusted CA certificates it needs to establish the SSL connection: its truststore (cacerts) is empty or unreadable. Check your Java version and update it if it is outdated.
Then update the Java CA certificates.
Run:
sudo apt-get install ca-certificates-java (on Ubuntu/Debian systems)
or
sudo update-ca-certificates -f
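If the error persists after that, it can help to verify that the truststore is actually populated and to point the JVM at it explicitly. Below is a minimal sketch for an Ubuntu EC2 instance; the cacerts path and the default "changeit" password are assumptions based on the standard OpenJDK/ca-certificates-java layout, and your-app.jar is a placeholder for your application.
# Confirm the truststore exists and is not empty (it should list many trusted CAs)
keytool -list -keystore /etc/ssl/certs/java/cacerts -storepass changeit | head
# Start the application with the truststore given explicitly,
# instead of relying on whatever default the JVM picked up
java -Djavax.net.ssl.trustStore=/etc/ssl/certs/java/cacerts \
     -Djavax.net.ssl.trustStorePassword=changeit \
     -jar your-app.jar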
Related
I'm getting the below error while starting my WildFly 10 server:
04:21:19,604 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-2) MSC000001: Failed to start service jboss.module.service."deployment.giftregistry.war".main: org.jboss.msc.service.StartException in service jboss.module.service."deployment.giftregistry.war".main: WFLYSRV0179: Failed to load module: deployment.giftregistry.war:main
at org.jboss.as.server.moduleservice.ModuleLoadService.start(ModuleLoadService.java:91) [wildfly-server-2.2.0.Final.jar:2.2.0.Final]
at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1948) [jboss-msc-1.2.6.Final.jar:1.2.6.Final]
at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1881) [jboss-msc-1.2.6.Final.jar:1.2.6.Final]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_191]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_191]
at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_191]
Caused by: org.jboss.modules.ModuleNotFoundException: config:main
at org.jboss.modules.Module.addPaths(Module.java:1093) [jboss-modules.jar:1.5.2.Final]
at org.jboss.modules.Module.link(Module.java:1449) [jboss-modules.jar:1.5.2.Final]
at org.jboss.modules.Module.relinkIfNecessary(Module.java:1477) [jboss-modules.jar:1.5.2.Final]
at org.jboss.modules.ModuleLoader.loadModule(ModuleLoader.java:225) [jboss-modules.jar:1.5.2.Final]
at org.jboss.as.server.moduleservice.ModuleLoadService.start(ModuleLoadService.java:68) [wildfly-server-2.2.0.Final.jar:2.2.0.Final]
... 5 more
04:21:19,618 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("deploy") failed - address: ([("deployment" => "giftregistry.war")]) - failure description: {
"WFLYCTL0080: Failed services" => {"jboss.module.service.\"deployment.giftregistry.war\".main" => "org.jboss.msc.service.StartException in service jboss.module.service.\"deployment.giftregistry.war\".main: WFLYSRV0179: Failed to load module: deployment.giftregistry.war:main
Caused by: org.jboss.modules.ModuleNotFoundException: config:main"},
"WFLYCTL0412: Required services that are not installed:" => ["jboss.module.service.\"deployment.giftregistry.war\".main"],
"WFLYCTL0180: Services with missing/unavailable dependencies" => undefined
Basically I'm migrating my application to AWS. I'm running this server on an AWS EC2 instance using Docker. I have copied all the configuration from my previous server into this WildFly 10 configuration.
Basically it says that a module named config is missing.
It should be defined in [wildfly-root]/modules/config/main/module.xml together with any resources it needs.
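If your application really does depend on that module, creating a minimal one would look roughly like this; the schema version and the single resource-root are my assumptions, so adjust them to whatever resources the module actually has to expose, and set JBOSS_HOME to your WildFly root.
mkdir -p "$JBOSS_HOME/modules/config/main"
cat > "$JBOSS_HOME/modules/config/main/module.xml" <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.3" name="config">
    <resources>
        <!-- expose any files placed in this directory (e.g. *.properties) -->
        <resource-root path="."/>
    </resources>
</module>
EOF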
In my case the module was not actually needed, so I resolved the issue by removing
<global-modules>
<module name="config" slot="main"/>
</global-modules>
from my standalone.xml
I've downloaded the latest version of kie-workbench, 7.7.0, which is compatible with WildFly 11. I put kie-workbench-7.7.0.war into the deployments folder of WildFly 11 and started the server from the bin folder with the command standalone.bat --server-config=standalone-full.xml. Then I get the following issue. Can anyone say what I'm doing wrong here? If I use the older kie-workbench 6.5.0 with WildFly 10 it works fine. I'm doing this on Windows 10. Thanks in advance.
19:16:43,847 ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 82) MSC000001: Failed to start service jboss.undertow.deployment.default-server.default-host./kie-wb: org.jboss.msc.service.StartException in service jboss.undertow.deployment.default-server.default-host./kie-wb: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.IllegalArgumentException: Parameter 'directory' is not a directory: D:\downloads\drools%20workbench7\wildfly-11.0.0.Final\standalone\tmp\vfs\temp\temp7606482f82f9e84e\content-3e547a5371ce9421
at org.wildfly.extension.undertow.deployment.UndertowDeploymentService$1.run(UndertowDeploymentService.java:84)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at org.jboss.threads.JBossThread.run(JBossThread.java:320)
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.IllegalArgumentException: Parameter 'directory' is not a directory: D:\downloads\drools%20workbench7\wildfly-11.0.0.Final\standalone\tmp\vfs\temp\temp7606482f82f9e84e\content-3e547a5371ce9421
at io.undertow.servlet.core.DeploymentManagerImpl.deploy(DeploymentManagerImpl.java:241)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentService.startContext(UndertowDeploymentService.java:99)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentService$1.run(UndertowDeploymentService.java:81)
... 6 more
Caused by: java.lang.RuntimeException: java.lang.IllegalArgumentException: Parameter 'directory' is not a directory: D:\downloads\drools%20workbench7\wildfly-11.0.0.Final\standalone\tmp\vfs\temp\temp7606482f82f9e84e\content-3e547a5371ce9421
at org.uberfire.backend.server.plugins.engine.PluginManager.init(PluginManager.java:79)
at org.uberfire.backend.server.plugins.PluginService.init(PluginService.java:79)
at org.uberfire.backend.server.plugins.PluginStartup.contextInitialized(PluginStartup.java:32)
at io.undertow.servlet.core.ApplicationListeners.contextInitialized(ApplicationListeners.java:187)
at io.undertow.servlet.core.DeploymentManagerImpl$1.call(DeploymentManagerImpl.java:205)
at io.undertow.servlet.core.DeploymentManagerImpl$1.call(DeploymentManagerImpl.java:174)
at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:42)
at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
at org.wildfly.extension.undertow.security.SecurityContextThreadSetupAction.lambda$create$0(SecurityContextThreadSetupAction.java:105)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
at io.undertow.servlet.core.DeploymentManagerImpl.deploy(DeploymentManagerImpl.java:239)
... 8 more
Caused by: java.lang.IllegalArgumentException: Parameter 'directory' is not a directory: D:\downloads\drools%20workbench7\wildfly-11.0.0.Final\standalone\tmp\vfs\temp\temp7606482f82f9e84e\content-3e547a5371ce9421
at org.apache.commons.io.FileUtils.validateListFilesParameters(FileUtils.java:536)
at org.apache.commons.io.FileUtils.listFiles(FileUtils.java:512)
at org.apache.commons.io.FileUtils.listFiles(FileUtils.java:684)
We have an installation of AWS EMR in a client environment. Encryption in transit and encryption at rest have been enabled using a security configuration. We continue to get the below MapReduce errors when we execute a simple Hive query.
Diagnostic Messages for this Task:
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError:
error in shuffle in fetcher#1
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:377)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.checkReducerHealth(ShuffleSchedulerImpl.java:366)
at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.copyFailed(ShuffleSchedulerImpl.java:288)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.openShuffleUrl(Fetcher.java:282)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:323)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193)
Please let me know if anyone has faced this error before.
We are currently running into the following error when attempting to run a Spark Job on DSE 4.8 Analytics
ERROR 2016-04-11 20:59:42,825 UserGroupInformation.java:1128 - org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:ubuntu cause:java.util.concurrent.TimeoutException: Futures timed out after [120 seconds]
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.DseSecureRunner.<init>(DseSecureRunner.scala:24)
at org.apache.spark.DseSecureRunner$.main(DseSecureRunner.scala:34)
at org.apache.spark.DseSecureRunner.main(DseSecureRunner.scala)
Caused by: java.lang.reflect.UndeclaredThrowableException: Unknown exception in doAs
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1138)
at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:67)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:146)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:245)
at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
... 7 more
Caused by: java.security.PrivilegedActionException: java.util.concurrent.TimeoutException: Futures timed out after [120 seconds]
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1125)
... 11 more
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [120 seconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:107)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:97)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:159)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:68)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:67)
... 14 more
This error occurs on the two worker nodes, while the node running the driver works fine.
We have the following Security Group configuration for a cluster of 3 nodes created using the 2.6 flavor of the DataStax AMI.
Taken from the documentation, my Security Group looks like this, with one minor exception:
The following rule was left out:
8983 | Custom TCP Rule | TCP | 0.0.0.0/0 | Solr port and demo applications web site port (Portfolio, Search, Search log, Weather Sensors)
The only way to get around this error was to add the following inbound rule:
Type: ALL TCP | Protocol: TCP (6) | Port Range: ALL | Source: cluster-security-group (using the picture as a reference, this would be sg-bbc40aff)
This leads me to believe that some process is trying to communicate with the nodes in the cluster over another port.
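For reference, the equivalent rule can be created from the AWS CLI roughly like this; the group ID is the one from my screenshot, and the 0-65535 range just mirrors the "ALL TCP" rule, so treat this as a sketch rather than a hardened policy.
# Allow all TCP traffic between members of the cluster security group itself
aws ec2 authorize-security-group-ingress \
    --group-id sg-bbc40aff \
    --protocol tcp \
    --port 0-65535 \
    --source-group sg-bbc40aff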
http://docs.datastax.com/en/datastax_enterprise/4.8/datastax_enterprise/install/installAMIsecurity.html
Has anyone run into this problem running Spark jobs using DSE Analytics on AWS?
Thanks
I have defined a Kafka receiver using WSO2 CEP in the management console.
It doesn't work, and I see this error in the server console log:
[2016-02-26 10:17:25,412] INFO {org.wso2.carbon.event.input.adapter.core.internal.InputAdapterRuntime} - Connecting receiver kafkaReceiver
[2016-02-26 10:17:25,415] ERROR {org.wso2.carbon.event.input.adapter.core.internal.InputAdapterRuntime} - Error initializing Input Adapter 'kafkaReceiver, hence this will be suspended indefinitely, Cannot access kafka context due to missing jars
org.wso2.carbon.event.input.adapter.core.exception.InputEventAdapterRuntimeException: Cannot access kafka context due to missing jars
at org.wso2.carbon.event.input.adapter.kafka.KafkaEventAdapter.createConsumerConfig(KafkaEventAdapter.java:114)
at org.wso2.carbon.event.input.adapter.kafka.KafkaEventAdapter.createKafkaAdaptorListener(KafkaEventAdapter.java:132)
at org.wso2.carbon.event.input.adapter.kafka.KafkaEventAdapter.connect(KafkaEventAdapter.java:66)
at org.wso2.carbon.event.input.adapter.core.internal.InputAdapterRuntime.start(InputAdapterRuntime.java:72)
at org.wso2.carbon.event.input.adapter.core.internal.InputAdapterRuntime.startPolling(InputAdapterRuntime.java:62)
at org.wso2.carbon.event.input.adapter.core.internal.CarbonInputEventAdapterService.startPolling(CarbonInputEventAdapterService.java:187)
at org.wso2.carbon.event.receiver.core.internal.CarbonEventReceiverService.startPolling(CarbonEventReceiverService.java:620)
at org.wso2.carbon.event.receiver.core.internal.CarbonEventReceiverManagementService.startPolling(CarbonEventReceiverManagementService.java:99)
at org.wso2.carbon.event.processor.manager.core.internal.CarbonEventManagementService$2.run(CarbonEventManagementService.java:184)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoClassDefFoundError: kafka/consumer/ConsumerConfig
at org.wso2.carbon.event.input.adapter.kafka.KafkaEventAdapter.createConsumerConfig(KafkaEventAdapter.java:112)
... 15 more
Caused by: java.lang.ClassNotFoundException: kafka.consumer.ConsumerConfig cannot be found by org.wso2.carbon.event.input.adapter.kafka_5.0.3
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:501)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 16 more
If you read the class source at this link: https://github.com/wso2/carbon-analytics-common/blob/master/components/event-receiver/event-input-adapters/org.wso2.carbon.event.input.adapter.kafka/src/main/java/org/wso2/carbon/event/input/adapter/kafka/KafkaEventAdapter.java
you can see that the error occurs while creating the ConsumerConfig.
In order to use Kafka, you need to download the Kafka distribution and copy the following client JAR files into the repository/components/lib/ directory of your CEP installation [1]:
kafka_2.10-0.8.2.1.jar
zkclient-0.3.jar
scala-library-2.10.4.jar
zookeeper-3.4.6.jar
metrics-core-2.2.0.jar
kafka-clients-0.8.2.1.jar
[1]. https://docs.wso2.com/display/CEP400/Supporting+Different+Transports#SupportingDifferentTransports-KafkaTransport
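On a Linux installation the copy step could look roughly like the sketch below; the KAFKA_HOME and CEP_HOME locations are placeholders for wherever you extracted Kafka and installed the CEP server, and the JAR versions are the ones listed above.
# Copy the Kafka client JARs from the extracted Kafka 0.8.2.1 distribution
# into the CEP server's lib directory, then restart the server
KAFKA_HOME=/opt/kafka_2.10-0.8.2.1
CEP_HOME=/opt/wso2cep-4.0.0
cp "$KAFKA_HOME"/libs/kafka_2.10-0.8.2.1.jar \
   "$KAFKA_HOME"/libs/zkclient-0.3.jar \
   "$KAFKA_HOME"/libs/scala-library-2.10.4.jar \
   "$KAFKA_HOME"/libs/zookeeper-3.4.6.jar \
   "$KAFKA_HOME"/libs/metrics-core-2.2.0.jar \
   "$KAFKA_HOME"/libs/kafka-clients-0.8.2.1.jar \
   "$CEP_HOME"/repository/components/lib/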