Geode/GemFire/PCF/SCDF error - user not authorized for DATA:WRITE / DATA:READ permission - cloud-foundry

We are using Spring Cloud Data Flow (SCDF) to build a stream pipeline. The spring-cloud-dataflow-server version is 2.8.3.
The out-of-the-box gemfire sink module is provided by the official site: https://docs.spring.io/spring-cloud-dataflow/docs/2.8.3/reference/htmlsingle/#applications
Here is the source code of this module: https://github.com/spring-attic/gemfire/tree/v2.1.4.RELEASE
Recently the server side enabled the GemFire security authorization feature. On the client side we set the username/password in the SCDF stream definition. But when data is written to GemFire, we get a "user not authorized for DATA:WRITE / DATA:READ" error. I attached the details at the end.
The strange part is that the GemFire server side has already granted the client's user DATA READ/WRITE permissions, and the gemfire sink module does write data into the region, yet we keep getting this error.
According to the Spring Project Version Compatibility Matrix:
https://github.com/spring-projects/spring-boot-data-geode/wiki/Spring-Boot-for-Apache-Geode-and-VMware-Tanzu-GemFire-Version-Compatibility-Matrix
We tried several different Apache Geode versions, but all of them produced the same error.
Is there any way to handle this issue?
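For reference, our stream definition looks roughly like this (locator host, key expression and credentials replaced with placeholders; the gemfire.security.* property names are from the app starter's GemfireSecurityProperties and should be checked against the deployed version):
stream create gemfire-test --definition "http | gemfire --gemfire.region.regionName=WriteTest --gemfire.pool.connectType=locator --gemfire.pool.hostAddresses=locator-host:10334 --gemfire.json=true --gemfire.keyExpression=payload.toString() --gemfire.security.username=app_user --gemfire.security.password=******"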
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] org.springframework.messaging.MessageHandlingException: nested exception is org.springframework.messaging.MessageHandlingException: error occurred in message handler [messageHandler]; nested exception is org.springframework.dao.DataAccessResourceFailureException: remote server on 93aed963-4624-4e01-6227-954e(23:loner):47226:c35a0e8b: org.apache.geode.security.NotAuthorizedException: user not authorized for DATA:WRITE:WriteTest; nested exception is org.apache.geode.cache.client.ServerOperationException: remote server on 93aed963-4624-4e01-6227-954e(23:loner):47226:c35a0e8b: org.apache.geode.security.NotAuthorizedException: user not authorized for DATA:WRITE:WriteTest, failedMessage=GenericMessage [payload=PDX[4548420,__GEMFIRE_JSON]{read_datetime=2022-03-15T00:52:40:722Z}, headers={id=8ef8d368-87a7-addc-1074-06bb58043933, timestamp=1647305561161}]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.integration.handler.MethodInvokingMessageProcessor.processMessage(MethodInvokingMessageProcessor.java:109) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.integration.handler.ServiceActivatingHandler.handleRequestMessage(ServiceActivatingHandler.java:93) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:123) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:169) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:115) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:132) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:105) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:73) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:453) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:401) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:187) ~[spring-messaging-5.1.14.RELEASE.jar!/:5.1.14.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:166) ~[spring-messaging-5.1.14.RELEASE.jar!/:5.1.14.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:47) ~[spring-messaging-5.1.14.RELEASE.jar!/:5.1.14.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:109) ~[spring-messaging-5.1.14.RELEASE.jar!/:5.1.14.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:205) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.sendMessageIfAny(KafkaMessageDrivenChannelAdapter.java:369) ~[spring-integration-kafka-3.1.0.RELEASE.jar!/:3.1.0.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.access$400(KafkaMessageDrivenChannelAdapter.java:74) ~[spring-integration-kafka-3.1.0.RELEASE.jar!/:3.1.0.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:431) ~[spring-integration-kafka-3.1.0.RELEASE.jar!/:3.1.0.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:402) ~[spring-integration-kafka-3.1.0.RELEASE.jar!/:3.1.0.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeOnMessage(KafkaMessageListenerContainer.java:1316) [spring-kafka-2.2.12.RELEASE.jar!/:2.2.12.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeOnMessage(KafkaMessageListenerContainer.java:1299) [spring-kafka-2.2.12.RELEASE.jar!/:2.2.12.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:1259) [spring-kafka-2.2.12.RELEASE.jar!/:2.2.12.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords(KafkaMessageListenerContainer.java:1240) [spring-kafka-2.2.12.RELEASE.jar!/:2.2.12.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:1155) [spring-kafka-2.2.12.RELEASE.jar!/:2.2.12.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:965) [spring-kafka-2.2.12.RELEASE.jar!/:2.2.12.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:772) [spring-kafka-2.2.12.RELEASE.jar!/:2.2.12.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:705) [spring-kafka-2.2.12.RELEASE.jar!/:2.2.12.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_242]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_242]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at java.lang.Thread.run(Thread.java:748) [na:1.8.0_242]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] Caused by: org.springframework.messaging.MessageHandlingException: error occurred in message handler [messageHandler]; nested exception is org.springframework.dao.DataAccessResourceFailureException: remote server on 93aed963-4624-4e01-6227-954e(23:loner):47226:c35a0e8b: org.apache.geode.security.NotAuthorizedException: user not authorized for DATA:WRITE:WriteTest; nested exception is org.apache.geode.cache.client.ServerOperationException: remote server on 93aed963-4624-4e01-6227-954e(23:loner):47226:c35a0e8b: org.apache.geode.security.NotAuthorizedException: user not authorized for DATA:WRITE:WriteTest
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.integration.support.utils.IntegrationUtils.wrapInHandlingExceptionIfNecessary(IntegrationUtils.java:189) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:186) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.cloud.stream.app.gemfire.sink.GemfireSinkHandler.handle(GemfireSinkHandler.java:65) ~[spring-cloud-starter-stream-sink-gemfire-2.1.6.RELEASE.jar!/:2.1.6.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at sun.reflect.GeneratedMethodAccessor121.invoke(Unknown Source) ~[na:na]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_242]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_242]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:171) ~[spring-messaging-5.1.14.RELEASE.jar!/:5.1.14.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:120) ~[spring-messaging-5.1.14.RELEASE.jar!/:5.1.14.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.integration.handler.support.MessagingMethodInvokerHelper$HandlerMethod.invoke(MessagingMethodInvokerHelper.java:1115) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.integration.handler.support.MessagingMethodInvokerHelper.invokeHandlerMethod(MessagingMethodInvokerHelper.java:624) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.integration.handler.support.MessagingMethodInvokerHelper.processInternal(MessagingMethodInvokerHelper.java:491) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.integration.handler.support.MessagingMethodInvokerHelper.process(MessagingMethodInvokerHelper.java:362) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] at org.springframework.integration.handler.MethodInvokingMessageProcessor.processMessage(MethodInvokingMessageProcessor.java:106) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] ... 29 common frames omitted
2022-03-14T20:52:41.232-04:00 [APP/PROC/WEB/0] [OUT] Caused by: org.springframework.dao.DataAccessResourceFailureException: remote server on 93aed963-4624-4e01-6227-954e(23:loner):47226:c35a0e8b: org.apache.geode.security.NotAuthorizedException: user not authorized for DATA:WRITE:WriteTest;

Technically, when using Spring Boot for Apache Geode (SBDG) [equally for GemFire], particularly when running in a Pivotal CloudFoundry (PCF) environment and connecting your Spring [Boot] app, or in your case your Spring Cloud Data Flow (SCDF) app, to a Pivotal Cloud Cache (PCC) service instance (i.e. GemFire in PCF), SBDG will automatically connect, authenticate and authorize your application once it has been pushed up to PCF.
NOTE: Pivotal CloudFoundry (PCF) is now known as VMware Tanzu Application Service (TAS) and Pivotal Cloud Cache (PCC) is now known as VMware Tanzu GemFire for VMs.
Of course, this assumes that the PCF/PCC environment, and specifically the VCAP environment variables, were set up and configured properly when the PCC service instance was provisioned.
If you are not using Spring Boot for Apache Geode, then there is no "automatic" inspection of the PCF/PCC environment (VCAP env vars) and therefore, you become responsible for handling connections, auth, etc.
SBDG was specifically designed to handle these concerns across environments and provides auto-configuration to handle connections, auth and other concerns when a Spring Boot app is pushed up to PCF connected to PCC.
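As a minimal sketch, assuming the SBDG starter (org.springframework.geode:spring-geode-starter) is on the application classpath, no explicit cache or security code is needed:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// With SBDG on the classpath, a ClientCache is auto-configured and, when the
// app is bound to a PCC service instance in PCF, connection and auth details
// are picked up from the VCAP environment automatically. Outside PCF, the
// credentials can be supplied via properties, e.g.:
//   spring.data.gemfire.security.username=app_user   (hypothetical user)
//   spring.data.gemfire.security.password=******
@SpringBootApplication
public class GeodeClientApplication {

    public static void main(String[] args) {
        SpringApplication.run(GeodeClientApplication.class, args);
    }
}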
More details can be found in the documentation.
Additionally, the Getting Started Sample walks a user through building a Spring Boot app using Apache Geode in a local context, then switching to a non-managed client/server topology locally, and finally pushing and running the app in a managed context like PCF, connecting (and authenticating) with PCC.
All of this requires SBDG though.
I am not certain that SCDF uses SBDG under the hood. It may simply use Spring Data for Apache Geode (SDG), in which case you may need to swap out the SDG dependency for SBDG.
There is most likely other work involved in this process as well, since it is unclear to me which specific GemFire/Geode objects (e.g. a cache instance) SCDF creates on your behalf for sources/sinks that may conflict with the auto-configuration provided in and by SBDG.
For instance, if SCDF creates a cache instance (i.e. ClientCache) for you, then it will override the SBDG auto-configuration that automatically creates a ClientCache instance by default. If this is the case, then once again you become responsible for security (auth), since security must be configured before a GemFire/Geode cache instance (e.g. ClientCache) is created.
NOTE: This is a GemFire/Geode requirement, not a Spring requirement.
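To illustrate that ordering requirement, here is a minimal plain-Geode sketch (hypothetical locator address and credentials; it relies on Geode transmitting the security-* properties as credentials when no custom AuthInitialize is configured):

import java.util.Properties;

import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class ManualAuthClient {

    public static void main(String[] args) {
        // Security properties must be in place BEFORE the ClientCache is
        // created; setting them afterwards has no effect.
        Properties gemfireProperties = new Properties();
        gemfireProperties.setProperty("security-username", "app_user"); // hypothetical user
        gemfireProperties.setProperty("security-password", "******");

        ClientCache clientCache = new ClientCacheFactory(gemfireProperties)
            .addPoolLocator("locator-host", 10334) // hypothetical locator
            .create();

        Region<String, String> region = clientCache
            .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
            .create("WriteTest"); // the region named in the error above

        region.put("key", "value"); // requires DATA:WRITE:WriteTest on the server
    }
}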
Therefore, SBDG's auto-configuration arrangement is very deliberate in its precedence and ordering when being applied. If the SBDG auto-configuration is explicitly overridden either by you, or implicitly by another framework (e.g. SCDF), then you become responsible for knowing the expectations (internals) of GemFire/Geode configuration.
On the other hand, if you are certain SBDG is in the application classpath and being used properly, then perhaps this problem stems from the app using the wrong assigned user.
If your environment is rather complex, declaring multiple users with different sets of assigned permissions, then maybe your app needs to be run with a different user assignment, in which case, you should review this particular section of the documentation.
As always, you should make sure your Spring [Boot | SCDF] application runs correctly in a local, non-managed environment with a similar setup and configuration before running it remotely in a managed environment like PCF.
The goals of SBDG have always been clear and SBDG is tested and proven to this effect.
Please share as many specifics (code, configuration, etc) as you can here in order for us to be able to triage this problem correctly.

Related

flink HA standalone cluster failed

Two machines, hz203 and hz204; each machine runs both a JobManager and a TaskManager.
masters
hz203:9081
hz204:9081
slaves
hz203
hz204
flink-conf.yaml
jobmanager.rpc.port: 6123
rest.port: 9081
blob.server.port: 6124
query.server.port: 6125
web.tmpdir: /home/ctu/flink/deploy/webTmp
web.log.path: /home/ctu/flink/deploy/log
taskmanager.tmp.dirs: /home/ctu/flink/deploy/taskManagerTmp
high-availability: zookeeper
high-availability.storageDir: file:///home/ctu/flink/deploy/HA
high-availability.zookeeper.quorum: 10.0.1.79:2181
high-availability.zookeeper.path.root: /flink
high-availability.cluster-id: /flink
run ./start-cluster.sh
Starting HA cluster with 2 masters.
Starting standalonesession daemon on host hz203.
Starting standalonesession daemon on host hz204.
Starting taskexecutor daemon on host hz203.
Starting taskexecutor daemon on host hz204.
logs
2018-12-20 20:44:03,843 INFO org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionService - Starting ZooKeeperLeaderElectionService ZooKeeperLeaderElectionService{leaderPath='/leader/rest_server_lock'}.
2018-12-20 20:44:03,864 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - Web frontend listening at http://127.0.0.1:9081.
2018-12-20 20:44:03,875 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for org.apache.flink.runtime.resourcemanager.StandaloneResourceManager at akka://flink/user/resourcemanager .
2018-12-20 20:44:03,989 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for org.apache.flink.runtime.dispatcher.StandaloneDispatcher at akka://flink/user/dispatcher .
2018-12-20 20:44:03,999 INFO org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionService - Starting ZooKeeperLeaderElectionService ZooKeeperLeaderElectionService{leaderPath='/leader/resource_manager_lock'}.
2018-12-20 20:44:04,008 INFO org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalService - Starting ZooKeeperLeaderRetrievalService /leader/resource_manager_lock.
2018-12-20 20:44:04,009 INFO org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionService - Starting ZooKeeperLeaderElectionService ZooKeeperLeaderElectionService{leaderPath='/leader/dispatcher_lock'}.
2018-12-20 20:44:04,010 INFO org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalService - Starting ZooKeeperLeaderRetrievalService /leader/dispatcher_lock.
2018-12-20 20:44:04,206 WARN akka.remote.transport.netty.NettyTransport - Remote connection to [null] failed with java.net.ConnectException: Connection refused: /127.0.0.1:43012
2018-12-20 20:44:04,221 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink@127.0.0.1:43012] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink@127.0.0.1:43012]] Caused by: [Connection refused: /127.0.0.1:43012]
2018-12-20 20:44:04,301 WARN akka.remote.transport.netty.NettyTransport - Remote connection to [null] failed with java.net.ConnectException: Connection refused: /127.0.0.1:43012
2018-12-20 20:44:04,301 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink@127.0.0.1:43012] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink@127.0.0.1:43012]] Caused by: [Connection refused: /127.0.0.1:43012]
2018-12-20 20:44:04,378 WARN akka.remote.transport.netty.NettyTransport - Remote connection to [null] failed with java.net.ConnectException: Connection refused: /127.0.0.1:43012
2018-12-20 20:44:04,378 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink@127.0.0.1:43012] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink@127.0.0.1:43012]] Caused by: [Connection refused: /127.0.0.1:43012]
2018-12-20 20:44:04,451 WARN akka.remote.transport.netty.NettyTransport - Remote connection to [null] failed with java.net.ConnectException: Connection refused: /127.0.0.1:43012
2018-12-20 20:44:04,451 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink@127.0.0.1:43012] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink@127.0.0.1:43012]] Caused by: [Connection refused: /127.0.0.1:43012]
2018-12-20 20:44:04,520 WARN akka.remote.transport.netty.NettyTransport - Remote connection to [null] failed with java.net.ConnectException: Connection refused: /127.0.0.1:43012
questions
`akka.tcp://flink@127.0.0.1:33567/user/resourcemanager` --- why is it 127.0.0.1 instead of the JobManager IP from the `masters` config file?
The problem is a bug we fixed in version 1.6.1: in 1.6.0 we did not respect the --host command line option in the method ClusterEntrypoint#loadConfiguration (compare the 1.6.0 code with that of version 1.6.1).
Thus, upgrading to the latest 1.6.x version should fix the problem. In general, I would always recommend upgrading to the latest bug-fix version of a release if possible.
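For context, a paraphrased sketch of the corrected configuration loading (not the actual Flink source; class and option names are from the Flink 1.6.x API):

import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.GlobalConfiguration;
import org.apache.flink.configuration.JobManagerOptions;

public class LoadConfigurationSketch {

    // In 1.6.1 the --host value, when present, overrides
    // jobmanager.rpc.address from flink-conf.yaml, so the entrypoint no
    // longer falls back to the default (e.g. 127.0.0.1).
    static Configuration loadConfiguration(String configDir, String hostFromCli) {
        Configuration configuration = GlobalConfiguration.loadConfiguration(configDir);
        if (hostFromCli != null) {
            configuration.setString(JobManagerOptions.ADDRESS, hostFromCli);
        }
        return configuration;
    }
}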

Amazon EC2 to AWS Elasticache Redis connection problem

I am connecting to AWS ElastiCache Redis via Redisson from my Amazon EC2 instance. After many Redis connection requests, I get the following issue, which halts my program execution. The problem doesn't occur for the first few requests, but it eventually happens after a large number of them.
2018-10-11 11:02:38,363 ERROR org.redisson.client.handler.CommandsQueue - Exception occured. Channel: [id: 0x46c06a6a, L:0.0.0.0/0.0.0.0:49308 ! R:redis-pa-qc-001.redis-pa-qc.yzmnbg.use1.cache.amazonaws.com/10.0.24.226:6379]
io.netty.handler.codec.DecoderException: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:459)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1412)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:943)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:141)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1429)
at sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:535)
at sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:813)
at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:781)
at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:624)
at io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:292)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1248)
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1159)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1194)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
... 16 common frames omitted
Caused by: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
at sun.security.validator.PKIXValidator.(PKIXValidator.java:90)
at sun.security.validator.Validator.getInstance(Validator.java:179)
at sun.security.ssl.X509TrustManagerImpl.getValidator(X509TrustManagerImpl.java:312)
at sun.security.ssl.X509TrustManagerImpl.checkTrustedInit(X509TrustManagerImpl.java:171)
at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:239)
at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1493)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:979)
at sun.security.ssl.Handshaker$1.run(Handshaker.java:919)
at sun.security.ssl.Handshaker$1.run(Handshaker.java:916)
at java.security.AccessController.doPrivileged(Native Method)
at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1369)
at io.netty.handler.ssl.SslHandler.runDelegatedTasks(SslHandler.java:1408)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1316)
... 20 common frames omitted
Caused by: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
at java.security.cert.PKIXParameters.setTrustAnchors(PKIXParameters.java:200)
at java.security.cert.PKIXParameters.(PKIXParameters.java:120)
at java.security.cert.PKIXBuilderParameters.(PKIXBuilderParameters.java:104)
at sun.security.validator.PKIXValidator.(PKIXValidator.java:88)
The error basically means the JVM is unable to load the required SSL certificates (trust anchors) to establish the connection. Check your Java version and update it.
Then update the Java CA certificates.
Run:
sudo apt-get install ca-certificates-java (on Ubuntu), or
sudo update-ca-certificates -f
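To confirm whether the JVM's default trust store actually contains any CAs (the cacerts path varies by JDK layout, and "changeit" is only the default password), you can list it with keytool:
keytool -list -keystore "$JAVA_HOME/jre/lib/security/cacerts" -storepass changeit
If the store is missing or empty, reinstalling ca-certificates-java and re-running update-ca-certificates should repopulate it.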

Spring Boot unable to determine JDBC url from datasource while deploying application in PCF (Pivotal Cloud Foundry)

I am deploying a Spring Boot application in PCF which has an Oracle database connection. I have also created a user-provided service instance with the Oracle credentials and bound it to the application.
Here are the VCAP service variables:
{
  "name": "healthwatch-api-database",
  "instance_name": "healthwatch-api-database",
  "binding_name": null,
  "credentials": {
    "driver": "oracle.jdbc.OracleDriver",
    "url": "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.157.129.175)(PORT=1527))(CONNECT_DATA=(SERVER=DEDICATED)(SID=DEVCLOUD)))",
    "username": "EXTRANET_USER",
    "password": "EXTRANET_USER1"
  },
Following are the logs after pushing the application:
2018-10-09T18:26:41.29+0530 [APP/PROC/WEB/0] OUT o.s.b.w.s.ServletRegistrationBean - Mapping servlet: 'dispatcherServlet' to [/]
2018-10-09T18:26:41.47+0530 [APP/PROC/WEB/0] OUT 09 Oct 2018 12:56:41.471/UTC [main] INFO
2018-10-09T18:26:41.47+0530 [APP/PROC/WEB/0] OUT o.s.j.d.DriverManagerDataSource - Loaded JDBC driver: oracle.jdbc.OracleDriver
2018-10-09T18:26:42.17+0530 [APP/PROC/WEB/0] OUT 09 Oct 2018 12:56:42.174/UTC [main] WARN
2018-10-09T18:26:42.17+0530 [APP/PROC/WEB/0] OUT o.s.b.a.orm.jpa.DatabaseLookup - Unable to determine jdbc url from datasource
2018-10-09T18:26:42.17+0530 [APP/PROC/WEB/0] OUT org.springframework.jdbc.support.MetaDataAccessException: Could not get Connection for extracting meta data; nested exception is org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is java.sql.SQLException: ORA-00604: error occurred at recursive SQL level 1
2018-10-09T18:26:42.17+0530 [APP/PROC/WEB/0] OUT ORA-01882: timezone region not found
2018-10-09T18:26:42.17+0530 [APP/PROC/WEB/0] OUT at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:338)
Looking at the error log, the cause seems to be related to timezone settings.
2018-10-09T18:26:42.17+0530 [APP/PROC/WEB/0] OUT ORA-01882: timezone region not found
Here is a post talking about a similar issue - Getting ORA-01882: timezone region not found with Oracle UCP, on aws ec2 instance?
Now how do you pass the timezone to your app? You can do that through the manifest file. Check out this article:
https://docs.cloudfoundry.org/devguide/deploy-apps/manifest.html
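For example, a sketch of that approach (the app name and timezone value are placeholders; the Java buildpack reads JAVA_OPTS from the environment):
---
applications:
- name: healthwatch-api
  env:
    JAVA_OPTS: '-Duser.timezone=America/New_York'
Pick a timezone region that the Oracle server recognizes; ORA-01882 is raised when the JVM's default timezone is not present in the database's timezone table.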

Unable to start the application in SAP Cloud Platform Cloud Foundry

After I deploy a Fiori application in the Cloud Foundry environment and try to start it, it crashes. I tried to solve the issue with the troubleshooting guide from the link below, but couldn't solve it.
https://docs.cloudfoundry.org/devguide/deploy-apps/troubleshoot-app-health.html
I updated the manifest.yml file as below.
---
applications:
- name: cf_fioriapp
  command: node my-app.js
  memory: 768M
  instances: 1
  buildpack: nodejs_buildpack
Below is the package.json file
{
  "name": "automate",
  "version": "1.0.0",
  "description": "This is the description for the package.json file",
  "private": true,
  "devDependencies": {
    "grunt": "1.0.1",
    "@sap/grunt-sapui5-bestpractice-build": "1.3.33"
  },
  "scripts": {
    "start": "node app.js"
  }
}
First, I ran the npm install command which downloaded node_modules. Then to push the app to cloud foundry, I ran the below command.
cf push cf_fioriapp -c "node my-app.js"
Below is the log file.
2018-04-24T11:14:09.06-0400 [APP/PROC/WEB/0] ERR module.js:478
2018-04-24T11:14:09.06-0400 [APP/PROC/WEB/0] ERR throw err;
2018-04-24T11:14:09.06-0400 [APP/PROC/WEB/0] ERR ^
2018-04-24T11:14:09.06-0400 [APP/PROC/WEB/0] ERR Error: Cannot find module '/home/vcap/app/my-app.js'
2018-04-24T11:14:09.06-0400 [APP/PROC/WEB/0] ERR at Function.Module._resolveFilename (module.js:476:15)
2018-04-24T11:14:09.06-0400 [APP/PROC/WEB/0] ERR at Function.Module._load (module.js:424:25)
2018-04-24T11:14:09.06-0400 [APP/PROC/WEB/0] ERR at Module.runMain (module.js:611:10)
2018-04-24T11:14:09.06-0400 [APP/PROC/WEB/0] ERR at run (bootstrap_node.js:387:7)
2018-04-24T11:14:09.06-0400 [APP/PROC/WEB/0] ERR at startup (bootstrap_node.js:153:9)
2018-04-24T11:14:09.06-0400 [APP/PROC/WEB/0] ERR at bootstrap_node.js:500:3
Thanks,
Sankeerth
From the buildpack output provided, it looks like the Node.js buildpack is trying to launch the application using a script called "start". Potentially, this script is configured in the app's package.json under the "start" key (see https://docs.npmjs.com/misc/scripts#default-values). Alternatively, maybe the script exists but does not have the executable bit set?
I was able to fix this by modifying the manifest in two ways: adding a buildpack and adding a command.
Here is my manifest.yml:
---
applications:
- name: myapp
  command: node ./myapp/server.js
  buildpack: https://github.com/cloudfoundry/nodejs-buildpack
  random-route: true
  path: myapp
  memory: 128M
Putting ./ in front of the path also worked for me:
---
applications:
- name: myapp
  random-route: true
  path: ./myapp
  memory: 128M
Regards
Norman

Multiple app instances on Cloud Foundry not shown in Netflix Hystrix dashboard

I have set up Netflix Eureka, Hystrix and Turbine on Cloud Foundry, split into two apps:
A monitoring app "mrc-service" includes Eureka Server, Turbine and Hystrix Dashboard. The application.yml for this app looks like this:
---
spring:
  profiles: cloud
eureka:
  instance:
    nonSecurePort: 80
    hostname: ${vcap.application.uris[0]}
    leaseRenewalIntervalInSeconds: 10
    metadataMap:
      instanceId: ${spring.application.name}:${vcap.application.instance_id:${spring.application.instance_id:${random.value}}}
  client:
    registerWithEureka: true
    fetchRegistry: true
    service-url:
      defaultZone: https://mrc-service.myurl/eureka/
turbine:
  aggregator:
    clusterConfig: LOG-TEST
  appConfig: log-test
The Hystrix stream producing app called "log-test" has multiple instances on Cloud Foundry. The app is a Eureka client and exposes a Hystrix stream using Spring Actuator. Here is the application.yml for that app:
---
spring:
  profiles: cloud
eureka:
  instance:
    nonSecurePort: 80
    hostname: ${vcap.application.uris[0]}
    metadataMap:
      instanceId: ${spring.application.name}:${vcap.application.instance_id:${spring.application.instance_id:${random.value}}}
    secure-port-enabled: true
  client:
    healthcheck:
      enabled: true
    service-url:
      defaultZone: https://mrc-service.myurl/eureka/
The two instances of the log-test app register correctly with the Eureka server (visible in the Eureka dashboard; screenshot omitted).
But when I start monitoring the Turbine stream, the Hystrix dashboard shows only one host instead of two (screenshot omitted).
The Turbine log retrieves both instances correctly, but then reports that only one host is up:
2017-08-23T10:12:10.764+02:00 [APP/PROC/WEB/0] [OUT] 2017-08-23 08:12:10.764 INFO 19 --- [ Timer-0] o.s.c.n.turbine.EurekaInstanceDiscovery : Fetching instances for app: log-test
2017-08-23T10:12:10.764+02:00 [APP/PROC/WEB/0] [OUT] 2017-08-23 08:12:10.764 INFO 19 --- [ Timer-0] o.s.c.n.turbine.EurekaInstanceDiscovery : Received instance list for app: log-test, size=2
2017-08-23T10:12:10.764+02:00 [APP/PROC/WEB/0] [OUT] 2017-08-23 08:12:10.763 INFO 19 --- [ Timer-0] o.s.c.n.t.CommonsInstanceDiscovery : Fetching instance list for apps: [log-test]
2017-08-23T10:12:10.764+02:00 [APP/PROC/WEB/0] [OUT] 2017-08-23 08:12:10.764 INFO 19 --- [ Timer-0] c.n.t.discovery.InstanceObservable : Retrieved hosts from InstanceDiscovery: 2
2017-08-23T10:12:10.765+02:00 [APP/PROC/WEB/0] [OUT] 2017-08-23 08:12:10.764 INFO 19 --- [ Timer-0] c.n.t.discovery.InstanceObservable : Found hosts that have been previously terminated: 0
2017-08-23T10:12:10.765+02:00 [APP/PROC/WEB/0] [OUT] 2017-08-23 08:12:10.764 DEBUG 19 --- [ Timer-0] c.n.t.discovery.InstanceObservable : Retrieved hosts from InstanceDiscovery: [StatsInstance [hostname=log-test.myurl:80, cluster: LOG-TEST, isUp: true, attrs={securePort=443, fusedHostPort=log-test.myurl:443, instanceId=log-test:97d83c44-8b9e-44c4-56b4-742cef7bada0, port=80}], StatsInstance [hostname=log-test.myurl:80, cluster: LOG-TEST, isUp: true, attrs={securePort=443, fusedHostPort=log-test.myurl:443, instanceId=log-test:3d8359e4-a5c1-4aa0-5109-5b49a77a1f6f, port=80}]]
2017-08-23T10:12:10.765+02:00 [APP/PROC/WEB/0] [OUT] 2017-08-23 08:12:10.764 INFO 19 --- [ Timer-0] c.n.t.discovery.InstanceObservable : Hosts up:1, hosts down: 0
So I wonder whether Turbine actually aggregates the Hystrix streams of the two instances. Turbine would have to contact the instances individually, e.g. using Cloud Foundry specific header parameters like X-CF-APP-INSTANCE. I am not sure whether this already happens.
Is the described approach even feasible on Cloud Foundry or do I have to use Turbine Stream with RabbitMQ instead?
I got an official reply from the Spring Cloud Netflix Issue tracker: aggregation of Hystrix data from multiple app instances on Cloud Foundry requires Turbine Stream in combination with a broker (e.g. RabbitMQ).
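A minimal sketch of the Turbine Stream side (assuming spring-cloud-starter-netflix-turbine-stream plus a RabbitMQ binder on the classpath, and that each log-test instance includes spring-cloud-netflix-hystrix-stream with the same binder so it pushes metrics to the broker instead of being scraped over a load-balanced route):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.turbine.stream.EnableTurbineStream;

@SpringBootApplication
@EnableTurbineStream
public class TurbineStreamApplication {

    public static void main(String[] args) {
        // Aggregates Hystrix metrics pushed by all app instances via the
        // broker, sidestepping the CF router's per-request load balancing.
        SpringApplication.run(TurbineStreamApplication.class, args);
    }
}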
To open Turbine in an aggregated way, the steps are the same as for Hystrix, but you should pass the cluster to Turbine via the stream URL: http://localhost:8989/turbine.stream?cluster=READ.
That opens the same screen as Hystrix, but if there are more services, they will appear aggregated.