ORA error in WSO2 APIM-Analytics server

1) I'm getting the error below in the wso2carbon logs when I try to configure the WSO2 APIM-Analytics (2.1) server with Oracle DB (12c). I have tried with both ojdbc6.jar and ojdbc7.jar in the lib folder, but the error is still there.
error:
Caused by: java.lang.RuntimeException: ORA-28040: No matching authentication protocol
2) Is there any REST API available for WSO2 APIM-Analytics, similar to the DAS server, to extract data?
full error:
ERROR {org.wso2.carbon.analytics.spark.core.AnalyticsTask} - Error while executing the scheduled task for the script: APIM_LAST_ACCESS_TIME_SCRIPT {org.wso2.carbon.analytics.spark.core.AnalyticsTask}
org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException: Exception in executing query create temporary table APILastAccessSummaryData using CarbonJDBC options (dataSource "WSO2AM_STATS_DB", tableName "API_LAST_ACCESS_TIME_SUMMARY", schema "tenantDomain STRING , apiPublisher STRING , api STRING , version STRING , userId STRING , context STRING , max_request_time LONG ", primaryKeys "tenantDomain,apiPublisher,api" )
at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:764)
at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:721)
at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
at org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:60)
at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: ORA-28040: No matching authentication protocol
thanks,
Santosh

This was an issue identified in Oracle: ORA-28040 is raised when the client driver's authentication protocol version is lower than the minimum the 12c server accepts. The workaround is to set SQLNET.ALLOWED_LOGON_VERSION=8 in the $crs_home/network/admin/sqlnet.ora file. [1]
[1] https://community.softwaregrp.com/t5/UCMDB-and-UD-Practitioners-Forum/ORA-28040-No-matching-authentication-protocol/m-p/253403
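For reference, a minimal sketch of the change (the exact directory depends on the installation; on 12c the parameter has also been renamed, so setting both forms is a reasonable assumption, not something confirmed by the original answer):

# $crs_home/network/admin/sqlnet.ora
# Accept clients that still use the pre-12c authentication protocol (e.g. ojdbc6)
SQLNET.ALLOWED_LOGON_VERSION=8
# 12c name for the same server-side setting
SQLNET.ALLOWED_LOGON_VERSION_SERVER=8

New connections should pick the setting up; if they don't, restart the listener.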

Related

WSO2 Streaming Integrator Tooling (connect to MQTT broker) - "no NetworkModule installed for scheme "tcp""

I am trying to run Siddhi application in Streaming Integrator Tooling.
Here's my app:
@App:name('1')
@App:description('Description of the plan')
@source(type = 'mqtt', url = "tcp://192.168.100.82:1883", client.id = "1", topic = "mqtt_topic_input",
    @map(type = 'xml'))
define stream SweetProductionStream (name string, amount double);
@sink(type = 'log')
define stream LowProductionAlertStream (name string, amount double);
-- pass data through from SweetProductionStream into LowProductionAlertStream
@info(name = 'query1')
from SweetProductionStream
select *
insert into LowProductionAlertStream;
I also put the following files in the <SI_TOOLING_HOME>/lib directory: org.eclipse.paho.client.mqttv3-1.1.1.jar and siddhi-io-mqtt-3.0.2.jar.
I also have a working Mosquitto broker at 192.168.100.82.
When I try to execute the application, I get this error in the console:
> 1.siddhi - Started Successfully!
[2022-11-09_23-21-40_302] ERROR {io.siddhi.core.stream.input.source.Source} - Error on '1'. no NetworkModule installed for scheme "tcp" of URI "tcp://192.168.100.82:1883" Error while connecting at Source 'mqtt' at 'SweetProductionStream'. (Encoded)
java.lang.IllegalArgumentException: no NetworkModule installed for scheme "tcp" of URI "tcp://192.168.100.82:1883"
at org.eclipse.paho.client.mqttv3.internal.NetworkModuleService.validateURI(NetworkModuleService.java:70)
at org.eclipse.paho.client.mqttv3.MqttAsyncClient.<init>(MqttAsyncClient.java:454)
at org.eclipse.paho.client.mqttv3.MqttAsyncClient.<init>(MqttAsyncClient.java:320)
at org.eclipse.paho.client.mqttv3.MqttAsyncClient.<init>(MqttAsyncClient.java:315)
at org.eclipse.paho.client.mqttv3.MqttClient.<init>(MqttClient.java:227)
at io.siddhi.extension.io.mqtt.source.MqttSource.connect(MqttSource.java:196)
at io.siddhi.core.stream.input.source.Source.connectWithRetry(Source.java:161)
at io.siddhi.core.SiddhiAppRuntimeImpl.startSources(SiddhiAppRuntimeImpl.java:535)
at io.siddhi.core.SiddhiAppRuntimeImpl.start(SiddhiAppRuntimeImpl.java:460)
at org.wso2.carbon.siddhi.editor.core.internal.DebugRuntime.start(DebugRuntime.java:93)
at org.wso2.carbon.siddhi.editor.core.internal.DebugProcessorService.start(DebugProcessorService.java:43)
at org.wso2.carbon.siddhi.editor.core.internal.EditorMicroservice.start(EditorMicroservice.java:795)
at org.wso2.carbon.siddhi.editor.core.internal.EditorMicroservice.startWithVariables(EditorMicroservice.java:815)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.wso2.msf4j.internal.router.HttpMethodInfo.invokeResource(HttpMethodInfo.java:187)
at org.wso2.msf4j.internal.router.HttpMethodInfo.invoke(HttpMethodInfo.java:143)
at org.wso2.msf4j.internal.MSF4JHttpConnectorListener.dispatchMethod(MSF4JHttpConnectorListener.java:218)
at org.wso2.msf4j.internal.MSF4JHttpConnectorListener.lambda$onMessage$58(MSF4JHttpConnectorListener.java:129)
at org.wso2.msf4j.internal.MSF4JHttpConnectorListener$$Lambda$312/404129248.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
What is the reason for this error?
I tried changing the versions of the org.eclipse.paho.client.mqttv3-1.1.1.jar and siddhi-io-mqtt-3.0.2.jar files, but this did not help.

Migrating data from Teradata to BigQuery

I'm running Teradata Express on VMware Player. In order to migrate the data to BigQuery, I'm trying to follow the official documentation covering the BigQuery transfer job:
https://cloud.google.com/bigquery-transfer/docs/teradata-migration
I have downloaded the required jars and am trying to start the migration process after successful initialization. Below are the config file contents generated after initialization.
TDExpress1620_Sles11:~/Desktop # cat testpath/bq.config
{
  "agent-id": "84f7cf04-2133-4c5a-ae43-dd22a30281ba",
  "transfer-configuration": {
    "project-id": "106730138445",
    "location": "us",
    "id": "5e1c7eda-0000-249c-aab6-94eb2c06245a"
  },
  "source-type": "teradata",
  "console-log": false,
  "silent": false,
  "teradata-config": {
    "connection": {
      "host": "localhost"
    },
    "local-processing-space": "testpath",
    "database-credentials-file-path": "",
    "max-local-storage": "200GB",
    "gcs-upload-chunk-size": "32MB",
    "use-tpt": false,
    "max-sessions": 0,
    "spool-mode": "NoSpool",
    "max-parallel-upload": 1,
    "max-parallel-extract-threads": 1,
    "session-charset": "UTF8",
    "max-unload-file-size": "2GB"
  }
}
After running the command to start the migration process, I get the error below.
TDExpress1620_Sles11:~/Desktop # java -cp terajdbc4.jar:mirroring-agent.jar com.google.cloud.bigquery.dms.Agent --configuration-file=testpath/bq.config
Reading data from gs://data_transfer_agent/latest/version
WARNING: Failed to get the latest released agent version with error: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
Exception in thread "main" com.google.api.gax.rpc.UnavailableException: io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:69)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1349)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:398)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1024)
at com.google.common.util.concurrent.AbstractFuture.addListener(AbstractFuture.java:670)
at com.google.common.util.concurrent.ForwardingListenableFuture.addListener(ForwardingListenableFuture.java:45)
at com.google.api.core.ApiFutureToListenableFuture.addListener(ApiFutureToListenableFuture.java:52)
at com.google.common.util.concurrent.Futures.addCallback(Futures.java:1330)
at com.google.api.core.ApiFutures.addCallback(ApiFutures.java:63)
at com.google.api.gax.grpc.GrpcExceptionCallable.futureCall(GrpcExceptionCallable.java:67)
at com.google.api.gax.rpc.AttemptCallable.call(AttemptCallable.java:86)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: com.google.api.gax.rpc.AsyncTaskException: Asynchronous task failed
at com.google.api.gax.rpc.ApiExceptions.callAndTranslateApiException(ApiExceptions.java:57)
at com.google.api.gax.rpc.UnaryCallable.call(UnaryCallable.java:112)
at com.google.cloud.bigquery.datatransfer.v1.DataTransferServiceClient.getTransferConfig(DataTransferServiceClient.java:741)
at com.google.cloud.bigquery.datatransfer.v1.DataTransferServiceClient.getTransferConfig(DataTransferServiceClient.java:718)
at com.google.cloud.bigquery.dms.gcloud.TransferServiceClient.getTransferConfig(TransferServiceClient.java:62)
at com.google.cloud.bigquery.dms.gcloud.TransferServiceClient.getDestinationBucketName(TransferServiceClient.java:94)
at com.google.cloud.bigquery.dms.Agent.main(Agent.java:235)
Caused by: io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
at io.grpc.Status.asRuntimeException(Status.java:533)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:490)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:700)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:399)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:500)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:65)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:592)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$700(ClientCallImpl.java:508)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:632)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
... 7 more
Caused by: io.grpc.netty.shaded.io.netty.channel.AbstractChannel$AnnotatedNoRouteToHostException: null: bigquerydatatransfer.googleapis.com/2404:6800:4009:810:0:0:0:200a:443
at io.grpc.netty.shaded.io.netty.channel.unix.Errors.throwConnectException(Errors.java:104)
at io.grpc.netty.shaded.io.netty.channel.unix.Socket.connect(Socket.java:257)
at io.grpc.netty.shaded.io.netty.channel.epoll.AbstractEpollChannel.doConnect0(AbstractEpollChannel.java:730)
at io.grpc.netty.shaded.io.netty.channel.epoll.AbstractEpollChannel.doConnect(AbstractEpollChannel.java:715)
at io.grpc.netty.shaded.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.connect(AbstractEpollChannel.java:557)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline$HeadContext.connect(DefaultChannelPipeline.java:1340)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:532)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:517)
at io.grpc.netty.shaded.io.netty.channel.ChannelDuplexHandler.connect(ChannelDuplexHandler.java:50)
at io.grpc.netty.shaded.io.grpc.netty.WriteBufferingAndExceptionHandler.connect(WriteBufferingAndExceptionHandler.java:136)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:532)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.access$1000(AbstractChannelHandlerContext.java:38)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext$9.run(AbstractChannelHandlerContext.java:522)
at io.grpc.netty.shaded.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.grpc.netty.shaded.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:333)
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
Caused by: java.net.NoRouteToHostException
... 19 more
Please let me know if somebody has successfully implemented this, or can figure out the exact issue I'm facing.

Start token not found error while using JsonSerDe

I am trying to import JSON data from S3 and, after running some queries, export the output in JSON format to S3 again. However, I get an "org.apache.hadoop.hive.serde2.SerDeException: java.io.IOException: Start token not found where expected" error at the Hive step on the EMR cluster. To understand what the problem is, I simplified the Hive script and the JSON data, but it keeps giving the same error. How can I solve this problem?
Cluster configuration:
Release: emr-5.3.1
Hive version: 2.1.1
Hadoop distribution: Amazon 2.7.3
Service Role: EMR_DefaultRole
MasterInstanceType: m4.large
The content of the simplified JSON data:
[{"MyID":"FOO123","MyField":"FOO"},{"MyID":"BAR123","MyField":"BAR"}]
Hive script:
DROP TABLE IF EXISTS SOURCE;
DROP TABLE IF EXISTS DESTINATION;
CREATE EXTERNAL TABLE SOURCE(MyID STRING, MyField STRING)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
LOCATION 's3://myPath/subPath/';
CREATE EXTERNAL TABLE DESTINATION(MyID STRING, MyField STRING)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
LOCATION 's3://anotherPath/subPath/';
INSERT OVERWRITE TABLE DESTINATION SELECT MyID, MyField FROM SOURCE;
And here is the stack trace:
Vertex failed, vertexName=Map 4, vertexId=vertex_1278452616863_0001_1_00, diagnostics=[Task failed, taskId=task_1278452616863, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : attempt_1278452616863:java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing writable [{"MyID":"FOO123","MyField":"FOO"},{"MyID":"BAR123","MyField":"BAR"}]
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:211)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:168)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing writable [{"MyID":"FOO123","MyField":"FOO"},{"MyID":"BAR123","MyField":"BAR"}]
at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:95)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:70)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:383)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:185)
... 14 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing writable [{"MyID":"FOO123","MyField":"FOO"},{"MyID":"BAR123","MyField":"BAR"}]
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:497)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:86)
... 17 more
Caused by: org.apache.hadoop.hive.serde2.SerDeException: java.io.IOException: Start token not found where expected
at org.apache.hive.hcatalog.data.JsonSerDe.deserialize(JsonSerDe.java:183)
at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.readRow(MapOperator.java:128)
at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.access$200(MapOperator.java:92)
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:488)
... 18 more
Caused by: java.io.IOException: Start token not found where expected
at org.apache.hive.hcatalog.data.JsonSerDe.deserialize(JsonSerDe.java:169)
... 21 more
Thanks.
JSON should start with { and not with an array ([).
I tried this approach and updated my JSON file to the structure:
{"MyID":"FOO123","MyField":"FOO"},
{"MyID":"BAR123","MyField":"BAR"}
but after doing so, I noticed that only the first object was being inserted into the table.
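Based on how org.apache.hive.hcatalog.data.JsonSerDe reads its input (it deserializes each text line as one record), the file should contain exactly one JSON object per line, with no enclosing array and no separating commas. A minimal sketch of input that should load both rows:
{"MyID":"FOO123","MyField":"FOO"}
{"MyID":"BAR123","MyField":"BAR"}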

wso2 bam2.4 connect to external cassandra failed by .AuthenticationException

I think I have followed all the examples what I can find,
and I am working on bam2.4.1, the apache cassandra is 2.1.2.
I have configured casandra to disable security by setting cassandra.yaml
authenticator: AllowAllAuthenticator
authorizer: AllowAllAuthorizer
Then I follow bam2.4.1 document to connect to external cassandra, then start wso2 bam by
wso2server.bat -Ddisable.cassandra.server.startup=true
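(On Linux, the equivalent would presumably be ./wso2server.sh -Ddisable.cassandra.server.startup=true; it is the system property, not the script, that disables the embedded Cassandra server.)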
Almost everything works, except authentication:
[2014-11-25 20:39:41,295] ERROR {org.wso2.carbon.bam.notification.task.NotificationDispatchTask} - Error executing notification dispatch task: Access denied for user wso2bam to login TCP,192.168.1.7:7611,TCP,192.168.1.7:7711
org.wso2.carbon.databridge.commons.exception.AuthenticationException: Access denied for user wso2bam to login TCP,192.168.1.7:7611,TCP,192.168.1.7:7711
at org.wso2.carbon.databridge.agent.thrift.internal.publisher.authenticator.AgentAuthenticator.connect(AgentAuthenticator.java:54)
at org.wso2.carbon.databridge.agent.thrift.DataPublisher.start(DataPublisher.java:273)
at org.wso2.carbon.databridge.agent.thrift.DataPublisher.<init>(DataPublisher.java:211)
at org.wso2.carbon.bam.notification.task.NotificationDispatchTask.initPublisherKS(NotificationDispatchTask.java:100)
at org.wso2.carbon.bam.notification.task.NotificationDispatchTask.execute(NotificationDispatchTask.java:210)
at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
Caused by: org.wso2.carbon.databridge.agent.thrift.exception.AgentAuthenticatorException: Thrift exception
at org.wso2.carbon.databridge.agent.thrift.internal.publisher.authenticator.ThriftAgentAuthenticator.connect(ThriftAgentAuthenticator.java:51)
at org.wso2.carbon.databridge.agent.thrift.internal.publisher.authenticator.AgentAuthenticator.connect(AgentAuthenticator.java:51)
... 12 more
I also found some more exceptions, like:
Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketException: Connection closed by remote host
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:147)
at org.apache.thrift.protocol.TBinaryProtocol.writeI32(TBinaryProtocol.java:163)
at org.apache.thrift.protocol.TBinaryProtocol.writeMessageBegin(TBinaryProtocol.java:91)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62)
at org.wso2.carbon.databridge.commons.thrift.service.secure.ThriftSecureEventTransmissionService$Client.send_connect(ThriftSecureEventTransmissionService.java:82)
at org.wso2.carbon.databridge.commons.thrift.service.secure.ThriftSecureEventTransmissionService$Client.connect(ThriftSecureEventTransmissionService.java:73)
at org.wso2.carbon.databridge.agent.thrift.internal.publisher.authenticator.ThriftAgentAuthenticator.connect(ThriftAgentAuthenticator.java:47)
... 13 more
Caused by: java.net.SocketException: Connection closed by remote host
at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1506)
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:70)
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:145)
... 19 more

Tasks fail after BAM admin password change

After changing the default password for the admin user in WSO2 BAM 4.1.0, tasks fail with the following error:
TID: [0] [BAM] [2013-06-20 16:56:15,464] ERROR {org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl} - Error while executing Hive script.
Query returned non-zero code: 9, cause: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask {org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl}
java.sql.SQLException: Query returned non-zero code: 9, cause: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
at org.apache.hadoop.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:189)
at org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl$ScriptCallable.call(HiveExecutorServiceImpl.java:355)
at org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl$ScriptCallable.call(HiveExecutorServiceImpl.java:250)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
TID: [0] [BAM] [2013-06-20 16:56:15,467] ERROR {org.wso2.carbon.analytics.hive.task.HiveScriptExecutorTask} - Error while executing script : am_stats_analyzer_460 {org.wso2.carbon.analytics.hive.task.HiveScriptExecutorTask}
org.wso2.carbon.analytics.hive.exception.HiveExecutionException: Error while executing Hive script.Query returned non-zero code: 9, cause: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
at org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl.execute(HiveExecutorServiceImpl.java:117)
at org.wso2.carbon.analytics.hive.task.HiveScriptExecutorTask.execute(HiveScriptExecutorTask.java:60)
at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:56)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Reverting the password to its original value solves the issue.
How do I change the password for the admin user and keep the tasks working?
Have you changed the username and password in the Hive script am_stats_analyzer? The defaults are admin/admin; check the Hive script and update the password accordingly. The properties are as follows:
"cassandra.ks.username" = "admin",
"cassandra.ks.password" = "xxxxx",
Check if that fixes your issue.
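For context, a sketch of where these properties typically sit in a BAM Hive script; the table, column, and column-family names below are illustrative, not copied from the actual am_stats_analyzer script:

CREATE EXTERNAL TABLE IF NOT EXISTS AccessSummary
    (id STRING, requestCount INT)
STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler'
WITH SERDEPROPERTIES (
    "cassandra.host" = "127.0.0.1",
    "cassandra.ks.name" = "EVENT_KS",
    "cassandra.ks.username" = "admin",
    "cassandra.ks.password" = "newPassword",
    "cassandra.cf.name" = "org_wso2_apimgt_statistics_request",
    "cassandra.columns.mapping" = ":key,payload_requestCount");

Each script that reads from Cassandra carries its own copy of these credentials, which is why every script needs updating after a password change.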
In order to solve the issue, I had to perform the following steps:
1. Edit the file [BAM_HOME]/repository/conf/etc/cassandra-auth.xml and change the password value to the new password.
2. Edit the file [BAM_HOME]/repository/conf/datasources/master-datasources.xml and change the password value of the WSO2BAM_CASSANDRA_DATASOURCE datasource to the new password (a sketch of this entry follows below).
3. Restart BAM: the Hive tasks now run without errors.
Here, "the new password" means the password I assigned to the admin user.
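For step 2, a sketch of the relevant datasource entry in master-datasources.xml; only the password element changes, and the url shown here is a placeholder for whatever is already configured in the file:

<datasource>
    <name>WSO2BAM_CASSANDRA_DATASOURCE</name>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:cassandra://localhost:9160/EVENT_KS</url>
            <username>admin</username>
            <password>newPassword</password>
        </configuration>
    </definition>
</datasource>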
Moreover, the Main \ Manage \ Cassandra Keyspaces \ List page in the BAM UI, which was raising the following error, is now fixed:
org.wso2.carbon.cassandra.mgt.ui.CassandraAdminClientException: Error retrieving keyspace names !
(...)
Caused by: org.apache.axis2.AxisFault: InvalidRequestException(why:You have not logged in)
(...)
Sorry I couldn't follow up on the question earlier; anyway, glad your problem is sorted now! Keep on trying BAM, and don't hesitate to holler if you run into any issues.
Thanks,
Shariq.