Can you please help me with a grok pattern for this Jenkins sample log? The log is only a single line.
hudson.slaves.CommandLauncher launch\nSEVERE: Unable to launch the agent for dot-dewsttlas403-ci\njava.io.IOException: Failed to create a temporary file in /opt_shared/iit_slave/jenkins_slave/workspace\n\tat hudson.util.AtomicFileWriter.<init>(AtomicFileWriter.java:144)\n\tat hudson.util.AtomicFileWriter.<init>(AtomicFileWriter.java:109)\n\tat hudson.util.AtomicFileWriter.<init>(AtomicFileWriter.java:84)\n\tat hudson.util.AtomicFileWriter.<init>(AtomicFileWriter.java:74)\n\tat hudson.util.TextFile.write(TextFile.java:116)\n\tat jenkins.branch.WorkspaceLocatorImpl$WriteAtomic.invoke(WorkspaceLocatorImpl.java:264)\n\tat jenkins.branch.WorkspaceLocatorImpl$WriteAtomic.invoke(WorkspaceLocatorImpl.java:256)\n\tat hudson.FilePath$FileCallableWrapper.call(FilePath.java:3042)\n\tat hudson.remoting.UserRequest.perform(UserRequest.java:212)\n\tat hudson.remoting.UserRequest.perform(UserRequest.java:54)\n\tat hudson.remoting.Request$2.run(Request.java:369)\n\tat hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n\tat java.lang.Thread.run(Thread.java:745)\n\tSuppressed: hudson.remoting.Channel$CallSiteStackTrace: Remote call to dot-dewsttlas403-ci\n\t\tat hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)\n\t\tat hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)\n\t\tat hudson.remoting.Channel.call(Channel.java:957)\n\t\tat hudson.FilePath.act(FilePath.java:1069)\n\t\tat hudson.FilePath.act(FilePath.java:1058)\n\t\tat jenkins.branch.WorkspaceLocatorImpl.save(WorkspaceLocatorImpl.java:254)\n\t\tat jenkins.branch.WorkspaceLocatorImpl.access$500(WorkspaceLocatorImpl.java:80)\n\t\tat jenkins.branch.WorkspaceLocatorImpl$Collector.onOnline(WorkspaceLocatorImpl.java:561)\n\t\tat hudson.slaves.SlaveComputer.setChannel(SlaveComputer.java:697)\n\t\tat hudson.slaves.SlaveComputer.setChannel(SlaveComputer.java:432)\n\t\tat hudson.slaves.CommandLauncher.launch(CommandLauncher.java:154)\n\t\tat hudson.slaves.SlaveComputer$1.call(SlaveComputer.java:294)\n\t\tat jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)\n\t\tat jenkins.security.ImpersonatingExecutorService$2.call(ImpersonatingExecutorService.java:71)\n\t\tat java.util.concurrent.FutureTask.run(Unknown Source)\n\t\tat java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)\n\t\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)\n\t\tat java.lang.Thread.run(Unknown Source)\nCaused by: java.io.IOException: No space left on device\n\tat java.io.UnixFileSystem.createFileExclusively(Native Method)\n\tat java.io.File.createTempFile(File.java:2024)\n\tat hudson.util.AtomicFileWriter.<init>(AtomicFileWriter.java:142)\n\t... 15 more\n
I am only interested in extracting the following from the above log:
agent status, agent name
Expected Result:
agent status: Unable
agent name: dot-dewsttlas403-ci
SEVERE: %{DATA:agent_status} to launch the agent for %{DATA:agent_name}\\n
This should give you the result you are interested in, but it would only work if the structure of the message is the same.
Configuration used:
input { stdin {} }
filter {
  grok {
    match => {
      "message" => "SEVERE: %{DATA:agent_status} to launch the agent for %{DATA:agent_name}\\n"
    }
  }
}
output { stdout { codec => json } }
Result:
{
  "host": "MY_COMPUTER",
  "agent_status": "Unable",
  "message": "hudson.slaves.CommandLauncher launch\\nSEVERE: Unable to launch the agent for dot-dewsttlas403-ci\\njava.io.IOException: Failed to create a temporary file in /opt_shared/iit_slave/jenkins_slave/workspace\\n\\tat \r",
  "agent_name": "dot-dewsttlas403-ci",
  "@timestamp": "2020-01-29T16:54:27.256Z",
  "@version": "1"
}
Also, to help you next time you're working with logstash-grok:
An online tester for patterns:
http://grokconstructor.appspot.com/do/match
The basic grok patterns: https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns
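If you want the match to be a little stricter, the status in this message is always a single word, so a variant of the same pattern (just a sketch; it assumes the surrounding text of the message never changes) would be:
SEVERE: %{WORD:agent_status} to launch the agent for %{DATA:agent_name}\\n
Both WORD and DATA are part of the core pattern set linked above.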
I'm trying to push documents from my local machine to an Elasticsearch server in AWS, and when I try to do so I get a 403 error and Logstash keeps trying to establish a connection with the server, like so:
[2021-05-09T11:09:52,707][TRACE][logstash.inputs.file ][main] Registering file input {:path=>["~/home/ubuntu/json_try/json_try.json"]}
[2021-05-09T11:09:52,737][DEBUG][logstash.javapipeline ][main] Shutdown waiting for worker thread {:pipeline_id=>"main", :thread=>"#<Thread:0x5033269f run>"}
[2021-05-09T11:09:53,441][DEBUG][logstash.outputs.amazonelasticsearch][main] Waiting for connectivity to Elasticsearch cluster. Retrying in 4s
[2021-05-09T11:09:56,403][INFO ][logstash.outputs.amazonelasticsearch][main] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>https://my-dom.co:8001/scans, :path=>"/"}
[2021-05-09T11:09:56,461][WARN ][logstash.outputs.amazonelasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://my-dom.co:8001/scans", :error_type=>LogStash::Outputs::AmazonElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '403' contacting Elasticsearch at URL 'https://my-dom.co:8001/scans/'"}
[2021-05-09T11:09:56,849][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2021-05-09T11:09:56,853][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2021-05-09T11:09:57,444][DEBUG][logstash.outputs.amazonelasticsearch][main] Waiting for connectivity to Elasticsearch cluster. Retrying in 8s
.
.
.
I'm using the following logstash conf file:
input {
  file {
    type => "json"
    path => "~/home/ubuntu/json_try/json_try.json"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
output {
  amazon_es {
    hosts => ["https://my-dom.co/scans"]
    port => 8001
    ssl => true
    region => "us-east-1b"
    index => "snapshot-%{+YYYY.MM.dd}"
  }
}
Also, I've exported the AWS keys for the SSL to work. Is there anything I'm missing in order for the connection to succeed?
I've been able to solve this by using elasticsearch as my output plugin instead of amazon_es.
This usage requires the cloud_id of the target AWS node, its cloud_auth, and the target index in Elasticsearch for the data to be sent to. So the conf file will look something like this:
input {
  file {
    type => "json"
    path => "~/home/ubuntu/json_try/json_try.json"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
output {
  elasticsearch {
    cloud_id => "node_name:node_hash"
    cloud_auth => "auth_hash"
    index => "snapshot-%{+YYYY.MM.dd}"
  }
}
I'm running Teradata Express on VMware Player. In order to migrate the data to BigQuery, I'm trying to follow the official documentation covering the BigQuery transfer job.
https://cloud.google.com/bigquery-transfer/docs/teradata-migration
I have downloaded the required jars and am trying to start the migration process after successful initialization. Below are the config file contents generated after successful initialization.
TDExpress1620_Sles11:~/Desktop # cat testpath/bq.config
{
  "agent-id": "84f7cf04-2133-4c5a-ae43-dd22a30281ba",
  "transfer-configuration": {
    "project-id": "106730138445",
    "location": "us",
    "id": "5e1c7eda-0000-249c-aab6-94eb2c06245a"
  },
  "source-type": "teradata",
  "console-log": false,
  "silent": false,
  "teradata-config": {
    "connection": {
      "host": "localhost"
    },
    "local-processing-space": "testpath",
    "database-credentials-file-path": "",
    "max-local-storage": "200GB",
    "gcs-upload-chunk-size": "32MB",
    "use-tpt": false,
    "max-sessions": 0,
    "spool-mode": "NoSpool",
    "max-parallel-upload": 1,
    "max-parallel-extract-threads": 1,
    "session-charset": "UTF8",
    "max-unload-file-size": "2GB"
  }
}
After running the command to start the migration process, I get the error below.
TDExpress1620_Sles11:~/Desktop # java -cp terajdbc4.jar:mirroring-agent.jar com.google.cloud.bigquery.dms.Agent --configuration-file=testpath/bq.config
Reading data from gs://data_transfer_agent/latest/version
WARNING: Failed to get the latest released agent version with error: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
Exception in thread "main" com.google.api.gax.rpc.UnavailableException: io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:69)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1349)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:398)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1024)
at com.google.common.util.concurrent.AbstractFuture.addListener(AbstractFuture.java:670)
at com.google.common.util.concurrent.ForwardingListenableFuture.addListener(ForwardingListenableFuture.java:45)
at com.google.api.core.ApiFutureToListenableFuture.addListener(ApiFutureToListenableFuture.java:52)
at com.google.common.util.concurrent.Futures.addCallback(Futures.java:1330)
at com.google.api.core.ApiFutures.addCallback(ApiFutures.java:63)
at com.google.api.gax.grpc.GrpcExceptionCallable.futureCall(GrpcExceptionCallable.java:67)
at com.google.api.gax.rpc.AttemptCallable.call(AttemptCallable.java:86)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: com.google.api.gax.rpc.AsyncTaskException: Asynchronous task failed
at com.google.api.gax.rpc.ApiExceptions.callAndTranslateApiException(ApiExceptions.java:57)
at com.google.api.gax.rpc.UnaryCallable.call(UnaryCallable.java:112)
at com.google.cloud.bigquery.datatransfer.v1.DataTransferServiceClient.getTransferConfig(DataTransferServiceClient.java:741)
at com.google.cloud.bigquery.datatransfer.v1.DataTransferServiceClient.getTransferConfig(DataTransferServiceClient.java:718)
at com.google.cloud.bigquery.dms.gcloud.TransferServiceClient.getTransferConfig(TransferServiceClient.java:62)
at com.google.cloud.bigquery.dms.gcloud.TransferServiceClient.getDestinationBucketName(TransferServiceClient.java:94)
at com.google.cloud.bigquery.dms.Agent.main(Agent.java:235)
Caused by: io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
at io.grpc.Status.asRuntimeException(Status.java:533)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:490)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:700)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:399)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:500)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:65)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:592)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$700(ClientCallImpl.java:508)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:632)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
... 7 more
Caused by: io.grpc.netty.shaded.io.netty.channel.AbstractChannel$AnnotatedNoRouteToHostException: null: bigquerydatatransfer.googleapis.com/2404:6800:4009:810:0:0:0:200a:443
at io.grpc.netty.shaded.io.netty.channel.unix.Errors.throwConnectException(Errors.java:104)
at io.grpc.netty.shaded.io.netty.channel.unix.Socket.connect(Socket.java:257)
at io.grpc.netty.shaded.io.netty.channel.epoll.AbstractEpollChannel.doConnect0(AbstractEpollChannel.java:730)
at io.grpc.netty.shaded.io.netty.channel.epoll.AbstractEpollChannel.doConnect(AbstractEpollChannel.java:715)
at io.grpc.netty.shaded.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.connect(AbstractEpollChannel.java:557)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline$HeadContext.connect(DefaultChannelPipeline.java:1340)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:532)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:517)
at io.grpc.netty.shaded.io.netty.channel.ChannelDuplexHandler.connect(ChannelDuplexHandler.java:50)
at io.grpc.netty.shaded.io.grpc.netty.WriteBufferingAndExceptionHandler.connect(WriteBufferingAndExceptionHandler.java:136)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:532)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.access$1000(AbstractChannelHandlerContext.java:38)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext$9.run(AbstractChannelHandlerContext.java:522)
at io.grpc.netty.shaded.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.grpc.netty.shaded.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:333)
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
Caused by: java.net.NoRouteToHostException
... 19 more
Please let me know if anybody has successfully implemented this or can figure out the exact issue I'm facing.
I want a filter that extracts the given information from log messages.
I'm currently using the following, although it's very specific to one format/log layout:
filter {
  if "ONT" in [message] {
    grok {
      match => { "message" => "%{SYSLOGBASE} %{WORD:Alarm_Severity} %{DATA:Message} %{QS:ONT_ID} %{DATA:Time} %{QS:ONT_Message}" }
    }
  }
}
Log Files are:
Dec 16 15:01:13 172.20.x.xx NPF_OLT_LAB05: clear Alarm for card 1/1 at 2019/12/16 15:01:13.39: "Backup files exist"
Dec 16 15:01:13 172.20.x.xx NPF_OLT_LAB05: service "403
for ONT: "10002" - ONT needs restart at 2019/12/16 15:01:13.39 ONT message: "Backup files exist"
I want the layout to give me these parameters:
Time: 15:01:13
Host: NPF_OLT_LAB05
Alarm Severity: clear
ONT ID: 10002
Source IP: 172.20.x.xx
ONT Message: "Backup files exist"
Message: clear Alarm for card 1/1
Service ID: 403
I guess these are two different logs, so you need two different grok patterns, as below.
Dec 16 15:01:13 172.20.12.12 NPF_OLT_LAB05: clear Alarm for card 1/1 at 2019/12/16 15:01:13.39: "Backup files exist"
Grok pattern
(?<Date>%{MONTH} +%{MONTHDAY}) %{TIME:Time} %{IPV4:SourceIP} %{NOTSPACE:HOST}\:\s(%{WORD:Severity} %{GREEDYDATA:Message})\s(?<timestamp>%{YEAR}\/%{MONTHNUM}\/%{MONTHDAY}\s%{TIME})\S\s\S%{GREEDYDATA:ONTMessage}\"
Dec 16 15:01:13 172.20.x.xx NPF_OLT_LAB05: service "403 for ONT: "10002" - ONT needs restart at 2019/12/16 15:01:13.39 ONT message: "Backup files exist"
Grok pattern
(?<Date>%{MONTH} +%{MONTHDAY}) %{TIME:Time} %{IPV4:SourceIP} %{NOTSPACE:HOST}\S\s%{WORD:Severity}\s\S%{BASE10NUM:ServiceID} %{NOTSPACE}\s(?:ONT: \S%{BASE10NUM:ONT_ID}\S) %{NOTSPACE} %{GREEDYDATA:Message}\s(?<timestamp>%{YEAR}\/%{MONTHNUM}\/%{MONTHDAY}\s%{TIME}) (?:ONT message\: \S(?<ONT_Message>%{GREEDYDATA}\S))
Below is the conf:
filter {
  if "ONT" in [message] {
    grok {
      match => { "message" => [ "(?<Date>%{MONTH} +%{MONTHDAY}) %{TIME:Time} %{IPV4:SourceIP} %{NOTSPACE:HOST}\:\s(%{WORD:Severity} %{GREEDYDATA:Message})\s(?<timestamp>%{YEAR}\/%{MONTHNUM}\/%{MONTHDAY}\s%{TIME})\S\s\S%{GREEDYDATA:ONTMessage}\"",
        "(?<Date>%{MONTH} +%{MONTHDAY}) %{TIME:Time} %{IPV4:SourceIP} %{NOTSPACE:HOST}\S\s%{WORD:Severity}\s\S%{BASE10NUM:ServiceID} %{NOTSPACE}\s(?:ONT: \S%{BASE10NUM:ONT_ID}\S) %{NOTSPACE} %{GREEDYDATA:Message}\s(?<timestamp>%{YEAR}\/%{MONTHNUM}\/%{MONTHDAY}\s%{TIME}) (?:ONT message\: \S(?<ONT_Message>%{GREEDYDATA}\S))" ]
      }
    }
  }
}
I have two Elasticsearch nodes set up in EC2 and am trying to use Logstash with them. I get this error when I run Logstash:
log4j, [2014-02-24T10:45:32.722] WARN: org.elasticsearch.discovery.zen.ping.unicast: [Ishihara, Shirow] failed to send ping to [[#zen_unicast_1#][inet[/10.110.65.91:9300]]]
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize exception response from stream
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize exception response from stream
at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:169)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:123)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
That's a snippet of it.
Here is the conf file I am using with logstash:
input {
  redis {
    host => "10.110.65.91"
    # these settings should match the output of the agent
    data_type => "list"
    key => "logstash"
    # We use the 'json' codec here because we expect to read
    # json events from redis.
    codec => json
  }
}
output {
  stdout { debug => true debug_format => "json" }
  elasticsearch {
    host => "10.110.65.91"
    cluster => searchbuild
  }
}
I'm running Logstash on .91 (I have a second terminal window open). Am I missing something?
I had to change "elasticsearch" to "elasticsearch_http".
Fixed.
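For anyone hitting the same thing: the plain elasticsearch output in that Logstash version joins the cluster over the node/transport protocol on port 9300, which tends to fail with deserialization errors like the one above when the Logstash-embedded Elasticsearch version doesn't match the cluster, whereas elasticsearch_http talks to the REST API on port 9200. A minimal sketch of the switched output block, assuming the same host and the default HTTP port:
output {
  stdout { debug => true debug_format => "json" }
  elasticsearch_http {
    host => "10.110.65.91"
    # REST API port (9200), not the 9300 transport port used by the node protocol
    port => 9200
    # just the default Logstash index naming
    index => "logstash-%{+YYYY.MM.dd}"
  }
}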
After changing the default password for the admin user in WSO2 BAM 4.1.0, tasks fail with the following error:
TID: [0] [BAM] [2013-06-20 16:56:15,464] ERROR {org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl} - Error while executing Hive script.
Query returned non-zero code: 9, cause: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask {org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl}
java.sql.SQLException: Query returned non-zero code: 9, cause: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
at org.apache.hadoop.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:189)
at org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl$ScriptCallable.call(HiveExecutorServiceImpl.java:355)
at org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl$ScriptCallable.call(HiveExecutorServiceImpl.java:250)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
TID: [0] [BAM] [2013-06-20 16:56:15,467] ERROR {org.wso2.carbon.analytics.hive.task.HiveScriptExecutorTask} - Error while executing script : am_stats_analyzer_460 {org.wso2.carbon.analytics.hive.task.HiveScriptExecutorTask}
org.wso2.carbon.analytics.hive.exception.HiveExecutionException: Error while executing Hive script.Query returned non-zero code: 9, cause: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
at org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl.execute(HiveExecutorServiceImpl.java:117)
at org.wso2.carbon.analytics.hive.task.HiveScriptExecutorTask.execute(HiveScriptExecutorTask.java:60)
at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:56)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Reverting the password back to its original value solves the issue.
How do I change the password for the admin user and keep the tasks working?
Have you changed the username and password in the hive script am_stats_analyzer? The defaults are admin/admin; check the hive script and update the password accordingly. The properties are as follows:
"cassandra.ks.username" = "admin",
"cassandra.ks.password" = "xxxxx",
Check if that fixes your issue.
In order to solve the issue I had to perform the following steps:
edit the file [BAM_HOME]/repository/conf/etc/cassandra-auth.xml and change the password value to the new password.
edit the file [BAM_HOME]/repository/conf/datasources/master-datasources.xml and change the password value of the WSO2BAM_CASSANDRA_DATASOURCE datasource to the new password.
restart the BAM: the Hive tasks now run without errors.
where the new password is the password I assigned to the admin user.
Moreover, the Main \ Manage \ Cassandra Keyspaces \ List page in the BAM UI, which was raising the following error, is now fixed:
org.wso2.carbon.cassandra.mgt.ui.CassandraAdminClientException: Error retrieving keyspace names !
(...)
Caused by: org.apache.axis2.AxisFault: InvalidRequestException(why:You have not logged in)
(...)
Sorry I couldn't follow up with the question earlier; anyway, glad your problem is sorted now! Keep on trying BAM and don't hesitate to holler if you run into any issues.
Thanks,
Shariq.