I am trying to connect a Ruby on Rails application to a dbShards database. dbShards switches session variables off to optimise shard performance, so when I try to run the application I get this error:
2014-11-17 08:23:29,647 ERROR c.d.s.SQLParser [NioServer-WorkerPool-2] Failed to parse SET NAMES utf8, @@SESSION.sql_auto_is_null = 0, @@SESSION.wait_timeout = 2147483, @@SESSION.sql_mode = 'STRICT_ALL_TABLES'
java.sql.SQLException: Session variables are not shard-safe
at com.dbshards.sqlparsernew.SQLParser.parseSet(SQLParser.java:558)
at com.dbshards.sqlparsernew.SQLParser.parse(SQLParser.java:402)
at com.dbshards.sqlparsernew.SQLParserCache.parse(SQLParserCache.java:79)
at com.dbshards.jdbc.sharding.DbsShardedStatement.execute(DbsShardedStatement.java:388)
at com.dbshards.osp.client.impl.processor.JDBCProcessor.execute(JDBCProcessor.java:321)
at com.dbshards.osp.client.impl.myosp.MyOSPProtocolHandler.processQueryRequest(MyOSPProtocolHandler.java:876)
at com.dbshards.osp.client.impl.myosp.MyOSPProtocolHandler.doProcessBuffer(MyOSPProtocolHandler.java:518)
at com.dbshards.osp.client.impl.myosp.MyOSPProtocolHandler.processBuffer(MyOSPProtocolHandler.java:1520)
at com.codefutures.common.messaging.transport.nio.NioConnection.doRead(NioConnection.java:127)
at com.codefutures.common.messaging.transport.nio.NioServer.doProcessRequests(NioServer.java:556)
at com.codefutures.common.messaging.transport.nio.NioServer.access$200(NioServer.java:25)
at com.codefutures.common.messaging.transport.nio.NioServer$2.run(NioServer.java:534)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Even when I set strict: false in database.yml, this is what I get:
2014-11-17 12:11:11,229 ERROR c.d.s.SQLParser [NioServer-WorkerPool-12] Failed to parse SET NAMES utf8, @@SESSION.sql_auto_is_null = 0, @@SESSION.wait_timeout = 2147483, @@SESSION.sql_mode = ''
java.sql.SQLException: Session variables are not shard-safe
at com.dbshards.sqlparsernew.SQLParser.parseSet(SQLParser.java:558)
at com.dbshards.sqlparsernew.SQLParser.parse(SQLParser.java:402)
at com.dbshards.sqlparsernew.SQLParserCache.parse(SQLParserCache.java:79)
at com.dbshards.jdbc.sharding.DbsShardedStatement.execute(DbsShardedStatement.java:388)
at com.dbshards.osp.client.impl.processor.JDBCProcessor.execute(JDBCProcessor.java:321)
at com.dbshards.osp.client.impl.myosp.MyOSPProtocolHandler.processQueryRequest(MyOSPProtocolHandler.java:876)
at com.dbshards.osp.client.impl.myosp.MyOSPProtocolHandler.doProcessBuffer(MyOSPProtocolHandler.java:518)
at com.dbshards.osp.client.impl.myosp.MyOSPProtocolHandler.processBuffer(MyOSPProtocolHandler.java:1520)
at com.codefutures.common.messaging.transport.nio.NioConnection.doRead(NioConnection.java:127)
at com.codefutures.common.messaging.transport.nio.NioServer.doProcessRequests(NioServer.java:556)
at com.codefutures.common.messaging.transport.nio.NioServer.access$200(NioServer.java:25)
at com.codefutures.common.messaging.transport.nio.NioServer$2.run(NioServer.java:534)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Is there a way in Ruby on Rails to turn off session variables entirely, since dbshards.com products reject them?
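As far as I know there is no supported database.yml switch that suppresses the whole statement, but one workaround is to no-op the adapter hook that emits it. A hedged sketch, assuming Rails 4.x with the mysql2 adapter (verify the method name against your Rails version; encoding, sql_mode, and timeouts then fall back to whatever the server/proxy defaults are):

# config/initializers/skip_session_variables.rb
require "active_record/connection_adapters/abstract_mysql_adapter"

module ActiveRecord
  module ConnectionAdapters
    class AbstractMysqlAdapter
      # Rails normally sends "SET NAMES ..., @@SESSION.* = ..." from here on
      # every new connection; dbShards rejects any SET of session variables,
      # so intentionally send nothing on connect.
      def configure_connection
      end
    end
  end
end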
I am trying to run a Siddhi application in Streaming Integrator Tooling.
Here's my app:
#App:name('1')
#App:description('Description of the plan')
#source(type = 'mqtt', url = "tcp://192.168.100.82:1883", client.id = "1", topic = "mqtt_topic_input",
#map(type = 'xml'))
define stream SweetProductionStream (name string, amount double);
#sink(type = 'log')
define stream LowProductionAlertStream (name string, amount double);
-- pass data in the SweetProductionStream through to the LowProductionAlertStream
#info(name = 'query1')
from SweetProductionStream
select *
insert into LowProductionAlertStream;
I also put the following files in the <SI_TOOLING_HOME>/lib directory:
org.eclipse.paho.client.mqttv3-1.1.1.jar and siddhi-io-mqtt-3.0.2.jar
I also have a working Mosquitto broker at 192.168.100.82.
When I try to execute the application, I get the following error in the console:
1.siddhi - Started Successfully!
[2022-11-09_23-21-40_302] ERROR {io.siddhi.core.stream.input.source.Source} - Error on '1'. no NetworkModule installed for scheme "tcp" of URI "tcp://192.168.100.82:1883" Error while connecting at Source 'mqtt' at 'SweetProductionStream'. (Encoded)
java.lang.IllegalArgumentException: no NetworkModule installed for scheme "tcp" of URI "tcp://192.168.100.82:1883"
at org.eclipse.paho.client.mqttv3.internal.NetworkModuleService.validateURI(NetworkModuleService.java:70)
at org.eclipse.paho.client.mqttv3.MqttAsyncClient.<init>(MqttAsyncClient.java:454)
at org.eclipse.paho.client.mqttv3.MqttAsyncClient.<init>(MqttAsyncClient.java:320)
at org.eclipse.paho.client.mqttv3.MqttAsyncClient.<init>(MqttAsyncClient.java:315)
at org.eclipse.paho.client.mqttv3.MqttClient.<init>(MqttClient.java:227)
at io.siddhi.extension.io.mqtt.source.MqttSource.connect(MqttSource.java:196)
at io.siddhi.core.stream.input.source.Source.connectWithRetry(Source.java:161)
at io.siddhi.core.SiddhiAppRuntimeImpl.startSources(SiddhiAppRuntimeImpl.java:535)
at io.siddhi.core.SiddhiAppRuntimeImpl.start(SiddhiAppRuntimeImpl.java:460)
at org.wso2.carbon.siddhi.editor.core.internal.DebugRuntime.start(DebugRuntime.java:93)
at org.wso2.carbon.siddhi.editor.core.internal.DebugProcessorService.start(DebugProcessorService.java:43)
at org.wso2.carbon.siddhi.editor.core.internal.EditorMicroservice.start(EditorMicroservice.java:795)
at org.wso2.carbon.siddhi.editor.core.internal.EditorMicroservice.startWithVariables(EditorMicroservice.java:815)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.wso2.msf4j.internal.router.HttpMethodInfo.invokeResource(HttpMethodInfo.java:187)
at org.wso2.msf4j.internal.router.HttpMethodInfo.invoke(HttpMethodInfo.java:143)
at org.wso2.msf4j.internal.MSF4JHttpConnectorListener.dispatchMethod(MSF4JHttpConnectorListener.java:218)
at org.wso2.msf4j.internal.MSF4JHttpConnectorListener.lambda$onMessage$58(MSF4JHttpConnectorListener.java:129)
at org.wso2.msf4j.internal.MSF4JHttpConnectorListener$$Lambda$312/404129248.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
What is the reason for this error?
I tried changing the versions of the org.eclipse.paho.client.mqttv3-1.1.1.jar and siddhi-io-mqtt-3.0.2.jar files, but that did not help.
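For what it's worth, the exception comes from Paho's transport discovery: since Paho 1.2.x the client resolves URI schemes such as tcp:// through Java's ServiceLoader (the META-INF/services/org.eclipse.paho.client.mqttv3.spi.NetworkModuleFactory entries), so this error usually means those entries are not visible at runtime, for example because a second, older Paho jar (1.1.1 predates that SPI) shadows the one the extension loads. A hedged diagnostic sketch, assuming a Paho 1.2.x jar on the classpath:

import java.util.ServiceLoader;

import org.eclipse.paho.client.mqttv3.spi.NetworkModuleFactory;

public class PahoModuleCheck {
    public static void main(String[] args) {
        // Prints every MQTT transport factory visible on the classpath; an
        // empty listing reproduces the "no NetworkModule installed" failure.
        for (NetworkModuleFactory factory : ServiceLoader.load(NetworkModuleFactory.class)) {
            System.out.println(factory.getClass().getName());
        }
    }
}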
I am a beginner with Flume. When I tried to write a template to learn how to use Flume to post a file to HDFS in real time, I got a connection-refused error.
What I want the template to do: use Flume to collect the log created by Hive and post it to HDFS.
Here is my job conf:
# name sources, sinks, and channels
a2.sources=r2
a2.sinks=k2
a2.channels=c2
# source conf: specify the log file I want to watch
a2.sources.r2.type=exec
a2.sources.r2.command = tail -F /opt/module/hive/logs/hive.log
a2.sources.r2.shell=/bin/bash -c
# sink conf
a2.sinks.k2.type = hdfs
# connection properties
a2.sinks.k2.hdfs.path = hdfs://hadoop102/flume/%Y%m%d/%H
a2.sinks.k2.hdfs.filePrefix = hive_log-
a2.sinks.k2.hdfs.useLocalTimeStamp = true
a2.sinks.k2.hdfs.fileType = DataStream
a2.sinks.k2.hdfs.round = true
a2.sinks.k2.hdfs.roundValue = 1
a2.sinks.k2.hdfs.roundUnit = hour
a2.sinks.k2.hdfs.rollInterval = 600
a2.sinks.k2.hdfs.rollSize = 134217700
a2.sinks.k2.hdfs.batchSize = 1000
a2.sinks.k2.hdfs.rollCount = 0
a2.sinks.k2.hdfs.minBlockReplicas = 1
# Use a channel which buffers events in memory
a2.channels.c2.type = memory
a2.channels.c2.capacity = 1000
a2.channels.c2.transactionCapacity = 100
# Bind the source and sink to the channel
a2.sources.r2.channels = c2
a2.sinks.k2.channel = c2
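A typical command to launch this agent (the conf-file path here is hypothetical):
bin/flume-ng agent --conf conf --conf-file jobs/hive-to-hdfs.conf --name a2 -Dflume.root.logger=INFO,LOGFILE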
Error info from flume.log is below:
03 July 2019 00:21:41,002 INFO [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.BucketWriter.open:231) - Creating hdfs://hadoop102/flume/20190703/00/logs-.1562084496871.tmp
03 July 2019 00:21:41,132 WARN [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.HDFSEventSink.process:443) - HDFS IO error
java.net.ConnectException: Call From hadoop102/192.168.1.102 to hadoop102:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
at org.apache.hadoop.ipc.Client.call(Client.java:1479)
at org.apache.hadoop.ipc.Client.call(Client.java:1412)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy12.create(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:296)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy13.create(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1652)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1689)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1624)
at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448)
at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:444)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:890)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:787)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:776)
at org.apache.flume.sink.hdfs.HDFSDataStream.doOpen(HDFSDataStream.java:81)
at org.apache.flume.sink.hdfs.HDFSDataStream.open(HDFSDataStream.java:108)
at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:242)
at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:232)
at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:668)
at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:665)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
at org.apache.hadoop.ipc.Client.call(Client.java:1451)
... 34 more
It looks so weird that I get the expected file in HDFS if I change
a2.sinks.k2.hdfs.path = hdfs://hadoop102/flume/%Y%m%d/%H
to
a2.sinks.k2.hdfs.path = /flume/%Y%m%d/%H
I guess that when I do not specify hdfs://hadoop102, Flume loads the Hadoop configuration to get the cluster info.
By the way, I have seen the following info on the Flume official website:
hdfs.path – HDFS directory path (eg hdfs://namenode/flume/webdata/)
and an example given on the Flume website:
a1.channels = c1
a1.sinks = k1
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = /flume/events/%y-%m-%d/%H%M/%S
a1.sinks.k1.hdfs.filePrefix = events-
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
It also does not specify the Hadoop cluster address.
If the node on which you are running the Flume agent is part of the Hadoop ecosystem, i.e. the node is configured as a Hadoop client machine, then you can give the folder path directly without providing the scheme and NameNode information. If the node is not part of the ecosystem, then you need to provide the full path, like: hdfs://name-node:port/flume/....
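Applied to the configuration above, that means including the NameNode RPC port; the authoritative value is fs.defaultFS in core-site.xml (8020 is merely the client-side default, and 9000, used below, is an assumption):
a2.sinks.k2.hdfs.path = hdfs://hadoop102:9000/flume/%Y%m%d/%H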
1) I'm getting the below error in the wso2carbon logs when I try to configure the WSO2 APIM Analytics (2.1) server with an Oracle DB (12c). I have tried both ojdbc6.jar and ojdbc7.jar in the lib folder, but the error is still there.
error:
Caused by: java.lang.RuntimeException: ORA-28040: No matching authentication protocol
2) Is there any REST API available for WSO2 APIM Analytics, similar to the DAS server, to extract data?
full error:
ERROR {org.wso2.carbon.analytics.spark.core.AnalyticsTask} - Error while executing the scheduled task for the script: APIM_LAST_ACCESS_TIME_SCRIPT {org.wso2.carbon.analytics.spark.core.AnalyticsTask}
org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException: Exception in executing query create temporary table APILastAccessSummaryData using CarbonJDBC options (dataSource "WSO2AM_STATS_DB", tableName "API_LAST_ACCESS_TIME_SUMMARY", schema "tenantDomain STRING , apiPublisher STRING , api STRING , version STRING , userId STRING , context STRING , max_request_time LONG ", primaryKeys "tenantDomain,apiPublisher,api" )
at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:764)
at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:721)
at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
at org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:60)
at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: ORA-28040: No matching authentication protocol
Thanks,
Santosh
This was an issue identified in Oracle, and the workaround is to set SQLNET.ALLOWED_LOGON_VERSION=8 in the $crs_home/network/admin/sqlnet.ora file. [1]
[1] https://community.softwaregrp.com/t5/UCMDB-and-UD-Practitioners-Forum/ORA-28040-No-matching-authentication-protocol/m-p/253403
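For reference, the relevant sqlnet.ora entry looks like the following. Note that on 12c the setting was split into server- and client-side variants, so the _SERVER form may be the one that applies; verify against your Oracle version.

# sqlnet.ora (under network/admin)
SQLNET.ALLOWED_LOGON_VERSION=8
# on 12c, the equivalent server-side parameter:
SQLNET.ALLOWED_LOGON_VERSION_SERVER=8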
I have a really simple PySpark script that creates a DataFrame from some Parquet data on S3, then calls the count() method and prints out the number of records.
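A minimal sketch of the script (the bucket and prefix are hypothetical):

from pyspark.sql import SparkSession

# Build (or reuse) the Spark session provided by the EMR step.
spark = SparkSession.builder.appName("parquet-count").getOrCreate()

# Read the Parquet data from S3 and print the record count.
df = spark.read.parquet("s3://my-bucket/my-prefix/")
print(df.count())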
I run the script on an AWS EMR cluster, and I'm seeing the following strange WARN messages:
17/12/04 14:20:26 WARN ServletHandler:
javax.servlet.ServletException: java.util.NoSuchElementException: None.get
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:489)
at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
at org.spark_project.jetty.servlet.ServletHolder.handle(ServletHolder.java:845)
at org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1689)
at org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:164)
at org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
at org.spark_project.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at org.spark_project.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at org.spark_project.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
at org.spark_project.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at org.spark_project.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.spark_project.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:461)
at org.spark_project.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at org.spark_project.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.spark_project.jetty.server.Server.handle(Server.java:524)
at org.spark_project.jetty.server.HttpChannel.handle(HttpChannel.java:319)
at org.spark_project.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
at org.spark_project.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.spark_project.jetty.io.FillInterest.fillable(FillInterest.java:95)
at org.spark_project.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at org.spark_project.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at org.spark_project.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:347)
at scala.None$.get(Option.scala:345)
at org.apache.spark.status.api.v1.MetricHelper.submetricQuantiles(AllStagesResource.scala:313)
at org.apache.spark.status.api.v1.AllStagesResource$$anon$1.build(AllStagesResource.scala:178)
at org.apache.spark.status.api.v1.AllStagesResource$.taskMetricDistributions(AllStagesResource.scala:181)
at org.apache.spark.status.api.v1.OneStageResource$$anonfun$taskSummary$1.apply(OneStageResource.scala:71)
at org.apache.spark.status.api.v1.OneStageResource$$anonfun$taskSummary$1.apply(OneStageResource.scala:62)
at org.apache.spark.status.api.v1.OneStageResource$$anonfun$withStageAttempt$1.apply(OneStageResource.scala:130)
at org.apache.spark.status.api.v1.OneStageResource$$anonfun$withStageAttempt$1.apply(OneStageResource.scala:126)
at org.apache.spark.status.api.v1.OneStageResource.withStage(OneStageResource.scala:97)
at org.apache.spark.status.api.v1.OneStageResource.withStageAttempt(OneStageResource.scala:126)
at org.apache.spark.status.api.v1.OneStageResource.taskSummary(OneStageResource.scala:62)
at sun.reflect.GeneratedMethodAccessor153.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:144)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:161)
at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:205)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:99)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:389)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:347)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:102)
at org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:326)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:305)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1154)
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:473)
... 28 more
17/12/04 14:20:26 WARN HttpChannel: //ip-172-31-81-10.ec2.internal:4040/api/v1/applications/application_1512395256824_0002/stages/3/0/taskSummary?proxyapproved=true
javax.servlet.ServletException: java.util.NoSuchElementException: None.get
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:489)
at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
at org.spark_project.jetty.servlet.ServletHolder.handle(ServletHolder.java:845)
at org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1689)
at org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:164)
at org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
at org.spark_project.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at org.spark_project.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at org.spark_project.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
at org.spark_project.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at org.spark_project.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.spark_project.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:461)
at org.spark_project.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at org.spark_project.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.spark_project.jetty.server.Server.handle(Server.java:524)
at org.spark_project.jetty.server.HttpChannel.handle(HttpChannel.java:319)
at org.spark_project.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
at org.spark_project.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.spark_project.jetty.io.FillInterest.fillable(FillInterest.java:95)
at org.spark_project.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at org.spark_project.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at org.spark_project.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:347)
at scala.None$.get(Option.scala:345)
at org.apache.spark.status.api.v1.MetricHelper.submetricQuantiles(AllStagesResource.scala:313)
at org.apache.spark.status.api.v1.AllStagesResource$$anon$1.build(AllStagesResource.scala:178)
at org.apache.spark.status.api.v1.AllStagesResource$.taskMetricDistributions(AllStagesResource.scala:181)
at org.apache.spark.status.api.v1.OneStageResource$$anonfun$taskSummary$1.apply(OneStageResource.scala:71)
at org.apache.spark.status.api.v1.OneStageResource$$anonfun$taskSummary$1.apply(OneStageResource.scala:62)
at org.apache.spark.status.api.v1.OneStageResource$$anonfun$withStageAttempt$1.apply(OneStageResource.scala:130)
at org.apache.spark.status.api.v1.OneStageResource$$anonfun$withStageAttempt$1.apply(OneStageResource.scala:126)
at org.apache.spark.status.api.v1.OneStageResource.withStage(OneStageResource.scala:97)
at org.apache.spark.status.api.v1.OneStageResource.withStageAttempt(OneStageResource.scala:126)
at org.apache.spark.status.api.v1.OneStageResource.taskSummary(OneStageResource.scala:62)
at sun.reflect.GeneratedMethodAccessor153.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:144)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:161)
at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:205)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:99)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:389)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:347)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:102)
at org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:326)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:305)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1154)
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:473)
It does not seem to fail the job, though; I got the count returned successfully.
I just wonder if anyone knows why this happens and how to get rid of it.
Thanks
Those warning messages can be suppressed by adding the following lines to /etc/spark/conf/log4j.properties:
log4j.logger.org.spark_project.jetty.server.HttpChannel=ERROR
log4j.logger.org.spark_project.jetty.servlet.ServletHandler=ERROR
I didn't see any impact on performance or on job stability.
Now my logs are much more readable :)
For those who use AWS EMR with Spark: I also got the same error message when executing my Spark jobs on
emr-5.10.0
emr-5.11.0
emr-5.11.1
emr-5.12.0
Simply downgrading to emr-5.9.0 solved this problem for me.
Hope it helps.
I'm using Weka for a text classification task.
I created my data.arff file. It contains two attributes:
a text attribute
a class attribute
Then, the generated ARFF file is processed with the StringToWordVector:
java weka.filters.unsupervised.attribute.StringToWordVector -i data/weather.arff -o data/out.arff
Then, NaiveBayes is used:
java weka.classifiers.bayes.NaiveBayes -t data/out.arff -K
I get this error:
weka.core.UnsupportedAttributeTypeException: weka.classifiers.bayes.NaiveBayes: Cannot handle numeric class!
at weka.core.Capabilities.test(Capabilities.java:954)
at weka.core.Capabilities.test(Capabilities.java:1110)
at weka.core.Capabilities.test(Capabilities.java:1023)
at weka.core.Capabilities.testWithFail(Capabilities.java:1302)
at weka.classifiers.bayes.NaiveBayes.buildClassifier(NaiveBayes.java:213)
at weka.classifiers.Evaluation.evaluateModel(Evaluation.java:1076)
at weka.classifiers.Classifier.runClassifier(Classifier.java:312)
at weka.classifiers.bayes.NaiveBayes.main(NaiveBayes.java:944)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at weka.gui.SimpleCLIPanel$ClassRunner.run(SimpleCLIPanel.java:265)
Could anyone help me?
I'm stuck at this point.
It's exactly what it says: the classifier cannot handle numeric values for the class variable. If you intended the class variable to be nominal, change the numeric values to their equivalent text values.
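If the class ended up numeric only because of how the ARFF file was generated, one hedged fix is to convert it with Weka's NumericToNominal filter before training (the -R last range assumes the class is the final attribute):

java weka.filters.unsupervised.attribute.NumericToNominal -R last -i data/out.arff -o data/out-nominal.arff
java weka.classifiers.bayes.NaiveBayes -t data/out-nominal.arff -K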