Couchbase - ElasticSearch Java Heap memory - amazon-web-services

We have a Couchbase instance running on an Amazon Web Services server, and an Elasticsearch instance running on the same server.
The connection between the two is set up correctly, and replication runs fine until...
Out of the blue, we get the following error log from Elasticsearch:
[2013-08-29 21:27:34,947][WARN ][cluster.metadata ] [01-Thor] failed to dynamically update the mapping in cluster_state from shard
java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.ArrayUtil.grow(ArrayUtil.java:343)
at org.elasticsearch.common.io.FastByteArrayOutputStream.write(FastByteArrayOutputStream.java:103)
at org.elasticsearch.common.jackson.core.json.UTF8JsonGenerator._flushBuffer(UTF8JsonGenerator.java:1848)
at org.elasticsearch.common.jackson.core.json.UTF8JsonGenerator.writeString(UTF8JsonGenerator.java:436)
at org.elasticsearch.common.xcontent.json.JsonXContentGenerator.writeString(JsonXContentGenerator.java:84)
at org.elasticsearch.common.xcontent.XContentBuilder.field(XContentBuilder.java:314)
at org.elasticsearch.index.mapper.core.AbstractFieldMapper.doXContentBody(AbstractFieldMapper.java:601)
at org.elasticsearch.index.mapper.core.NumberFieldMapper.doXContentBody(NumberFieldMapper.java:286)
at org.elasticsearch.index.mapper.core.LongFieldMapper.doXContentBody(LongFieldMapper.java:338)
at org.elasticsearch.index.mapper.core.AbstractFieldMapper.toXContent(AbstractFieldMapper.java:595)
at org.elasticsearch.index.mapper.object.ObjectMapper.toXContent(ObjectMapper.java:920)
at org.elasticsearch.index.mapper.object.ObjectMapper.toXContent(ObjectMapper.java:852)
at org.elasticsearch.index.mapper.object.ObjectMapper.toXContent(ObjectMapper.java:920)
at org.elasticsearch.index.mapper.object.ObjectMapper.toXContent(ObjectMapper.java:852)
at org.elasticsearch.index.mapper.object.ObjectMapper.toXContent(ObjectMapper.java:920)
at org.elasticsearch.index.mapper.object.ObjectMapper.toXContent(ObjectMapper.java:852)
at org.elasticsearch.index.mapper.object.ObjectMapper.toXContent(ObjectMapper.java:920)
at org.elasticsearch.index.mapper.object.ObjectMapper.toXContent(ObjectMapper.java:852)
at org.elasticsearch.index.mapper.object.ObjectMapper.toXContent(ObjectMapper.java:920)
at org.elasticsearch.index.mapper.DocumentMapper.toXContent(DocumentMapper.java:700)
at org.elasticsearch.index.mapper.DocumentMapper.refreshSource(DocumentMapper.java:682)
at org.elasticsearch.index.mapper.DocumentMapper.<init>(DocumentMapper.java:342)
at org.elasticsearch.index.mapper.DocumentMapper$Builder.build(DocumentMapper.java:224)
at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:231)
at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:380)
at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:190)
at org.elasticsearch.cluster.metadata.MetaDataMappingService$2.execute(MetaDataMappingService.java:185)
at org.elasticsearch.cluster.service.InternalClusterService$2.run(InternalClusterService.java:229)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:95)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
[2013-08-29 21:27:56,948][WARN ][indices.ttl ] [01-Thor] failed to execute ttl purge
java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.ByteBlockPool$Allocator.getByteBlock(ByteBlockPool.java:66)
at org.apache.lucene.util.ByteBlockPool.nextBuffer(ByteBlockPool.java:202)
at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:319)
at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:274)
at org.apache.lucene.search.ConstantScoreAutoRewrite$CutOffTermCollector.collect(ConstantScoreAutoRewrite.java:131)
at org.apache.lucene.search.TermCollectingRewrite.collectTerms(TermCollectingRewrite.java:79)
at org.apache.lucene.search.ConstantScoreAutoRewrite.rewrite(ConstantScoreAutoRewrite.java:95)
at org.apache.lucene.search.MultiTermQuery$ConstantScoreAutoRewrite.rewrite(MultiTermQuery.java:220)
at org.apache.lucene.search.MultiTermQuery.rewrite(MultiTermQuery.java:288)
at org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:639)
at org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:686)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:309)
at org.elasticsearch.indices.ttl.IndicesTTLService.purgeShards(IndicesTTLService.java:186)
at org.elasticsearch.indices.ttl.IndicesTTLService.access$000(IndicesTTLService.java:65)
at org.elasticsearch.indices.ttl.IndicesTTLService$PurgerThread.run(IndicesTTLService.java:122)
[2013-08-29 21:29:23,919][WARN ][indices.ttl ] [01-Thor] failed to execute ttl purge
java.lang.OutOfMemoryError: Java heap space
We tried changing several memory settings, but we can't seem to get it right.
Has anyone experienced the same issue?

A few troubleshooting tips:
It is generally wise to dedicate one AWS instance solely to Elasticsearch, for predictable performance and easier debugging.
Monitor your memory usage with the Bigdesk plugin. It will show whether the memory bottleneck is actually in Elasticsearch, or whether it comes from the OS, simultaneous heavy querying and indexing, or something else unexpected.
Elasticsearch's Java heap should be set to around 50% of your box's total memory.
This gist from Shay Banon offers several approaches to solving memory problems in Elasticsearch.
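As a concrete illustration of the heap rule (an assumption based on the standard startup scripts of the Elasticsearch 0.90/1.x era this question is about): on an 8 GB box you would set the ES_HEAP_SIZE environment variable to 4g before starting the node, which sets both -Xms and -Xmx to the same value so the heap is never resized at runtime; the remaining RAM is left to the OS filesystem cache that Lucene relies on.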

Related

Collect one cell from pyspark Dataframe failed [duplicate]

I get the following error when I add --conf spark.driver.maxResultSize=2050 to my spark-submit command.
17/12/27 18:33:19 ERROR TransportResponseHandler: Still have 1 requests outstanding when connection from /XXX.XX.XXX.XX:36245 is closed
17/12/27 18:33:19 WARN Executor: Issue communicating with driver in heartbeater
org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:205)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:92)
at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$reportHeartBeat(Executor.scala:726)
at org.apache.spark.executor.Executor$$anon$2$$anonfun$run$1.apply$mcV$sp(Executor.scala:755)
at org.apache.spark.executor.Executor$$anon$2$$anonfun$run$1.apply(Executor.scala:755)
at org.apache.spark.executor.Executor$$anon$2$$anonfun$run$1.apply(Executor.scala:755)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1954)
at org.apache.spark.executor.Executor$$anon$2.run(Executor.scala:755)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Connection from /XXX.XX.XXX.XX:36245 closed
at org.apache.spark.network.client.TransportResponseHandler.channelInactive(TransportResponseHandler.java:146)
The reason for adding this configuration was the following error:
py4j.protocol.Py4JJavaError: An error occurred while calling o171.collectToPython.
: org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 16 tasks (1048.5 MB) is bigger than spark.driver.maxResultSize (1024.0 MB)
Therefore, I increased maxResultSize to 2.5 GB, but the Spark job fails anyway (with the error shown above).
How can I solve this issue?
It seems like the problem is that the amount of data you are trying to pull back to your driver is too large. Most likely you are using the collect method to retrieve all values from a DataFrame/RDD. The driver is a single process, and by collecting a DataFrame you pull all of the data you had distributed across the cluster back to one node. This defeats the purpose of distributing it! It only makes sense to do this after you have reduced the data down to a manageable amount.
You have two options:
If you really need to work with all that data, then you should keep it out on the executors. Use HDFS and Parquet to save the data in a distributed manner and use Spark methods to work with the data on the cluster instead of trying to collect it all back to one place.
If you really need to get the data back to the driver, examine whether you really need ALL of it. If you only need summary statistics, compute them on the executors before calling collect. And if you only need the top 100 results, then collect only the top 100 (see the sketch below).
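As a rough sketch of both options (Java Dataset API shown for illustration; the question uses PySpark, and the paths, column names and aggregation below are placeholders, not part of the original question):

import java.util.List;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.avg;
import static org.apache.spark.sql.functions.col;

public class CollectSafely {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("collect-safely").getOrCreate();

        // Option 1: keep the full data set distributed; write it out instead of collecting it.
        Dataset<Row> summary = spark.read().parquet("hdfs:///data/input")
                .groupBy("userId")
                .agg(avg("score").alias("avgScore"));
        summary.write().mode("overwrite").parquet("hdfs:///data/summary");

        // Option 2: only pull a small, already-reduced slice back to the driver.
        List<Row> top100 = summary.orderBy(col("avgScore").desc()).limit(100).collectAsList();
        top100.forEach(row -> System.out.println(row));

        spark.stop();
    }
}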
Update:
There is another, less obvious reason you can run into this error. Spark will try to send data back to the driver beyond just when you explicitly call collect: it also sends back accumulator results for each task if you are using accumulators, data for broadcast joins, and some small status data about each task. If you have LOTS of partitions (20k+ in my experience) you can sometimes see this error. This is a known issue, with some improvements made and more in the works.
The options for getting past this, if it is your issue, are (a configuration sketch follows the list):
Increase spark.driver.maxResultSize or set it to 0 for unlimited
If broadcast joins are the culprit, you can reduce spark.sql.autoBroadcastJoinThreshold to limit the size of broadcast join data
Reduce the number of partitions
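If these less obvious causes are the issue, a minimal configuration sketch might look like the following (Java API; the values are placeholders to illustrate the knobs, not recommendations):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class DriverResultTuning {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("driver-result-tuning")
                // 0 removes the limit entirely; only safe if the job cannot flood the driver.
                .config("spark.driver.maxResultSize", "0")
                // Lower the threshold, or use -1 to disable broadcast joins, if broadcast data is the culprit.
                .config("spark.sql.autoBroadcastJoinThreshold", "-1")
                .getOrCreate();

        // Fewer partitions means less per-task metadata flowing back to the driver.
        Dataset<Row> df = spark.read().parquet("hdfs:///data/input").coalesce(200);
        df.write().mode("overwrite").parquet("hdfs:///data/coalesced");

        spark.stop();
    }
}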
Cause: actions like RDD's collect() that send a big chunk of data to the driver
Solution:
set by SparkConf: conf.set("spark.driver.maxResultSize", "4g")
OR
set by spark-defaults.conf: spark.driver.maxResultSize 4g
OR
set when calling spark-submit: --conf spark.driver.maxResultSize=4g

How to solve stability problems in Google Dataflow

I have a Dataflow job that has been running stably for several months.
For the last 3 days or so, I've had problems with the job: it gets stuck after a certain amount of time, and the only thing I can do is stop the job and start a new one. This happened after 2, 6 and 24 hours of processing. Here is the latest exception:
java.lang.ExceptionInInitializerError
at org.apache.beam.runners.dataflow.worker.options.StreamingDataflowWorkerOptions$WindmillServerStubFactory.create (StreamingDataflowWorkerOptions.java:183)
at org.apache.beam.runners.dataflow.worker.options.StreamingDataflowWorkerOptions$WindmillServerStubFactory.create (StreamingDataflowWorkerOptions.java:169)
at org.apache.beam.sdk.options.ProxyInvocationHandler.returnDefaultHelper (ProxyInvocationHandler.java:592)
at org.apache.beam.sdk.options.ProxyInvocationHandler.getDefault (ProxyInvocationHandler.java:533)
at org.apache.beam.sdk.options.ProxyInvocationHandler.invoke (ProxyInvocationHandler.java:158)
at com.sun.proxy.$Proxy54.getWindmillServerStub (Unknown Source)
at org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.<init> (StreamingDataflowWorker.java:677)
at org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.fromDataflowWorkerHarnessOptions (StreamingDataflowWorker.java:562)
at org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.main (StreamingDataflowWorker.java:274)
Caused by: java.lang.RuntimeException: Loading windmill_service failed:
at org.apache.beam.runners.dataflow.worker.windmill.WindmillServer.<clinit> (WindmillServer.java:42)
Caused by: java.io.IOException: No space left on device
at sun.nio.ch.FileDispatcherImpl.write0 (Native Method)
at sun.nio.ch.FileDispatcherImpl.write (FileDispatcherImpl.java:60)
at sun.nio.ch.IOUtil.writeFromNativeBuffer (IOUtil.java:93)
at sun.nio.ch.IOUtil.write (IOUtil.java:65)
at sun.nio.ch.FileChannelImpl.write (FileChannelImpl.java:211)
at java.nio.channels.Channels.writeFullyImpl (Channels.java:78)
at java.nio.channels.Channels.writeFully (Channels.java:101)
at java.nio.channels.Channels.access$000 (Channels.java:61)
at java.nio.channels.Channels$1.write (Channels.java:174)
at java.nio.file.Files.copy (Files.java:2909)
at java.nio.file.Files.copy (Files.java:3027)
at org.apache.beam.runners.dataflow.worker.windmill.WindmillServer.<clinit> (WindmillServer.java:39)
It seems like there is no space left on a device, but shouldn't this be managed by Google? Or is this somehow an error in my job?
UPDATE:
The workflow is as follows:
Reading a large volume of data from PubSub (up to 1,500 messages/s)
Filtering some messages
Keying the data, applying a session window, and grouping by key
Sorting the data and doing calculations
Outputting the data to another PubSub topic
You can increase the storage capacity through a parameter of your pipeline: look at diskSizeGb on this page.
In addition, the more data you keep in memory, the more memory you need. This is the case for windows: if you never close them, or if you allow late data for too long, you need a lot of memory to keep all that data around.
Tune either your pipeline, or your machine type. Or both!
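A minimal sketch of the first suggestion, assuming the Beam Dataflow runner's standard pipeline options (the disk size and machine type below are placeholder values):

import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class LargerWorkers {
    public static void main(String[] args) {
        DataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
                .withValidation()
                .as(DataflowPipelineOptions.class);
        options.setDiskSizeGb(100);                    // bigger persistent disk per worker
        options.setWorkerMachineType("n1-standard-4"); // or a larger machine type for more memory

        Pipeline pipeline = Pipeline.create(options);
        // ... build the PubSub -> filter -> session window -> calculate -> PubSub pipeline here ...
        pipeline.run();
    }
}

The same options can also be passed on the command line as --diskSizeGb=100 and --workerMachineType=n1-standard-4.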

Corda - problem while executing flow with multiple output states

I'm trying to execute a Corda flow with 3000 output states (Java), but I get the following error:
[Thread-8 (ActiveMQ-IO-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$4#6a8da5c5)] impl.JournalImpl.run - appendAddRecord::java.lang.IllegalArgumentException: Record is too large to store 18603342 {}
java.lang.IllegalArgumentException: Record is too large to store 18603342
at org.apache.activemq.artemis.core.journal.impl.JournalImpl.switchFileIfNecessary(JournalImpl.java:2915) ~[artemis-journal-2.2.0.jar:2.2.0]
at org.apache.activemq.artemis.core.journal.impl.JournalImpl.appendRecord(JournalImpl.java:2640) ~[artemis-journal-2.2.0.jar:2.2.0]
at org.apache.activemq.artemis.core.journal.impl.JournalImpl.access$200(JournalImpl.java:88) ~[artemis-journal-2.2.0.jar:2.2.0]
at org.apache.activemq.artemis.core.journal.impl.JournalImpl$1.run(JournalImpl.java:778) [artemis-journal-2.2.0.jar:2.2.0]
at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:42) [artemis-commons-2.2.0.jar:2.2.0]
at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31) [artemis-commons-2.2.0.jar:2.2.0]
at org.apache.activemq.artemis.utils.actors.ProcessorBase$ExecutorTask.run(ProcessorBase.java:53) [artemis-commons-2.2.0.jar:2.2.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:1.8.0_181]
at java.lang.Thread.run(Unknown Source) [?:1.8.0_181]
To avoid this problem I divided the execution of the flow into several steps and call it n times (in this case 6), processing 500 output states in each execution.
This solution works, but is there a better or more efficient way to solve this problem?
Thank you in advance.
This error indicates that a message you are trying to send exceeds the network's max message size.
As of Corda 3.x, this max message size is hardcoded to 10MB (10,485,760 bytes).
In a future version of Corda, the network operator will be able to configure the max message size for the network as part of the network parameters.
The purpose of setting a max message size is to prevent large nodes from bullying smaller nodes by forcing them to process excessively large messages.
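If you stay on Corda 3.x, batching the output states (as you already do) is essentially the way to stay under the limit. A minimal sketch of that approach, assuming a hypothetical IssueBatchFlow that builds one transaction per batch (the flow and state names are placeholders, not Corda APIs):

import java.util.ArrayList;
import java.util.List;

public final class Batches {
    // Splits the full list of output states into chunks small enough to keep each transaction under the message limit.
    public static <T> List<List<T>> of(List<T> items, int batchSize) {
        List<List<T>> result = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            result.add(new ArrayList<>(items.subList(i, Math.min(i + batchSize, items.size()))));
        }
        return result;
    }
}

// Inside the initiating flow's call() method (sketch):
// for (List<MyState> batch : Batches.of(allOutputStates, 500)) {
//     subFlow(new IssueBatchFlow(batch));   // one transaction per batch of 500 output states
// }

The batch size is a trade-off: bigger batches mean fewer transactions and signatures, while smaller batches keep each message comfortably below the 10MB cap.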

WSO2 EI log about Java heap space

I called an endpoint and it responds with a large amount of data; unfortunately, the error message below shows up in the WSO2 Carbon log. How can I solve it? Thank you.
TID: [-1] [] [2018-02-26 17:48:47,869] ERROR {org.wso2.carbon.das.messageflow.data.publisher.data.MessageFlowObserverStore} - Error occurred while notifying the statistics observer {org.wso2.carbon.das.messageflow.data.publisher.data.MessageFlowObserverStore}
java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3236)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
at com.esotericsoftware.kryo.io.Output.flush(Output.java:181)
at com.esotericsoftware.kryo.io.Output.require(Output.java:160)
at com.esotericsoftware.kryo.io.Output.writeString_slow(Output.java:462)
at com.esotericsoftware.kryo.io.Output.writeString(Output.java:363)
at com.esotericsoftware.kryo.serializers.DefaultSerializers$StringSerializer.write(DefaultSerializers.java:191)
at com.esotericsoftware.kryo.serializers.DefaultSerializers$StringSerializer.write(DefaultSerializers.java:184)
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:628)
at com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:100)
at com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:40)
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:628)
at com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:100)
at com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:40)
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:628)
at com.esotericsoftware.kryo.serializers.MapSerializer.write(MapSerializer.java:113)
at com.esotericsoftware.kryo.serializers.MapSerializer.write(MapSerializer.java:39)
at com.esotericsoftware.kryo.Kryo.writeObject(Kryo.java:534)
at org.wso2.carbon.das.messageflow.data.publisher.publish.StatisticsPublisher.addEventData(StatisticsPublisher.java:116)
at org.wso2.carbon.das.messageflow.data.publisher.publish.StatisticsPublisher.process(StatisticsPublisher.java:67)
at org.wso2.carbon.das.messageflow.data.publisher.observer.DASMediationFlowObserver.updateStatistics(DASMediationFlowObserver.java:55)
at org.wso2.carbon.das.messageflow.data.publisher.data.MessageFlowObserverStore.notifyObservers(MessageFlowObserverStore.java:71)
at org.wso2.carbon.das.messageflow.data.publisher.services.MessageFlowReporterThread.processAndPublishEventList(MessageFlowReporterThread.java:225)
at org.wso2.carbon.das.messageflow.data.publisher.services.MessageFlowReporterThread.run(MessageFlowReporterThread.java:95)
From the out-of-memory error alone it is hard to say anything about the culprit. To find the actual root cause, you have to analyze the heap dump (WSO2 servers automatically create one at CARBON_HOME/repository/logs/heap-dump.hprof) with an analysis tool such as Eclipse MAT or JProfiler.
However, if the response message is large, there is a possibility that the server goes OOM because it builds and keeps the response message in memory. If you want to process large messages, you can tune the heap memory allocation as described in the documentation.
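As an illustration (the exact defaults vary by product version, so treat these values as assumptions and check your own installation): the heap is set through the JVM arguments in the server startup script, e.g. <EI_HOME>/bin/integrator.sh or wso2server.sh, where something like -Xms256m -Xmx1024m can be raised to -Xms2g -Xmx4g if the host has enough RAM.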

Data error (cyclic redundancy check) while logging transaction status using Bitronix transaction manager

The exception below occurred. Any possible explanations? My hunch is that it may be a problem with the filesystem.
Caused by: bitronix.tm.internal.BitronixSystemException: error logging status
at bitronix.tm.BitronixTransaction.setStatus(BitronixTransaction.java:400)
at bitronix.tm.BitronixTransaction.setStatus(BitronixTransaction.java:379)
at bitronix.tm.BitronixTransaction.setActive(BitronixTransaction.java:367)
at bitronix.tm.BitronixTransactionManager.begin(BitronixTransactionManager.java:126)
... 8 more
Caused by: java.io.IOException: Data error (cyclic redundancy check)
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:71)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:89)
at sun.nio.ch.IOUtil.write(IOUtil.java:60)
at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:195)
at bitronix.tm.journal.TransactionLogAppender.writeLog(TransactionLogAppender.java:121)
at bitronix.tm.journal.DiskJournal.log(DiskJournal.java:98)
at bitronix.tm.BitronixTransaction.setStatus(BitronixTransaction.java:389)
... 12 more
There are two possible reasons for such a problem: a bug in the BTM disk journal, or a hardware failure (could be RAM, disk, power supply, motherboard... almost anything).
Since the disk journal is, IMHO, quite a solid piece of software that has been running on many production systems for years, I'd suspect your hardware first.