Sometimes when running statements through lein repl I get the following error:
Exception in thread "Thread-3" clojure.lang.ExceptionInfo: Subprocess failed {:exit-code 137}
at clojure.core$ex_info.invokeStatic(core.clj:4617)
at clojure.core$ex_info.invoke(core.clj:4617)
at leiningen.core.eval$fn__5732.invokeStatic(eval.clj:264)
at leiningen.core.eval$fn__5732.invoke(eval.clj:260)
at clojure.lang.MultiFn.invoke(MultiFn.java:233)
at leiningen.core.eval$eval_in_project.invokeStatic(eval.clj:366)
at leiningen.core.eval$eval_in_project.invoke(eval.clj:356)
at leiningen.repl$server$fn__11838.invoke(repl.clj:243)
at clojure.lang.AFn.applyToHelper(AFn.java:152)
at clojure.lang.AFn.applyTo(AFn.java:144)
at clojure.core$apply.invokeStatic(core.clj:646)
at clojure.core$with_bindings_STAR_.invokeStatic(core.clj:1881)
at clojure.core$with_bindings_STAR_.doInvoke(core.clj:1881)
at clojure.lang.RestFn.invoke(RestFn.java:425)
at clojure.lang.AFn.applyToHelper(AFn.java:156)
at clojure.lang.RestFn.applyTo(RestFn.java:132)
at clojure.core$apply.invokeStatic(core.clj:650)
at clojure.core$bound_fn_STAR_$fn__4671.doInvoke(core.clj:1911)
at clojure.lang.RestFn.invoke(RestFn.java:397)
at clojure.lang.AFn.run(AFn.java:22)
at java.lang.Thread.run(Thread.java:745)
What does exit code 137 mean?
I am using vim-fireplace.
Thanks!
An exit code above 128 means that the process died because it received a signal (exitCode = 128 + signalNumber). In this case it was signal 9 (SIGKILL), possibly sent because your system is running out of memory.
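For illustration, here is a minimal sketch of that decoding in plain Java (nothing Leiningen-specific about it):

public class ExitCodes {
    public static void main(String[] args) {
        int exitCode = 137;                                // the code Leiningen reported
        int signal = exitCode > 128 ? exitCode - 128 : 0;  // 137 - 128 = 9 -> SIGKILL
        System.out.println("killed by signal " + signal);
    }
}

If it is the OOM killer, the kernel log (dmesg) should contain a corresponding oom-kill entry.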
My application is designed in such a way that whenever someone tries to start it, the application will close other instances of itself (note that this is working as expected and should not be changed).
I am using Akka and one of my Actors is a PersistentActor that uses leveldb.
When the application starts it locks <path-to-lock>/LOCK; when it starts again, leveldb cannot lock the file, so the PersistentActor cannot start.
After some inspection I found that the leveldb Actor class is LeveldbJournal and it starts under the system guardian with path akka://actor-system/system/akka.persistence.journal.leveldb.
I would like leveldb to restart itself until the file can be locked or a maximum-retries limit has been reached.
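To make the desired behaviour concrete, here is a rough sketch of what I have in mind (a hypothetical pre-start check using plain Java NIO file locking, not the journal actor itself; the retry count and delay are made up):

import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

public class WaitForJournalLock {
    public static void main(String[] args) throws Exception {
        int maxRetries = 10;
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            try (RandomAccessFile raf = new RandomAccessFile("<path-to-lock>/LOCK", "rw");
                 FileLock lock = raf.getChannel().tryLock()) {
                if (lock != null) {
                    return; // lock is free: release it and let the actor system start the journal
                }
            }
            Thread.sleep(1000); // a previous instance is still shutting down, retry
        }
        throw new IllegalStateException("LevelDB LOCK still held after " + maxRetries + " attempts");
    }
}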
Logs:
[ERROR] [02/04/2019 15:39:25.731] [operator-actor-system-akka.actor.default-dispatcher-5] [akka://operator-actor-system/system/akka.persistence.journal.leveldb] Unable to acquire lock on '<path-to-lock>/LOCK'
akka.actor.ActorInitializationException: akka://operator-actor-system/system/akka.persistence.journal.leveldb: exception during creation
at akka.actor.ActorInitializationException$.apply(Actor.scala:180)
at akka.actor.ActorCell.create(ActorCell.scala:607)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:461)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:483)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:282)
at akka.dispatch.Mailbox.run(Mailbox.scala:223)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Unable to acquire lock on '<path-to-lock>/LOCK'
at org.iq80.leveldb.impl.DbLock.<init>(DbLock.java:55)
at org.iq80.leveldb.impl.DbImpl.<init>(DbImpl.java:167)
at org.iq80.leveldb.impl.Iq80DBFactory.open(Iq80DBFactory.java:59)
at akka.persistence.journal.leveldb.LeveldbStore$class.preStart(LeveldbStore.scala:178)
at akka.persistence.journal.leveldb.LeveldbJournal.preStart(LeveldbJournal.scala:23)
at akka.actor.Actor$class.aroundPreStart(Actor.scala:510)
at akka.persistence.journal.leveldb.LeveldbJournal.aroundPreStart(LeveldbJournal.scala:23)
at akka.actor.ActorCell.create(ActorCell.scala:590)
... 7 more
When the PersistentActor restarts:
[ERROR] [02/04/2019 15:39:57.168] [operator-actor-system-akka.actor.default-dispatcher-16] [akka://operator-actor-system/user/controller/view-manager/records-service-api-supervisor/records-service-api] Persistence failure when replaying events for persistenceId [record-service-persistence-actor]. Last known sequence number [0] (akka.persistence.RecoveryTimedOut)
Thanks,
Ido Sorozon
P.S. Versions:
Scala: 2.11.8
Akka: 2.4.19
I am upgrading Solr to 5.3.1, and I am getting the following error when I run specs on semaphoreci:
RSolr::Error::Http:
RSolr::Error::Http - 500 Internal Server Error
Error: {msg=SolrCore 'default' is not available due to init failure: Error opening new
searcher,trace=org.apache.solr.common.SolrException: SolrCore
'default' is not available due to init failure: Error
opening new searcher
at org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:974)
at org.apache.solr.servlet.HttpSolrCall.init(HttpSolrCall.java:250)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:417)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:214)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
URI: http://localhost:8981/solr/default/update?wt=json
Request Headers: {"Content-Type"=>"application/json"}
Request Data: "[{\"id\":\"Contact 1\",\"type\":[\"Contact\",\"ActiveRecord::Base\"],\"class_name\":\"Contact\",\"first_name_text\":\"Danial\",\"last_name_text\":\"Ullrich\"}]"
Backtrace: /home/runner/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/rsolr-2.0.2/lib/rsolr/client.rb:195:in
rescue in execute'
/home/runner/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/rsolr-2.0.2/lib/rsolr/client.rb:185:in
execute'
/home/runner/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/rsolr-2.0.2/lib/rsolr/client.rb:180:in
send_and_receive'
/home/runner/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/sunspot_rails-2.2.7/lib/sunspot/rails/solr_instrumentation.rb:16:in
block in send_and_receive_with_as_instrumentation'
/home/runner/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/activesupport-4.2.7.1/lib/active_support/notifications.rb:164:in
block in instrument'
/home/runner/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/activesupport-4.2.7.1/lib/active_support/notifications/instrumenter.rb:20:in
instrument'
/home/runner/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/activesupport-4.2.7.1/lib/active_support/notifications.rb:164:in
instrument'
/home/runner/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/sunspot_rails-2.2.7/lib/sunspot/rails/solr_instrumentation.rb:15:in
send_and_receive_with_as_instrumentation'
(eval):2:in post'
/home/runner/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/rsolr-2.0.2/lib/rsolr/client.rb:83:in
update'
/home/runner/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/rsolr-2.0.2/lib/rsolr/client.rb:102:in
add'
# (eval):2:in post'
# ./spec/controllers/contacts_controller_spec.rb:319:in block (3 levels) in <top (required)>'
# ------------------
# --- Caused by: ---
# Faraday::ClientError:
# the server responded with status 500
# (eval):2:in post'
Solr Logs
1152 ERROR (coreLoadExecutor-6-thread-2) [ x:development] o.a.s.c.CoreContainer Error creating core [development]: Index locked for write for core 'development'. Solr now longer supports forceful unlocking via 'unlockOnStartup'. Please verify locks manually!
org.apache.solr.common.SolrException: Index locked for write for core 'development'. Solr now longer supports forceful unlocking via 'unlockOnStartup'. Please verify locks manually!
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:820)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:659)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:723)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:443)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:434)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:210)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.lucene.store.LockObtainFailedException: Index locked for write for core 'development'. Solr now longer supports forceful unlocking via 'unlockOnStartup'. Please verify locks manually!
at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:528)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:761)
Any help would be much appreciated.
A write.lock file might be present under the solr/{environment}/data/index directory due to an unclean shutdown. Removing the write.lock file will fix the issue.
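For example, with the core from the logs ('development') that would be something like the following, run while Solr is stopped (adjust the path to your sunspot layout):

rm solr/development/data/index/write.lock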
I was having this issue even when write.lock was not present. Here's what solved it for me:
1) Delete the entire /solr directory
2) Run ps -ef | grep solr to see any Solr processes still running
3) Run kill -9 [solr pid] for each process found in step 2
4) Run rake sunspot:solr:start and then rake sunspot:solr:reindex
Hope this helps!
In my application I use Infinispan as a distributed cache.
I work with 3 application servers running WildFly 9.2. On each of them a job is executed whose work is just to validate some cache items. If the validation fails, the job removes the cache as it is not valid any more.
The removal code is quite simple:
if (somecondition) {
    cacheManager.removeCache(sessionCacheName);
}
I realized that when all three servers are running (so there are 3 jobs that concurrently execute the remove operation), I systematically get this exception:
19:43:00,005 WARN [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (OOB-20,ws-7-aor-57542) ISPN000220: Problems un-marshalling remote command from byte buffer
java.lang.NullPointerException
at org.infinispan.commands.RemoteCommandsFactory.fromStream(RemoteCommandsFactory.java:219)
at org.infinispan.marshall.exts.ReplicableCommandExternalizer.fromStream(ReplicableCommandExternalizer.java:107)
at org.infinispan.marshall.exts.CacheRpcCommandExternalizer.readObject(CacheRpcCommandExternalizer.java:155)
at org.infinispan.marshall.exts.CacheRpcCommandExternalizer.readObject(CacheRpcCommandExternalizer.java:65)
at org.infinispan.marshall.core.ExternalizerTable$ExternalizerAdapter.readObject(ExternalizerTable.java:436)
at org.infinispan.marshall.core.ExternalizerTable.readObject(ExternalizerTable.java:227)
at org.infinispan.marshall.core.JBossMarshaller$ExternalizerTableProxy.readObject(JBossMarshaller.java:153)
at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:354)
at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)
at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:41)
at org.infinispan.commons.marshall.jboss.AbstractJBossMarshaller.objectFromObjectStream(AbstractJBossMarshaller.java:134)
at org.infinispan.marshall.core.VersionAwareMarshaller.objectFromByteBuffer(VersionAwareMarshaller.java:101)
at org.infinispan.commons.marshall.AbstractDelegatingMarshaller.objectFromByteBuffer(AbstractDelegatingMarshaller.java:80)
at org.infinispan.remoting.transport.jgroups.MarshallerAdapter.objectFromBuffer(MarshallerAdapter.java:28)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:298)
at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:460)
at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:377)
at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:250)
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:675)
at org.jgroups.JChannel.up(JChannel.java:739)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1029)
at org.jgroups.protocols.FRAG2.up(FRAG2.java:165)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:383)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:394)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:1042)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:234)
at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:435)
at org.jgroups.protocols.pbcast.NAKACK2.deliver(NAKACK2.java:961)
at org.jgroups.protocols.pbcast.NAKACK2.handleMessage(NAKACK2.java:843)
at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:618)
at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:155)
at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:200)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:297)
at org.jgroups.protocols.MERGE3.up(MERGE3.java:288)
at org.jgroups.protocols.Discovery.up(Discovery.java:291)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1577)
at org.jgroups.protocols.TP$MyHandler.run(TP.java:1796)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
on one server, while this one:
19:44:00,199 ERROR [stderr] (DefaultQuartzScheduler_Worker-1) Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from ws-7-aor-36158, see cause for remote stack trace
19:44:00,200 ERROR [stderr] (DefaultQuartzScheduler_Worker-1) at org.infinispan.remoting.transport.AbstractTransport.checkResponse(AbstractTransport.java:46)
19:44:00,211 ERROR [stderr] (DefaultQuartzScheduler_Worker-1) at org.infinispan.remoting.transport.AbstractTransport.parseResponseAndAddToResponseList(AbstractTransport.java:71)
19:44:00,211 ERROR [stderr] (DefaultQuartzScheduler_Worker-1) at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:586)
19:44:00,212 ERROR [stderr] (DefaultQuartzScheduler_Worker-1) at org.infinispan.manager.DefaultCacheManager.removeCache(DefaultCacheManager.java:492)
appears on the other two.
This error disappears when there is only one application server instance running, so it is clearly related to the concurrency.
What am I missing?
The removeCache() method was only intended as an admin operation to be called from a JMX/RHQ console, so concurrent calls weren't much of a concern.
The good news is that concurrent calls will work in Infinispan 8.1+/WildFly 10, which include the fix for ISPN-5756.
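Until you can upgrade, one possible workaround (just a sketch, assuming cacheManager is an EmbeddedCacheManager and that it is acceptable for a single node to perform the cleanup) is to let only the cluster coordinator issue the removal, so the call is never made concurrently from several nodes:

import org.infinispan.manager.EmbeddedCacheManager;

public class CacheCleanup {
    // Only the coordinator removes the cache; the other nodes skip the call,
    // so removeCache() is never invoked concurrently across the cluster.
    static void removeIfInvalid(EmbeddedCacheManager cacheManager, String sessionCacheName, boolean somecondition) {
        if (somecondition && cacheManager.isCoordinator()) {
            cacheManager.removeCache(sessionCacheName);
        }
    }
}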
I am trying to run one of my processes through callgrind. One of the child processes (which I need to trace) calls into libhdfs, and when run through callgrind this throws an exception:
java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.fs.viewfs.ViewFileSystem could not be instantiated
at java.util.ServiceLoader.fail(ServiceLoader.java:224)
at java.util.ServiceLoader.access$100(ServiceLoader.java:181)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:377)
at java.util.ServiceLoader$1.next(ServiceLoader.java:445)
at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2602)
Caused by: java.lang.ExceptionInInitializerError
at org.apache.hadoop.fs.viewfs.ViewFileSystem.<init>(ViewFileSystem.java:134)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at java.lang.Class.newInstance(Class.java:374)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:373)
... 2 more
Caused by: java.lang.IllegalArgumentException: java.lang.ClassFormatError: Illegal UTF8 string in constant pool in class file com/sun/proxy/$Proxy7
at java.lang.reflect.Proxy$ProxyClassFactory.apply(Proxy.java:685)
at java.lang.reflect.Proxy$ProxyClassFactory.apply(Proxy.java:592)
at java.lang.reflect.WeakCache$Factory.get(WeakCache.java:244)
at java.lang.reflect.WeakCache.get(WeakCache.java:141)
at java.lang.reflect.Proxy.getProxyClass0(Proxy.java:455)
at java.lang.reflect.Proxy.newProxyInstance(Proxy.java:738)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.getProxyForCallback(MetricsSystemImpl.java:315)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:311)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:237)
at org.apache.hadoop.metrics2.MetricsSystem.register(MetricsSystem.java:60)
at org.apache.hadoop.security.UserGroupInformation$UgiMetrics.create(UserGroupInformation.java:120)
at org.apache.hadoop.security.UserGroupInformation.<clinit>(UserGroupInformation.java:236)
... 9 more
When running normally I do not get this exception. Any idea what's going on? I am new to valgrind/callgrind.
It appears that this is an issue with the JVM: it generates a lot of SIGSEGVs during its normal course of operation, which valgrind doesn't like. I overcame it by passing the following parameter to callgrind: --vex-iropt-register-updates=allregs-at-mem-access.
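For reference, the full invocation then looks something like this (./my_program stands in for your actual process; --trace-children=yes makes valgrind follow the child processes, which matters here since the call into libhdfs happens in a child):

valgrind --tool=callgrind --vex-iropt-register-updates=allregs-at-mem-access --trace-children=yes ./my_program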
I am trying to get the HandViewer Java sample project running. When run, the application crashes after 1-2 seconds and fires the following exceptions!
Please note that I have managed to run the UserViewers.java sample without problems, and I am running on OS X 10.8.3 with NiTE-MacOSX-x64-2.2, OpenNI-MacOSX-x64-2.2, and an Asus XtionPro Live.
Exception in thread "Thread-17" java.util.NoSuchElementException: Unknown pixel format: 0
at org.openni.PixelFormat.fromNative(PixelFormat.java:30)
at org.openni.VideoMode.<init>(VideoMode.java:38)
at com.primesense.nite.NativeMethods.niteReadHandTrackerFrame(Native Method)
at com.primesense.nite.HandTracker.readFrame(HandTracker.java:139)
at com.primesense.nite.Samples.HandViewer.HandViewer.onNewFrame(HandViewer.java:69)
at com.primesense.nite.HandTracker.onFrameReady(HandTracker.java:360)
Exception in thread "AWT-EventQueue-0" java.lang.NullPointerException
at com.primesense.nite.Samples.HandViewer.HandViewer.paint(HandViewer.java:55)
at javax.swing.JComponent.paintChildren(JComponent.java:884)
at javax.swing.JComponent.paint(JComponent.java:1046)
at javax.swing.JComponent._paintImmediately(JComponent.java:5106)
at javax.swing.JComponent.paintImmediately(JComponent.java:4890)
at javax.swing.RepaintManager$3.run(RepaintManager.java:814)
at javax.swing.RepaintManager$3.run(RepaintManager.java:802)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.AccessControlContext$1.doIntersectionPrivilege(AccessControlContext.java:87)
at javax.swing.RepaintManager.paintDirtyRegions(RepaintManager.java:802)
at javax.swing.RepaintManager.paintDirtyRegions(RepaintManager.java:745)
at javax.swing.RepaintManager.prePaintDirtyRegions(RepaintManager.java:725)
at javax.swing.RepaintManager.access$1000(RepaintManager.java:46)
at javax.swing.RepaintManager$ProcessingRunnable.run(RepaintManager.java:1684)
at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:209)
at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:708)
at java.awt.EventQueue.access$400(EventQueue.java:82)
at java.awt.EventQueue$2.run(EventQueue.java:669)
at java.awt.EventQueue$2.run(EventQueue.java:667)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.AccessControlContext$1.doIntersectionPrivilege(AccessControlContext.java:87)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:678)
at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:296)
at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:211)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:201)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:196)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:188)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:122)
This probably won't help you much because it's already been a while... but I have had a similar problem with my Kinect. I resolved it by reinstalling everything and switching to OpenNI 1.5.4. Everything worked after that. Good luck.