I have a Spray-based HTTP application running on a remote machine and I would like to profile it using YourKit. I followed the instructions on the YourKit website and ended up hitting this error:
root@remote-worker:/home/joe/yjp-2016.02/bin# sh yjp.sh -attach 19960
Attaching to process 19960 using default options
[YourKit Java Profiler 2016.02-b36] Log file: /root/.yjp/log/yjp-23609.log
com.yourkit.runtime.PresentableException: com.sun.tools.attach.AttachNotSupportedException: Unable to open socket file: target process not responding or HotSpot VM not loaded
at com.yourkit.f.a.a(a:93)
at com.yourkit.f.b.attach(a:188)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.yourkit.Main$5.run(a:17)
Attach to a running JVM failed.
Solution: start JVM with the profiler agent instead of attaching it to a running JVM:
https://www.yourkit.com/docs/java/help/running_with_profiler.jsp
root@remote-worker:/home/joe/yjp-2016.02/bin#
The solution is printed at the end of the message: start the JVM with the profiler agent instead of attaching it to a running JVM: https://www.yourkit.com/docs/java/help/running_with_profiler.jsp
Attach works only with a HotSpot JVM, and the running JVM process must have sufficient permissions. Details are in "The attach mode limitations" at https://www.yourkit.com/docs/java/help/attach_agent.jsp
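A minimal sketch of that approach, assuming the application is started from a standalone jar (my-spray-app.jar is a placeholder) and using the Linux x64 agent library that ships under the profiler's bin/linux-x86-64 directory:
java -agentpath:/home/joe/yjp-2016.02/bin/linux-x86-64/libyjpagent.so -jar my-spray-app.jar
After restarting the application this way, the profiler UI can connect to it without relying on the attach mechanism at all.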
We have a Selenium hub deployed on a Kubernetes cluster on AWS and use ingress-traefik to expose the service.
We also have a Selenium Chrome node registered to this Selenium hub on Kubernetes.
When I look at the grid console page, I can see the Chrome node attached to the hub.
But when I trigger my automation suite through Jenkins, I get the error message below:
org.openqa.selenium.remote.UnreachableBrowserException: Could not start a new session. Possible causes are invalid address of the remote server or browser start-up failure.
Build info: version: '3.141.59', revision: 'e82be7d358', time: '2018-11-14T08:17:03'
System info: host: 'xxxxx', ip: 'x.x.x.x', os.name: 'Linux', os.arch: 'amd64', os.version: '4.14.165-103.209.amzn1.x86_64', java.version: '1.8.0_221'
Driver info: driver.version: RemoteWebDriver
at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:573)
at org.openqa.selenium.remote.RemoteWebDriver.startSession(RemoteWebDriver.java:213)
at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:131)
at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:144)
at stepDefns.SetUp.setUpBrowser(SetUp.java:145)
at stepDefns.OrderSpecTabSteps.user_sets_the_browser_to_and_version(OrderSpecTabSteps.java:25)
at ✽.Given user sets the browser to "chrome" and version "69"(/data/jenkins_home/workspace/FPSAutomation/src/test/java/features/NonRes.feature:4)
Caused by: java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
In the logs I can see: Caused by: java.net.SocketTimeoutException: connect timed out
In my Java code I am using the node URL below, which is HTTPS:
String nodeURL = "https://<hostname>/wd/hub";
ChromeOptions remoteOptions = new ChromeOptions();
driver=new RemoteWebDriver(new URL(nodeURL), remoteOptions);
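Since the failure is a low-level connect timeout rather than a Selenium-level error, it is worth first confirming that the hub's status endpoint is reachable from the machine running the tests. A minimal sketch using the same placeholder hostname (replace <hostname> with the real ingress host):
import java.net.HttpURLConnection;
import java.net.URL;

// Quick reachability check against the grid hub before starting a session.
URL statusUrl = new URL("https://<hostname>/wd/hub/status");   // placeholder host
HttpURLConnection conn = (HttpURLConnection) statusUrl.openConnection();
conn.setConnectTimeout(10_000);   // fail fast instead of hanging
conn.setReadTimeout(10_000);
System.out.println("Hub responded with HTTP " + conn.getResponseCode());
If this check also times out, the problem is network routing or the traefik ingress rather than the RemoteWebDriver setup itself.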
Please let me know how to resolve this issue. Thanks in advance.
We are currently running into the following error when attempting to run a Spark job on DSE 4.8 Analytics:
ERROR 2016-04-11 20:59:42,825 UserGroupInformation.java:1128 - org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:ubuntu cause:java.util.concurrent.TimeoutException: Futures timed out after [120 seconds]
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.DseSecureRunner.(DseSecureRunner.scala:24)
at org.apache.spark.DseSecureRunner$.main(DseSecureRunner.scala:34)
at org.apache.spark.DseSecureRunner.main(DseSecureRunner.scala)
Caused by: java.lang.reflect.UndeclaredThrowableException: Unknown exception in doAs
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1138)
at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:67)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:146)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:245)
at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
... 7 more
Caused by: java.security.PrivilegedActionException: java.util.concurrent.TimeoutException: Futures timed out after [120 seconds]
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1125)
... 11 more
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [120 seconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:107)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:97)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:159)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:68)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:67)
... 14 more
This error occurs on the two worker nodes, while the node running the driver is fine.
We have the following Security Group configuration for a cluster of 3 nodes created using the 2.6 flavor of the DataStax AMI.
Copied from the documentation, my Security Group looks like this, with one minor exception:
the following port rule was left out:
Port 8983 (Custom TCP Rule, TCP, source 0.0.0.0/0): Solr port and demo applications web site port (Portfolio, Search, Search log, Weather Sensors)
The only way to get around this error was to add the following rule:
Type: ALL TCP, Protocol: TCP (6), Port Range: ALL, Source: cluster-security-group (using the picture as reference, this would be sg-bbc40aff)
This leads me to believe that some process is trying to communicate with nodes in the cluster on another port.
http://docs.datastax.com/en/datastax_enterprise/4.8/datastax_enterprise/install/installAMIsecurity.html
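If the extra traffic is Spark's randomly chosen driver and block-manager ports (they are picked at run time unless configured), a tighter alternative to the ALL TCP rule would be to pin those ports and open only them between the nodes. A minimal sketch of the idea, assuming a plain Spark job submitted with dse spark-submit; the port numbers and app name are just examples:
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class PinnedPortsJob {
    public static void main(String[] args) {
        // Pin the ports Spark would otherwise pick at random, so the
        // security group only needs these specific ports open between nodes.
        SparkConf conf = new SparkConf()
                .setAppName("pinned-ports-job")            // example name
                .set("spark.driver.port", "7078")          // example port
                .set("spark.blockManager.port", "7079");   // example port
        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... actual job logic would go here ...
        sc.stop();
    }
}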
Has anyone run into this problem running Spark jobs with DSE Analytics on AWS?
Thanks
In a Hoplon project, when executing boot development, I get these errors:
I/O exception (java.net.SocketException) caught when connecting to the target host: Network is unreachable
Failed to collect dependencies for [#<Dependency org.clojure:clojure:jar:1.6.0 (compile)> #<Dependency tailrecursion:boot.core:jar:2.5.1 (compile)> #<Dependency tailrecursion:boot.task:jar:2.2.4 (compile)> #<Dependency tailrecursion:hoplon:jar:5.10.23
Failed to read artifact descriptor for org.clojure:clojure:jar:1.6.0
↳ Caused by: Could not transfer artifact org.clojure:clojure:pom:1.6.0 from/to http://repo1.maven.org/maven2/ (http://repo1.maven.org/maven2/): Connection to http://repo1.maven.org refused
I have boot installed on my machine (Ubuntu 12.04).
Any suggestions?
I'm using a web client for a web service. The client stub is generated with Axis 2. Everything is deployed on WebSphere 7.0.
When deployed on my developer machine (a WebSphere instance run inside IBM RAD Eclipse), I can connect to the remote web service. On the test machine, however, this error is thrown:
java.lang.NoClassDefFoundError: com.ibm.ws.wstx.handler.WSATGenerator (initialization failure)
at java.lang.J9VMInternals.initialize(J9VMInternals.java:140)
at com.ibm.ws.wstx.WSTXClientTCMImpl.handleInbound(WSTXClientTCMImpl.java:100)
at com.ibm.ws.wstx.WSTXClientTCMImpl.cleanupContext(WSTXClientTCMImpl.java:81)
at org.apache.axis2.util.ThreadContextMigratorUtil.performContextCleanup(ThreadContextMigratorUtil.java:192)
at org.apache.axis2.jaxws.core.controller.impl.AxisInvocationController.postExecute(AxisInvocationController.java:657)
at org.apache.axis2.jaxws.core.controller.impl.AxisInvocationController.execute(AxisInvocationController.java:589)
at org.apache.axis2.jaxws.core.controller.impl.AxisInvocationController.doInvoke(AxisInvocationController.java:130)
at org.apache.axis2.jaxws.core.controller.impl.InvocationControllerImpl.invoke(InvocationControllerImpl.java:93)
at org.apache.axis2.jaxws.client.proxy.JAXWSProxyHandler.invokeSEIMethod(JAXWSProxyHandler.java:364)
at org.apache.axis2.jaxws.client.proxy.JAXWSProxyHandler.invoke(JAXWSProxyHandler.java:185)
at $Proxy94.getList(Unknown Source)
The EAR deployed there was built by me on my development machine, so it's exactly the same code. I therefore suspect a configuration issue, but I have no idea which part of the configuration could be responsible for it.
So my question is: what is this WSATGenerator, and in which jar should it be available? Is it a standard WebSphere library, or must it be configured manually? What configuration differences could cause this error to be thrown on the test server but not on my machine?
I'm running Windows 7; the test machine is on Unix. Both machines are 64-bit.
--edit--
Before the NoClassDefFoundError, an ExceptionInInitializerError is thrown:
java.lang.ExceptionInInitializerError
at java.lang.J9VMInternals.initialize(J9VMInternals.java:222)
at com.ibm.ws.wstx.handler.WSATGenerator.<clinit>(WSATGenerator.java:127)
at java.lang.J9VMInternals.initializeImpl(Native Method)
at java.lang.J9VMInternals.initialize(J9VMInternals.java:200)
at com.ibm.ws.wstx.WSTXClientTCMImpl.migrateThreadToContext(WSTXClientTCMImpl.java:61)
at org.apache.axis2.util.ThreadContextMigratorUtil.performMigrationToContext(ThreadContextMigratorUtil.java:163)
at org.apache.axis2.jaxws.core.controller.impl.AxisInvocationController.preExecute(AxisInvocationController.java:608)
at org.apache.axis2.jaxws.core.controller.impl.AxisInvocationController.execute(AxisInvocationController.java:570)
... 82 more
Caused by: java.lang.ClassCastException: com.systinet.jaxrpc.rpc.ServiceFactoryImpl incompatible with com.ibm.wsspi.webservices.rpc.ServiceFactory
at com.ibm.ws.Transaction.wstx.WSATServices$1.run(WSATServices.java:83)
at com.ibm.ws.security.util.AccessController.doPrivileged(AccessController.java:63)
at com.ibm.ws.Transaction.wstx.WSATServices.<clinit>(WSATServices.java:74)
at java.lang.J9VMInternals.initializeImpl(Native Method)
at java.lang.J9VMInternals.initialize(J9VMInternals.java:200)
... 89 more
WHAT AM I TRYING TO DO
Trying to set up VCAP on an Ubuntu Server VM on my machine by following the steps at https://github.com/cloudfoundry/vcap/
WHAT IS THE ISSUE
Things seemed to be working fine, but at step 5 (https://github.com/cloudfoundry/vcap/#step-5-validate-that-you-can-connect-and-tests-pass) I got an exception while trying to execute the following command: vmc target api.vcap.me
The exception that I see on my console is:
Host is not available or is not valid: 'http://api.vcap.me'
Would you like see the response? [yN]: y
HTTP exception: Errno::ECONNREFUSED:No connection could be made because the target machine actively refused it. - connect(2)
ANY OTHER RELEVANT INFO
For some earlier experiments I was using Micro Cloud Foundry (provided as a download by Cloud Foundry). I am having issues pointing my vmc at this Micro Cloud as well.
On the Micro Cloud console I see the following message:
To access your Micro Cloud Foundry instance, use:
vmc target http://api.agoel.cloudfoundry.me
When I run this vmc command from the Ruby command prompt set up on my Windows 7 machine, I get the following error:
Host is not available or is not valid: 'http://api.agoel.cloudfoundry.me'
Would you like see the response? [yN]: y
HTTP exception: Errno::ETIMEDOUT:A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. - connect(2)
WHAT DOES VMC INFO DISPLAY
I ran the vmc info command at the command prompt. It displayed the following info:
VMware's Cloud Application Platform
For support visit support.cloudfoundry.com
Target: http://api.cloudfoundry.com (v0.999)
Client: v0.3.18
User: ankitgoel1987@gmail.com
Usage: Memory (1.1G of 2.0G total)
Services (2 of 16 total)
Apps (2 of 20 total)
MY SETUP DETAILS
Windows 7 running on 4 GB RAM
Micro Cloud Foundry already installed (this was done as part of some other exercise; my current experiment requires me to set up an Ubuntu server with VCAP on it, so this Micro Cloud should not really matter)
vmc 0.3.18 (installed on my Windows 7 machine)
ruby 1.9.2p290 (2011-07-09) [i386-mingw32]
Add the following entry to your hosts file:
IP_of_ubuntu_server vcap.me api.vcap.me
If you want to avoid editing your hosts file every time you deploy a new app then, depending on which virtualisation platform you are using, you may be able to forward all traffic on port 80 from your own computer on to the VM.
*.vcap.me is set to resolve to 127.0.0.1, so this is an ideal solution. To do this, set the network settings to NAT rather than Bridged (maybe you have done this already) and then forward port 80 to the IP of the guest OS. In VMware Fusion, for example, this is as simple as editing a settings file; a sketch is shown below.
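As a concrete illustration of that settings file (the exact path and guest IP are assumptions and vary between Fusion versions), the NAT configuration has an [incomingtcp] section where host ports can be mapped to the guest:
# e.g. /Library/Preferences/VMware Fusion/vmnet8/nat.conf (path may differ by version)
[incomingtcp]
# forward port 80 on the host to port 80 on the VM (guest IP is an example)
80 = 192.168.111.128:80
After changing the file, Fusion's networking typically needs to be restarted for the mapping to take effect.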