Can a Jenkins slave be enslaved by multiple master instances?

I'm trying to have a slave that is hooked up to two masters. However, when I run the Jenkins JNLP agent I keep getting socket errors. Has anyone had experience with this and been able to work around it?
C:\Documents and Settings\Administrator>java -jar "C:\Documents and Settings\Administrator\Desktop\test2-slave.jar" -jnlpUrl http://test2.site.com:8080/computer/Slave1/slave-agent.jnlp -secret b4161b716c31a8985d8eb2760fdc6a404693bbf86c7262973554877759ea1db1
Dec 25, 2013 10:50:16 AM hudson.remoting.jnlp.Main$CuiListener <init>
INFO: Jenkins agent is running in headless mode.
Dec 25, 2013 10:50:16 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Locating server among [http://test2.site.com:8080/]
Dec 25, 2013 10:50:16 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Connecting to test2.site.com:7777
Dec 25, 2013 10:50:47 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Connecting to test2.site.com:7777 (retrying:2)
java.net.ConnectException: Connection timed out: connect
at java.net.TwoStacksPlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(Unknown Source)
at java.net.AbstractPlainSocketImpl.connectToAddress(Unknown Source)
at java.net.AbstractPlainSocketImpl.connect(Unknown Source)
at java.net.PlainSocketImpl.connect(Unknown Source)
at java.net.SocksSocketImpl.connect(Unknown Source)
at java.net.Socket.connect(Unknown Source)
at java.net.Socket.connect(Unknown Source)
at java.net.Socket.<init>(Unknown Source)
at java.net.Socket.<init>(Unknown Source)
at hudson.remoting.Engine.connect(Engine.java:333)
at hudson.remoting.Engine.run(Engine.java:222)
I know that I can connect to the Jenkins box on port 8080 (I checked).

You need two separate agent JAR files and secret keys, one for each master; I see only one in the question.
Once you have both, you can start them one after the other so that the same slave connects to two different masters.
Also note that you need a Java version newer than 7.
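As a rough sketch of what that looks like (the hostnames, JAR names, and secrets below are placeholders, not values from the question), download each master's agent JAR and secret from its own slave-agent.jnlp page and launch the two processes side by side, each from its own working directory:
cd C:\jenkins\master1
java -jar master1-slave.jar -jnlpUrl http://master1.example.com:8080/computer/Slave1/slave-agent.jnlp -secret <secret-from-master1>
cd C:\jenkins\master2   (in a second console)
java -jar master2-slave.jar -jnlpUrl http://master2.example.com:8080/computer/Slave1/slave-agent.jnlp -secret <secret-from-master2>
Each master then sees the machine as its own node; just keep the two processes in separate directories so they do not share a workspace. Note also that the agent in your log is timing out on the JNLP TCP port 7777, not on 8080, so that port has to be reachable on each master as well.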

Related

PySpark read JDBC giving errors. How to fix?

I am connecting to RDS MySQL using JDBC in PySpark. I have tried almost everything I found on Stack Overflow for debugging, but I am still unable to make it work.
spark = SparkSession.builder.config("spark.jars", mysql_jar) \
    .master("local[*]").appName("PySpark_MySQL_test").getOrCreate()
df = spark.read.format("jdbc") \
    .option("url", "jdbc:mysql://hostname.amazonaws.com:1150/dbname?user=user_name&password=password") \
    .option("driver", "com.mysql.cj.jdbc.Driver").option("dbtable", "table_name").load()
I have tried the same connection details with the pymysql library in Python; it connects and returns results.
But here I get the error below and am unable to solve it.
raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling o38.load.
: com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:827)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:447)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:237)
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:199)
at org.apache.spark.sql.execution.datasources.jdbc.connection.BasicConnectionProvider.getConnection(BasicConnectionProvider.scala:49)
at org.apache.spark.sql.execution.datasources.jdbc.connection.ConnectionProvider$.create(ConnectionProvider.scala:68)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$createConnectionFactory$1(JdbcUtils.scala:62)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:56)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.getSchema(JDBCRelation.scala:226)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:35)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:355)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:325)
at org.apache.spark.sql.DataFrameReader.$anonfun$load$3(DataFrameReader.scala:307)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:307)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:225)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure
I have experienced the same issue; it works for me now. The core reason is that Spark uses the driver (master) node to open the connection to MySQL but uses the worker nodes to execute the tasks, so you can connect to MySQL from one machine and still get a communications error from the others. Based on this, open the security rules on the MySQL side so that all Spark nodes can connect to it.
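For RDS MySQL that usually means an inbound rule on the database's security group covering every Spark node, not just the driver. As a minimal sketch with the AWS CLI (the security group ID and CIDR below are placeholders, and the port should match the one used in the JDBC URL):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3306 --cidr 10.0.0.0/16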
For anyone coming here looking for an answer when using Docker, give the solution below a try.
Use the following configuration:
source_df = spark.read.format('jdbc').options(
    url='jdbc:mysql://host.docker.internal:3306/superset?useSSL=false&allowPublicKeyRetrieval=true',
    driver='com.mysql.cj.jdbc.Driver',
    dbtable='table',
    user='root',
    password='root').load()
I tried localhost, 127.0.0.1, and even the IP address from docker inspect as the host, but none of them worked; then I changed it to host.docker.internal and it worked.
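Note that host.docker.internal is built into Docker Desktop on Windows and macOS; on a plain Linux engine you would have to map it yourself when starting the container, for example (assuming Docker 20.10 or newer):
docker run --add-host host.docker.internal:host-gateway ...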

Google Cloud LocalDatastoreHelper failing to connect

LocalDatastoreHelper from the Google Cloud API started failing on one Jenkins worker VM, but not on other Jenkins worker VMs or on various development machines. This known issue looks very similar but has a slightly different stacktrace.
(I am using Google Cloud SDK 202.0.0 with cloud-datastore-emulator 2.0.0)
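For context, the failure happens with the usual emulator pattern, roughly like this minimal sketch (the kind and property names are invented for illustration); the put() at the end is the call that times out in the trace below:
import com.google.cloud.datastore.Datastore;
import com.google.cloud.datastore.Entity;
import com.google.cloud.datastore.Key;
import com.google.cloud.datastore.testing.LocalDatastoreHelper;

public class LocalDatastoreSmokeTest {
    public static void main(String[] args) throws Exception {
        LocalDatastoreHelper helper = LocalDatastoreHelper.create();
        helper.start();  // starts the local Datastore emulator process
        Datastore datastore = helper.getOptions().getService();
        Key key = datastore.newKeyFactory().setKind("Task").newKey("sample");
        datastore.put(Entity.newBuilder(key).set("done", false).build());  // this put() talks to the emulator over HTTP
        helper.stop();
    }
}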
com.google.cloud.datastore.DatastoreException: Deadline exceeded Deadline exceeded
java.net.SocketTimeoutException: connect timed out connect timed out
java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at sun.net.www.http.HttpClient.New(HttpClient.java:339)
at sun.net.www.http.HttpClient.New(HttpClient.java:357)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1220)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1156)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:984)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1334)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1309)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:77)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:981)
at com.google.datastore.v1.client.RemoteRpc.call(RemoteRpc.java:183)
at com.google.datastore.v1.client.Datastore.commit(Datastore.java:87)
at com.google.cloud.datastore.spi.v1.HttpDatastoreRpc.commit(HttpDatastoreRpc.java:153)
at com.google.cloud.datastore.DatastoreImpl$4.call(DatastoreImpl.java:485)
at com.google.cloud.datastore.DatastoreImpl$4.call(DatastoreImpl.java:482)
at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:89)
at com.google.cloud.RetryHelper.run(RetryHelper.java:74)
at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:51)
at com.google.cloud.datastore.DatastoreImpl.commit(DatastoreImpl.java:481)
at com.google.cloud.datastore.DatastoreImpl.commitMutation(DatastoreImpl.java:475)
at com.google.cloud.datastore.DatastoreImpl.put(DatastoreImpl.java:435)
at com.google.cloud.datastore.DatastoreHelper.put(DatastoreHelper.java:54)
at com.google.cloud.datastore.DatastoreImpl.put(DatastoreImpl.java:410)
This turned out to be a bug in Google's Cloud library. I opened this ticket, and eventually it was resolved, so this problem is probably obsolete now.

Gradle build of Android app in VSTS failing after running out of memory

I have a Gradle build in VSTS that is building an Android app, and it's failing with the error below. Does the build machine really have too little memory, or should I change some settings in gradle.properties, e.g. the org.gradle.jvmargs setting?
:app:transformDexArchiveWithExternalLibsDexMergerForDebug
Expiring Daemon because JVM Tenured space is exhausted
Problem in daemon expiration check
org.gradle.internal.event.ListenerNotificationException: Failed to notify daemon expiration listener.
at org.gradle.internal.event.AbstractBroadcastDispatch.dispatch(AbstractBroadcastDispatch.java:86)
at org.gradle.internal.event.DefaultListenerManager$EventBroadcast$ListenerDispatch.dispatch(DefaultListenerManager.java:341)
at org.gradle.internal.event.DefaultListenerManager$EventBroadcast.dispatch(DefaultListenerManager.java:152)
at org.gradle.internal.event.DefaultListenerManager$EventBroadcast.dispatch(DefaultListenerManager.java:126)
at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy3.onExpirationEvent(Unknown Source)
at org.gradle.launcher.daemon.server.Daemon$DaemonExpirationPeriodicCheck.run(Daemon.java:271)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.lang.ClassLoader.findLoadedClass0(Native Method)
at java.lang.ClassLoader.findLoadedClass(ClassLoader.java:1038)
at java.lang.ClassLoader.loadClass(ClassLoader.java:406)
at java.lang.ClassLoader.loadClass(ClassLoader.java:411)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.gradle.launcher.daemon.server.DaemonRegistryUpdater.onExpire(DaemonRegistryUpdater.java:86)
at org.gradle.launcher.daemon.server.Daemon$DefaultDaemonExpirationListener.onExpirationEvent(Daemon.java:303)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.internal.event.DefaultListenerManager$ListenerDetails.dispatch(DefaultListenerManager.java:371)
at org.gradle.internal.event.DefaultListenerManager$ListenerDetails.dispatch(DefaultListenerManager.java:353)
at org.gradle.internal.event.AbstractBroadcastDispatch.dispatch(AbstractBroadcastDispatch.java:58)
... 16 more
Expiring Daemon because JVM Tenured space is exhausted
Daemon will be stopped at the end of the build after running out of JVM memory
:app:transformDexArchiveWithExternalLibsDexMergerForDebug FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:transformDexArchiveWithExternalLibsDexMergerForDebug'.
> GC overhead limit exceeded
After specifying org.gradle.jvmargs=-Xmx2048M -XX\:MaxHeapSize\=32g in gradle.properties, the build started working.
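For reference, the whole change is one line in gradle.properties (the values are the ones from the answer above; size them to the memory your build agent actually has):
org.gradle.jvmargs=-Xmx2048M -XX\:MaxHeapSize\=32g
Note that -XX:MaxHeapSize is the same setting as -Xmx, so with both present the later value should win; a single -Xmx sized for the agent is usually enough.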

DataStax Enterprise on AWS - Futures Timed Out After [120] when running Spark job

We are currently running into the following error when attempting to run a Spark job on DSE 4.8 Analytics:
ERROR 2016-04-11 20:59:42,825 UserGroupInformation.java:1128 - org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:ubuntu cause:java.util.concurrent.TimeoutException: Futures timed out after [120 seconds]
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.DseSecureRunner.<init>(DseSecureRunner.scala:24)
at org.apache.spark.DseSecureRunner$.main(DseSecureRunner.scala:34)
at org.apache.spark.DseSecureRunner.main(DseSecureRunner.scala)
Caused by: java.lang.reflect.UndeclaredThrowableException: Unknown exception in doAs
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1138)
at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:67)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:146)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:245)
at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
... 7 more
Caused by: java.security.PrivilegedActionException: java.util.concurrent.TimeoutException: Futures timed out after [120 seconds]
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1125)
... 11 more
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [120 seconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:107)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:97)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:159)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:68)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:67)
... 14 more
This error occurs on the two worker nodes, while the node running the driver runs fine.
We have the following Security Group configuration for a cluster of 3 nodes created using the DataStax AMI of the 2.6 flavor.
Scraped from the documentation, my Security Group looks like that, with one minor exception: the following port rule was left out:
8983 Custom TCP Rule TCP 0.0.0.0/0 Solr port and Demo applications web site port (Portfolio, Search, Search log, Weather Sensors)
The only way to get around this error was to add the following rule:
ALL TCP | TCP (6) | ALL | cluster-security-group (using the picture as reference, this would be sg-bbc40aff)
Which leads me to believe that some process is trying to communicate with nodes in the cluster via another port.
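As a sketch, that rule can be added with the AWS CLI roughly like this (sg-bbc40aff is the cluster's own security group from above, so all TCP ports are opened only to other cluster members, not to the internet):
aws ec2 authorize-security-group-ingress --group-id sg-bbc40aff --protocol tcp --port 0-65535 --source-group sg-bbc40aff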
http://docs.datastax.com/en/datastax_enterprise/4.8/datastax_enterprise/install/installAMIsecurity.html
Has anyone run into this problem running Spark jobs with DSE Analytics on AWS?
Thanks

WSO2: Error while deploying carbon application

I have downloaded WSO2 Developer Studio, downloaded Carbon 4.2, and added it as a server in Developer Studio. I started Carbon, went to the Carbon admin console, and added the following features repository:
http://dist.wso2.org/p2/carbon/releases/turing/
and installed the Enterprise Service Bus. I created an ESB Config project and added a REST API artefact. I deployed it successfully and it worked. However, when I remove it using "Add and Remove..", it tells me it has been removed, but in fact it has not. Hence, when I restart the server I get "Error while deploying carbon application" even though nothing is shown as deployed on the server. I then can no longer deploy anything, including new projects. Selecting "Clean" and then "Publish" on the server does not fix the problem.
INFO {org.wso2.carbon.application.deployer.internal.ApplicationManager} - Deploying Carbon Application : getAPI_1.0.0.car...
[2014-07-10 11:48:23,380] ERROR {org.wso2.carbon.application.deployer.CappAxis2Deployer} - Error while deploying carbon application C:\Users\Ashley\Downloads\wso2carbon-4.2.0\wso2carbon-4.2.0\repository\deployment\server\carbonapps\getAPI_1.0.0.car
java.lang.NullPointerException
at java.util.Hashtable.put(Unknown Source)
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(AbstractSynapseArtifactDeployer.java:146)
at org.wso2.carbon.application.deployer.synapse.SynapseAppDeployer.deployArtifacts(SynapseAppDeployer.java:100)
at org.wso2.carbon.application.deployer.internal.ApplicationManager.deployCarbonApp(ApplicationManager.java:251)
at org.wso2.carbon.application.deployer.CappAxis2Deployer.deploy(CappAxis2Deployer.java:71)
at org.apache.axis2.deployment.repository.util.DeploymentFileData.deploy(DeploymentFileData.java:136)
at org.apache.axis2.deployment.DeploymentEngine.doDeploy(DeploymentEngine.java:810)
at org.apache.axis2.deployment.repository.util.WSInfoList.update(WSInfoList.java:144)
at org.apache.axis2.deployment.RepositoryListener.update(RepositoryListener.java:377)
at org.apache.axis2.deployment.RepositoryListener.checkServices(RepositoryListener.java:254)
at org.apache.axis2.deployment.RepositoryListener.startListener(RepositoryListener.java:371)
at org.apache.axis2.deployment.scheduler.SchedulerTask.checkRepository(SchedulerTask.java:59)
at org.apache.axis2.deployment.scheduler.SchedulerTask.run(SchedulerTask.java:67)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.runAxisDeployment(CarbonDeploymentSchedulerTask.java:65)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.run(CarbonDeploymentSchedulerTask.java:110)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.runAndReset(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)