DataStax Enterprise on AWS - Futures Timed Out After [120] when running Spark job - amazon-web-services

We are currently running into the following error when attempting to run a Spark job on DSE 4.8 Analytics:
ERROR 2016-04-11 20:59:42,825 UserGroupInformation.java:1128 - org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:ubuntu cause:java.util.concurrent.TimeoutException: Futures timed out after [120 seconds]
Exception in thread "main" java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.DseSecureRunner.<init>(DseSecureRunner.scala:24)
    at org.apache.spark.DseSecureRunner$.main(DseSecureRunner.scala:34)
    at org.apache.spark.DseSecureRunner.main(DseSecureRunner.scala)
Caused by: java.lang.reflect.UndeclaredThrowableException: Unknown exception in doAs
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1138)
    at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:67)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:146)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:245)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
    ... 7 more
Caused by: java.security.PrivilegedActionException: java.util.concurrent.TimeoutException: Futures timed out after [120 seconds]
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1125)
    ... 11 more
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [120 seconds]
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
    at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
    at scala.concurrent.Await$.result(package.scala:107)
    at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:97)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:159)
    at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:68)
    at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:67)
    ... 14 more
This error occurs on the two worker nodes, while the node running the driver works fine.
We have the following security group configuration for a cluster of 3 nodes created using the 2.6 flavor of the DataStax AMI.
My security group is taken from the documentation, with one minor exception:
The following port was ignored:
8983 Custom TCP Rule TCP 0.0.0.0/0 Solr port and Demo applications web site port (Portfolio, Search, Search log, Weather Sensors)
The only way to get around this error was to add the following inbound rule:
ALL TCP | TCP (6) | ALL | cluster-security-group (in my case, sg-bbc40aff)
This leads me to believe that some process is trying to communicate with nodes in the cluster via another port.
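For anyone scripting the same workaround, a minimal boto3 sketch of that rule follows; the group ID is the one from my cluster, so substitute your own (this only reproduces the workaround above, it is not a DataStax-recommended configuration):

import boto3

ec2 = boto3.client("ec2")
# Self-referencing rule: allow ALL TCP between members of the cluster's own
# security group. sg-bbc40aff is my cluster group ID; use yours.
ec2.authorize_security_group_ingress(
    GroupId="sg-bbc40aff",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 0,
        "ToPort": 65535,
        "UserIdGroupPairs": [{"GroupId": "sg-bbc40aff"}],
    }],
)

Using the group itself as the source at least keeps the cluster open only to its own members rather than to 0.0.0.0/0.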
http://docs.datastax.com/en/datastax_enterprise/4.8/datastax_enterprise/install/installAMIsecurity.html
Has anyone run into this problem running Spark jobs using DSE Analytics on AWS?
Thanks

Related

PySpark read JDBC giving errors. How to fix?

I am connecting to RDS MySQL using JDBC in PySpark. I have tried almost everything I found on Stack Overflow for debugging, but I am still unable to make it work.
from pyspark.sql import SparkSession

# mysql_jar holds the local path to the MySQL Connector/J jar
spark = SparkSession.builder.config("spark.jars", mysql_jar) \
    .master("local[*]").appName("PySpark_MySQL_test").getOrCreate()
df = spark.read.format("jdbc").option("url", "jdbc:mysql://hostname.amazonaws.com:1150/dbname?user=user_name&password=password") \
    .option("driver", "com.mysql.cj.jdbc.Driver").option("dbtable", "table_name").load()
I have tried the same connection details with Python's pymysql library; it connects and brings back the result.
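For reference, that pymysql check was roughly the following (host, port, and credentials mirror the placeholder values in the JDBC URL above):

import pymysql

# Same placeholder connection details as the JDBC URL above.
conn = pymysql.connect(
    host="hostname.amazonaws.com",
    port=1150,
    user="user_name",
    password="password",
    database="dbname",
)
with conn.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())  # connects and returns a row, unlike the Spark read
conn.close()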
But here I get the error below and am unable to solve it.
raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling o38.load.
: com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:827)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:447)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:237)
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:199)
at org.apache.spark.sql.execution.datasources.jdbc.connection.BasicConnectionProvider.getConnection(BasicConnectionProvider.scala:49)
at org.apache.spark.sql.execution.datasources.jdbc.connection.ConnectionProvider$.create(ConnectionProvider.scala:68)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$createConnectionFactory$1(JdbcUtils.scala:62)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:56)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.getSchema(JDBCRelation.scala:226)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:35)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:355)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:325)
at org.apache.spark.sql.DataFrameReader.$anonfun$load$3(DataFrameReader.scala:307)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:307)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:225)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure
I have experienced the same issue, and it now works for me. The root cause is that Spark uses the master node to open the connection to MySQL but uses the worker nodes to execute the tasks, so the driver can connect to MySQL while the executors raise the communication error. Based on this, open the security rules on the MySQL side so that all Spark nodes can connect to it.
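If MySQL/RDS sits behind a security group, a boto3 sketch of such a rule might look like the following; both group IDs here are hypothetical placeholders, and the port matches the JDBC URL in the question:

import boto3

ec2 = boto3.client("ec2")
# Allow the Spark workers' security group to reach MySQL on the RDS port.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical: group attached to RDS
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 1150,
        "ToPort": 1150,
        "UserIdGroupPairs": [{"GroupId": "sg-0fedcba9876543210"}],  # hypothetical: Spark nodes
    }],
)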
For anyone coming here for an answer while using Docker, give the solution below a try.
Use the following configuration:
source_df = spark.read.format('jdbc').options(
    url='jdbc:mysql://host.docker.internal:3306/superset?useSSL=false&allowPublicKeyRetrieval=true',
    driver='com.mysql.cj.jdbc.Driver',
    dbtable='table',
    user='root',
    password='root').load()
I tried localhost, 127.0.0.1, and even the IP address from docker inspect as the host, but none of them worked; changing it to host.docker.internal did.
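As a quick sanity check before involving Spark, you can verify from inside the container that the hostname resolves and the port is reachable with plain Python (assuming the same host and 3306 port as above):

import socket

# Fails fast if host.docker.internal does not resolve or the port is closed.
# On Linux engines this hostname may require
# --add-host=host.docker.internal:host-gateway on the container.
with socket.create_connection(("host.docker.internal", 3306), timeout=5) as s:
    print("MySQL reachable at", s.getpeername())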

AWS Glue spark job - Snowflake connection error

I am getting the error below while running an AWS Glue job using Spark.
Glue version 2.0, Spark 2.4, Python 3.
Could you please let me know if anyone has encountered a similar issue using AWS Glue and Snowflake.
2021-04-27 13:59:23,858 ERROR [main] glue.ProcessLauncher (Logging.scala:logError(91)): Exception in User Class
java.lang.reflect.UndeclaredThrowableException
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1862)
at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:64)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:188)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:281)
at org.apache.spark.executor.CoarseGrainedExecutorBackendPlugin$class.launch(CoarseGrainedExecutorBackendWrapper.scala:10)
at org.apache.spark.executor.CoarseGrainedExecutorBackendWrapper$$anon$1.launch(CoarseGrainedExecutorBackendWrapper.scala:15)
at org.apache.spark.executor.CoarseGrainedExecutorBackendWrapper.launch(CoarseGrainedExecutorBackendWrapper.scala:19)
at org.apache.spark.executor.CoarseGrainedExecutorBackendWrapper$.main(CoarseGrainedExecutorBackendWrapper.scala:5)
at org.apache.spark.executor.CoarseGrainedExecutorBackendWrapper.main(CoarseGrainedExecutorBackendWrapper.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.amazonaws.services.glue.SparkProcessLauncherPlugin$class.invoke(ProcessLauncher.scala:44)
at com.amazonaws.services.glue.ProcessLauncher$$anon$1.invoke(ProcessLauncher.scala:75)
at com.amazonaws.services.glue.ProcessLauncher.launch(ProcessLauncher.scala:114)
at com.amazonaws.services.glue.ProcessLauncher$.main(ProcessLauncher.scala:26)
at com.amazonaws.services.glue.ProcessLauncher.main(ProcessLauncher.scala)
Caused by: org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:226)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:101)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:201)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$2.run(SparkHadoopUtil.scala:65)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$2.run(SparkHadoopUtil.scala:64)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
... 17 more
Caused by: java.io.IOException: Failed to connect to /172.36.143.34:41447

Notary cannot install Corda service

I was trying to configure Business Network Operator services in my solution by adding the toolkit provided by R3 as a CorDapp dependency in my application. I am able to build the application, but when I run runnodes I get an error for the notary.
UPDATE
I am adding the log
[ERROR] 2020-09-04T14:21:15,399Z [main] internal.Node. - Unable to install Corda service com.r3.businessnetworks.membership.flows.bno.service.BNOConfigurationService - [errorCode=dfc7g6, moreInformationAt=https://errors.corda.net/OS/4.5/dfc7g6]
java.lang.reflect.InvocationTargetException: null
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_201]
[...]
at net.corda.node.Corda.main(Corda.kt:13) ~[corda-node-4.5.jar:?]
Caused by: java.lang.NullPointerException
at com.r3.businessnetworks.membership.flows.ConfigUtils.loadConfig(ConfigUtils.kt:16) ~[?:?]
at com.r3.businessnetworks.membership.flows.bno.service.BNOConfigurationService.<init>(BNOConfigurationService.kt:21) ~[?:?]
... 33 more
[ERROR] 2020-09-04T14:21:15,458Z [main] internal.NodeStartupLogging. - Exception during node startup - [errorCode=dfc7g6, moreInformationAt=https://errors.corda.net/OS/4.5/dfc7g6]
java.lang.reflect.InvocationTargetException: null
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_201]
[...]
at net.corda.cliutils.CordaCliWrapperKt.start(CordaCliWrapper.kt:89) ~[corda-tools-cliutils-4.5.jar:?]
at net.corda.node.Corda.main(Corda.kt:13) ~[corda-node-4.5.jar:?]
Caused by: java.lang.NullPointerException
at com.r3.businessnetworks.membership.flows.ConfigUtils.loadConfig(ConfigUtils.kt:16) ~[?:?]
at com.r3.businessnetworks.membership.flows.bno.service.BNOConfigurationService.<init>(BNOConfigurationService.kt:21) ~[?:?]
... 33 more
You are missing the configuration file of this CorDapp, as explained here; you must:
1. Create a config folder inside your node's cordapps folder (i.e. node-folder/cordapps/config).
2. Inside that folder, create a membership-service.conf file.
3. Inside that file, add:
// Whitelist of accepted BNOs. Attempt to communicate to not whitelisted
// BNO would result into an exception
bnoWhitelist = ["O=BNO,L=New York,C=US", "O=BNO,L=London,C=GB"]
// Name of the notary to use for BNO transactions such as membership approval
notaryName = "O=Notary,L=London,C=GB"
The CorDapp that you're using relies on a configuration file (the above 3 steps create that file); the NullPointerException is thrown when that file is missing. To understand more about CorDapp configuration files, read my article.
On a side note, according to this, the CorDapp that you're using will be deprecated at the end of September 2020.

Elastic Beanstalk 502 error when deploying war file to Tomcat

I've deployed a Shopizer war file to a Tomcat Elastic Beanstalk instance. I've configured a MySQL database, and as far as I can tell everything should be correct. The problem is that when I try to access the URL I get a 502 error. I'm aware Elastic Beanstalk times out after 60s, so I increased my timeout limit, but I still get the problem.
I've noticed this in my Catalina.out log. I know it's only a warning, but could it possibly point to my issue?
2018-05-02 06:02:51.220 WARN 3233 --- [-AdminTaskTimer] c.m.v.a.ThreadPoolAsynchronousRunner : com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector#4446a77 -- APPARENT DEADLOCK!!! Complete Status:
Managed Threads: 3
Active Threads: 3
Active Tasks:
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask#55aa1343
on thread: C3P0PooledConnectionPoolManager[identityToken->1br9tjp9v8puiagh4cnwb|550ee827]-HelperThread-#2
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask#326b931e
on thread: C3P0PooledConnectionPoolManager[identityToken->1br9tjp9v8puiagh4cnwb|550ee827]-HelperThread-#0
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask#7fc7b7d9
on thread: C3P0PooledConnectionPoolManager[identityToken->1br9tjp9v8puiagh4cnwb|550ee827]-HelperThread-#1
Pending Tasks:
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask#591e9ef6
Pool thread stack traces:
Thread[C3P0PooledConnectionPoolManager[identityToken->1br9tjp9v8puiagh4cnwb|550ee827]-HelperThread-#2,5,main]
java.net.PlainSocketImpl.socketConnect(Native Method)
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
java.net.Socket.connect(Socket.java:589)
com.mysql.cj.core.io.StandardSocketFactory.connect(StandardSocketFactory.java:202)
com.mysql.cj.mysqla.io.MysqlaSocketConnection.connect(MysqlaSocketConnection.java:57)
com.mysql.cj.mysqla.MysqlaSession.connect(MysqlaSession.java:122)
com.mysql.cj.jdbc.ConnectionImpl.connectWithRetries(ConnectionImpl.java:1619)
com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:1601)
com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:633)
com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:347)
com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:219)
com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:175)
com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:220)
com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:206)
com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:203)
com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1138)
com.mchange.v2.resourcepool.BasicResourcePool.doAcquireAndDecrementPendingAcquiresWithinLockOnSuccess(BasicResourcePool.java:1125)
com.mchange.v2.resourcepool.BasicResourcePool.access$700(BasicResourcePool.java:44)
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask.run(BasicResourcePool.java:1870)
com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:696)
Thread[C3P0PooledConnectionPoolManager[identityToken->1br9tjp9v8puiagh4cnwb|550ee827]-HelperThread-#0,5,main]
java.net.PlainSocketImpl.socketConnect(Native Method)
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
java.net.Socket.connect(Socket.java:589)
com.mysql.cj.core.io.StandardSocketFactory.connect(StandardSocketFactory.java:202)
com.mysql.cj.mysqla.io.MysqlaSocketConnection.connect(MysqlaSocketConnection.java:57)
com.mysql.cj.mysqla.MysqlaSession.connect(MysqlaSession.java:122)
com.mysql.cj.jdbc.ConnectionImpl.connectWithRetries(ConnectionImpl.java:1619)
com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:1601)
com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:633)
com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:347)
com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:219)
com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:175)
com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:220)
com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:206)
com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:203)
com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1138)
com.mchange.v2.resourcepool.BasicResourcePool.doAcquireAndDecrementPendingAcquiresWithinLockOnSuccess(BasicResourcePool.java:1125)
com.mchange.v2.resourcepool.BasicResourcePool.access$700(BasicResourcePool.java:44)
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask.run(BasicResourcePool.java:1870)
com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:696)
Thread[C3P0PooledConnectionPoolManager[identityToken->1br9tjp9v8puiagh4cnwb|550ee827]-HelperThread-#1,5,main]
java.net.PlainSocketImpl.socketConnect(Native Method)
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
java.net.Socket.connect(Socket.java:589)
com.mysql.cj.core.io.StandardSocketFactory.connect(StandardSocketFactory.java:202)
com.mysql.cj.mysqla.io.MysqlaSocketConnection.connect(MysqlaSocketConnection.java:57)
com.mysql.cj.mysqla.MysqlaSession.connect(MysqlaSession.java:122)
com.mysql.cj.jdbc.ConnectionImpl.connectWithRetries(ConnectionImpl.java:1619)
com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:1601)
com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:633)
com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:347)
com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:219)
com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:175)
com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:220)
com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:206)
com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:203)
com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1138)
com.mchange.v2.resourcepool.BasicResourcePool.doAcquireAndDecrementPendingAcquiresWithinLockOnSuccess(BasicResourcePool.java:1125)
com.mchange.v2.resourcepool.BasicResourcePool.access$700(BasicResourcePool.java:44)
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask.run(BasicResourcePool.java:1870)
com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:696)
2018-05-02 06:03:51.222 WARN 3233 --- [-AdminTaskTimer] c.m.v.a.ThreadPoolAsynchronousRunner : Task com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask#55aa1343 (in deadlocked PoolThread) failed to complete in maximum time 60000ms. Trying interrupt().
2018-05-02 06:03:51.222 WARN 3233 --- [-AdminTaskTimer] c.m.v.a.ThreadPoolAsynchronousRunner : Task com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask#326b931e (in deadlocked PoolThread) failed to complete in maximum time 60000ms. Trying interrupt().
2018-05-02 06:03:51.222 WARN 3233 --- [-AdminTaskTimer] c.m.v.a.ThreadPoolAsynchronousRunner : Task com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask#7fc7b7d9 (in deadlocked PoolThread) failed to complete in maximum time 60000ms. Trying interrupt().
I messed about with quite a few steps and haven't worked out which helped and which didn't. If I get a chance to go through that and give more specific help I will, but for now, here's what I did.
In sm-shop/src/main/resources/application.properties add:
server.port = 5000
Start by creating a database; you'll need its details for the database.properties file. I created a MySQL Aurora DB.
Add the database properties to sm-shop/src/main/resources/database.properties.
In AWS, create Elastic Beanstalk application and environment. I went with Tomcat so had to build a war file, which I deployed to the environment.
In Configuration > Software set the following:
Initial JVM heap size (Xms) = 1024m
Max JVM heap size (Xmx) = 1024m
XX:MaxPermSize = 256m
Add the following Environment Properties:
HIBERNATE_DIALECT = org.hibernate.dialect.MySQLDialect
JDBC_CONNECTION_STRING = jdbc:mysql://mydb.ptmjbhdur9pw.eu-west-2.rds.amazonaws.com:3306/SALESMANAGER?user=username&password=password&autoReconnect=true&useUnicode=true&characterEncoding=UTF-8&driverClass=com.mysql.cj.jdbc.Driver
SERVER_PORT = 5000
In Configuration > Modify instances:
Instance type = (at least)m1.small
EC2 security groups - I ticked the database security group here.
In Configuration > Modify capacity:
Environment type = Load balanced
In Configuration > Load balancer add the following listener:
Port = 8080
Protocol = HTTP
Instance Port = 8080
Instance Protocol = HTTP
When all that was done and I started the application without any apparent AWS issues, the application still wouldn't load, so I checked the Catalina log. It showed the same error as in https://groups.google.com/forum/#!searchin/shopizer/ru%7Csort:date/shopizer/hQjqp_5UswI/goVKf5BTCQAJ, so I made that change. The application now loads.
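For what it's worth, the environment properties from the steps above can also be set programmatically; here is a minimal boto3 sketch, with a placeholder environment name and only two of the properties shown:

import boto3

eb = boto3.client("elasticbeanstalk")
# Sets the same environment properties as Configuration > Software above.
eb.update_environment(
    EnvironmentName="shopizer-env",  # placeholder name
    OptionSettings=[
        {"Namespace": "aws:elasticbeanstalk:application:environment",
         "OptionName": "SERVER_PORT", "Value": "5000"},
        {"Namespace": "aws:elasticbeanstalk:application:environment",
         "OptionName": "HIBERNATE_DIALECT",
         "Value": "org.hibernate.dialect.MySQLDialect"},
    ],
)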
I hope that saves somebody some time (and grief).

AWS EMR Mapreduce failure

We have an installation of AWS EMR in a client environment. Encryption in transit and encryption at rest have been enabled using a security configuration. We continue to get the MapReduce errors below when we execute a simple Hive query.
Diagnostic Messages for this Task:
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#1
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:377)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.checkReducerHealth(ShuffleSchedulerImpl.java:366)
at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.copyFailed(ShuffleSchedulerImpl.java:288)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.openShuffleUrl(Fetcher.java:282)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:323)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193)
Please let me know if anyone has faced this error before.