I ran the following commands in Pig on the Google n-grams dataset:
inp = LOAD 'link to file' AS (ngram:chararray, year:int, occurences:float, books:float);
filter_input = FILTER inp BY (occurences >= 400) AND (books >= 8);
groupinp = GROUP filter_input BY ngram;
sum_occ = FOREACH groupinp GENERATE FLATTEN(group) as ngram, SUM(filter_input.occurences) / SUM(filter_input.books) AS ntry;
roundto = FOREACH sum_occ GENERATE sum_occ.ngram, ROUND_TO( sum_occ.ntry , 2 );
However, when I run the following, instead of a result I get this output:
DUMP roundto;
18/04/06 01:46:03 WARN newplan.BaseOperatorPlan: Encountered Warning IMPLICIT_CAST_TO_FLOAT 2 time(s).
18/04/06 01:46:03 INFO pigstats.ScriptState: Pig features used in the script: GROUP_BY,FILTER
18/04/06 01:46:03 INFO data.SchemaTupleBackend: Key [pig.schematuple] was not set... will not generate code.
18/04/06 01:46:03 INFO optimizer.LogicalPlanOptimizer: {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, ConstantCalculator, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, NestedLimitOptimizer, PartitionFilterOptimizer, PredicatePushdownOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
18/04/06 01:46:03 INFO tez.TezLauncher: Tez staging directory is /tmp/temp-336429202 and resources directory is /tmp/temp-336429202
18/04/06 01:46:03 INFO plan.TezCompiler: File concatenation threshold: 100 optimistic? false
18/04/06 01:46:03 INFO util.CombinerOptimizerUtil: Choosing to move algebraic foreach to combiner
18/04/06 01:46:03 INFO builtin.PigStorage: Using PigTextInputFormat
18/04/06 01:46:03 INFO input.FileInputFormat: Total input files to process : 1
18/04/06 01:46:03 INFO util.MapRedUtil: Total input paths to process : 1
18/04/06 01:46:03 INFO util.MapRedUtil: Total input paths (combined) to process : 1
18/04/06 01:46:03 INFO hadoop.MRInputHelpers: NumSplits: 1, SerializedSize: 408
18/04/06 01:46:03 INFO tez.TezJobCompiler: Local resource: joda-time-2.9.4.jar
18/04/06 01:46:03 INFO tez.TezJobCompiler: Local resource: pig-0.17.0-core-h2.jar
18/04/06 01:46:03 INFO tez.TezJobCompiler: Local resource: antlr-runtime-3.4.jar
18/04/06 01:46:03 INFO tez.TezJobCompiler: Local resource: automaton-1.11-8.jar
18/04/06 01:46:03 INFO tez.TezDagBuilder: For vertex - scope-141: parallelism=1, memory=1536, java opts=-Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx1229m -Dlog4j.configuratorClass=org.apache.tez.common.TezLog4jConfigurator -Dlog4j.configuration=tez-container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dtez.root.logger=INFO,CLA
18/04/06 01:46:03 INFO tez.TezDagBuilder: Processing aliases: filter_input,groupinp,inp,sum_occ
18/04/06 01:46:03 INFO tez.TezDagBuilder: Detailed locations: inp[1,6],inp[-1,-1],filter_input[2,15],sum_occ[4,10],groupinp[3,11]
18/04/06 01:46:03 INFO tez.TezDagBuilder: Pig features in the vertex:
18/04/06 01:46:03 INFO tez.TezDagBuilder: Set auto parallelism for vertex scope-142
18/04/06 01:46:03 INFO tez.TezDagBuilder: For vertex - scope-142: parallelism=1, memory=3072, java opts=-Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2458m -Dlog4j.configuratorClass=org.apache.tez.common.TezLog4jConfigurator -Dlog4j.configuration=tez-container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dtez.root.logger=INFO,CLA
18/04/06 01:46:03 INFO tez.TezDagBuilder: Processing aliases: roundto,sum_occ
18/04/06 01:46:03 INFO tez.TezDagBuilder: Detailed locations: sum_occ[4,10],roundto[6,10]
18/04/06 01:46:03 INFO tez.TezDagBuilder: Pig features in the vertex: GROUP_BY
18/04/06 01:46:04 INFO tez.TezJobCompiler: Total estimated parallelism is 2
18/04/06 01:46:04 INFO tez.TezScriptState: Pig script settings are added to the job
18/04/06 01:46:04 INFO client.TezClient: Tez Client Version: [ component=tez-api, version=0.8.4, revision=300391394352b074b85b529e870816a72c6f314a, SCM-URL=scm:git:https://git-wip-us.apache.org/repos/asf/tez.git, buildTime=2018-03-21T23:55:28Z ]
18/04/06 01:46:04 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-28-12.ec2.internal/172.31.28.12:8032
18/04/06 01:46:04 INFO client.TezClient: Using org.apache.tez.dag.history.ats.acls.ATSHistoryACLPolicyManager to manage Timeline ACLs
18/04/06 01:46:04 INFO impl.TimelineClientImpl: Timeline service address: http://ip-172-31-28-12.ec2.internal:8188/ws/v1/timeline/
18/04/06 01:46:04 INFO client.TezClient: Session mode. Starting session.
18/04/06 01:46:04 INFO client.TezClientUtils: Using tez.lib.uris value from configuration: hdfs:///apps/tez/tez.tar.gz
18/04/06 01:46:04 INFO client.TezClientUtils: Using tez.lib.uris.classpath value from configuration: null
18/04/06 01:46:04 INFO client.TezClient: Tez system stage directory hdfs://ip-172-31-28-12.ec2.internal:8020/tmp/temp-336429202/.tez/application_1522978297921_0003 doesn't exist and is created
18/04/06 01:46:04 INFO acls.ATSHistoryACLPolicyManager: Created Timeline Domain for History ACLs, domainId=Tez_ATS_application_1522978297921_0003
18/04/06 01:46:04 INFO impl.YarnClientImpl: Submitted application application_1522978297921_0003
18/04/06 01:46:04 INFO client.TezClient: The url to track the Tez Session: http://ip-172-31-28-12.ec2.internal:20888/proxy/application_1522978297921_0003/
18/04/06 01:46:10 INFO tez.TezJob: Submitting DAG PigLatin:DefaultJobName-0_scope-2
18/04/06 01:46:10 INFO client.TezClient: Submitting dag to TezSession, sessionName=PigLatin:DefaultJobName, applicationId=application_1522978297921_0003, dagName=PigLatin:DefaultJobName-0_scope-2, callerContext={ context=PIG, callerType=PIG_SCRIPT_ID, callerId=PIG-default-d73e19dc-5287-4ee2-a85d-e931327011dc }
18/04/06 01:46:10 INFO client.TezClient: Submitted dag to TezSession, sessionName=PigLatin:DefaultJobName, applicationId=application_1522978297921_0003, dagName=PigLatin:DefaultJobName-0_scope-2
18/04/06 01:46:10 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-28-12.ec2.internal/172.31.28.12:8032
18/04/06 01:46:10 INFO tez.TezJob: Submitted DAG PigLatin:DefaultJobName-0_scope-2. Application id: application_1522978297921_0003
18/04/06 01:46:11 INFO tez.TezLauncher: HadoopJobId: job_1522978297921_0003
18/04/06 01:46:11 INFO tez.TezJob: DAG Status: status=RUNNING, progress=TotalTasks: 2 Succeeded: 0 Running: 0 Failed: 0 Killed: 0, diagnostics=, counters=null
18/04/06 01:46:31 INFO tez.TezJob: DAG Status: status=RUNNING, progress=TotalTasks: 2 Succeeded: 0 Running: 1 Failed: 0 Killed: 0, diagnostics=, counters=null
18/04/06 01:46:48 INFO tez.TezSessionManager: Shutting down Tez session org.apache.tez.client.TezClient#3a371843
18/04/06 01:46:48 INFO client.TezClient: Shutting down Tez Session, sessionName=PigLatin:DefaultJobName, applicationId=application_1522978297921_0003
How do I fix this? DUMP works for all of the earlier aliases, just not for roundto. Also, what exactly is the Tez client?
I can't replicate your output, because I get an error as soon as I try this line:
roundto = FOREACH sum_occ GENERATE sum_occ.ngram, ROUND_TO( sum_occ.ntry , 2 );
You don't need the dot operator to refer to these fields (e.g. sum_occ.ngram), because they are not nested inside a tuple or bag; the dot operator is only for dereferencing fields inside complex types. Try the same line without it:
roundto = FOREACH sum_occ GENERATE ngram, ROUND_TO( ntry , 2 );
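For contrast, here is a minimal sketch (reusing the aliases from your script) of where the dot operator is and isn't appropriate:

```pig
-- The dot operator dereferences fields *inside* a bag or tuple,
-- e.g. the bag of grouped rows that GROUP produces:
groupinp = GROUP filter_input BY ngram;
sum_occ  = FOREACH groupinp GENERATE
               FLATTEN(group) AS ngram,
               SUM(filter_input.occurences) / SUM(filter_input.books) AS ntry;

-- sum_occ's own columns (ngram, ntry) are top-level fields, so a
-- FOREACH over sum_occ references them directly, with no prefix:
roundto  = FOREACH sum_occ GENERATE ngram, ROUND_TO(ntry, 2);
```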
To answer your second question: MapReduce and Tez are both execution engines that can run Pig scripts, and Tez can often reduce a script's running time. You can choose one explicitly by starting your Pig shell with pig -x mapreduce or pig -x tez. MapReduce is the default, so if you haven't specified Tez yourself, your Hadoop cluster must be configured to run Pig on Tez.
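For example (the script filename myscript.pig below is just for illustration):

```shell
# Start an interactive Pig shell on a specific engine:
pig -x mapreduce        # classic MapReduce backend (the default)
pig -x tez              # Tez backend

# Or run a whole script file on Tez:
pig -x tez myscript.pig
```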
Related
I have a WAR file of my application which works fine when executed from the command line locally. I'm uploading it to Amazon's Elastic Beanstalk on the Tomcat platform, but when I try to access the URL I receive a 404 error.
Is the problem related to my WAR file, or do I have to change Amazon's configuration?
Many thanks.
Logs:
-------------------------------------
/var/log/httpd/elasticbeanstalk-access_log
-------------------------------------
88.26.90.37 (88.26.90.37) - - [23/Jul/2017:18:55:23 +0000] "GET / HTTP/1.1" 404 1004 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36"
-------------------------------------
/var/log/httpd/error_log
-------------------------------------
[Sun Jul 23 18:54:15 2017] [notice] Apache/2.2.32 (Unix) configured -- resuming normal operations
[Sun Jul 23 18:55:23 2017] [error] server is within MinSpareThreads of MaxClients, consider raising the MaxClients setting
-------------------------------------
/var/log/tomcat8/host-manager.2017-07-23.log
-------------------------------------
-------------------------------------
/var/log/httpd/access_log
-------------------------------------
88.26.90.37 - - [23/Jul/2017:18:55:23 +0000] "GET / HTTP/1.1" 404 1004 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36"
-------------------------------------
/var/log/tomcat8/tomcat8-initd.log
-------------------------------------
-------------------------------------
/var/log/tomcat8/localhost_access_log.txt
-------------------------------------
127.0.0.1 - - [23/Jul/2017:18:55:24 +0000] "GET / HTTP/1.1" 404 1004
-------------------------------------
/var/log/tomcat8/manager.2017-07-23.log
-------------------------------------
-------------------------------------
/var/log/eb-activity.log
-------------------------------------
+ EB_APP_DEPLOY_BASE_DIR=/var/lib/tomcat8/webapps
+ rm -rf /var/lib/tomcat8/webapps/ROOT
+ rm -rf '/usr/share/tomcat8/conf/Catalina/localhost/*'
+ rm -rf '/usr/share/tomcat8/work/Catalina/*'
+ mkdir -p /var/lib/tomcat8/webapps/ROOT
[2017-07-23T18:54:13.069Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/02start_xray.sh] : Starting activity...
[2017-07-23T18:54:13.290Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/02start_xray.sh] : Completed activity.
[2017-07-23T18:54:13.291Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/03_stop_proxy.sh] : Starting activity...
[2017-07-23T18:54:13.629Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/03_stop_proxy.sh] : Completed activity. Result:
Executing: service nginx stop
Executing: service httpd stop
Stopping httpd: [FAILED]
[2017-07-23T18:54:13.629Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/03deploy.sh] : Starting activity...
[2017-07-23T18:54:14.089Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/03deploy.sh] : Completed activity. Result:
++ /opt/elasticbeanstalk/bin/get-config container -k app_staging_dir
+ EB_APP_STAGING_DIR=/tmp/deployment/application/ROOT
++ /opt/elasticbeanstalk/bin/get-config container -k app_deploy_dir
+ EB_APP_DEPLOY_DIR=/var/lib/tomcat8/webapps/ROOT
++ wc -l
++ find /tmp/deployment/application/ROOT -maxdepth 1 -type f
+ FILE_COUNT=0
++ grep -Pi '\.war$'
++ find /tmp/deployment/application/ROOT -maxdepth 1 -type f
++ echo ''
+ WAR_FILES=
+ WAR_FILE_COUNT=0
+ [[ 0 > 0 ]]
++ readlink -f /var/lib/tomcat8/webapps/ROOT/../
+ EB_APP_DEPLOY_BASE=/var/lib/tomcat8/webapps
+ rm -rf /var/lib/tomcat8/webapps/ROOT
+ [[ 0 == 0 ]]
+ [[ 0 > 1 ]]
+ cp -R /tmp/deployment/application/ROOT /var/lib/tomcat8/webapps/ROOT
+ chown -R tomcat:tomcat /var/lib/tomcat8/webapps
[2017-07-23T18:54:14.089Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/04config_deploy.sh] : Starting activity...
[2017-07-23T18:54:14.652Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/04config_deploy.sh] : Completed activity. Result:
++ /opt/elasticbeanstalk/bin/get-config container -k config_staging_dir
+ EB_CONFIG_STAGING_DIR=/tmp/deployment/config
++ /opt/elasticbeanstalk/bin/get-config container -k config_deploy_dir
+ EB_CONFIG_DEPLOY_DIR=/etc/sysconfig
++ /opt/elasticbeanstalk/bin/get-config container -k config_filename
+ EB_CONFIG_FILENAME=tomcat8
+ cp /tmp/deployment/config/tomcat8 /etc/sysconfig/tomcat8
[2017-07-23T18:54:14.653Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/05start.sh] : Starting activity...
[2017-07-23T18:54:15.081Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/05start.sh] : Completed activity. Result:
++ /opt/elasticbeanstalk/bin/get-config container -k tomcat_version
+ TOMCAT_VERSION=8
+ TOMCAT_NAME=tomcat8
+ /etc/init.d/tomcat8 status
tomcat8 is stopped
[ OK ]
+ /etc/init.d/tomcat8 start
Starting tomcat8: [ OK ]
+ /usr/bin/monit monitor tomcat
monit: generated unique Monit id ae33689ef3cf376bf23fa3b09041524e and stored to '/root/.monit.id'
[2017-07-23T18:54:15.082Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/09_start_proxy.sh] : Starting activity...
[2017-07-23T18:54:19.022Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/09_start_proxy.sh] : Completed activity. Result:
Executing: service httpd stop
Stopping httpd: [FAILED]
Executing: service httpd start
Starting httpd: [ OK ]
Executing: /bin/chmod 755 /var/run/httpd
Executing: /opt/elasticbeanstalk/bin/healthd-track-pidfile --proxy httpd
Executing: /opt/elasticbeanstalk/bin/healthd-configure --appstat-log-path /var/log/httpd/healthd/application.log --appstat-unit usec --appstat-timestamp-on 'arrival'
Executing: /opt/elasticbeanstalk/bin/healthd-restart
[2017-07-23T18:54:19.022Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/hooks/appdeploy/enact.
[2017-07-23T18:54:19.022Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployPostHook] : Starting activity...
[2017-07-23T18:54:19.023Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployPostHook/03monitor_pids.sh] : Starting activity...
[2017-07-23T18:54:20.047Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployPostHook/03monitor_pids.sh] : Completed activity.
[2017-07-23T18:54:20.047Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployPostHook] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/hooks/appdeploy/post.
[2017-07-23T18:54:20.047Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/PostInitHook] : Starting activity...
[2017-07-23T18:54:20.048Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/PostInitHook/01processmgrstart.sh] : Starting activity...
[2017-07-23T18:54:20.097Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/PostInitHook/01processmgrstart.sh] : Completed activity. Result:
+ /usr/bin/monit
[2017-07-23T18:54:20.097Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/PostInitHook] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/hooks/postinit.
[2017-07-23T18:54:20.097Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1] : Completed activity. Result:
Application deployment - Command CMD-Startup stage 1 completed
[2017-07-23T18:54:20.098Z] INFO [2048] - [Application deployment OnFocus2307#1/AddonsAfter] : Starting activity...
[2017-07-23T18:54:20.098Z] INFO [2048] - [Application deployment OnFocus2307#1/AddonsAfter/ConfigLogRotation] : Starting activity...
[2017-07-23T18:54:20.098Z] INFO [2048] - [Application deployment OnFocus2307#1/AddonsAfter/ConfigLogRotation/10-config.sh] : Starting activity...
[2017-07-23T18:54:20.509Z] INFO [2048] - [Application deployment OnFocus2307#1/AddonsAfter/ConfigLogRotation/10-config.sh] : Completed activity. Result:
Disabled forced hourly log rotation.
[2017-07-23T18:54:20.510Z] INFO [2048] - [Application deployment OnFocus2307#1/AddonsAfter/ConfigLogRotation] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/addons/logpublish/hooks/config.
[2017-07-23T18:54:20.510Z] INFO [2048] - [Application deployment OnFocus2307#1/AddonsAfter] : Completed activity.
[2017-07-23T18:54:20.510Z] INFO [2048] - [Application deployment OnFocus2307#1] : Completed activity. Result:
Application deployment - Command CMD-Startup succeeded
[2017-07-23T18:55:36.363Z] INFO [2808] - [CMD-TailLogs] : Starting activity...
[2017-07-23T18:55:36.363Z] INFO [2808] - [CMD-TailLogs/AddonsBefore] : Starting activity...
[2017-07-23T18:55:36.364Z] INFO [2808] - [CMD-TailLogs/AddonsBefore] : Completed activity.
[2017-07-23T18:55:36.364Z] INFO [2808] - [CMD-TailLogs/TailLogs] : Starting activity...
[2017-07-23T18:55:36.364Z] INFO [2808] - [CMD-TailLogs/TailLogs/TailLogs] : Starting activity...
-------------------------------------
/var/log/tomcat8/catalina.2017-07-23.log
-------------------------------------
23-Jul-2017 18:54:17.868 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version: Apache Tomcat/8.0.44
23-Jul-2017 18:54:17.886 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built: Jul 5 2017 19:02:51 UTC
23-Jul-2017 18:54:17.886 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server number: 8.0.44.0
23-Jul-2017 18:54:17.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name: Linux
23-Jul-2017 18:54:17.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version: 4.9.32-15.41.amzn1.x86_64
23-Jul-2017 18:54:17.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture: amd64
23-Jul-2017 18:54:17.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Java Home: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.131-2.b11.30.amzn1.x86_64/jre
23-Jul-2017 18:54:17.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Version: 1.8.0_131-b11
23-Jul-2017 18:54:17.888 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Vendor: Oracle Corporation
23-Jul-2017 18:54:17.888 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_BASE: /usr/share/tomcat8
23-Jul-2017 18:54:17.888 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_HOME: /usr/share/tomcat8
23-Jul-2017 18:54:17.888 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -DJDBC_CONNECTION_STRING=
23-Jul-2017 18:54:17.890 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Xms256m
23-Jul-2017 18:54:17.891 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Xmx256m
23-Jul-2017 18:54:17.891 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -XX:MaxPermSize=64m
23-Jul-2017 18:54:17.891 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.base=/usr/share/tomcat8
23-Jul-2017 18:54:17.891 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.home=/usr/share/tomcat8
23-Jul-2017 18:54:17.891 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.awt.headless=true
23-Jul-2017 18:54:17.892 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.endorsed.dirs=
23-Jul-2017 18:54:17.892 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.io.tmpdir=/var/cache/tomcat8/temp
23-Jul-2017 18:54:17.897 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.config.file=/usr/share/tomcat8/conf/logging.properties
23-Jul-2017 18:54:17.897 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
23-Jul-2017 18:54:17.897 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
23-Jul-2017 18:54:18.284 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8080"]
23-Jul-2017 18:54:18.352 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
23-Jul-2017 18:54:18.371 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["ajp-nio-8009"]
23-Jul-2017 18:54:18.374 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
23-Jul-2017 18:54:18.377 INFO [main] org.apache.catalina.startup.Catalina.load Initialization processed in 2288 ms
23-Jul-2017 18:54:18.477 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service Catalina
23-Jul-2017 18:54:18.479 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.0.44
23-Jul-2017 18:54:18.512 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory /var/lib/tomcat8/webapps/ROOT
23-Jul-2017 18:54:27.482 INFO [localhost-startStop-1] org.apache.jasper.servlet.TldScanner.scanJars At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
23-Jul-2017 18:54:27.680 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory /var/lib/tomcat8/webapps/ROOT has finished in 9,162 ms
23-Jul-2017 18:54:27.691 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
23-Jul-2017 18:54:27.724 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]
23-Jul-2017 18:54:27.747 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 9369 ms
-------------------------------------
/var/log/eb-commandprocessor.log
-------------------------------------
[2017-07-23T18:52:59.225Z] INFO [1780] : Found enabled addons: ["logpublish", "logstreaming"].
[2017-07-23T18:52:59.227Z] INFO [1780] : Updating Command definition of addon logpublish.
[2017-07-23T18:52:59.227Z] INFO [1780] : Updating Command definition of addon logstreaming.
[2017-07-23T18:52:59.227Z] DEBUG [1780] : Retrieving metadata for key: AWS::CloudFormation::Init||Infra-WriteApplication2||files..
[2017-07-23T18:52:59.232Z] DEBUG [1780] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||ManifestFileS3Key..
[2017-07-23T18:52:59.515Z] INFO [1780] : Finding latest manifest from bucket 'elasticbeanstalk-us-west-2-253743328849' with prefix 'resources/environments/e-mrdyfipmbp/_runtime/versions/manifest_'.
[2017-07-23T18:52:59.801Z] INFO [1780] : Found manifest with key 'resources/environments/e-mrdyfipmbp/_runtime/versions/manifest_1500835914428'.
[2017-07-23T18:52:59.818Z] INFO [1780] : Updated manifest cache: deployment ID 1 and serial 1.
[2017-07-23T18:52:59.818Z] DEBUG [1780] : Loaded definition of Command CMD-PreInit.
[2017-07-23T18:52:59.818Z] INFO [1780] : Executing Initialization
[2017-07-23T18:52:59.819Z] INFO [1780] : Executing command: CMD-PreInit...
[2017-07-23T18:52:59.819Z] INFO [1780] : Executing command CMD-PreInit activities...
[2017-07-23T18:52:59.819Z] DEBUG [1780] : Setting environment variables..
[2017-07-23T18:52:59.819Z] INFO [1780] : Running AddonsBefore for command CMD-PreInit...
[2017-07-23T18:53:04.333Z] DEBUG [1780] : Running stages of Command CMD-PreInit from stage 0 to stage 0...
[2017-07-23T18:53:04.333Z] INFO [1780] : Running stage 0 of command CMD-PreInit...
[2017-07-23T18:53:04.333Z] DEBUG [1780] : Loaded 3 actions for stage 0.
[2017-07-23T18:53:04.333Z] INFO [1780] : Running 1 of 3 actions: InfraWriteConfig...
[2017-07-23T18:53:04.345Z] INFO [1780] : Running 2 of 3 actions: DownloadSourceBundle...
[2017-07-23T18:53:05.730Z] INFO [1780] : Running 3 of 3 actions: PreInitHook...
[2017-07-23T18:53:07.650Z] INFO [1780] : Running AddonsAfter for command CMD-PreInit...
[2017-07-23T18:53:07.650Z] INFO [1780] : Command CMD-PreInit succeeded!
[2017-07-23T18:53:07.651Z] INFO [1780] : Command processor returning results:
{"status":"SUCCESS","api_version":"1.0","results":[{"status":"SUCCESS","msg":"","returncode":0,"events":[]}]}
[2017-07-23T18:54:05.518Z] DEBUG [2048] : Reading config file: /etc/elasticbeanstalk/.aws-eb-stack.properties
[2017-07-23T18:54:05.518Z] DEBUG [2048] : Checking if the command processor should execute...
[2017-07-23T18:54:05.520Z] DEBUG [2048] : Checking whether the command is applicable to instance (i-04e322b065f1ab8d7)..
[2017-07-23T18:54:05.520Z] INFO [2048] : Command is applicable to this instance (i-04e322b065f1ab8d7)..
[2017-07-23T18:54:05.520Z] DEBUG [2048] : Checking if the received command stage is valid..
[2017-07-23T18:54:05.520Z] INFO [2048] : No stage_num in command. Valid stage..
[2017-07-23T18:54:05.520Z] INFO [2048] : Received command CMD-Startup: {"execution_data":"{\"leader_election\":\"true\"}","instance_ids":["i-04e322b065f1ab8d7"],"command_name":"CMD-Startup","api_version":"1.0","resource_name":"AWSEBAutoScalingGroup","request_id":"fb2b25c7-6fd7-11e7-87da-0d1616730116"}
[2017-07-23T18:54:05.520Z] INFO [2048] : Command processor should execute command.
[2017-07-23T18:54:05.520Z] DEBUG [2048] : Storing current stage..
[2017-07-23T18:54:05.520Z] DEBUG [2048] : Stage_num does not exist. Not saving null stage. Returning..
[2017-07-23T18:54:05.520Z] DEBUG [2048] : Reading config file: /etc/elasticbeanstalk/.aws-eb-stack.properties
[2017-07-23T18:54:05.520Z] DEBUG [2048] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_ContainerConfigFileContent||commands..
[2017-07-23T18:54:05.521Z] DEBUG [2048] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_API||_Commands..
[2017-07-23T18:54:05.522Z] INFO [2048] : Found enabled addons: ["logpublish", "logstreaming"].
[2017-07-23T18:54:05.524Z] INFO [2048] : Updating Command definition of addon logpublish.
[2017-07-23T18:54:05.525Z] INFO [2048] : Updating Command definition of addon logstreaming.
[2017-07-23T18:54:05.525Z] DEBUG [2048] : Refreshing metadata...
[2017-07-23T18:54:05.954Z] DEBUG [2048] : Refreshed environment metadata.
[2017-07-23T18:54:05.954Z] DEBUG [2048] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_ContainerConfigFileContent||commands..
[2017-07-23T18:54:05.956Z] DEBUG [2048] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_API||_Commands..
[2017-07-23T18:54:05.957Z] INFO [2048] : Found enabled addons: ["logpublish", "logstreaming"].
[2017-07-23T18:54:05.961Z] INFO [2048] : Updating Command definition of addon logpublish.
[2017-07-23T18:54:05.961Z] INFO [2048] : Updating Command definition of addon logstreaming.
[2017-07-23T18:54:05.962Z] DEBUG [2048] : Loaded definition of Command CMD-Startup.
[2017-07-23T18:54:05.963Z] INFO [2048] : Executing Application deployment
[2017-07-23T18:54:05.964Z] INFO [2048] : Executing command: CMD-Startup...
[2017-07-23T18:54:05.964Z] INFO [2048] : Executing command CMD-Startup activities...
[2017-07-23T18:54:05.964Z] DEBUG [2048] : Setting environment variables..
[2017-07-23T18:54:05.964Z] INFO [2048] : Running AddonsBefore for command CMD-Startup...
[2017-07-23T18:54:06.242Z] DEBUG [2048] : Running stages of Command CMD-Startup from stage 0 to stage 1...
[2017-07-23T18:54:06.242Z] INFO [2048] : Running stage 0 of command CMD-Startup...
[2017-07-23T18:54:06.242Z] INFO [2048] : Running leader election...
[2017-07-23T18:54:06.665Z] INFO [2048] : Instance is Leader.
[2017-07-23T18:54:06.666Z] DEBUG [2048] : Loaded 7 actions for stage 0.
[2017-07-23T18:54:06.666Z] INFO [2048] : Running 1 of 7 actions: HealthdLogRotation...
[2017-07-23T18:54:06.678Z] INFO [2048] : Running 2 of 7 actions: HealthdHTTPDLogging...
[2017-07-23T18:54:06.680Z] INFO [2048] : Running 3 of 7 actions: HealthdNginxLogging...
[2017-07-23T18:54:06.681Z] INFO [2048] : Running 4 of 7 actions: EbExtensionPreBuild...
[2017-07-23T18:54:07.163Z] INFO [2048] : Running 5 of 7 actions: AppDeployPreHook...
[2017-07-23T18:54:09.688Z] INFO [2048] : Running 6 of 7 actions: EbExtensionPostBuild...
[2017-07-23T18:54:10.176Z] INFO [2048] : Running 7 of 7 actions: InfraCleanEbExtension...
[2017-07-23T18:54:10.181Z] INFO [2048] : Running stage 1 of command CMD-Startup...
[2017-07-23T18:54:10.181Z] DEBUG [2048] : Loaded 3 actions for stage 1.
[2017-07-23T18:54:10.181Z] INFO [2048] : Running 1 of 3 actions: AppDeployEnactHook...
[2017-07-23T18:54:19.022Z] INFO [2048] : Running 2 of 3 actions: AppDeployPostHook...
[2017-07-23T18:54:20.047Z] INFO [2048] : Running 3 of 3 actions: PostInitHook...
[2017-07-23T18:54:20.097Z] INFO [2048] : Running AddonsAfter for command CMD-Startup...
[2017-07-23T18:54:20.510Z] INFO [2048] : Command CMD-Startup succeeded!
[2017-07-23T18:54:20.511Z] INFO [2048] : Command processor returning results:
{"status":"SUCCESS","api_version":"1.0","results":[{"status":"SUCCESS","msg":"","returncode":0,"events":[]}]}
[2017-07-23T18:55:36.353Z] DEBUG [2808] : Reading config file: /etc/elasticbeanstalk/.aws-eb-stack.properties
[2017-07-23T18:55:36.354Z] DEBUG [2808] : Checking if the command processor should execute...
[2017-07-23T18:55:36.356Z] DEBUG [2808] : Checking whether the command is applicable to instance (i-04e322b065f1ab8d7)..
[2017-07-23T18:55:36.356Z] INFO [2808] : Command is applicable to this instance (i-04e322b065f1ab8d7)..
[2017-07-23T18:55:36.356Z] DEBUG [2808] : Checking if the received command stage is valid..
[2017-07-23T18:55:36.356Z] INFO [2808] : No stage_num in command. Valid stage..
[2017-07-23T18:55:36.356Z] INFO [2808] : Received command CMD-TailLogs: {"execution_data":"{\"aws_access_key_id\":\"ASIAJSUYLCIZFOIKPO2A\",\"signature\":\"RPf86lrs\\\/c0114+EODhe8jxRJhs=\",\"security_token\":\"FQoDYXdzENz\\\/\\\/\\\/\\\/\\\/\\\/\\\/\\\/\\\/\\\/wEaDCSrox2Xx3QiuRUnziLcAxEo3H8dpDcz3tKFZriPXlqq595Xpcm6LsBYoPAwWWcm7bDE38KE8kwDhnSMHttNJl1yNd5kofzZ9J5pf9gRSQdXGHWXghfw8+Bt3IVKutzn7tni2NaXFMlZxSxOpkvVxRYUph9et1kFsDlX2ml2ONCPDGqGYFBatI1mMPbvdTVViz7YbMiGDx88kQQF9W9wghJ63FkxG0JGscE1ugXc840xjzTmSIT7bNPmlkaLI4iBLor9Whn4a1fiDuZq2EB8lDxKMd+hjWmMSbMYjPvdGusVbuvLu1KC8mvFMx29BVLoo+xvxMc2JzO03\\\/WVo50oWnM8nSG04UtfkNGapLnbVO1NWoMWD107qHSeyWqAi1HO83KmxW4E5gvtF5IGNd98yJkcSmwDv0BNJDZnP8DTZNP+AHrCW\\\/mC6ybEjNxkh\\\/La\\\/YpPmfWAcbOG61IKqIyZHrhGO65nvYRxsz5TJ9B5sbGvDmhlGEJ1thAP\\\/xcaTOAUn006DxGlO+aVrz6ie9uU6Mt4wNos4qdftSce5mszp4Gc3gYOpfzqq4lIpnB2GUY9ImVMclLI60VtaOMkzMNsNJTRtl1X1NuiUa7sefP8Rsod\\\/yeev3ueDLJsfhJozF\\\/w4MtijFfP547w1KfxKOeb08sF\",\"policy\":\"eyJleHBpcmF0aW9uIjoiMjAxNy0wNy0yM1QxOToyNTozNC41ODNaIiwiY29uZGl0aW9ucyI6W1sic3RhcnRzLXdpdGgiLCIkeC1hbXotbWV0YS10aW1lX3N0YW1wIiwiIl0sWyJzdGFydHMtd2l0aCIsIiR4LWFtei1tZXRhLXB1Ymxpc2hfbWVjaGFuaXNtIiwiIl0sWyJzdGFydHMtd2l0aCIsIiRrZXkiLCJyZXNvdXJjZXNcL2Vudmlyb25tZW50c1wvbG9nc1wvIl0sWyJzdGFydHMtd2l0aCIsIiR4LWFtei1tZXRhLWJhdGNoX2lkIiwiIl0sWyJzdGFydHMtd2l0aCIsIiR4LWFtei1tZXRhLWZpbGVfbmFtZSIsIiJdLFsic3RhcnRzLXdpdGgiLCIkeC1hbXotc2VjdXJpdHktdG9rZW4iLCIiXSxbInN0YXJ0cy13aXRoIiwiJENvbnRlbnQtVHlwZSIsIiJdLFsiZXEiLCIkYnVja2V0IiwiZWxhc3RpY2JlYW5zdGFsay11cy13ZXN0LTItMjUzNzQzMzI4ODQ5Il0sWyJlcSIsIiRhY2wiLCJwcml2YXRlIl1dfQ==\"}","instance_ids":["i-04e322b065f1ab8d7"],"data":"822325d4-6fd8-11e7-8e3e-c7d19f2d4234","command_name":"CMD-TailLogs","api_version":"1.0","resource_name":"AWSEBAutoScalingGroup","request_id":"822325d4-6fd8-11e7-8e3e-c7d19f2d4234"}
[2017-07-23T18:55:36.356Z] INFO [2808] : Command processor should execute command.
[2017-07-23T18:55:36.356Z] DEBUG [2808] : Storing current stage..
[2017-07-23T18:55:36.356Z] DEBUG [2808] : Stage_num does not exist. Not saving null stage. Returning..
[2017-07-23T18:55:36.356Z] DEBUG [2808] : Reading config file: /etc/elasticbeanstalk/.aws-eb-stack.properties
[2017-07-23T18:55:36.356Z] DEBUG [2808] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_ContainerConfigFileContent||commands..
[2017-07-23T18:55:36.358Z] DEBUG [2808] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_API||_Commands..
[2017-07-23T18:55:36.359Z] INFO [2808] : Found enabled addons: ["logpublish", "logstreaming"].
[2017-07-23T18:55:36.362Z] INFO [2808] : Updating Command definition of addon logpublish.
[2017-07-23T18:55:36.362Z] INFO [2808] : Updating Command definition of addon logstreaming.
[2017-07-23T18:55:36.362Z] DEBUG [2808] : Loaded definition of Command CMD-TailLogs.
[2017-07-23T18:55:36.362Z] INFO [2808] : Executing CMD-TailLogs
[2017-07-23T18:55:36.363Z] INFO [2808] : Executing command: CMD-TailLogs...
[2017-07-23T18:55:36.363Z] INFO [2808] : Executing command CMD-TailLogs activities...
[2017-07-23T18:55:36.363Z] DEBUG [2808] : Setting environment variables..
[2017-07-23T18:55:36.363Z] INFO [2808] : Running AddonsBefore for command CMD-TailLogs...
[2017-07-23T18:55:36.364Z] DEBUG [2808] : Running stages of Command CMD-TailLogs from stage 0 to stage 0...
[2017-07-23T18:55:36.364Z] INFO [2808] : Running stage 0 of command CMD-TailLogs...
[2017-07-23T18:55:36.364Z] DEBUG [2808] : Loaded 1 actions for stage 0.
[2017-07-23T18:55:36.364Z] INFO [2808] : Running 1 of 1 actions: TailLogs...
-------------------------------------
/var/log/httpd/elasticbeanstalk-error_log
-------------------------------------
-------------------------------------
/var/log/tomcat8/localhost.2017-07-23.log
-------------------------------------
23-Jul-2017 18:54:27.562 INFO [localhost-startStop-1] org.apache.catalina.core.ApplicationContext.log Spring WebApplicationInitializers detected on classpath: [org.springframework.boot.autoconfigure.jersey.JerseyAutoConfiguration$JerseyWebApplicationInitializer@161183dc]
-------------------------------------
/var/log/tomcat8/catalina.out
-------------------------------------
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=64m; support was removed in 8.0
23-Jul-2017 18:54:17.868 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version: Apache Tomcat/8.0.44
23-Jul-2017 18:54:17.886 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built: Jul 5 2017 19:02:51 UTC
23-Jul-2017 18:54:17.886 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server number: 8.0.44.0
23-Jul-2017 18:54:17.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name: Linux
23-Jul-2017 18:54:17.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version: 4.9.32-15.41.amzn1.x86_64
23-Jul-2017 18:54:17.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture: amd64
Check your logs for the reason: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.logging.html
Perhaps you are missing a JDBC driver? ;D
I was trying to run this Node.js example from the AWS website: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/samples/nodejs-express-hiking-v1.zip
linked from here:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/nodejs-getstarted.html
(I also tried the tutorial itself, which produced the same error.)
502 Bad Gateway
nginx/1.10.1
is all that is shown.
I also tried changing the port to 5000, but that did not help either.
Here are the logs. I think the second one (the nginx error log) is the problem, but I have no idea what to do about it.
/var/log/nodejs/nodejs.log
Server running at http://127.0.0.1:8081/
/var/log/nginx/error.log
2017/02/20 04:10:25 [warn] 2488#0: duplicate MIME type "text/html" in /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf:42
/var/log/eb-activity.log
Did not find to find status of init job. Assuming stopped.
Did not find to find status of init job. Assuming stopped.
[2017-02-20T04:10:21.090Z] INFO [2036] - [Application deployment Sample Application#1/StartupStage1/AppDeployEnactHook/20clean.sh] : Starting activity...
[2017-02-20T04:10:21.285Z] INFO [2036] - [Application deployment Sample Application#1/StartupStage1/AppDeployEnactHook/20clean.sh] : Completed activity. Result:
++ /opt/elasticbeanstalk/bin/get-config container -k app_base_dir
+ EB_APP_BASE_DIR=/var/app
+ rm -rf /var/app
[2017-02-20T04:10:21.286Z] INFO [2036] - [Application deployment Sample Application#1/StartupStage1/AppDeployEnactHook/30app_deploy.sh] : Starting activity...
[2017-02-20T04:10:22.003Z] INFO [2036] - [Application deployment Sample Application#1/StartupStage1/AppDeployEnactHook/30app_deploy.sh] : Completed activity. Result:
++ /opt/elasticbeanstalk/bin/get-config container -k app_base_dir
+ EB_APP_BASE_DIR=/var/app
++ /opt/elasticbeanstalk/bin/get-config container -k app_staging_dir
+ EB_APP_STAGING_DIR=/tmp/deployment/application
++ /opt/elasticbeanstalk/bin/get-config container -k app_deploy_dir
+ EB_APP_DEPLOY_DIR=/var/app/current
++ /opt/elasticbeanstalk/bin/get-config container -k app_user
+ EB_APP_USER=nodejs
+ mkdir /var/app
+ mv /tmp/deployment/application /var/app/current
+ chown -R nodejs:nodejs /var/app/current
[2017-02-20T04:10:22.003Z] INFO [2036] - [Application deployment Sample Application#1/StartupStage1/AppDeployEnactHook/40config_deploy.sh] : Starting activity...
[2017-02-20T04:10:22.230Z] INFO [2036] - [Application deployment Sample Application#1/StartupStage1/AppDeployEnactHook/40config_deploy.sh] : Completed activity. Result:
++ /opt/elasticbeanstalk/bin/get-config container -k config_staging_dir
+ EB_CONFIG_STAGING_DIR=/tmp/deployment/config
++ ls /tmp/deployment/config
+ for i in '$(ls $EB_CONFIG_STAGING_DIR)'
++ sed -e 's/#/\//g'
++ echo '#etc#init#nginx.conf'
+ FILE_NAME=/etc/init/nginx.conf
+ /bin/cp /tmp/deployment/config/#etc#init#nginx.conf /etc/init/nginx.conf
+ for i in '$(ls $EB_CONFIG_STAGING_DIR)'
++ sed -e 's/#/\//g'
++ echo '#etc#init#nodejs.conf'
+ FILE_NAME=/etc/init/nodejs.conf
+ /bin/cp /tmp/deployment/config/#etc#init#nodejs.conf /etc/init/nodejs.conf
+ for i in '$(ls $EB_CONFIG_STAGING_DIR)'
++ sed -e 's/#/\//g'
++ echo '#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf'
+ FILE_NAME=/etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
+ /bin/cp /tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
+ for i in '$(ls $EB_CONFIG_STAGING_DIR)'
++ sed -e 's/#/\//g'
++ echo '#etc#nginx#nginx.conf'
+ FILE_NAME=/etc/nginx/nginx.conf
+ /bin/cp /tmp/deployment/config/#etc#nginx#nginx.conf /etc/nginx/nginx.conf
[2017-02-20T04:10:22.230Z] INFO [2036] - [Application deployment Sample Application#1/StartupStage1/AppDeployEnactHook/50start.sh] : Starting activity...
[2017-02-20T04:10:25.232Z] INFO [2036] - [Application deployment Sample Application#1/StartupStage1/AppDeployEnactHook/50start.sh] : Completed activity. Result:
+ /opt/elasticbeanstalk/containerfiles/ebnode.py --action start-all
nodejs start/running, process 2473
nginx start/running, process 2488
[2017-02-20T04:10:25.232Z] INFO [2036] - [Application deployment Sample Application#1/StartupStage1/AppDeployEnactHook/60monitor_pids.sh] : Starting activity...
[2017-02-20T04:10:25.794Z] INFO [2036] - [Application deployment Sample Application#1/StartupStage1/AppDeployEnactHook/60monitor_pids.sh] : Completed activity. Result:
+ '[' -d /etc/healthd ']'
++ /opt/elasticbeanstalk/bin/get-config optionsettings --namespace aws:elasticbeanstalk:container:nodejs --option-name ProxyServer
+ PROXY_SERVER=nginx
+ case "$PROXY_SERVER" in
+ /opt/elasticbeanstalk/bin/healthd-track-pidfile --proxy nginx
+ /opt/elasticbeanstalk/bin/healthd-track-pidfile --name application --location /var/run/nodejs.pid
[2017-02-20T04:10:25.794Z] INFO [2036] - [Application deployment Sample Application#1/StartupStage1/AppDeployEnactHook/70restart_healthd.sh] : Starting activity...
[2017-02-20T04:10:25.988Z] INFO [2036] - [Application deployment Sample Application#1/StartupStage1/AppDeployEnactHook/70restart_healthd.sh] : Completed activity. Result:
+ '[' -d /etc/healthd ']'
++ /opt/elasticbeanstalk/bin/get-config optionsettings --namespace aws:elasticbeanstalk:container:nodejs --option-name ProxyServer
+ PROXY_SERVER=nginx
+ '[' -f /var/elasticbeanstalk/healthd/current_proxy_server ']'
+ CURRENT_PROXY_SERVER=nginx
+ '[' nginx '!=' nginx ']'
+ echo nginx
[2017-02-20T04:10:25.988Z] INFO [2036] - [Application deployment Sample Application#1/StartupStage1/AppDeployEnactHook] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/hooks/appdeploy/enact.
[2017-02-20T04:10:25.988Z] INFO [2036] - [Application deployment Sample Application#1/StartupStage1/AppDeployPostHook] : Starting activity...
[2017-02-20T04:10:25.988Z] INFO [2036] - [Application deployment Sample Application#1/StartupStage1/AppDeployPostHook] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/hooks/appdeploy/post.
[2017-02-20T04:10:25.989Z] INFO [2036] - [Application deployment Sample Application#1/StartupStage1/PostInitHook] : Starting activity...
[2017-02-20T04:10:25.989Z] INFO [2036] - [Application deployment Sample Application#1/StartupStage1/PostInitHook] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/hooks/postinit.
[2017-02-20T04:10:25.989Z] INFO [2036] - [Application deployment Sample Application#1/StartupStage1] : Completed activity. Result:
Application deployment - Command CMD-Startup stage 1 completed
[2017-02-20T04:10:25.989Z] INFO [2036] - [Application deployment Sample Application#1/AddonsAfter] : Starting activity...
[2017-02-20T04:10:25.990Z] INFO [2036] - [Application deployment Sample Application#1/AddonsAfter/ConfigLogRotation] : Starting activity...
[2017-02-20T04:10:25.990Z] INFO [2036] - [Application deployment Sample Application#1/AddonsAfter/ConfigLogRotation/10-config.sh] : Starting activity...
[2017-02-20T04:10:26.163Z] INFO [2036] - [Application deployment Sample Application#1/AddonsAfter/ConfigLogRotation/10-config.sh] : Completed activity. Result:
Disabled forced hourly log rotation.
[2017-02-20T04:10:26.164Z] INFO [2036] - [Application deployment Sample Application#1/AddonsAfter/ConfigLogRotation] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/addons/logpublish/hooks/config.
[2017-02-20T04:10:26.164Z] INFO [2036] - [Application deployment Sample Application#1/AddonsAfter/ConfigCWLAgent] : Starting activity...
[2017-02-20T04:10:26.164Z] INFO [2036] - [Application deployment Sample Application#1/AddonsAfter/ConfigCWLAgent/10-config.sh] : Starting activity...
[2017-02-20T04:10:26.426Z] INFO [2036] - [Application deployment Sample Application#1/AddonsAfter/ConfigCWLAgent/10-config.sh] : Completed activity. Result:
Log streaming option setting is not specified, ignore cloudwatch logs setup.
Disabled log streaming.
[2017-02-20T04:10:26.427Z] INFO [2036] - [Application deployment Sample Application#1/AddonsAfter/ConfigCWLAgent] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/addons/logstreaming/hooks/config.
[2017-02-20T04:10:26.427Z] INFO [2036] - [Application deployment Sample Application#1/AddonsAfter] : Completed activity.
[2017-02-20T04:10:26.427Z] INFO [2036] - [Application deployment Sample Application#1] : Completed activity. Result:
Application deployment - Command CMD-Startup succeeded
[2017-02-20T04:32:43.707Z] INFO [3166] - [CMD-TailLogs] : Starting activity...
[2017-02-20T04:32:43.708Z] INFO [3166] - [CMD-TailLogs/AddonsBefore] : Starting activity...
[2017-02-20T04:32:43.708Z] INFO [3166] - [CMD-TailLogs/AddonsBefore] : Completed activity.
[2017-02-20T04:32:43.708Z] INFO [3166] - [CMD-TailLogs/TailLogs] : Starting activity...
[2017-02-20T04:32:43.708Z] INFO [3166] - [CMD-TailLogs/TailLogs/TailLogs] : Starting activity...
-------------------------------------
/var/log/nginx/access.log
-------------------------------------
-------------------------------------
/var/log/eb-commandprocessor.log
-------------------------------------
[2017-02-20T04:09:21.232Z] INFO [1745] : Found enabled addons: ["logpublish", "logstreaming"].
[2017-02-20T04:09:21.234Z] INFO [1745] : Updating Command definition of addon logpublish.
[2017-02-20T04:09:21.234Z] INFO [1745] : Updating Command definition of addon logstreaming.
[2017-02-20T04:09:21.234Z] DEBUG [1745] : Retrieving metadata for key: AWS::CloudFormation::Init||Infra-WriteApplication2||files..
[2017-02-20T04:09:21.237Z] DEBUG [1745] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||ManifestFileS3Key..
[2017-02-20T04:09:21.531Z] INFO [1745] : Finding latest manifest from bucket 'elasticbeanstalk-us-west-2-648812771825' with prefix 'resources/environments/e-evnmznnmnr/_runtime/versions/manifest_'.
[2017-02-20T04:09:21.827Z] INFO [1745] : Found manifest with key 'resources/environments/e-evnmznnmnr/_runtime/versions/manifest_1487563665867'.
[2017-02-20T04:09:21.839Z] INFO [1745] : Updated manifest cache: deployment ID 1 and serial 1.
[2017-02-20T04:09:21.840Z] DEBUG [1745] : Loaded definition of Command CMD-PreInit.
[2017-02-20T04:09:21.840Z] INFO [1745] : Executing Initialization
[2017-02-20T04:09:21.841Z] INFO [1745] : Executing command: CMD-PreInit...
[2017-02-20T04:09:21.841Z] INFO [1745] : Executing command CMD-PreInit activities...
[2017-02-20T04:09:21.841Z] DEBUG [1745] : Setting environment variables..
[2017-02-20T04:09:21.841Z] INFO [1745] : Running AddonsBefore for command CMD-PreInit...
[2017-02-20T04:09:21.841Z] DEBUG [1745] : Running stages of Command CMD-PreInit from stage 0 to stage 0...
[2017-02-20T04:09:21.841Z] INFO [1745] : Running stage 0 of command CMD-PreInit...
[2017-02-20T04:09:21.841Z] DEBUG [1745] : Loaded 3 actions for stage 0.
[2017-02-20T04:09:21.841Z] INFO [1745] : Running 1 of 3 actions: InfraWriteConfig...
[2017-02-20T04:09:21.846Z] INFO [1745] : Running 2 of 3 actions: DownloadSourceBundle...
[2017-02-20T04:09:22.518Z] INFO [1745] : Running 3 of 3 actions: PreInitHook...
[2017-02-20T04:09:28.779Z] INFO [1745] : Running AddonsAfter for command CMD-PreInit...
[2017-02-20T04:09:39.489Z] INFO [1745] : Command CMD-PreInit succeeded!
[2017-02-20T04:09:39.490Z] INFO [1745] : Command processor returning results:
{"status":"SUCCESS","api_version":"1.0","results":[{"status":"SUCCESS","msg":"","returncode":0,"events":[]}]}
[2017-02-20T04:09:57.958Z] DEBUG [2036] : Reading config file: /etc/elasticbeanstalk/.aws-eb-stack.properties
[2017-02-20T04:09:57.959Z] DEBUG [2036] : Checking if the command processor should execute...
[2017-02-20T04:09:57.961Z] DEBUG [2036] : Checking whether the command is applicable to instance (i-06a18c392aa73327c)..
[2017-02-20T04:09:57.961Z] INFO [2036] : Command is applicable to this instance (i-06a18c392aa73327c)..
[2017-02-20T04:09:57.962Z] DEBUG [2036] : Checking if the received command stage is valid..
[2017-02-20T04:09:57.962Z] INFO [2036] : No stage_num in command. Valid stage..
[2017-02-20T04:09:57.962Z] INFO [2036] : Received command CMD-Startup: {"execution_data":"{\"leader_election\":\"true\"}","instance_ids":["i-06a18c392aa73327c"],"command_name":"CMD-Startup","api_version":"1.0","resource_name":"AWSEBAutoScalingGroup","request_id":"1e9fb2c8-f722-11e6-a999-2d00f4179ab5","command_timeout":"600"}
[2017-02-20T04:09:57.962Z] INFO [2036] : Command processor should execute command.
[2017-02-20T04:09:57.962Z] DEBUG [2036] : Storing current stage..
[2017-02-20T04:09:57.962Z] DEBUG [2036] : Stage_num does not exist. Not saving null stage. Returning..
[2017-02-20T04:09:57.962Z] DEBUG [2036] : Reading config file: /etc/elasticbeanstalk/.aws-eb-stack.properties
[2017-02-20T04:09:57.962Z] DEBUG [2036] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_ContainerConfigFileContent||commands..
[2017-02-20T04:09:57.963Z] DEBUG [2036] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_API||_Commands..
[2017-02-20T04:09:57.964Z] INFO [2036] : Found enabled addons: ["logpublish", "logstreaming"].
[2017-02-20T04:09:57.966Z] INFO [2036] : Updating Command definition of addon logpublish.
[2017-02-20T04:09:57.966Z] INFO [2036] : Updating Command definition of addon logstreaming.
[2017-02-20T04:09:57.967Z] DEBUG [2036] : Refreshing metadata...
[2017-02-20T04:09:58.389Z] DEBUG [2036] : Refreshed environment metadata.
[2017-02-20T04:09:58.389Z] DEBUG [2036] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_ContainerConfigFileContent||commands..
[2017-02-20T04:09:58.390Z] DEBUG [2036] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_API||_Commands..
[2017-02-20T04:09:58.392Z] INFO [2036] : Found enabled addons: ["logpublish", "logstreaming"].
[2017-02-20T04:09:58.394Z] INFO [2036] : Updating Command definition of addon logpublish.
[2017-02-20T04:09:58.394Z] INFO [2036] : Updating Command definition of addon logstreaming.
[2017-02-20T04:09:58.394Z] DEBUG [2036] : Loaded definition of Command CMD-Startup.
[2017-02-20T04:09:58.395Z] INFO [2036] : Executing Application deployment
[2017-02-20T04:09:58.397Z] INFO [2036] : Executing command: CMD-Startup...
[2017-02-20T04:09:58.397Z] INFO [2036] : Executing command CMD-Startup activities...
[2017-02-20T04:09:58.397Z] DEBUG [2036] : Setting environment variables..
[2017-02-20T04:09:58.397Z] INFO [2036] : Running AddonsBefore for command CMD-Startup...
[2017-02-20T04:09:58.397Z] DEBUG [2036] : Running stages of Command CMD-Startup from stage 0 to stage 1...
[2017-02-20T04:09:58.397Z] INFO [2036] : Running stage 0 of command CMD-Startup...
[2017-02-20T04:09:58.397Z] INFO [2036] : Running leader election...
[2017-02-20T04:09:58.818Z] INFO [2036] : Instance is Leader.
[2017-02-20T04:09:58.818Z] DEBUG [2036] : Loaded 7 actions for stage 0.
[2017-02-20T04:09:58.818Z] INFO [2036] : Running 1 of 7 actions: HealthdLogRotation...
[2017-02-20T04:09:58.830Z] INFO [2036] : Running 2 of 7 actions: HealthdHTTPDLogging...
[2017-02-20T04:09:58.831Z] INFO [2036] : Running 3 of 7 actions: HealthdNginxLogging...
[2017-02-20T04:09:58.832Z] INFO [2036] : Running 4 of 7 actions: EbExtensionPreBuild...
[2017-02-20T04:09:59.345Z] INFO [2036] : Running 5 of 7 actions: AppDeployPreHook...
[2017-02-20T04:10:15.101Z] INFO [2036] : Running 6 of 7 actions: EbExtensionPostBuild...
[2017-02-20T04:10:15.598Z] INFO [2036] : Running 7 of 7 actions: InfraCleanEbExtension...
[2017-02-20T04:10:15.602Z] INFO [2036] : Running stage 1 of command CMD-Startup...
[2017-02-20T04:10:15.602Z] DEBUG [2036] : Loaded 3 actions for stage 1.
[2017-02-20T04:10:15.602Z] INFO [2036] : Running 1 of 3 actions: AppDeployEnactHook...
[2017-02-20T04:10:25.988Z] INFO [2036] : Running 2 of 3 actions: AppDeployPostHook...
[2017-02-20T04:10:25.989Z] INFO [2036] : Running 3 of 3 actions: PostInitHook...
[2017-02-20T04:10:25.989Z] INFO [2036] : Running AddonsAfter for command CMD-Startup...
[2017-02-20T04:10:26.427Z] INFO [2036] : Command CMD-Startup succeeded!
[2017-02-20T04:10:26.428Z] INFO [2036] : Command processor returning results:
{"status":"SUCCESS","api_version":"1.0","results":[{"status":"SUCCESS","msg":"","returncode":0,"events":[]}]}
[2017-02-20T04:32:43.697Z] DEBUG [3166] : Reading config file: /etc/elasticbeanstalk/.aws-eb-stack.properties
[2017-02-20T04:32:43.698Z] DEBUG [3166] : Checking if the command processor should execute...
[2017-02-20T04:32:43.701Z] DEBUG [3166] : Checking whether the command is applicable to instance (i-06a18c392aa73327c)..
[2017-02-20T04:32:43.701Z] INFO [3166] : Command is applicable to this instance (i-06a18c392aa73327c)..
[2017-02-20T04:32:43.701Z] DEBUG [3166] : Checking if the received command stage is valid..
[2017-02-20T04:32:43.702Z] INFO [3166] : No stage_num in command. Valid stage..
[2017-02-20T04:32:43.702Z] INFO [3166] : Received command CMD-TailLogs: {"execution_data":"{\"aws_access_key_id\":\"ASIAIYTXAC25BPPMVRGA\",\"signature\":\"jc+aHFw1y\\\/O+07rVpCPEOKb3iTs=\",\"security_token\":\"FQoDYXdzEIb\\\/\\\/\\\/\\\/\\\/\\\/\\\/\\\/\\\/\\\/wEaDF\\\/k05drb1Y6NjfahyLIA5LxJEqXbliFnvfjnLiC9aIBKhPQkJCScp3L2\\\/TE\\\/nSgOw1VTAIgBG0rMbJ5F2+RPz\\\/cFwEyKd\\\/KuCAxbdcyGUGX+foJiahX2ZxFeaJQaNwBP63+YAzhOELoy\\\/CkPHSwQf8JbFdeytNUyc73Ce6fvpXavwcgbbkL+IHyg26ldGyBG42Evb9AHBMrSmPpuK2a\\\/KuHK\\\/q5ccZQC+qm67kBTriQi2VLALGuvrGZy\\\/tydSU2VsqGflvAoPZ3j\\\/+PVXVnmDGuZHYgvr+5ZbAbo1wlvCbgf+IxmR6VrURU40XVUUewgAEWVCwg04mKMlKj343eceVfI9dtEHYyjXKjKv\\\/xT3RUl0ksiYRtPV8RuDgr2BDPEMsImFXkGSCUMIyBWTwjYXARNcFfQZLZuMvEU2yC4P6nkHbWKw23DOPtxng0rES6Yzsm5G5Fvgb4+HUEl+Mu7YUJWt\\\/IQCDejXN9wUTIlo1JdGplZGjfXMCrhJQnQBXD\\\/qKksyufnY\\\/jyDC4GNCkHQEGmgFWntBeZVYhZftCrQDBy0F8PA7V\\\/D9iJvm9eJ1N29ZcdVM62m6NT4k+Hv5iFlDaSoeCSs9bGbay6I1ObbjKKSxvdtkHriigp6nFBQ==\",\"policy\":\"eyJleHBpcmF0aW9uIjoiMjAxNy0wMi0yMFQwNTowMjo0Mi4yNloiLCJjb25kaXRpb25zIjpbWyJzdGFydHMtd2l0aCIsIiR4LWFtei1tZXRhLXRpbWVfc3RhbXAiLCIiXSxbInN0YXJ0cy13aXRoIiwiJHgtYW16LW1ldGEtcHVibGlzaF9tZWNoYW5pc20iLCIiXSxbInN0YXJ0cy13aXRoIiwiJGtleSIsInJlc291cmNlc1wvZW52aXJvbm1lbnRzXC9sb2dzXC8iXSxbInN0YXJ0cy13aXRoIiwiJHgtYW16LW1ldGEtYmF0Y2hfaWQiLCIiXSxbInN0YXJ0cy13aXRoIiwiJHgtYW16LW1ldGEtZmlsZV9uYW1lIiwiIl0sWyJzdGFydHMtd2l0aCIsIiR4LWFtei1zZWN1cml0eS10b2tlbiIsIiJdLFsic3RhcnRzLXdpdGgiLCIkQ29udGVudC1UeXBlIiwiIl0sWyJlcSIsIiRidWNrZXQiLCJlbGFzdGljYmVhbnN0YWxrLXVzLXdlc3QtMi02NDg4MTI3NzE4MjUiXSxbImVxIiwiJGFjbCIsInByaXZhdGUiXV19\"}","instance_ids":["i-06a18c392aa73327c"],"data":"9e0cfab0-f725-11e6-8f0e-9f392d16cbe4","command_name":"CMD-TailLogs","api_version":"1.0","resource_name":"AWSEBAutoScalingGroup","request_id":"9e0cfab0-f725-11e6-8f0e-9f392d16cbe4","command_timeout":"600"}
[2017-02-20T04:32:43.702Z] INFO [3166] : Command processor should execute command.
[2017-02-20T04:32:43.702Z] DEBUG [3166] : Storing current stage..
[2017-02-20T04:32:43.702Z] DEBUG [3166] : Stage_num does not exist. Not saving null stage. Returning..
[2017-02-20T04:32:43.702Z] DEBUG [3166] : Reading config file: /etc/elasticbeanstalk/.aws-eb-stack.properties
[2017-02-20T04:32:43.702Z] DEBUG [3166] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_ContainerConfigFileContent||commands..
[2017-02-20T04:32:43.703Z] DEBUG [3166] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_API||_Commands..
[2017-02-20T04:32:43.704Z] INFO [3166] : Found enabled addons: ["logpublish", "logstreaming"].
[2017-02-20T04:32:43.706Z] INFO [3166] : Updating Command definition of addon logpublish.
[2017-02-20T04:32:43.706Z] INFO [3166] : Updating Command definition of addon logstreaming.
[2017-02-20T04:32:43.707Z] DEBUG [3166] : Loaded definition of Command CMD-TailLogs.
[2017-02-20T04:32:43.707Z] INFO [3166] : Executing CMD-TailLogs
[2017-02-20T04:32:43.707Z] INFO [3166] : Executing command: CMD-TailLogs...
[2017-02-20T04:32:43.708Z] INFO [3166] : Executing command CMD-TailLogs activities...
[2017-02-20T04:32:43.708Z] DEBUG [3166] : Setting environment variables..
[2017-02-20T04:32:43.708Z] INFO [3166] : Running AddonsBefore for command CMD-TailLogs...
[2017-02-20T04:32:43.708Z] DEBUG [3166] : Running stages of Command CMD-TailLogs from stage 0 to stage 0...
[2017-02-20T04:32:43.708Z] INFO [3166] : Running stage 0 of command CMD-TailLogs...
[2017-02-20T04:32:43.708Z] DEBUG [3166] : Loaded 1 actions for stage 0.
[2017-02-20T04:32:43.708Z] INFO [3166] : Running 1 of 1 actions: TailLogs...
I am running a standalone Flink cluster, and after a few hours a task manager gets disconnected from the job manager.
I followed the basic standalone-cluster setup steps from flink.apache.org, with 1 master node and 2 workers, all running Mac OS X.
Only 1 of the workers gets disconnected from the JobManager.
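For reference, in Flink 1.1.x the JobManager uses Akka deathwatch heartbeats to decide when a TaskManager is gone, and the relevant timeouts live in flink-conf.yaml. A hedged sketch (keys from the Flink 1.1 configuration reference; the values are illustrative, not recommendations):

```yaml
# flink-conf.yaml -- illustrative values only.
# Longer pauses make the JobManager more tolerant of GC stalls or
# brief network hiccups on a worker before declaring it dead.
akka.ask.timeout: 60 s               # RPC timeout (default 10 s)
akka.watch.heartbeat.interval: 10 s  # deathwatch heartbeat period
akka.watch.heartbeat.pause: 120 s    # tolerated heartbeat pause (default 60 s)
```

Whether raising these helps depends on why the worker stops responding; a long GC pause or the Mac going to sleep would produce the same symptom.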
I am sharing the log files; please help me find a solution.
Below is the Job Manager's log:
2017-01-28 16:58:32,092 WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-01-28 16:58:32,195 INFO org.apache.flink.runtime.jobmanager.JobManager - --------------------------------------------------------------------------------
2017-01-28 16:58:32,195 INFO org.apache.flink.runtime.jobmanager.JobManager - Starting JobManager (Version: 1.1.4, Rev:8fb0fc8, Date:19.12.2016 @ 10:42:50 UTC)
2017-01-28 16:58:32,195 INFO org.apache.flink.runtime.jobmanager.JobManager - Current user: macmini
2017-01-28 16:58:32,195 INFO org.apache.flink.runtime.jobmanager.JobManager - JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.121-b13
2017-01-28 16:58:32,195 INFO org.apache.flink.runtime.jobmanager.JobManager - Maximum heap size: 491 MiBytes
2017-01-28 16:58:32,195 INFO org.apache.flink.runtime.jobmanager.JobManager - JAVA_HOME: (not set)
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - Hadoop version: 2.7.2
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - JVM Options:
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - -Xms512m
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - -Xmx512m
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - -Dlog.file=/server/flink-1.1.4/log/flink-macmini-jobmanager-0-macmini.local.log
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - -Dlog4j.configuration=file:/server/flink-1.1.4/conf/log4j.properties
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - -Dlogback.configurationFile=file:/server/flink-1.1.4/conf/logback.xml
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - Program Arguments:
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - --configDir
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - /server/flink-1.1.4/conf
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - --executionMode
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - cluster
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - Classpath: /server/flink-1.1.4/lib/flink-dist_2.11-1.1.4.jar:/server/flink-1.1.4/lib/flink-python_2.11-1.1.4.jar:/server/flink-1.1.4/lib/log4j-1.2.17.jar:/server/flink-1.1.4/lib/slf4j-log4j12-1.7.7.jar:::
2017-01-28 16:58:32,199 INFO org.apache.flink.runtime.jobmanager.JobManager - --------------------------------------------------------------------------------
2017-01-28 16:58:32,199 INFO org.apache.flink.runtime.jobmanager.JobManager - Registered UNIX signal handlers for [TERM, HUP, INT]
2017-01-28 16:58:32,354 INFO org.apache.flink.runtime.jobmanager.JobManager - Loading configuration from /server/flink-1.1.4/conf
2017-01-28 16:58:32,367 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.address, 192.168.0.15
2017-01-28 16:58:32,367 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.port, 6123
2017-01-28 16:58:32,368 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.heap.mb, 512
2017-01-28 16:58:32,368 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.heap.mb, 1024
2017-01-28 16:58:32,368 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.numberOfTaskSlots, 8
2017-01-28 16:58:32,368 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.memory.preallocate, false
2017-01-28 16:58:32,368 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: parallelism.default, 2
2017-01-28 16:58:32,368 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.web.port, 8081
2017-01-28 16:58:32,372 INFO org.apache.flink.runtime.jobmanager.JobManager - Starting JobManager without high-availability
2017-01-28 16:58:32,380 INFO org.apache.flink.runtime.jobmanager.JobManager - Starting JobManager on 192.168.0.15:6123 with execution mode CLUSTER
2017-01-28 16:58:32,416 INFO org.apache.flink.runtime.jobmanager.JobManager - Security is not enabled. Starting non-authenticated JobManager.
2017-01-28 16:58:32,467 INFO org.apache.flink.runtime.jobmanager.JobManager - Starting JobManager
2017-01-28 16:58:32,468 INFO org.apache.flink.runtime.jobmanager.JobManager - Starting JobManager actor system at 192.168.0.15:6123
2017-01-28 16:58:32,774 INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
2017-01-28 16:58:32,814 INFO Remoting - Starting remoting
2017-01-28 16:58:32,999 INFO Remoting - Remoting started; listening on addresses :[akka.tcp://flink@192.168.0.15:6123]
2017-01-28 16:58:33,006 INFO org.apache.flink.runtime.jobmanager.JobManager - Starting JobManager web frontend
2017-01-28 16:58:33,020 INFO org.apache.flink.runtime.webmonitor.WebMonitorUtils - Determined location of JobManager log file: /server/flink-1.1.4/log/flink-macmini-jobmanager-0-macmini.local.log
2017-01-28 16:58:33,020 INFO org.apache.flink.runtime.webmonitor.WebMonitorUtils - Determined location of JobManager stdout file: /server/flink-1.1.4/log/flink-macmini-jobmanager-0-macmini.local.out
2017-01-28 16:58:33,088 INFO org.apache.flink.runtime.webmonitor.WebRuntimeMonitor - Using directory /var/folders/8m/qw1dq5sx68d38w7xg2pqhnz80000gn/T/flink-web-e2793d07-860b-4854-8c16-5805320cbd62 for the web interface files
2017-01-28 16:58:33,089 INFO org.apache.flink.runtime.webmonitor.WebRuntimeMonitor - Using directory /var/folders/8m/qw1dq5sx68d38w7xg2pqhnz80000gn/T/flink-web-upload-b98128d7-8572-47e8-bdfb-b30d287d0868 for web frontend JAR file uploads
2017-01-28 16:58:33,371 INFO org.apache.flink.runtime.webmonitor.WebRuntimeMonitor - Web frontend listening at 0:0:0:0:0:0:0:0:8081
2017-01-28 16:58:33,371 INFO org.apache.flink.runtime.jobmanager.JobManager - Starting JobManager actor
2017-01-28 16:58:33,379 INFO org.apache.flink.runtime.blob.BlobServer - Created BLOB server storage directory /var/folders/8m/qw1dq5sx68d38w7xg2pqhnz80000gn/T/blobStore-1b966770-440f-45e0-97cc-018357bdefb1
2017-01-28 16:58:33,384 INFO org.apache.flink.runtime.blob.BlobServer - Started BLOB server at 0.0.0.0:52505 - max concurrent requests: 50 - max backlog: 1000
2017-01-28 16:58:33,399 INFO org.apache.flink.runtime.checkpoint.savepoint.SavepointStoreFactory - Using job manager savepoint state backend.
2017-01-28 16:58:33,406 INFO org.apache.flink.runtime.metrics.MetricRegistry - No metrics reporter configured, no metrics will be exposed/reported.
2017-01-28 16:58:33,417 INFO org.apache.flink.runtime.jobmanager.MemoryArchivist - Started memory archivist akka://flink/user/archive
2017-01-28 16:58:33,424 INFO org.apache.flink.runtime.jobmanager.JobManager - Starting JobManager at akka.tcp://flink@192.168.0.15:6123/user/jobmanager.
2017-01-28 16:58:33,434 INFO org.apache.flink.runtime.webmonitor.WebRuntimeMonitor - Starting with JobManager akka.tcp://flink@192.168.0.15:6123/user/jobmanager on port 8081
2017-01-28 16:58:33,435 INFO org.apache.flink.runtime.webmonitor.JobManagerRetriever - New leader reachable under akka.tcp://flink@192.168.0.15:6123/user/jobmanager:null.
2017-01-28 16:58:33,448 INFO org.apache.flink.runtime.clusterframework.standalone.StandaloneResourceManager - Trying to associate with JobManager leader akka.tcp://flink@192.168.0.15:6123/user/jobmanager
2017-01-28 16:58:33,532 INFO org.apache.flink.runtime.jobmanager.JobManager - JobManager akka.tcp://flink@192.168.0.15:6123/user/jobmanager was granted leadership with leader session ID None.
2017-01-28 16:58:33,549 INFO org.apache.flink.runtime.clusterframework.standalone.StandaloneResourceManager - Resource Manager associating with leading JobManager Actor[akka://flink/user/jobmanager#-320621701] - leader session null
2017-01-28 16:58:34,996 INFO org.apache.flink.runtime.clusterframework.standalone.StandaloneResourceManager - TaskManager ResourceID{resourceId='6817357d50e00e3879507d8a0ecd9f42'} has started.
2017-01-28 16:58:34,998 INFO org.apache.flink.runtime.instance.InstanceManager - Registered TaskManager at 192.168.0.17 (akka.tcp://flink@192.168.0.17:49684/user/taskmanager) as 0495d83fc1767f41770ba9feb3c7b8a6. Current number of registered hosts is 1. Current number of alive task slots is 8.
2017-01-28 16:58:37,464 INFO org.apache.flink.runtime.clusterframework.standalone.StandaloneResourceManager - TaskManager ResourceID{resourceId='31c1ebbfdbd41ae798718575aff2fb8c'} has started.
2017-01-28 16:58:37,465 INFO org.apache.flink.runtime.instance.InstanceManager - Registered TaskManager at 192.168.0.16 (akka.tcp://flink@192.168.0.16:50620/user/taskmanager) as 794bb699ab58aa3eab324924a8b8409e. Current number of registered hosts is 2. Current number of alive task slots is 16.
2017-01-28 17:05:04,426 INFO org.apache.flink.runtime.blob.BlobCache - Created BLOB cache storage directory /var/folders/8m/qw1dq5sx68d38w7xg2pqhnz80000gn/T/blobStore-8d18ed4c-6a33-4d62-a63b-0aa2a82569eb
2017-01-28 17:05:04,464 INFO org.apache.flink.runtime.blob.BlobCache - Downloading 8b816b83aa254a2853a32e51b37ee1b3c14ef82a from localhost/127.0.0.1:52505
2017-01-28 17:07:45,153 INFO org.apache.flink.runtime.blob.BlobCache - Downloading 8808acee6a1b7c225a9b471a1429a75ec75d4c16 from localhost/127.0.0.1:52505
2017-01-29 09:54:01,082 WARN akka.remote.RemoteWatcher - Detected unreachable: [akka.tcp://flink@192.168.0.17:49684]
2017-01-29 09:54:01,089 INFO org.apache.flink.runtime.jobmanager.JobManager - Task manager akka.tcp://flink@192.168.0.17:49684/user/taskmanager terminated.
2017-01-29 09:54:01,090 INFO org.apache.flink.runtime.instance.InstanceManager - Unregistered task manager akka.tcp://flink@192.168.0.17:49684/user/taskmanager. Number of registered task managers 1. Number of available slots 8.
TaskManager's log:
2017-01-28 16:58:33,162 WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-01-28 16:58:33,268 INFO org.apache.flink.runtime.taskmanager.TaskManager - --------------------------------------------------------------------------------
2017-01-28 16:58:33,268 INFO org.apache.flink.runtime.taskmanager.TaskManager - Starting TaskManager (Version: 1.1.4, Rev:8fb0fc8, Date:19.12.2016 @ 10:42:50 UTC)
2017-01-28 16:58:33,268 INFO org.apache.flink.runtime.taskmanager.TaskManager - Current user: macmini3
2017-01-28 16:58:33,268 INFO org.apache.flink.runtime.taskmanager.TaskManager - JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.121-b13
2017-01-28 16:58:33,268 INFO org.apache.flink.runtime.taskmanager.TaskManager - Maximum heap size: 1024 MiBytes
2017-01-28 16:58:33,268 INFO org.apache.flink.runtime.taskmanager.TaskManager - JAVA_HOME: (not set)
2017-01-28 16:58:33,271 INFO org.apache.flink.runtime.taskmanager.TaskManager - Hadoop version: 2.7.2
2017-01-28 16:58:33,271 INFO org.apache.flink.runtime.taskmanager.TaskManager - JVM Options:
2017-01-28 16:58:33,271 INFO org.apache.flink.runtime.taskmanager.TaskManager - -XX:+UseG1GC
2017-01-28 16:58:33,271 INFO org.apache.flink.runtime.taskmanager.TaskManager - -Xms1024M
2017-01-28 16:58:33,271 INFO org.apache.flink.runtime.taskmanager.TaskManager - -Xmx1024M
2017-01-28 16:58:33,272 INFO org.apache.flink.runtime.taskmanager.TaskManager - -XX:MaxDirectMemorySize=8388607T
2017-01-28 16:58:33,272 INFO org.apache.flink.runtime.taskmanager.TaskManager - -Dlog.file=/server/flink-1.1.4/log/flink-macmini3-taskmanager-0-macmini3.local.log
2017-01-28 16:58:33,272 INFO org.apache.flink.runtime.taskmanager.TaskManager - -Dlog4j.configuration=file:/server/flink-1.1.4/conf/log4j.properties
2017-01-28 16:58:33,272 INFO org.apache.flink.runtime.taskmanager.TaskManager - -Dlogback.configurationFile=file:/server/flink-1.1.4/conf/logback.xml
2017-01-28 16:58:33,272 INFO org.apache.flink.runtime.taskmanager.TaskManager - Program Arguments:
2017-01-28 16:58:33,272 INFO org.apache.flink.runtime.taskmanager.TaskManager - --configDir
2017-01-28 16:58:33,272 INFO org.apache.flink.runtime.taskmanager.TaskManager - /server/flink-1.1.4/conf
2017-01-28 16:58:33,272 INFO org.apache.flink.runtime.taskmanager.TaskManager - Classpath: /server/flink-1.1.4/lib/flink-dist_2.11-1.1.4.jar:/server/flink-1.1.4/lib/flink-python_2.11-1.1.4.jar:/server/flink-1.1.4/lib/log4j-1.2.17.jar:/server/flink-1.1.4/lib/slf4j-log4j12-1.7.7.jar:::
2017-01-28 16:58:33,272 INFO org.apache.flink.runtime.taskmanager.TaskManager - --------------------------------------------------------------------------------
2017-01-28 16:58:33,273 INFO org.apache.flink.runtime.taskmanager.TaskManager - Registered UNIX signal handlers for [TERM, HUP, INT]
2017-01-28 16:58:33,277 INFO org.apache.flink.runtime.taskmanager.TaskManager - Maximum number of open file descriptors is 10240
2017-01-28 16:58:33,295 INFO org.apache.flink.runtime.taskmanager.TaskManager - Loading configuration from /server/flink-1.1.4/conf
2017-01-28 16:58:33,305 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.address, 192.168.0.15
2017-01-28 16:58:33,305 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.port, 6123
2017-01-28 16:58:33,305 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.heap.mb, 512
2017-01-28 16:58:33,306 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.heap.mb, 1024
2017-01-28 16:58:33,306 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.numberOfTaskSlots, 8
2017-01-28 16:58:33,306 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.memory.preallocate, false
2017-01-28 16:58:33,306 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: parallelism.default, 2
2017-01-28 16:58:33,306 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.web.port, 8081
2017-01-28 16:58:33,342 INFO org.apache.flink.runtime.taskmanager.TaskManager - Security is not enabled. Starting non-authenticated TaskManager.
2017-01-28 16:58:33,368 INFO org.apache.flink.runtime.util.LeaderRetrievalUtils - Trying to select the network interface and address to use by connecting to the leading JobManager.
2017-01-28 16:58:33,369 INFO org.apache.flink.runtime.util.LeaderRetrievalUtils - TaskManager will try to connect for 10000 milliseconds before falling back to heuristics
2017-01-28 16:58:33,371 INFO org.apache.flink.runtime.net.ConnectionUtils - Retrieved new target address /192.168.0.15:6123.
2017-01-28 16:58:33,405 INFO org.apache.flink.runtime.taskmanager.TaskManager - TaskManager will use hostname/address '192.168.0.17' (192.168.0.17) for communication.
2017-01-28 16:58:33,406 INFO org.apache.flink.runtime.taskmanager.TaskManager - Starting TaskManager
2017-01-28 16:58:33,406 INFO org.apache.flink.runtime.taskmanager.TaskManager - Starting TaskManager actor system at 192.168.0.17:0
2017-01-28 16:58:33,745 INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
2017-01-28 16:58:33,813 INFO Remoting - Starting remoting
2017-01-28 16:58:33,963 INFO Remoting - Remoting started; listening on addresses :[akka.tcp://flink@192.168.0.17:49684]
2017-01-28 16:58:33,969 INFO org.apache.flink.runtime.taskmanager.TaskManager - Starting TaskManager actor
2017-01-28 16:58:33,971 WARN org.apache.flink.runtime.instance.InstanceConnectionInfo - No hostname could be resolved for the IP address 192.168.0.17, using IP address as host name. Local input split assignment (such as for HDFS files) may be impacted.
2017-01-28 16:58:33,976 INFO org.apache.flink.runtime.io.network.netty.NettyConfig - NettyConfig [server address: /192.168.0.17, server port: 49685, memory segment size (bytes): 32768, transport type: NIO, number of server threads: 8 (manual), number of client threads: 8 (manual), server connect backlog: 0 (use Netty's default), client connect timeout (sec): 120, send/receive buffer size (bytes): 0 (use Netty's default)]
2017-01-28 16:58:33,977 INFO org.apache.flink.runtime.taskmanager.TaskManager - Messages between TaskManager and JobManager have a max timeout of 10000 milliseconds
2017-01-28 16:58:33,980 INFO org.apache.flink.runtime.taskmanager.TaskManager - Temporary file directory '/var/folders/9n/3xqj9xzx67sd5dx1z59506840000gn/T': total 118 GB, usable 105 GB (88.98% usable)
2017-01-28 16:58:34,041 INFO org.apache.flink.runtime.io.network.buffer.NetworkBufferPool - Allocated 64 MB for network buffer pool (number of memory segments: 2048, bytes per segment: 32768).
2017-01-28 16:58:34,080 INFO org.apache.flink.runtime.taskmanager.TaskManager - Limiting managed memory to 0.7 of the currently free heap space (669 MB), memory will be allocated lazily.
2017-01-28 16:58:34,090 INFO org.apache.flink.runtime.io.disk.iomanager.IOManager - I/O manager uses directory /var/folders/9n/3xqj9xzx67sd5dx1z59506840000gn/T/flink-io-8fe2058a-8e2b-4fa6-8c5c-7e1a5edd8993 for spill files.
2017-01-28 16:58:34,103 INFO org.apache.flink.runtime.filecache.FileCache - User file cache uses directory /var/folders/9n/3xqj9xzx67sd5dx1z59506840000gn/T/flink-dist-cache-a663c4dc-6c17-43ba-af73-6d798b0f5196
2017-01-28 16:58:34,294 INFO org.apache.flink.runtime.taskmanager.TaskManager - Starting TaskManager actor at akka://flink/user/taskmanager#-1292864950.
2017-01-28 16:58:34,295 INFO org.apache.flink.runtime.taskmanager.TaskManager - TaskManager data connection information: 192.168.0.17 (dataPort=49685)
2017-01-28 16:58:34,296 INFO org.apache.flink.runtime.taskmanager.TaskManager - TaskManager has 8 task slot(s).
2017-01-28 16:58:34,298 INFO org.apache.flink.runtime.taskmanager.TaskManager - Memory usage stats: [HEAP: 79/1024/1024 MB, NON HEAP: 32/33/-1 MB (used/committed/max)]
2017-01-28 16:58:34,304 INFO org.apache.flink.runtime.taskmanager.TaskManager - Trying to register at JobManager akka.tcp://flink@192.168.0.15:6123/user/jobmanager (attempt 1, timeout: 500 milliseconds)
2017-01-28 16:58:34,825 INFO org.apache.flink.runtime.taskmanager.TaskManager - Trying to register at JobManager akka.tcp://flink@192.168.0.15:6123/user/jobmanager (attempt 2, timeout: 1000 milliseconds)
2017-01-28 16:58:35,077 INFO org.apache.flink.runtime.taskmanager.TaskManager - Successful registration at JobManager (akka.tcp://flink@192.168.0.15:6123/user/jobmanager), starting network stack and library cache.
2017-01-28 16:58:35,248 INFO org.apache.flink.runtime.io.network.netty.NettyClient - Successful initialization (took 40 ms).
2017-01-28 16:58:35,285 INFO org.apache.flink.runtime.io.network.netty.NettyServer - Successful initialization (took 37 ms). Listening on SocketAddress /192.168.0.17:49685.
2017-01-28 16:58:35,286 INFO org.apache.flink.runtime.taskmanager.TaskManager - Determined BLOB server address to be /192.168.0.15:52505. Starting BLOB cache.
2017-01-28 16:58:35,289 INFO org.apache.flink.runtime.blob.BlobCache - Created BLOB cache storage directory /var/folders/9n/3xqj9xzx67sd5dx1z59506840000gn/T/blobStore-941b8481-5f25-4baf-9739-1ff7f28f9262
2017-01-28 16:58:35,298 INFO org.apache.flink.runtime.metrics.MetricRegistry - No metrics reporter configured, no metrics will be exposed/reported.
2017-01-29 09:54:01,399 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink@192.168.0.15:6123] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
2017-01-29 09:54:04,625 WARN akka.remote.RemoteWatcher - Detected unreachable: [akka.tcp://flink@192.168.0.15:6123]
2017-01-29 09:54:04,631 INFO org.apache.flink.runtime.taskmanager.TaskManager - TaskManager akka://flink/user/taskmanager disconnects from JobManager akka.tcp://flink@192.168.0.15:6123/user/jobmanager: JobManager is no longer reachable
2017-01-29 09:54:04,631 INFO org.apache.flink.runtime.taskmanager.TaskManager - Disassociating from JobManager
2017-01-29 09:54:04,633 INFO org.apache.flink.runtime.blob.BlobCache - Shutting down BlobCache
2017-01-29 09:54:04,642 INFO org.apache.flink.runtime.io.network.netty.NettyClient - Successful shutdown (took 2 ms).
2017-01-29 09:54:04,644 INFO org.apache.flink.runtime.io.network.netty.NettyServer - Successful shutdown (took 2 ms).
2017-01-29 09:54:04,651 INFO org.apache.flink.runtime.taskmanager.TaskManager - Trying to register at JobManager akka.tcp://flink@192.168.0.15:6123/user/jobmanager (attempt 1, timeout: 500 milliseconds)
2017-01-29 09:54:05,163 INFO org.apache.flink.runtime.taskmanager.TaskManager - Trying to register at JobManager akka.tcp://flink@192.168.0.15:6123/user/jobmanager (attempt 2, timeout: 1000 milliseconds)
2017-01-29 09:54:06,183 INFO org.apache.flink.runtime.taskmanager.TaskManager - Trying to register at JobManager akka.tcp://flink@192.168.0.15:6123/user/jobmanager (attempt 3, timeout: 2000 milliseconds)
2017-01-29 09:54:06,316 INFO Remoting - Quarantined address [akka.tcp://flink@192.168.0.15:6123] is still unreachable or has not been restarted. Keeping it quarantined.
2017-01-29 09:54:08,202 INFO org.apache.flink.runtime.taskmanager.TaskManager - Trying to register at JobManager akka.tcp://flink@192.168.0.15:6123/user/jobmanager (attempt 4, timeout: 4000 milliseconds)
2017-01-29 09:54:10,295 INFO Remoting - Quarantined address [akka.tcp://flink@192.168.0.15:6123] is still unreachable or has not been restarted. Keeping it quarantined.
2017-01-29 09:54:12,222 INFO org.apache.flink.runtime.taskmanager.TaskManager - Trying to register at JobManager akka.tcp://flink@192.168.0.15:6123/user/jobmanager (attempt 5, timeout: 8000 milliseconds)
2017-01-29 09:54:14,461 INFO Remoting - Quarantined address [akka.tcp://flink@192.168.0.15:6123] is still unreachable or has not been restarted. Keeping it quarantined.
2017-01-29 09:54:20,242 INFO org.apache.flink.runtime.taskmanager.TaskManager - Trying to register at JobManager akka.tcp://flink@192.168.0.15:6123/user/jobmanager (attempt 6, timeout: 16000 milliseconds)
2017-01-29 09:54:20,438 INFO Remoting - Quarantined address [akka.tcp://flink@192.168.0.15:6123] is still unreachable or has not been restarted. Keeping it quarantined.
2017-01-29 09:54:36,263 INFO org.apache.flink.runtime.taskmanager.TaskManager - Trying to register at JobManager akka.tcp://flink@192.168.0.15:6123/user/jobmanager (attempt 7, timeout: 30 seconds)
I am a newbie to Spark and am trying to run it on Amazon EMR. Here's my code (copied from an example with a few modifications):
package test;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.PairFunction;

import scala.Tuple2;

import com.google.common.base.Optional;

public class SparkTest {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(new SparkConf().setAppName("Spark Count"));
        sc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", "xxxxxxxxxxxxxxx");
        sc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", "yyyyyyyyyyyyyyyyyyyyyyy");

        // Customers: de-duplicated (customer_id, name) pairs
        JavaRDD<String> customerInputFile = sc.textFile("s3n://aws-logs-494322476419-ap-southeast-1/test/customers_data.txt");
        JavaPairRDD<String, String> customerPairs = customerInputFile.mapToPair(new PairFunction<String, String, String>() {
            public Tuple2<String, String> call(String s) {
                String[] customerSplit = s.split(",");
                return new Tuple2<String, String>(customerSplit[0], customerSplit[1]);
            }
        }).distinct();

        // Transactions: (customer_id, "amount,txn_id") pairs
        JavaRDD<String> transactionInputFile = sc.textFile("s3n://aws-logs-494322476419-ap-southeast-1/test/transactions_data.txt");
        JavaPairRDD<String, String> transactionPairs = transactionInputFile.mapToPair(new PairFunction<String, String, String>() {
            public Tuple2<String, String> call(String s) {
                String[] transactionSplit = s.split(",");
                return new Tuple2<String, String>(transactionSplit[2], transactionSplit[3] + "," + transactionSplit[1]);
            }
        });

        // Default join operation (inner join)
        JavaPairRDD<String, Tuple2<String, String>> joinsOutput = customerPairs.join(transactionPairs);
        System.out.println("Joins function Output: " + joinsOutput.collect());

        // Left outer join operation
        JavaPairRDD<String, Iterable<Tuple2<String, Optional<String>>>> leftJoinOutput = customerPairs.leftOuterJoin(transactionPairs).groupByKey().sortByKey();
        System.out.println("LeftOuterJoins function Output: " + leftJoinOutput.collect());

        // Right outer join operation
        JavaPairRDD<String, Iterable<Tuple2<Optional<String>, String>>> rightJoinOutput = customerPairs.rightOuterJoin(transactionPairs).groupByKey().sortByKey();
        System.out.println("RightOuterJoins function Output: " + rightJoinOutput.collect());

        sc.close();
    }
}
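For reference, the three join steps above reduce to the following plain-Python sketch (no Spark required). The sample rows are hypothetical stand-ins for the S3 input files, not the actual data:

```python
# Plain-Python sketch of the inner / left outer / right outer joins the
# Spark job computes. The sample rows below are hypothetical stand-ins
# for customers_data.txt and transactions_data.txt.
cust = {"c1": "Alice", "c2": "Bob"}          # customer_id -> name
trans = {"c1": ["100,t1"], "c3": ["50,t2"]}  # customer_id -> ["amount,txn_id", ...]

# Inner join: only keys present on both sides survive
inner = {k: [(cust[k], v) for v in trans[k]] for k in cust.keys() & trans.keys()}

# Left outer join: every customer key; None where no transaction matches
left = {k: [(cust[k], v) for v in trans.get(k, [None])] for k in cust}

# Right outer join: every transaction key; None where no customer matches
right = {k: [(cust.get(k), v) for v in trans[k]] for k in trans}

print(inner)        # {'c1': [('Alice', '100,t1')]}
print(left["c2"])   # [('Bob', None)]
print(right["c3"])  # [(None, '50,t2')]
```

In the Spark version, the missing side is wrapped in `com.google.common.base.Optional` (`Optional.absent()`) rather than `None`, but the key semantics are the same.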
But after building the jar, setting up a cluster, and running it, the job always reports the following error and fails:
2015-07-24 12:22:41,550 INFO [main] client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at ip-10-0-0-61.ap-southeast-1.compute.internal/10.0.0.61:8032
2015-07-24 12:22:42,619 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Requesting a new application from cluster with 2 NodeManagers
2015-07-24 12:22:42,694 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Verifying our application has not requested more than the maximum memory capability of the cluster (2048 MB per container)
2015-07-24 12:22:42,698 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Will allocate AM container, with 896 MB memory including 384 MB overhead
2015-07-24 12:22:42,700 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Setting up container launch context for our AM
2015-07-24 12:22:42,707 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Preparing resources for our AM container
2015-07-24 12:22:45,445 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Uploading resource file:/usr/lib/spark/lib/spark-assembly-1.4.1-hadoop2.6.0-amzn-0.jar -> hdfs://10.0.0.61:8020/user/hadoop/.sparkStaging/application_1437740323036_0001/spark-assembly-1.4.1-hadoop2.6.0-amzn-0.jar
2015-07-24 12:22:47,701 INFO [main] metrics.MetricsSaver (MetricsSaver.java:showConfigRecord(643)) - MetricsConfigRecord disabledInCluster: false instanceEngineCycleSec: 60 clusterEngineCycleSec: 60 disableClusterEngine: false maxMemoryMb: 3072 maxInstanceCount: 500 lastModified: 1437740335527
2015-07-24 12:22:47,713 INFO [main] metrics.MetricsSaver (MetricsSaver.java:<init>(284)) - Created MetricsSaver j-1NM41B4W6K3IP:i-525f449f:SparkSubmit:06588 period:60 /mnt/var/em/raw/i-525f449f_20150724_SparkSubmit_06588_raw.bin
2015-07-24 12:22:49,047 INFO [DataStreamer for file /user/hadoop/.sparkStaging/application_1437740323036_0001/spark-assembly-1.4.1-hadoop2.6.0-amzn-0.jar block BP-1554902524-10.0.0.61-1437740270491:blk_1073741830_1015] metrics.MetricsSaver (MetricsSaver.java:compactRawValues(464)) - 1 aggregated HDFSWriteDelay 183 raw values into 1 aggregated values, total 1
2015-07-24 12:23:03,845 INFO [main] fs.EmrFileSystem (EmrFileSystem.java:initialize(107)) - Consistency disabled, using com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem as filesystem implementation
2015-07-24 12:23:06,316 INFO [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[200], ServiceName=[Amazon S3], AWSRequestID=[E987B96CAE12A2B2], ServiceEndpoint=[https://aws-logs-494322476419-ap-southeast-1.s3.amazonaws.com], HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=0, ClientExecuteTime=[2266.609], HttpRequestTime=[1805.926], HttpClientReceiveResponseTime=[17.096], RequestSigningTime=[187.361], ResponseProcessingTime=[0.66], HttpClientSendRequestTime=[1.065],
2015-07-24 12:23:06,329 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Uploading resource s3://aws-logs-494322476419-ap-southeast-1/test/spark-test.jar -> hdfs://10.0.0.61:8020/user/hadoop/.sparkStaging/application_1437740323036_0001/spark-test.jar
2015-07-24 12:23:06,568 INFO [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[200], ServiceName=[Amazon S3], AWSRequestID=[C40A7775223B6772], ServiceEndpoint=[https://aws-logs-494322476419-ap-southeast-1.s3.amazonaws.com], HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=1, ClientExecuteTime=[237.557], HttpRequestTime=[20.943], HttpClientReceiveResponseTime=[13.247], RequestSigningTime=[29.321], ResponseProcessingTime=[186.674], HttpClientSendRequestTime=[1.998],
2015-07-24 12:23:07,265 INFO [main] s3n.S3NativeFileSystem (S3NativeFileSystem.java:open(1159)) - Opening 's3://aws-logs-494322476419-ap-southeast-1/test/spark-test.jar' for reading
2015-07-24 12:23:07,312 INFO [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[206], ServiceName=[Amazon S3], AWSRequestID=[FB5C0051C241A9AC], ServiceEndpoint=[https://aws-logs-494322476419-ap-southeast-1.s3.amazonaws.com], HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=1, ClientExecuteTime=[42.753], HttpRequestTime=[31.778], HttpClientReceiveResponseTime=[20.426], RequestSigningTime=[1.266], ResponseProcessingTime=[7.357], HttpClientSendRequestTime=[1.065],
2015-07-24 12:23:07,330 INFO [main] metrics.MetricsSaver (MetricsSaver.java:<init>(915)) - Thread 1 created MetricsLockFreeSaver 1
2015-07-24 12:23:07,875 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Uploading resource file:/tmp/spark-91e17f5e-45f2-466a-b4cf-585174b9fa98/__hadoop_conf__3852777564911495008.zip -> hdfs://10.0.0.61:8020/user/hadoop/.sparkStaging/application_1437740323036_0001/__hadoop_conf__3852777564911495008.zip
2015-07-24 12:23:07,965 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Uploading resource s3://aws-logs-494322476419-ap-southeast-1/test/spark-assembly-1.4.1-hadoop2.6.0.jar -> hdfs://10.0.0.61:8020/user/hadoop/.sparkStaging/application_1437740323036_0001/spark-assembly-1.4.1-hadoop2.6.0.jar
2015-07-24 12:23:07,993 INFO [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[200], ServiceName=[Amazon S3], AWSRequestID=[25260792F013C91A], ServiceEndpoint=[https://aws-logs-494322476419-ap-southeast-1.s3.amazonaws.com], HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=1, ClientExecuteTime=[23.713], HttpRequestTime=[15.297], HttpClientReceiveResponseTime=[12.147], RequestSigningTime=[6.568], ResponseProcessingTime=[0.312], HttpClientSendRequestTime=[1.033],
2015-07-24 12:23:08,003 INFO [main] s3n.S3NativeFileSystem (S3NativeFileSystem.java:open(1159)) - Opening 's3://aws-logs-494322476419-ap-southeast-1/test/spark-assembly-1.4.1-hadoop2.6.0.jar' for reading
2015-07-24 12:23:08,064 INFO [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[206], ServiceName=[Amazon S3], AWSRequestID=[DDF86EA9B896052A], ServiceEndpoint=[https://aws-logs-494322476419-ap-southeast-1.s3.amazonaws.com], HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=1, ClientExecuteTime=[60.109], HttpRequestTime=[55.175], HttpClientReceiveResponseTime=[43.324], RequestSigningTime=[1.067], ResponseProcessingTime=[3.409], HttpClientSendRequestTime=[1.16],
2015-07-24 12:23:09,002 INFO [main] metrics.MetricsSaver (MetricsSaver.java:commitPendingKey(1043)) - 1 MetricsLockFreeSaver 2 comitted 556 matured S3ReadDelay values
2015-07-24 12:23:24,296 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Setting up the launch environment for our AM container
2015-07-24 12:23:24,724 INFO [main] spark.SecurityManager (Logging.scala:logInfo(59)) - Changing view acls to: hadoop
2015-07-24 12:23:24,727 INFO [main] spark.SecurityManager (Logging.scala:logInfo(59)) - Changing modify acls to: hadoop
2015-07-24 12:23:24,731 INFO [main] spark.SecurityManager (Logging.scala:logInfo(59)) - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
2015-07-24 12:23:24,912 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Submitting application 1 to ResourceManager
2015-07-24 12:23:25,818 INFO [main] impl.YarnClientImpl (YarnClientImpl.java:submitApplication(252)) - Submitted application application_1437740323036_0001
2015-07-24 12:23:26,872 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: ACCEPTED)
2015-07-24 12:23:26,893 INFO [main] yarn.Client (Logging.scala:logInfo(59)) -
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1437740605459
final status: UNDEFINED
tracking URL: http://ip-10-0-0-61.ap-southeast-1.compute.internal:20888/proxy/application_1437740323036_0001/
user: hadoop
2015-07-24 12:23:27,902 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: ACCEPTED)
[... the same ACCEPTED application report repeats about once per second from 12:23:28 to 12:23:46 (19 identical lines elided) ...]
2015-07-24 12:23:47,999 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: FAILED)
2015-07-24 12:23:48,002 INFO [main] yarn.Client (Logging.scala:logInfo(59)) -
client token: N/A
diagnostics: Application application_1437740323036_0001 failed 2 times due to AM Container for appattempt_1437740323036_0001_000002 exited with exitCode: -1000
For more detailed output, check application tracking page:http://ip-10-0-0-61.ap-southeast-1.compute.internal:20888/proxy/application_1437740323036_0001/Then, click on links to logs of each attempt.
Diagnostics: File does not exist: hdfs://10.0.0.61:8020/user/hadoop/.sparkStaging/application_1437740323036_0001/spark-assembly-1.4.1-hadoop2.6.0-amzn-0.jar
java.io.FileNotFoundException: File does not exist: hdfs://10.0.0.61:8020/user/hadoop/.sparkStaging/application_1437740323036_0001/spark-assembly-1.4.1-hadoop2.6.0-amzn-0.jar
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1122)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:251)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:357)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Failing this attempt. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1437740605459
final status: FAILED
tracking URL: http://ip-10-0-0-61.ap-southeast-1.compute.internal:8088/cluster/app/application_1437740323036_0001
user: hadoop
2015-07-24 12:23:48,038 INFO [Thread-0] util.Utils (Logging.scala:logInfo(59)) - Shutdown hook called
2015-07-24 12:23:48,040 INFO [Thread-0] util.Utils (Logging.scala:logInfo(59)) - Deleting directory /tmp/spark-91e17f5e-45f2-466a-b4cf-585174b9fa98
Can anyone help me figure out what the problem is?
Thank you very much.
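For what it's worth, the `FileNotFoundException` above suggests the Spark assembly jar was never successfully uploaded to (or was removed from) the `.sparkStaging` directory before the AM container tried to localize it. A workaround that is sometimes used with Spark 1.x on YARN is to pre-stage the assembly jar on HDFS once and point `spark.yarn.jar` at it, so `spark-submit` skips the per-application upload. This is only a sketch; the HDFS path below is an assumption and should be adjusted to wherever the jar is actually uploaded:

```
# spark-defaults.conf — sketch only, path is an assumption
# Upload the jar once beforehand, e.g.:
#   hdfs dfs -mkdir -p /user/hadoop/jars
#   hdfs dfs -put spark-assembly-1.4.1-hadoop2.6.0-amzn-0.jar /user/hadoop/jars/
spark.yarn.jar  hdfs:///user/hadoop/jars/spark-assembly-1.4.1-hadoop2.6.0-amzn-0.jar
```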
My source type is spooldir and my sink type is hdfs. There is no error, but the files are not copied.
By the way, I am aware of the NFS mount feature for copying data, but I am learning Flume and want to try this feature. Once this is working, I would like to try writing data using log4j, with an Avro source and an HDFS sink.
Any help is greatly appreciated.
Regards,
Mani
# Name the components of this agent
maprfs-agent.sources = spool-collect
maprfs-agent.sinks = maprfs-write
maprfs-agent.channels = memory-channel
# Describe/ Configure the sources
maprfs-agent.sources.spool-collect.type = spooldir
maprfs-agent.sources.spool-collect.spoolDir = /home/appdata/mani
maprfs-agent.sources.spool-collect.fileHeader = true
maprfs-agent.sources.spool-collect.bufferMaxLineLength = 500
maprfs-agent.sources.spool-collect.bufferMaxLines = 10000
maprfs-agent.sources.spool-collect.batchSize = 100000
# Describe/ Configure sink
maprfs-agent.sinks.maprfs-write.type = hdfs
maprfs-agent.sinks.maprfs-write.hdfs.fileType = DataStream
maprfs-agent.sinks.maprfs-write.hdfs.path = maprfs:///sample.node.com/user/hive/test
maprfs-agent.sinks.maprfs-write.writeFormat = Text
maprfs-agent.sinks.maprfs-write.hdfs.proxyUser = root
maprfs-agent.sinks.maprfs-write.hdfs.kerberosPrincipal = mapr
maprfs-agent.sinks.maprfs-write.hdfs.kerberosKeytab = /opt/mapr/conf/flume.keytab
maprfs-agent.sinks.maprfs-write.hdfs.filePrefix = %{file}
maprfs-agent.sinks.maprfs-write.hdfs.fileSuffix = .csv
maprfs-agent.sinks.maprfs-write.hdfs.rollInterval = 0
maprfs-agent.sinks.maprfs-write.hdfs.rollCount = 0
maprfs-agent.sinks.maprfs-write.hdfs.rollSize = 0
maprfs-agent.sinks.maprfs-write.hdfs.batchSize = 100
maprfs-agent.sinks.maprfs-write.hdfs.idleTimeout = 0
maprfs-agent.sinks.maprfs-write.hdfs.maxOpenFiles = 5
# Configure channel buffer
maprfs-agent.channels.memory-channel.type = memory
maprfs-agent.channels.memory-channel.capacity = 1000
# Bind the source and the sink to the channel
maprfs-agent.sources.spool-collect.channels = memory-channel
maprfs-agent.sinks.maprfs-write.channel = memory-channel
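One thing worth noting about the config above (this may or may not be the whole problem): with `hdfs.rollInterval`, `hdfs.rollCount`, `hdfs.rollSize`, and `hdfs.idleTimeout` all set to 0, the HDFS sink never closes the file it opens, so the data stays in an in-progress `.tmp` file (the log below does show `.1432644947885.csv.tmp` being created) and a listing of the directory can look empty. A sketch of roll settings that would force files to be closed — the specific values are illustrative assumptions, not recommendations:

```
# Sketch only — illustrative values, tune for your workload.
# Roll after 60 s or 10 MB, whichever comes first; close idle files after 30 s.
maprfs-agent.sinks.maprfs-write.hdfs.rollInterval = 60
maprfs-agent.sinks.maprfs-write.hdfs.rollSize = 10485760
maprfs-agent.sinks.maprfs-write.hdfs.rollCount = 0
maprfs-agent.sinks.maprfs-write.hdfs.idleTimeout = 30
```

Separately, `writeFormat = Text` is probably intended as `hdfs.writeFormat = Text`; without the `hdfs.` prefix the HDFS sink will not pick it up.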
When I run the agent I get the messages below. There is no error, but no files are copied; listing the destination with the command below shows nothing.
hadoop mfs -ls /user/hive/test
15/05/26 13:55:45 INFO node.PollingPropertiesFileConfigurationProvider: Configuration provider starting
15/05/26 13:55:45 INFO node.PollingPropertiesFileConfigurationProvider: Reloading configuration file:mapr-spool.conf
15/05/26 13:55:45 INFO conf.FlumeConfiguration: Added sinks: maprfs-write Agent: maprfs-agent
15/05/26 13:55:45 INFO conf.FlumeConfiguration: Processing:maprfs-write
15/05/26 13:55:45 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [maprfs-agent]
15/05/26 13:55:45 INFO node.AbstractConfigurationProvider: Creating channels
15/05/26 13:55:45 INFO channel.DefaultChannelFactory: Creating instance of channel memory-channel type memory
15/05/26 13:55:45 INFO node.AbstractConfigurationProvider: Created channel memory-channel
15/05/26 13:55:45 INFO source.DefaultSourceFactory: Creating instance of source spool-collect, type spooldir
15/05/26 13:55:45 INFO sink.DefaultSinkFactory: Creating instance of sink: maprfs-write, type: hdfs
15/05/26 13:55:47 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false
15/05/26 13:55:47 INFO hdfs.HDFSEventSink: Auth method: PROXY
15/05/26 13:55:47 INFO hdfs.HDFSEventSink: User name: root
15/05/26 13:55:47 INFO hdfs.HDFSEventSink: Using keytab: false
15/05/26 13:55:47 INFO hdfs.HDFSEventSink: Superuser auth: SIMPLE
15/05/26 13:55:47 INFO hdfs.HDFSEventSink: Superuser name: root
15/05/26 13:55:47 INFO hdfs.HDFSEventSink: Superuser using keytab: false
15/05/26 13:55:47 INFO hdfs.HDFSEventSink: Logged in as user root
15/05/26 13:55:47 INFO node.AbstractConfigurationProvider: Channel memory-channel connected to [spool-collect, maprfs-write]
15/05/26 13:55:47 INFO node.Application: Starting new configuration:{ sourceRunners:{spool-collect=EventDrivenSourceRunner: { source:Spool Directory source spool-collect: { spoolDir: /home/appdata/mani } }} sinkRunners:{maprfs-write=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor#7fc7efa0 counterGroup:{ name:null counters:{} } }} channels:{memory-channel=org.apache.flume.channel.MemoryChannel{name: memory-channel}} }
15/05/26 13:55:47 INFO node.Application: Starting Channel memory-channel
15/05/26 13:55:47 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: memory-channel: Successfully registered new MBean.
15/05/26 13:55:47 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: memory-channel started
15/05/26 13:55:47 INFO node.Application: Starting Sink maprfs-write
15/05/26 13:55:47 INFO node.Application: Starting Source spool-collect
15/05/26 13:55:47 INFO source.SpoolDirectorySource: SpoolDirectorySource source starting with directory: /home/appdata/mani
15/05/26 13:55:47 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: maprfs-write: Successfully registered new MBean.
15/05/26 13:55:47 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: maprfs-write started
15/05/26 13:55:47 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SOURCE, name: spool-collect: Successfully registered new MBean.
15/05/26 13:55:47 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: spool-collect started
15/05/26 13:55:47 INFO avro.ReliableSpoolingFileEventReader: Preparing to move file /home/appdata/mani/cron-s3.log to /home/appdata/mani/cron-s3.log.COMPLETED
15/05/26 13:55:47 INFO hdfs.HDFSDataStream: Serializer = TEXT, UseRawLocalFileSystem = false
15/05/26 13:55:48 INFO hdfs.BucketWriter: Creating maprfs:///sample.node.com/user/hive/test/.1432644947885.csv.tmp
15/05/26 13:57:08 INFO avro.ReliableSpoolingFileEventReader: Preparing to move file /home/appdata/mani/network-usage.log to /home/appdata/mani/network-usage.log.COMPLETED
15/05/26 13:57:08 INFO avro.ReliableSpoolingFileEventReader: Preparing to move file /home/appdata/mani/processor-usage-2014-10-17.log to /home/appdata/mani/processor-usage-2014-10-17.log.COMPLETED
15/05/26 13:57:25 INFO avro.ReliableSpoolingFileEventReader: Preparing to move file /home/appdata/mani/total-processor-usage.log to /home/appdata/mani/total-processor-usage.log.COMPLETED
15/05/26 13:57:25 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
15/05/26 13:57:26 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
15/05/26 13:57:26 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
15/05/26 13:57:27 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
15/05/26 13:57:27 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
15/05/26 13:57:28 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.