Run Spark in Amazon EMR

I am a newbie to Spark and am trying to run Spark on Amazon EMR. Here's my code (copied from an example, with a few small modifications):
package test;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.PairFunction;
import scala.Tuple2;
import com.google.common.base.Optional;

public class SparkTest {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(new SparkConf().setAppName("Spark Count"));
        sc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", "xxxxxxxxxxxxxxx");
        sc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", "yyyyyyyyyyyyyyyyyyyyyyy");

        // Key each customer record on its first CSV field, keep the second field as the value
        JavaRDD<String> customerInputFile = sc.textFile("s3n://aws-logs-494322476419-ap-southeast-1/test/customers_data.txt");
        JavaPairRDD<String, String> customerPairs = customerInputFile.mapToPair(new PairFunction<String, String, String>() {
            public Tuple2<String, String> call(String s) {
                String[] customerSplit = s.split(",");
                return new Tuple2<String, String>(customerSplit[0], customerSplit[1]);
            }
        }).distinct();

        // Key each transaction record on its third CSV field; the value combines fields 4 and 2
        JavaRDD<String> transactionInputFile = sc.textFile("s3n://aws-logs-494322476419-ap-southeast-1/test/transactions_data.txt");
        JavaPairRDD<String, String> transactionPairs = transactionInputFile.mapToPair(new PairFunction<String, String, String>() {
            public Tuple2<String, String> call(String s) {
                String[] transactionSplit = s.split(",");
                return new Tuple2<String, String>(transactionSplit[2], transactionSplit[3] + "," + transactionSplit[1]);
            }
        });

        // Default join operation (inner join)
        JavaPairRDD<String, Tuple2<String, String>> joinsOutput = customerPairs.join(transactionPairs);
        System.out.println("Joins function Output: " + joinsOutput.collect());

        // Left outer join operation
        JavaPairRDD<String, Iterable<Tuple2<String, Optional<String>>>> leftJoinOutput =
                customerPairs.leftOuterJoin(transactionPairs).groupByKey().sortByKey();
        System.out.println("LeftOuterJoins function Output: " + leftJoinOutput.collect());

        // Right outer join operation
        JavaPairRDD<String, Iterable<Tuple2<Optional<String>, String>>> rightJoinOutput =
                customerPairs.rightOuterJoin(transactionPairs).groupByKey().sortByKey();
        System.out.println("RightOuterJoins function Output: " + rightJoinOutput.collect());

        sc.close();
    }
}
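After packaging the class above into spark-test.jar, I submit it roughly like this from the master node (reconstructed from memory, so treat the exact flags as an assumption; the jar path matches the S3 location in the logs below):

spark-submit \
    --class test.SparkTest \
    --master yarn-cluster \
    s3://aws-logs-494322476419-ap-southeast-1/test/spark-test.jar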
But after building the jar, setting up a cluster, and running the job, it always fails with the following error:
2015-07-24 12:22:41,550 INFO [main] client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at ip-10-0-0-61.ap-southeast-1.compute.internal/10.0.0.61:8032
2015-07-24 12:22:42,619 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Requesting a new application from cluster with 2 NodeManagers
2015-07-24 12:22:42,694 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Verifying our application has not requested more than the maximum memory capability of the cluster (2048 MB per container)
2015-07-24 12:22:42,698 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Will allocate AM container, with 896 MB memory including 384 MB overhead
2015-07-24 12:22:42,700 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Setting up container launch context for our AM
2015-07-24 12:22:42,707 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Preparing resources for our AM container
2015-07-24 12:22:45,445 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Uploading resource file:/usr/lib/spark/lib/spark-assembly-1.4.1-hadoop2.6.0-amzn-0.jar -> hdfs://10.0.0.61:8020/user/hadoop/.sparkStaging/application_1437740323036_0001/spark-assembly-1.4.1-hadoop2.6.0-amzn-0.jar
2015-07-24 12:22:47,701 INFO [main] metrics.MetricsSaver (MetricsSaver.java:showConfigRecord(643)) - MetricsConfigRecord disabledInCluster: false instanceEngineCycleSec: 60 clusterEngineCycleSec: 60 disableClusterEngine: false maxMemoryMb: 3072 maxInstanceCount: 500 lastModified: 1437740335527
2015-07-24 12:22:47,713 INFO [main] metrics.MetricsSaver (MetricsSaver.java:<init>(284)) - Created MetricsSaver j-1NM41B4W6K3IP:i-525f449f:SparkSubmit:06588 period:60 /mnt/var/em/raw/i-525f449f_20150724_SparkSubmit_06588_raw.bin
2015-07-24 12:22:49,047 INFO [DataStreamer for file /user/hadoop/.sparkStaging/application_1437740323036_0001/spark-assembly-1.4.1-hadoop2.6.0-amzn-0.jar block BP-1554902524-10.0.0.61-1437740270491:blk_1073741830_1015] metrics.MetricsSaver (MetricsSaver.java:compactRawValues(464)) - 1 aggregated HDFSWriteDelay 183 raw values into 1 aggregated values, total 1
2015-07-24 12:23:03,845 INFO [main] fs.EmrFileSystem (EmrFileSystem.java:initialize(107)) - Consistency disabled, using com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem as filesystem implementation
2015-07-24 12:23:06,316 INFO [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[200], ServiceName=[Amazon S3], AWSRequestID=[E987B96CAE12A2B2], ServiceEndpoint=[https://aws-logs-494322476419-ap-southeast-1.s3.amazonaws.com], HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=0, ClientExecuteTime=[2266.609], HttpRequestTime=[1805.926], HttpClientReceiveResponseTime=[17.096], RequestSigningTime=[187.361], ResponseProcessingTime=[0.66], HttpClientSendRequestTime=[1.065],
2015-07-24 12:23:06,329 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Uploading resource s3://aws-logs-494322476419-ap-southeast-1/test/spark-test.jar -> hdfs://10.0.0.61:8020/user/hadoop/.sparkStaging/application_1437740323036_0001/spark-test.jar
2015-07-24 12:23:06,568 INFO [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[200], ServiceName=[Amazon S3], AWSRequestID=[C40A7775223B6772], ServiceEndpoint=[https://aws-logs-494322476419-ap-southeast-1.s3.amazonaws.com], HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=1, ClientExecuteTime=[237.557], HttpRequestTime=[20.943], HttpClientReceiveResponseTime=[13.247], RequestSigningTime=[29.321], ResponseProcessingTime=[186.674], HttpClientSendRequestTime=[1.998],
2015-07-24 12:23:07,265 INFO [main] s3n.S3NativeFileSystem (S3NativeFileSystem.java:open(1159)) - Opening 's3://aws-logs-494322476419-ap-southeast-1/test/spark-test.jar' for reading
2015-07-24 12:23:07,312 INFO [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[206], ServiceName=[Amazon S3], AWSRequestID=[FB5C0051C241A9AC], ServiceEndpoint=[https://aws-logs-494322476419-ap-southeast-1.s3.amazonaws.com], HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=1, ClientExecuteTime=[42.753], HttpRequestTime=[31.778], HttpClientReceiveResponseTime=[20.426], RequestSigningTime=[1.266], ResponseProcessingTime=[7.357], HttpClientSendRequestTime=[1.065],
2015-07-24 12:23:07,330 INFO [main] metrics.MetricsSaver (MetricsSaver.java:<init>(915)) - Thread 1 created MetricsLockFreeSaver 1
2015-07-24 12:23:07,875 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Uploading resource file:/tmp/spark-91e17f5e-45f2-466a-b4cf-585174b9fa98/__hadoop_conf__3852777564911495008.zip -> hdfs://10.0.0.61:8020/user/hadoop/.sparkStaging/application_1437740323036_0001/__hadoop_conf__3852777564911495008.zip
2015-07-24 12:23:07,965 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Uploading resource s3://aws-logs-494322476419-ap-southeast-1/test/spark-assembly-1.4.1-hadoop2.6.0.jar -> hdfs://10.0.0.61:8020/user/hadoop/.sparkStaging/application_1437740323036_0001/spark-assembly-1.4.1-hadoop2.6.0.jar
2015-07-24 12:23:07,993 INFO [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[200], ServiceName=[Amazon S3], AWSRequestID=[25260792F013C91A], ServiceEndpoint=[https://aws-logs-494322476419-ap-southeast-1.s3.amazonaws.com], HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=1, ClientExecuteTime=[23.713], HttpRequestTime=[15.297], HttpClientReceiveResponseTime=[12.147], RequestSigningTime=[6.568], ResponseProcessingTime=[0.312], HttpClientSendRequestTime=[1.033],
2015-07-24 12:23:08,003 INFO [main] s3n.S3NativeFileSystem (S3NativeFileSystem.java:open(1159)) - Opening 's3://aws-logs-494322476419-ap-southeast-1/test/spark-assembly-1.4.1-hadoop2.6.0.jar' for reading
2015-07-24 12:23:08,064 INFO [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[206], ServiceName=[Amazon S3], AWSRequestID=[DDF86EA9B896052A], ServiceEndpoint=[https://aws-logs-494322476419-ap-southeast-1.s3.amazonaws.com], HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=1, ClientExecuteTime=[60.109], HttpRequestTime=[55.175], HttpClientReceiveResponseTime=[43.324], RequestSigningTime=[1.067], ResponseProcessingTime=[3.409], HttpClientSendRequestTime=[1.16],
2015-07-24 12:23:09,002 INFO [main] metrics.MetricsSaver (MetricsSaver.java:commitPendingKey(1043)) - 1 MetricsLockFreeSaver 2 comitted 556 matured S3ReadDelay values
2015-07-24 12:23:24,296 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Setting up the launch environment for our AM container
2015-07-24 12:23:24,724 INFO [main] spark.SecurityManager (Logging.scala:logInfo(59)) - Changing view acls to: hadoop
2015-07-24 12:23:24,727 INFO [main] spark.SecurityManager (Logging.scala:logInfo(59)) - Changing modify acls to: hadoop
2015-07-24 12:23:24,731 INFO [main] spark.SecurityManager (Logging.scala:logInfo(59)) - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
2015-07-24 12:23:24,912 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Submitting application 1 to ResourceManager
2015-07-24 12:23:25,818 INFO [main] impl.YarnClientImpl (YarnClientImpl.java:submitApplication(252)) - Submitted application application_1437740323036_0001
2015-07-24 12:23:26,872 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: ACCEPTED)
2015-07-24 12:23:26,893 INFO [main] yarn.Client (Logging.scala:logInfo(59)) -
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1437740605459
final status: UNDEFINED
tracking URL: http://ip-10-0-0-61.ap-southeast-1.compute.internal:20888/proxy/application_1437740323036_0001/
user: hadoop
2015-07-24 12:23:27,902 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: ACCEPTED)
2015-07-24 12:23:28,906 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: ACCEPTED)
2015-07-24 12:23:29,909 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: ACCEPTED)
2015-07-24 12:23:30,913 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: ACCEPTED)
2015-07-24 12:23:31,917 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: ACCEPTED)
2015-07-24 12:23:32,920 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: ACCEPTED)
2015-07-24 12:23:33,924 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: ACCEPTED)
2015-07-24 12:23:34,931 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: ACCEPTED)
2015-07-24 12:23:35,936 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: ACCEPTED)
2015-07-24 12:23:36,939 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: ACCEPTED)
2015-07-24 12:23:37,944 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: ACCEPTED)
2015-07-24 12:23:38,948 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: ACCEPTED)
2015-07-24 12:23:39,951 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: ACCEPTED)
2015-07-24 12:23:40,965 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: ACCEPTED)
2015-07-24 12:23:41,969 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: ACCEPTED)
2015-07-24 12:23:42,973 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: ACCEPTED)
2015-07-24 12:23:43,978 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: ACCEPTED)
2015-07-24 12:23:44,981 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: ACCEPTED)
2015-07-24 12:23:45,991 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: ACCEPTED)
2015-07-24 12:23:46,994 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: ACCEPTED)
2015-07-24 12:23:47,999 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1437740323036_0001 (state: FAILED)
2015-07-24 12:23:48,002 INFO [main] yarn.Client (Logging.scala:logInfo(59)) -
client token: N/A
diagnostics: Application application_1437740323036_0001 failed 2 times due to AM Container for appattempt_1437740323036_0001_000002 exited with exitCode: -1000
For more detailed output, check application tracking page:http://ip-10-0-0-61.ap-southeast-1.compute.internal:20888/proxy/application_1437740323036_0001/Then, click on links to logs of each attempt.
Diagnostics: File does not exist: hdfs://10.0.0.61:8020/user/hadoop/.sparkStaging/application_1437740323036_0001/spark-assembly-1.4.1-hadoop2.6.0-amzn-0.jar
java.io.FileNotFoundException: File does not exist: hdfs://10.0.0.61:8020/user/hadoop/.sparkStaging/application_1437740323036_0001/spark-assembly-1.4.1-hadoop2.6.0-amzn-0.jar
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1122)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:251)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:357)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Failing this attempt. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1437740605459
final status: FAILED
tracking URL: http://ip-10-0-0-61.ap-southeast-1.compute.internal:8088/cluster/app/application_1437740323036_0001
user: hadoop
2015-07-24 12:23:48,038 INFO [Thread-0] util.Utils (Logging.scala:logInfo(59)) - Shutdown hook called
2015-07-24 12:23:48,040 INFO [Thread-0] util.Utils (Logging.scala:logInfo(59)) - Deleting directory /tmp/spark-91e17f5e-45f2-466a-b4cf-585174b9fa98
Can anyone find out what the problem is?
Thank you very much.

Related

AWS Environment fails to connect to the database

I've created a simple Spring Boot application in IntelliJ and tested the connection with the AWS database, which gives me no error. After running mvn clean install, I test the jar at the command prompt, which also gives me no error. Great, so I can upload my JAR file to my AWS environment. I go to Environment > Configuration > Edit Database and select my database in the snapshot drop-down, but my environment won't connect to the database, and I don't get why. I've been trying to run my app on AWS for days now; I feel I'm close, but I don't know how to solve this problem.
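For reference, the datasource is wired to the environment variables that Elastic Beanstalk injects when an RDS instance is attached to the environment (a sketch of the relevant application.properties lines, not my exact file):

# sketch: assumes the Beanstalk-provided RDS_* variables are present
spring.datasource.url=jdbc:mysql://${RDS_HOSTNAME}:${RDS_PORT}/${RDS_DB_NAME}
spring.datasource.username=${RDS_USERNAME}
spring.datasource.password=${RDS_PASSWORD}

Please check out my log: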
----------------------------------------
/var/log/web.stdout.log
----------------------------------------
Apr 20 06:15:19 ip-172-31-35-71 web: 2022-04-20 06:15:19.048:INFO::main: Logging initialized #367ms
Apr 20 06:15:19 ip-172-31-35-71 web: 2022-04-20 06:15:19.122:INFO:oejs.Server:main: jetty-9.2.z-SNAPSHOT
Apr 20 06:15:19 ip-172-31-35-71 web: 2022-04-20 06:15:19.201:INFO:oejs.ServerConnector:main: Started ServerConnector@2fcb66eb{HTTP/1.1}{0.0.0.0:5000}
Apr 20 08:34:45 ip-172-31-35-71 web: :: Spring Boot :: (v2.6.6)
Apr 20 08:34:45 ip-172-31-35-71 web: 2022-04-20 08:34:45.466 INFO 11107 --- [ main] c.e.S.SpringBootCrudExampleApplication : Starting SpringBootCrudExampleApplication v0.0.1-SNAPSHOT using Java 11.0.14.1 on ip-172-31-35-71.ec2.internal with PID 11107 (/var/app/current/application.jar started by webapp in /var/app/current)
Apr 20 08:34:45 ip-172-31-35-71 web: 2022-04-20 08:34:45.477 INFO 11107 --- [ main] c.e.S.SpringBootCrudExampleApplication : No active profile set, falling back to 1 default profile: "default"
Apr 20 08:34:47 ip-172-31-35-71 web: 2022-04-20 08:34:47.473 INFO 11107 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JPA repositories in DEFAULT mode.
Apr 20 08:34:47 ip-172-31-35-71 web: 2022-04-20 08:34:47.963 INFO 11107 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 472 ms. Found 1 JPA repository interfaces.
Apr 20 08:34:49 ip-172-31-35-71 web: 2022-04-20 08:34:49.554 INFO 11107 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 5000 (http)
Apr 20 08:34:49 ip-172-31-35-71 web: 2022-04-20 08:34:49.582 INFO 11107 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
Apr 20 08:34:49 ip-172-31-35-71 web: 2022-04-20 08:34:49.583 INFO 11107 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.60]
Apr 20 08:34:49 ip-172-31-35-71 web: 2022-04-20 08:34:49.769 INFO 11107 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
Apr 20 08:34:49 ip-172-31-35-71 web: 2022-04-20 08:34:49.769 INFO 11107 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 4143 ms
Apr 20 08:34:50 ip-172-31-35-71 web: 2022-04-20 08:34:50.856 INFO 11107 --- [ main] o.hibernate.jpa.internal.util.LogHelper : HHH000204: Processing PersistenceUnitInfo [name: default]
Apr 20 08:34:50 ip-172-31-35-71 web: 2022-04-20 08:34:50.987 INFO 11107 --- [ main] org.hibernate.Version : HHH000412: Hibernate ORM core version 5.6.7.Final
Apr 20 08:34:51 ip-172-31-35-71 web: 2022-04-20 08:34:51.362 INFO 11107 --- [ main] o.hibernate.annotations.common.Version : HCANN000001: Hibernate Commons Annotations {5.1.2.Final}
Apr 20 08:34:51 ip-172-31-35-71 web: 2022-04-20 08:34:51.585 INFO 11107 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
Apr 20 08:34:51 ip-172-31-35-71 web: 2022-04-20 08:34:51.957 INFO 11107 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed.
Apr 20 08:34:51 ip-172-31-35-71 web: 2022-04-20 08:34:51.979 INFO 11107 --- [ main] org.hibernate.dialect.Dialect : HHH000400: Using dialect: org.hibernate.dialect.MySQL8Dialect
Apr 20 08:34:53 ip-172-31-35-71 web: 2022-04-20 08:34:53.137 INFO 11107 --- [ main] o.h.e.t.j.p.i.JtaPlatformInitiator : HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
Apr 20 08:34:53 ip-172-31-35-71 web: 2022-04-20 08:34:53.148 INFO 11107 --- [ main] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default'
Apr 20 08:34:54 ip-172-31-35-71 web: 2022-04-20 08:34:54.000 WARN 11107 --- [ main] JpaBaseConfiguration$JpaWebConfiguration : spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
Apr 20 08:34:54 ip-172-31-35-71 web: 2022-04-20 08:34:54.946 INFO 11107 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 5000 (http) with context path ''
Apr 20 08:34:54 ip-172-31-35-71 web: 2022-04-20 08:34:54.965 INFO 11107 --- [ main] c.e.S.SpringBootCrudExampleApplication : Started SpringBootCrudExampleApplication in 11.253 seconds (JVM running for 12.875)
Apr 20 08:36:29 ip-172-31-35-71 web: 2022-04-20 08:36:29.654 INFO 11107 --- [ionShutdownHook] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
Apr 20 08:36:29 ip-172-31-35-71 web: 2022-04-20 08:36:29.658 INFO 11107 --- [ionShutdownHook] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown initiated...
Apr 20 08:36:29 ip-172-31-35-71 web: 2022-04-20 08:36:29.690 INFO 11107 --- [ionShutdownHook] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown completed.
Events
INFO Environment health has transitioned from Warning to Ok.
WARN Environment health has transitioned from Warning to Ok.
INFO The environment was reverted to the previous configuration setting.
INFO Environment health has transitioned from Info to Warning. Configuration update failed 18 seconds ago and took 67 seconds.
The environment was reverted to the previous configuration setting.
Environment health has transitioned from Ok to Info. Configuration update in progress (running for 15 seconds).
ERROR Failed to deploy configuration.
ERROR Creating RDS database failed Reason: DB Instance class db.t2.micro does not support encryption at rest
ERROR Service:AmazonCloudFormation, Message:Stack named 'awseb-e-3a2mk3bca7-stack' aborted operation. Current state: 'UPDATE_ROLLBACK_IN_PROGRESS' Reason: The following resource(s) failed to create: [AWSEBRDSDatabase].
INFO Updating environment Invoice-env's configuration settings.
INFO Environment update is starting.

WSO2 Identity Server startup error v5.11.0 Windows 10

TID: [-1234] [] [2022-02-26 15:41:04,837] [] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Starting WSO2 Carbon...
TID: [-1] [] [2022-02-26 15:41:04,837] [] INFO {org.ops4j.pax.logging.spi.support.EventAdminConfigurationNotifier} - Sending Event Admin nofification (configuration successful) to org/ops4j/pax/logging/Configuration
TID: [-1234] [] [2022-02-26 15:41:04,843] [] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Operating System : Windows 10 10.0, amd64
TID: [-1234] [] [2022-02-26 15:41:04,843] [] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Java Home : C:\Program Files\Java\jdk1.8.0_321\jre
TID: [-1234] [] [2022-02-26 15:41:04,843] [] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Java Version : 1.8.0_321
TID: [-1234] [] [2022-02-26 15:41:04,844] [] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Java VM : Java HotSpot(TM) 64-Bit Server VM 25.321-b07,Oracle Corporation
TID: [-1234] [] [2022-02-26 15:41:04,844] [] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Carbon Home : \PROGRA~1\WSO2\IDENTI~1\5.11.0
TID: [-1234] [] [2022-02-26 15:41:04,844] [] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Java Temp Dir : \PROGRA~1\WSO2\IDENTI~1\5.11.0\tmp
TID: [-1234] [] [2022-02-26 15:41:04,844] [] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - User : Sampath Kumar, en-IN, Asia/Calcutta
TID: [-1] [] [2022-02-26 15:41:05,070] [] INFO {org.wso2.carbon.event.output.adapter.kafka.internal.ds.KafkaEventAdapterServiceDS} - Successfully deployed the Kafka output event adaptor service
TID: [-1] [] [2022-02-26 15:41:05,451] [] INFO {org.wso2.carbon.identity.oauth.uma.grant.internal.UMA2GrantServiceComponent} - Policy evaluator registered successfully: DefaultPolicyEvaluator
TID: [-1] [] [2022-02-26 15:41:05,452] [] INFO {org.wso2.carbon.identity.oauth.uma.grant.internal.UMA2GrantServiceComponent} - UMA Grant component activated successfully.
TID: [-1234] [] [2022-02-26 15:41:05,801] [] INFO {org.wso2.carbon.ldap.server.configuration.LDAPConfigurationBuilder} - KDC server is disabled.
TID: [-1234] [] [2022-02-26 15:41:08,518] [] INFO {org.apache.directory.server.KERBEROS_LOG} - KeyDerivation Interceptor initialized
TID: [-1] [] [2022-02-26 15:41:09,571] [] INFO {org.wso2.carbon.mex.internal.Office365SupportMexComponent} - Office365Support MexServiceComponent bundle activated successfully..
TID: [-1] [] [2022-02-26 15:41:09,580] [] INFO {org.wso2.carbon.mex2.internal.DynamicCRMCustomMexComponent} - DynamicCRMSupport MexServiceComponent bundle activated successfully.
TID: [-1] [] [2022-02-26 15:41:10,624] [] ERROR {org.apache.catalina.core.ContainerBase} - A child container failed during start java.util.concurrent.ExecutionException: org.apache.catalina.LifecycleException: Failed to start component [org.apache.catalina.webresources.StandardRoot@9f03abb]
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:916)
at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:841)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1384)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1374)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:134)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:909)
at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:262)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.wso2.carbon.tomcat.ext.service.ExtendedStandardService.startInternal(ExtendedStandardService.java:52)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:930)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.wso2.carbon.tomcat.internal.CarbonTomcat.start(CarbonTomcat.java:113)
at org.wso2.carbon.tomcat.internal.ServerManager$1.run(ServerManager.java:167)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.catalina.LifecycleException: Failed to start component [org.apache.catalina.webresources.StandardRoot@9f03abb]
at org.apache.catalina.util.LifecycleBase.handleSubClassException(LifecycleBase.java:440)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:198)
at org.apache.catalina.core.StandardContext.resourcesStart(StandardContext.java:4805)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:4940)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1384)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1374)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:134)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:909)
... 17 more
Caused by: java.lang.IllegalArgumentException: The main resource set specified [C:\Program Files\WSO2\Identity Server\5.11.0\lib\tomcat\PROGRA~1\WSO2\IDENTI~1\5.11.0\repository\deployment\server\webapps\PROGRA~1\WSO2\IDENTI~1\5.11.0\repository\conf\tomcat\carbon] is not valid
at org.apache.catalina.webresources.StandardRoot.createMainResourceSet(StandardRoot.java:752)
at org.apache.catalina.webresources.StandardRoot.startInternal(StandardRoot.java:709)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
... 26 more
TID: [-1] [] [2022-02-26 15:41:10,645] [] ERROR {org.apache.catalina.core.ContainerBase} - A child container failed during start java.util.concurrent.ExecutionException: org.apache.catalina.LifecycleException: A child container failed during start
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:916)
at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:262)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.wso2.carbon.tomcat.ext.service.ExtendedStandardService.startInternal(ExtendedStandardService.java:52)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:930)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.wso2.carbon.tomcat.internal.CarbonTomcat.start(CarbonTomcat.java:113)
at org.wso2.carbon.tomcat.internal.ServerManager$1.run(ServerManager.java:167)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.catalina.LifecycleException: A child container failed during start
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:928)
at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:841)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1384)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1374)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:134)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:909)
... 9 more
Caused by: java.util.concurrent.ExecutionException: org.apache.catalina.LifecycleException: Failed to start component [org.apache.catalina.webresources.StandardRoot@9f03abb]
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:916)
... 17 more
Caused by: org.apache.catalina.LifecycleException: Failed to start component [org.apache.catalina.webresources.StandardRoot@9f03abb]
at org.apache.catalina.util.LifecycleBase.handleSubClassException(LifecycleBase.java:440)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:198)
at org.apache.catalina.core.StandardContext.resourcesStart(StandardContext.java:4805)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:4940)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1384)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1374)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:134)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:909)
... 17 more
Caused by: java.lang.IllegalArgumentException: The main resource set specified [C:\Program Files\WSO2\Identity Server\5.11.0\lib\tomcat\PROGRA~1\WSO2\IDENTI~1\5.11.0\repository\deployment\server\webapps\PROGRA~1\WSO2\IDENTI~1\5.11.0\repository\conf\tomcat\carbon] is not valid
at org.apache.catalina.webresources.StandardRoot.createMainResourceSet(StandardRoot.java:752)
at org.apache.catalina.webresources.StandardRoot.startInternal(StandardRoot.java:709)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
... 26 more
TID: [-1] [] [2022-02-26 15:41:10,656] [] ERROR {org.wso2.carbon.tomcat.internal.ServerManager} - tomcat life-cycle exception org.apache.catalina.LifecycleException: A child container failed during start
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:928)
at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:262)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.wso2.carbon.tomcat.ext.service.ExtendedStandardService.startInternal(ExtendedStandardService.java:52)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:930)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.wso2.carbon.tomcat.internal.CarbonTomcat.start(CarbonTomcat.java:113)
at org.wso2.carbon.tomcat.internal.ServerManager$1.run(ServerManager.java:167)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.util.concurrent.ExecutionException: org.apache.catalina.LifecycleException: A child container failed during start
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:916)
... 9 more
Caused by: org.apache.catalina.LifecycleException: A child container failed during start
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:928)
at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:841)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1384)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1374)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:134)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:909)
... 9 more
Caused by: java.util.concurrent.ExecutionException: org.apache.catalina.LifecycleException: Failed to start component [org.apache.catalina.webresources.StandardRoot@9f03abb]
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:916)
... 17 more
Caused by: org.apache.catalina.LifecycleException: Failed to start component [org.apache.catalina.webresources.StandardRoot@9f03abb]
at org.apache.catalina.util.LifecycleBase.handleSubClassException(LifecycleBase.java:440)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:198)
at org.apache.catalina.core.StandardContext.resourcesStart(StandardContext.java:4805)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:4940)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1384)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1374)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:134)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:909)
... 17 more
Caused by: java.lang.IllegalArgumentException: The main resource set specified [C:\Program Files\WSO2\Identity Server\5.11.0\lib\tomcat\PROGRA~1\WSO2\IDENTI~1\5.11.0\repository\deployment\server\webapps\PROGRA~1\WSO2\IDENTI~1\5.11.0\repository\conf\tomcat\carbon] is not valid
at org.apache.catalina.webresources.StandardRoot.createMainResourceSet(StandardRoot.java:752)
at org.apache.catalina.webresources.StandardRoot.startInternal(StandardRoot.java:709)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
... 26 more
TID: [-1234] [] [2022-02-26 15:41:11,021] [] INFO {org.wso2.carbon.user.core.ldap.UniqueIDReadWriteLDAPUserStoreManager} - LDAP connection created successfully in read-write mode
TID: [-1234] [] [2022-02-26 15:41:11,090] [] INFO {org.wso2.carbon.consent.mgt.core.internal.ConsentManagerComponent} - ConsentManagerComponent is activated.
TID: [-1234] [] [2022-02-26 15:41:11,477] [] INFO {org.wso2.carbon.registry.core.jdbc.EmbeddedRegistryService} - Configured Registry in 47ms
TID: [-1234] [] [2022-02-26 15:41:11,500] [] INFO {org.wso2.carbon.registry.core.jdbc.EmbeddedRegistryService} - Connected to mount at configregistry in 4ms
TID: [-1234] [] [2022-02-26 15:41:11,501] [] INFO {org.wso2.carbon.registry.core.jdbc.EmbeddedRegistryService} - Connected to mount at govregistry in 5ms
TID: [-1234] [] [2022-02-26 15:41:11,546] [] INFO {org.wso2.carbon.registry.core.jdbc.EmbeddedRegistryService} - Connected to mount at configregistry in 0ms
TID: [-1234] [] [2022-02-26 15:41:11,552] [] INFO {org.wso2.carbon.registry.core.jdbc.EmbeddedRegistryService} - Connected to mount at govregistry in 1ms
TID: [-1234] [] [2022-02-26 15:41:11,560] [] INFO {org.wso2.carbon.registry.core.internal.RegistryCoreServiceComponent} - Registry Mode : READ-WRITE
TID: [-1234] [] [2022-02-26 15:41:11,616] [] INFO {org.wso2.carbon.metrics.impl.util.JmxReporterBuilder} - Creating JMX reporter for Metrics with domain 'org.wso2.carbon.metrics'
TID: [-1234] [] [2022-02-26 15:41:11,621] [] INFO {org.wso2.carbon.metrics.impl.reporter.AbstractReporter} - Started JMX reporter for Metrics
TID: [-1234] [] [2022-02-26 15:41:13,086] [] INFO {org.wso2.carbon.registry.indexing.solr.SolrClient} - Default Embedded Solr Server Initialized
TID: [-1234] [] [2022-02-26 15:41:13,214] [] INFO {org.wso2.carbon.user.core.internal.UserStoreMgtDSComponent} - Carbon UserStoreMgtDSComponent activated successfully.
As the error indicates, there is some problem with the Tomcat configuration.
Cross-check whether you have made any changes to your <IS_HOME>\repository\conf\deployment.toml with respect to Tomcat.
Try starting WSO2 IS right after installing it, before making any changes.
Are you getting this error after a particular configuration change? Please provide details so we can help.
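For example (a minimal check, assuming a default distribution with nothing edited yet), start the server straight from its bin directory and see whether the same Tomcat error appears:

cd <IS_HOME>\bin
wso2server.bat

If a clean start works, the error was introduced by a later configuration change.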

Pig schema tuple not set. Will not generate code

I ran the following commands in Pig on the Google n-grams dataset:
inp = LOAD 'link to file' AS (ngram:chararray, year:int, occurences:float, books:float);
filter_input = FILTER inp BY (occurences >= 400) AND (books >= 8);
groupinp = GROUP filter_input BY ngram;
sum_occ = FOREACH groupinp GENERATE FLATTEN(group) as ngram, SUM(filter_input.occurences) / SUM(filter_input.books) AS ntry;
roundto = FOREACH sum_occ GENERATE sum_occ.ngram, ROUND_TO( sum_occ.ntry , 2 );
However I get the following error:
DUMP roundto;
601062 [main] WARN org.apache.pig.newplan.BaseOperatorPlan - Encountered Warning IMPLICIT_CAST_TO_FLOAT 2 time(s).
18/04/06 01:46:03 WARN newplan.BaseOperatorPlan: Encountered Warning IMPLICIT_CAST_TO_FLOAT 2 time(s).
601067 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: GROUP_BY,FILTER
18/04/06 01:46:03 INFO pigstats.ScriptState: Pig features used in the script: GROUP_BY,FILTER
601111 [main] INFO org.apache.pig.data.SchemaTupleBackend - Key [pig.schematuple] was not set... will not generate code.
18/04/06 01:46:03 INFO data.SchemaTupleBackend: Key [pig.schematuple] was not set... will not generate code.
601111 [main] INFO org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, ConstantCalculator, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, NestedLimitOptimizer, PartitionFilterOptimizer, PredicatePushdownOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
18/04/06 01:46:03 INFO optimizer.LogicalPlanOptimizer: {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, ConstantCalculator, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, NestedLimitOptimizer, PartitionFilterOptimizer, PredicatePushdownOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
601238 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezLauncher - Tez staging directory is /tmp/temp-336429202 and resources directory is /tmp/temp-336429202
18/04/06 01:46:03 INFO tez.TezLauncher: Tez staging directory is /tmp/temp-336429202 and resources directory is /tmp/temp-336429202
601239 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.plan.TezCompiler - File concatenation threshold: 100 optimistic? false
18/04/06 01:46:03 INFO plan.TezCompiler: File concatenation threshold: 100 optimistic? false
601241 [main] INFO org.apache.pig.backend.hadoop.executionengine.util.CombinerOptimizerUtil - Choosing to move algebraic foreach to combiner
18/04/06 01:46:03 INFO util.CombinerOptimizerUtil: Choosing to move algebraic foreach to combiner
601265 [main] INFO org.apache.pig.builtin.PigStorage - Using PigTextInputFormat
18/04/06 01:46:03 INFO builtin.PigStorage: Using PigTextInputFormat
18/04/06 01:46:03 INFO input.FileInputFormat: Total input files to process : 1
601285 [main] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
18/04/06 01:46:03 INFO util.MapRedUtil: Total input paths to process : 1
601285 [main] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
18/04/06 01:46:03 INFO util.MapRedUtil: Total input paths (combined) to process : 1
18/04/06 01:46:03 INFO hadoop.MRInputHelpers: NumSplits: 1, SerializedSize: 408
601322 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezJobCompiler - Local resource: joda-time-2.9.4.jar
18/04/06 01:46:03 INFO tez.TezJobCompiler: Local resource: joda-time-2.9.4.jar
601322 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezJobCompiler - Local resource: pig-0.17.0-core-h2.jar
18/04/06 01:46:03 INFO tez.TezJobCompiler: Local resource: pig-0.17.0-core-h2.jar
601322 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezJobCompiler - Local resource: antlr-runtime-3.4.jar
18/04/06 01:46:03 INFO tez.TezJobCompiler: Local resource: antlr-runtime-3.4.jar
601322 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezJobCompiler - Local resource: automaton-1.11-8.jar
18/04/06 01:46:03 INFO tez.TezJobCompiler: Local resource: automaton-1.11-8.jar
601402 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezDagBuilder - For vertex - scope-141: parallelism=1, memory=1536, java opts=-Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx1229m -Dlog4j.configuratorClass=org.apache.tez.common.TezLog4jConfigurator -Dlog4j.configuration=tez-container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dtez.root.logger=INFO,CLA
18/04/06 01:46:03 INFO tez.TezDagBuilder: For vertex - scope-141: parallelism=1, memory=1536, java opts=-Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx1229m -Dlog4j.configuratorClass=org.apache.tez.common.TezLog4jConfigurator -Dlog4j.configuration=tez-container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dtez.root.logger=INFO,CLA
601402 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezDagBuilder - Processing aliases: filter_input,groupinp,inp,sum_occ
18/04/06 01:46:03 INFO tez.TezDagBuilder: Processing aliases: filter_input,groupinp,inp,sum_occ
601402 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezDagBuilder - Detailed locations: inp[1,6],inp[-1,-1],filter_input[2,15],sum_occ[4,10],groupinp[3,11]
18/04/06 01:46:03 INFO tez.TezDagBuilder: Detailed locations: inp[1,6],inp[-1,-1],filter_input[2,15],sum_occ[4,10],groupinp[3,11]
601402 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezDagBuilder - Pig features in the vertex:
18/04/06 01:46:03 INFO tez.TezDagBuilder: Pig features in the vertex:
601449 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezDagBuilder - Set auto parallelism for vertex scope-142
18/04/06 01:46:03 INFO tez.TezDagBuilder: Set auto parallelism for vertex scope-142
601450 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezDagBuilder - For vertex - scope-142: parallelism=1, memory=3072, java opts=-Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2458m -Dlog4j.configuratorClass=org.apache.tez.common.TezLog4jConfigurator -Dlog4j.configuration=tez-container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dtez.root.logger=INFO,CLA
18/04/06 01:46:03 INFO tez.TezDagBuilder: For vertex - scope-142: parallelism=1, memory=3072, java opts=-Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2458m -Dlog4j.configuratorClass=org.apache.tez.common.TezLog4jConfigurator -Dlog4j.configuration=tez-container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dtez.root.logger=INFO,CLA
601450 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezDagBuilder - Processing aliases: roundto,sum_occ
18/04/06 01:46:03 INFO tez.TezDagBuilder: Processing aliases: roundto,sum_occ
601450 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezDagBuilder - Detailed locations: sum_occ[4,10],roundto[6,10]
18/04/06 01:46:03 INFO tez.TezDagBuilder: Detailed locations: sum_occ[4,10],roundto[6,10]
601450 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezDagBuilder - Pig features in the vertex: GROUP_BY
18/04/06 01:46:03 INFO tez.TezDagBuilder: Pig features in the vertex: GROUP_BY
601489 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezJobCompiler - Total estimated parallelism is 2
18/04/06 01:46:04 INFO tez.TezJobCompiler: Total estimated parallelism is 2
601531 [PigTezLauncher-0] INFO org.apache.pig.tools.pigstats.tez.TezScriptState - Pig script settings are added to the job
18/04/06 01:46:04 INFO tez.TezScriptState: Pig script settings are added to the job
18/04/06 01:46:04 INFO client.TezClient: Tez Client Version: [ component=tez-api, version=0.8.4, revision=300391394352b074b85b529e870816a72c6f314a, SCM-URL=scm:git:https://git-wip-us.apache.org/repos/asf/tez.git, buildTime=2018-03-21T23:55:28Z ]
18/04/06 01:46:04 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-28-12.ec2.internal/172.31.28.12:8032
18/04/06 01:46:04 INFO client.TezClient: Using org.apache.tez.dag.history.ats.acls.ATSHistoryACLPolicyManager to manage Timeline ACLs
18/04/06 01:46:04 INFO impl.TimelineClientImpl: Timeline service address: http://ip-172-31-28-12.ec2.internal:8188/ws/v1/timeline/
18/04/06 01:46:04 INFO client.TezClient: Session mode. Starting session.
18/04/06 01:46:04 INFO client.TezClientUtils: Using tez.lib.uris value from configuration: hdfs:///apps/tez/tez.tar.gz
18/04/06 01:46:04 INFO client.TezClientUtils: Using tez.lib.uris.classpath value from configuration: null
18/04/06 01:46:04 INFO client.TezClient: Tez system stage directory hdfs://ip-172-31-28-12.ec2.internal:8020/tmp/temp-336429202/.tez/application_1522978297921_0003 doesn't exist and is created
18/04/06 01:46:04 INFO acls.ATSHistoryACLPolicyManager: Created Timeline Domain for History ACLs, domainId=Tez_ATS_application_1522978297921_0003
18/04/06 01:46:04 INFO impl.YarnClientImpl: Submitted application application_1522978297921_0003
18/04/06 01:46:04 INFO client.TezClient: The url to track the Tez Session: http://ip-172-31-28-12.ec2.internal:20888/proxy/application_1522978297921_0003/
607861 [PigTezLauncher-0] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezJob - Submitting DAG PigLatin:DefaultJobName-0_scope-2
18/04/06 01:46:10 INFO tez.TezJob: Submitting DAG PigLatin:DefaultJobName-0_scope-2
18/04/06 01:46:10 INFO client.TezClient: Submitting dag to TezSession, sessionName=PigLatin:DefaultJobName, applicationId=application_1522978297921_0003, dagName=PigLatin:DefaultJobName-0_scope-2, callerContext={ context=PIG, callerType=PIG_SCRIPT_ID, callerId=PIG-default-d73e19dc-5287-4ee2-a85d-e931327011dc }
18/04/06 01:46:10 INFO client.TezClient: Submitted dag to TezSession, sessionName=PigLatin:DefaultJobName, applicationId=application_1522978297921_0003, dagName=PigLatin:DefaultJobName-0_scope-2
18/04/06 01:46:10 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-28-12.ec2.internal/172.31.28.12:8032
608409 [PigTezLauncher-0] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezJob - Submitted DAG PigLatin:DefaultJobName-0_scope-2. Application id: application_1522978297921_0003
18/04/06 01:46:10 INFO tez.TezJob: Submitted DAG PigLatin:DefaultJobName-0_scope-2. Application id: application_1522978297921_0003
608528 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezLauncher - HadoopJobId: job_1522978297921_0003
18/04/06 01:46:11 INFO tez.TezLauncher: HadoopJobId: job_1522978297921_0003
609410 [Timer-1] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezJob - DAG Status: status=RUNNING, progress=TotalTasks: 2 Succeeded: 0 Running: 0 Failed: 0 Killed: 0, diagnostics=, counters=null
18/04/06 01:46:11 INFO tez.TezJob: DAG Status: status=RUNNING, progress=TotalTasks: 2 Succeeded: 0 Running: 0 Failed: 0 Killed: 0, diagnostics=, counters=null
629410 [Timer-1] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezJob - DAG Status: status=RUNNING, progress=TotalTasks: 2 Succeeded: 0 Running: 1 Failed: 0 Killed: 0, diagnostics=, counters=null
18/04/06 01:46:31 INFO tez.TezJob: DAG Status: status=RUNNING, progress=TotalTasks: 2 Succeeded: 0 Running: 1 Failed: 0 Killed: 0, diagnostics=, counters=null
646404 [pool-1-thread-1] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezSessionManager - Shutting down Tez session org.apache.tez.client.TezClient@3a371843
18/04/06 01:46:48 INFO tez.TezSessionManager: Shutting down Tez session org.apache.tez.client.TezClient@3a371843
2018-04-06 01:46:48 Shutting down Tez session , sessionName=PigLatin:DefaultJobName, applicationId=application_1522978297921_0003
18/04/06 01:46:48 INFO client.TezClient: Shutting down Tez Session, sessionName=PigLatin:DefaultJobName, applicationId=application_1522978297921_0003
How do I fix this error? DUMP works for all the earlier aliases, just not for roundto. And what exactly is the Tez client?
I can't replicate your output, because I get an error as soon as I try this line:
roundto = FOREACH sum_occ GENERATE sum_occ.ngram, ROUND_TO( sum_occ.ntry , 2 );
You don't need to use the dot operator to refer to these fields (e.g. sum_occ.ngram) because they are not nested in a tuple or bag. Try the above line without the dot operator:
roundto = FOREACH sum_occ GENERATE ngram, ROUND_TO( ntry , 2 );
To answer your second question: MapReduce and Tez are both frameworks that can be used to run Pig scripts, and Tez can sometimes speed them up. You can choose one explicitly by starting your Pig shell with pig -x mapreduce or pig -x tez. MapReduce is the default, so since your logs show Tez without you having asked for it, your Hadoop cluster must be configured to run Pig on Tez.
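As a quick sanity check (the schema below is what I'd expect from your statements, so treat it as a sketch rather than verified output), DESCRIBE should report ngram and ntry as top-level fields, not nested ones:

DESCRIBE sum_occ;
-- expected, roughly: sum_occ: {ngram: chararray, ntry: double}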

Deploying war file in Amazon Elastic Beanstalk

I have a war file for my application that works fine when executed from the command line locally. I'm uploading it to Amazon's Elastic Beanstalk using Tomcat, but when I try to access the URL I receive a 404 error.
Is the problem something related to my war file, or do I have to change Amazon's configuration?
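Locally I sanity-check the archive like this to confirm it has a standard web application layout (myapp.war stands in for my actual file name):

jar tf myapp.war
# expected entries include WEB-INF/web.xml, WEB-INF/classes/... and WEB-INF/lib/...

Many thanks.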
Logs:
-------------------------------------
/var/log/httpd/elasticbeanstalk-access_log
-------------------------------------
88.26.90.37 (88.26.90.37) - - [23/Jul/2017:18:55:23 +0000] "GET / HTTP/1.1" 404 1004 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36"
-------------------------------------
/var/log/httpd/error_log
-------------------------------------
[Sun Jul 23 18:54:15 2017] [notice] Apache/2.2.32 (Unix) configured -- resuming normal operations
[Sun Jul 23 18:55:23 2017] [error] server is within MinSpareThreads of MaxClients, consider raising the MaxClients setting
-------------------------------------
/var/log/tomcat8/host-manager.2017-07-23.log
-------------------------------------
-------------------------------------
/var/log/httpd/access_log
-------------------------------------
88.26.90.37 - - [23/Jul/2017:18:55:23 +0000] "GET / HTTP/1.1" 404 1004 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36"
-------------------------------------
/var/log/tomcat8/tomcat8-initd.log
-------------------------------------
-------------------------------------
/var/log/tomcat8/localhost_access_log.txt
-------------------------------------
127.0.0.1 - - [23/Jul/2017:18:55:24 +0000] "GET / HTTP/1.1" 404 1004
-------------------------------------
/var/log/tomcat8/manager.2017-07-23.log
-------------------------------------
-------------------------------------
/var/log/eb-activity.log
-------------------------------------
+ EB_APP_DEPLOY_BASE_DIR=/var/lib/tomcat8/webapps
+ rm -rf /var/lib/tomcat8/webapps/ROOT
+ rm -rf '/usr/share/tomcat8/conf/Catalina/localhost/*'
+ rm -rf '/usr/share/tomcat8/work/Catalina/*'
+ mkdir -p /var/lib/tomcat8/webapps/ROOT
[2017-07-23T18:54:13.069Z] INFO [2048] - [Application deployment OnFocus2307@1/StartupStage1/AppDeployEnactHook/02start_xray.sh] : Starting activity...
[2017-07-23T18:54:13.290Z] INFO [2048] - [Application deployment OnFocus2307@1/StartupStage1/AppDeployEnactHook/02start_xray.sh] : Completed activity.
[2017-07-23T18:54:13.291Z] INFO [2048] - [Application deployment OnFocus2307@1/StartupStage1/AppDeployEnactHook/03_stop_proxy.sh] : Starting activity...
[2017-07-23T18:54:13.629Z] INFO [2048] - [Application deployment OnFocus2307@1/StartupStage1/AppDeployEnactHook/03_stop_proxy.sh] : Completed activity. Result:
Executing: service nginx stop
Executing: service httpd stop
Stopping httpd: [FAILED]
[2017-07-23T18:54:13.629Z] INFO [2048] - [Application deployment OnFocus2307@1/StartupStage1/AppDeployEnactHook/03deploy.sh] : Starting activity...
[2017-07-23T18:54:14.089Z] INFO [2048] - [Application deployment OnFocus2307@1/StartupStage1/AppDeployEnactHook/03deploy.sh] : Completed activity. Result:
++ /opt/elasticbeanstalk/bin/get-config container -k app_staging_dir
+ EB_APP_STAGING_DIR=/tmp/deployment/application/ROOT
++ /opt/elasticbeanstalk/bin/get-config container -k app_deploy_dir
+ EB_APP_DEPLOY_DIR=/var/lib/tomcat8/webapps/ROOT
++ wc -l
++ find /tmp/deployment/application/ROOT -maxdepth 1 -type f
+ FILE_COUNT=0
++ grep -Pi '\.war$'
++ find /tmp/deployment/application/ROOT -maxdepth 1 -type f
++ echo ''
+ WAR_FILES=
+ WAR_FILE_COUNT=0
+ [[ 0 > 0 ]]
++ readlink -f /var/lib/tomcat8/webapps/ROOT/../
+ EB_APP_DEPLOY_BASE=/var/lib/tomcat8/webapps
+ rm -rf /var/lib/tomcat8/webapps/ROOT
+ [[ 0 == 0 ]]
+ [[ 0 > 1 ]]
+ cp -R /tmp/deployment/application/ROOT /var/lib/tomcat8/webapps/ROOT
+ chown -R tomcat:tomcat /var/lib/tomcat8/webapps
[2017-07-23T18:54:14.089Z] INFO [2048] - [Application deployment OnFocus2307@1/StartupStage1/AppDeployEnactHook/04config_deploy.sh] : Starting activity...
[2017-07-23T18:54:14.652Z] INFO [2048] - [Application deployment OnFocus2307@1/StartupStage1/AppDeployEnactHook/04config_deploy.sh] : Completed activity. Result:
++ /opt/elasticbeanstalk/bin/get-config container -k config_staging_dir
+ EB_CONFIG_STAGING_DIR=/tmp/deployment/config
++ /opt/elasticbeanstalk/bin/get-config container -k config_deploy_dir
+ EB_CONFIG_DEPLOY_DIR=/etc/sysconfig
++ /opt/elasticbeanstalk/bin/get-config container -k config_filename
+ EB_CONFIG_FILENAME=tomcat8
+ cp /tmp/deployment/config/tomcat8 /etc/sysconfig/tomcat8
[2017-07-23T18:54:14.653Z] INFO [2048] - [Application deployment OnFocus2307@1/StartupStage1/AppDeployEnactHook/05start.sh] : Starting activity...
[2017-07-23T18:54:15.081Z] INFO [2048] - [Application deployment OnFocus2307@1/StartupStage1/AppDeployEnactHook/05start.sh] : Completed activity. Result:
++ /opt/elasticbeanstalk/bin/get-config container -k tomcat_version
+ TOMCAT_VERSION=8
+ TOMCAT_NAME=tomcat8
+ /etc/init.d/tomcat8 status
tomcat8 is stopped
[ OK ]
+ /etc/init.d/tomcat8 start
Starting tomcat8: [ OK ]
+ /usr/bin/monit monitor tomcat
monit: generated unique Monit id ae33689ef3cf376bf23fa3b09041524e and stored to '/root/.monit.id'
[2017-07-23T18:54:15.082Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/09_start_proxy.sh] : Starting activity...
[2017-07-23T18:54:19.022Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/09_start_proxy.sh] : Completed activity. Result:
Executing: service httpd stop
Stopping httpd: [FAILED]
Executing: service httpd start
Starting httpd: [ OK ]
Executing: /bin/chmod 755 /var/run/httpd
Executing: /opt/elasticbeanstalk/bin/healthd-track-pidfile --proxy httpd
Executing: /opt/elasticbeanstalk/bin/healthd-configure --appstat-log-path /var/log/httpd/healthd/application.log --appstat-unit usec --appstat-timestamp-on 'arrival'
Executing: /opt/elasticbeanstalk/bin/healthd-restart
[2017-07-23T18:54:19.022Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/hooks/appdeploy/enact.
[2017-07-23T18:54:19.022Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployPostHook] : Starting activity...
[2017-07-23T18:54:19.023Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployPostHook/03monitor_pids.sh] : Starting activity...
[2017-07-23T18:54:20.047Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployPostHook/03monitor_pids.sh] : Completed activity.
[2017-07-23T18:54:20.047Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployPostHook] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/hooks/appdeploy/post.
[2017-07-23T18:54:20.047Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/PostInitHook] : Starting activity...
[2017-07-23T18:54:20.048Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/PostInitHook/01processmgrstart.sh] : Starting activity...
[2017-07-23T18:54:20.097Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/PostInitHook/01processmgrstart.sh] : Completed activity. Result:
+ /usr/bin/monit
[2017-07-23T18:54:20.097Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/PostInitHook] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/hooks/postinit.
[2017-07-23T18:54:20.097Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1] : Completed activity. Result:
Application deployment - Command CMD-Startup stage 1 completed
[2017-07-23T18:54:20.098Z] INFO [2048] - [Application deployment OnFocus2307#1/AddonsAfter] : Starting activity...
[2017-07-23T18:54:20.098Z] INFO [2048] - [Application deployment OnFocus2307#1/AddonsAfter/ConfigLogRotation] : Starting activity...
[2017-07-23T18:54:20.098Z] INFO [2048] - [Application deployment OnFocus2307#1/AddonsAfter/ConfigLogRotation/10-config.sh] : Starting activity...
[2017-07-23T18:54:20.509Z] INFO [2048] - [Application deployment OnFocus2307#1/AddonsAfter/ConfigLogRotation/10-config.sh] : Completed activity. Result:
Disabled forced hourly log rotation.
[2017-07-23T18:54:20.510Z] INFO [2048] - [Application deployment OnFocus2307#1/AddonsAfter/ConfigLogRotation] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/addons/logpublish/hooks/config.
[2017-07-23T18:54:20.510Z] INFO [2048] - [Application deployment OnFocus2307#1/AddonsAfter] : Completed activity.
[2017-07-23T18:54:20.510Z] INFO [2048] - [Application deployment OnFocus2307#1] : Completed activity. Result:
Application deployment - Command CMD-Startup succeeded
[2017-07-23T18:55:36.363Z] INFO [2808] - [CMD-TailLogs] : Starting activity...
[2017-07-23T18:55:36.363Z] INFO [2808] - [CMD-TailLogs/AddonsBefore] : Starting activity...
[2017-07-23T18:55:36.364Z] INFO [2808] - [CMD-TailLogs/AddonsBefore] : Completed activity.
[2017-07-23T18:55:36.364Z] INFO [2808] - [CMD-TailLogs/TailLogs] : Starting activity...
[2017-07-23T18:55:36.364Z] INFO [2808] - [CMD-TailLogs/TailLogs/TailLogs] : Starting activity...
-------------------------------------
/var/log/tomcat8/catalina.2017-07-23.log
-------------------------------------
23-Jul-2017 18:54:17.868 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version: Apache Tomcat/8.0.44
23-Jul-2017 18:54:17.886 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built: Jul 5 2017 19:02:51 UTC
23-Jul-2017 18:54:17.886 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server number: 8.0.44.0
23-Jul-2017 18:54:17.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name: Linux
23-Jul-2017 18:54:17.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version: 4.9.32-15.41.amzn1.x86_64
23-Jul-2017 18:54:17.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture: amd64
23-Jul-2017 18:54:17.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Java Home: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.131-2.b11.30.amzn1.x86_64/jre
23-Jul-2017 18:54:17.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Version: 1.8.0_131-b11
23-Jul-2017 18:54:17.888 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Vendor: Oracle Corporation
23-Jul-2017 18:54:17.888 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_BASE: /usr/share/tomcat8
23-Jul-2017 18:54:17.888 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_HOME: /usr/share/tomcat8
23-Jul-2017 18:54:17.888 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -DJDBC_CONNECTION_STRING=
23-Jul-2017 18:54:17.890 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Xms256m
23-Jul-2017 18:54:17.891 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Xmx256m
23-Jul-2017 18:54:17.891 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -XX:MaxPermSize=64m
23-Jul-2017 18:54:17.891 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.base=/usr/share/tomcat8
23-Jul-2017 18:54:17.891 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.home=/usr/share/tomcat8
23-Jul-2017 18:54:17.891 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.awt.headless=true
23-Jul-2017 18:54:17.892 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.endorsed.dirs=
23-Jul-2017 18:54:17.892 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.io.tmpdir=/var/cache/tomcat8/temp
23-Jul-2017 18:54:17.897 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.config.file=/usr/share/tomcat8/conf/logging.properties
23-Jul-2017 18:54:17.897 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
23-Jul-2017 18:54:17.897 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
23-Jul-2017 18:54:18.284 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8080"]
23-Jul-2017 18:54:18.352 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
23-Jul-2017 18:54:18.371 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["ajp-nio-8009"]
23-Jul-2017 18:54:18.374 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
23-Jul-2017 18:54:18.377 INFO [main] org.apache.catalina.startup.Catalina.load Initialization processed in 2288 ms
23-Jul-2017 18:54:18.477 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service Catalina
23-Jul-2017 18:54:18.479 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.0.44
23-Jul-2017 18:54:18.512 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory /var/lib/tomcat8/webapps/ROOT
23-Jul-2017 18:54:27.482 INFO [localhost-startStop-1] org.apache.jasper.servlet.TldScanner.scanJars At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
23-Jul-2017 18:54:27.680 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory /var/lib/tomcat8/webapps/ROOT has finished in 9,162 ms
23-Jul-2017 18:54:27.691 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
23-Jul-2017 18:54:27.724 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]
23-Jul-2017 18:54:27.747 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 9369 ms
-------------------------------------
/var/log/eb-commandprocessor.log
-------------------------------------
[2017-07-23T18:52:59.225Z] INFO [1780] : Found enabled addons: ["logpublish", "logstreaming"].
[2017-07-23T18:52:59.227Z] INFO [1780] : Updating Command definition of addon logpublish.
[2017-07-23T18:52:59.227Z] INFO [1780] : Updating Command definition of addon logstreaming.
[2017-07-23T18:52:59.227Z] DEBUG [1780] : Retrieving metadata for key: AWS::CloudFormation::Init||Infra-WriteApplication2||files..
[2017-07-23T18:52:59.232Z] DEBUG [1780] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||ManifestFileS3Key..
[2017-07-23T18:52:59.515Z] INFO [1780] : Finding latest manifest from bucket 'elasticbeanstalk-us-west-2-253743328849' with prefix 'resources/environments/e-mrdyfipmbp/_runtime/versions/manifest_'.
[2017-07-23T18:52:59.801Z] INFO [1780] : Found manifest with key 'resources/environments/e-mrdyfipmbp/_runtime/versions/manifest_1500835914428'.
[2017-07-23T18:52:59.818Z] INFO [1780] : Updated manifest cache: deployment ID 1 and serial 1.
[2017-07-23T18:52:59.818Z] DEBUG [1780] : Loaded definition of Command CMD-PreInit.
[2017-07-23T18:52:59.818Z] INFO [1780] : Executing Initialization
[2017-07-23T18:52:59.819Z] INFO [1780] : Executing command: CMD-PreInit...
[2017-07-23T18:52:59.819Z] INFO [1780] : Executing command CMD-PreInit activities...
[2017-07-23T18:52:59.819Z] DEBUG [1780] : Setting environment variables..
[2017-07-23T18:52:59.819Z] INFO [1780] : Running AddonsBefore for command CMD-PreInit...
[2017-07-23T18:53:04.333Z] DEBUG [1780] : Running stages of Command CMD-PreInit from stage 0 to stage 0...
[2017-07-23T18:53:04.333Z] INFO [1780] : Running stage 0 of command CMD-PreInit...
[2017-07-23T18:53:04.333Z] DEBUG [1780] : Loaded 3 actions for stage 0.
[2017-07-23T18:53:04.333Z] INFO [1780] : Running 1 of 3 actions: InfraWriteConfig...
[2017-07-23T18:53:04.345Z] INFO [1780] : Running 2 of 3 actions: DownloadSourceBundle...
[2017-07-23T18:53:05.730Z] INFO [1780] : Running 3 of 3 actions: PreInitHook...
[2017-07-23T18:53:07.650Z] INFO [1780] : Running AddonsAfter for command CMD-PreInit...
[2017-07-23T18:53:07.650Z] INFO [1780] : Command CMD-PreInit succeeded!
[2017-07-23T18:53:07.651Z] INFO [1780] : Command processor returning results:
{"status":"SUCCESS","api_version":"1.0","results":[{"status":"SUCCESS","msg":"","returncode":0,"events":[]}]}
[2017-07-23T18:54:05.518Z] DEBUG [2048] : Reading config file: /etc/elasticbeanstalk/.aws-eb-stack.properties
[2017-07-23T18:54:05.518Z] DEBUG [2048] : Checking if the command processor should execute...
[2017-07-23T18:54:05.520Z] DEBUG [2048] : Checking whether the command is applicable to instance (i-04e322b065f1ab8d7)..
[2017-07-23T18:54:05.520Z] INFO [2048] : Command is applicable to this instance (i-04e322b065f1ab8d7)..
[2017-07-23T18:54:05.520Z] DEBUG [2048] : Checking if the received command stage is valid..
[2017-07-23T18:54:05.520Z] INFO [2048] : No stage_num in command. Valid stage..
[2017-07-23T18:54:05.520Z] INFO [2048] : Received command CMD-Startup: {"execution_data":"{\"leader_election\":\"true\"}","instance_ids":["i-04e322b065f1ab8d7"],"command_name":"CMD-Startup","api_version":"1.0","resource_name":"AWSEBAutoScalingGroup","request_id":"fb2b25c7-6fd7-11e7-87da-0d1616730116"}
[2017-07-23T18:54:05.520Z] INFO [2048] : Command processor should execute command.
[2017-07-23T18:54:05.520Z] DEBUG [2048] : Storing current stage..
[2017-07-23T18:54:05.520Z] DEBUG [2048] : Stage_num does not exist. Not saving null stage. Returning..
[2017-07-23T18:54:05.520Z] DEBUG [2048] : Reading config file: /etc/elasticbeanstalk/.aws-eb-stack.properties
[2017-07-23T18:54:05.520Z] DEBUG [2048] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_ContainerConfigFileContent||commands..
[2017-07-23T18:54:05.521Z] DEBUG [2048] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_API||_Commands..
[2017-07-23T18:54:05.522Z] INFO [2048] : Found enabled addons: ["logpublish", "logstreaming"].
[2017-07-23T18:54:05.524Z] INFO [2048] : Updating Command definition of addon logpublish.
[2017-07-23T18:54:05.525Z] INFO [2048] : Updating Command definition of addon logstreaming.
[2017-07-23T18:54:05.525Z] DEBUG [2048] : Refreshing metadata...
[2017-07-23T18:54:05.954Z] DEBUG [2048] : Refreshed environment metadata.
[2017-07-23T18:54:05.954Z] DEBUG [2048] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_ContainerConfigFileContent||commands..
[2017-07-23T18:54:05.956Z] DEBUG [2048] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_API||_Commands..
[2017-07-23T18:54:05.957Z] INFO [2048] : Found enabled addons: ["logpublish", "logstreaming"].
[2017-07-23T18:54:05.961Z] INFO [2048] : Updating Command definition of addon logpublish.
[2017-07-23T18:54:05.961Z] INFO [2048] : Updating Command definition of addon logstreaming.
[2017-07-23T18:54:05.962Z] DEBUG [2048] : Loaded definition of Command CMD-Startup.
[2017-07-23T18:54:05.963Z] INFO [2048] : Executing Application deployment
[2017-07-23T18:54:05.964Z] INFO [2048] : Executing command: CMD-Startup...
[2017-07-23T18:54:05.964Z] INFO [2048] : Executing command CMD-Startup activities...
[2017-07-23T18:54:05.964Z] DEBUG [2048] : Setting environment variables..
[2017-07-23T18:54:05.964Z] INFO [2048] : Running AddonsBefore for command CMD-Startup...
[2017-07-23T18:54:06.242Z] DEBUG [2048] : Running stages of Command CMD-Startup from stage 0 to stage 1...
[2017-07-23T18:54:06.242Z] INFO [2048] : Running stage 0 of command CMD-Startup...
[2017-07-23T18:54:06.242Z] INFO [2048] : Running leader election...
[2017-07-23T18:54:06.665Z] INFO [2048] : Instance is Leader.
[2017-07-23T18:54:06.666Z] DEBUG [2048] : Loaded 7 actions for stage 0.
[2017-07-23T18:54:06.666Z] INFO [2048] : Running 1 of 7 actions: HealthdLogRotation...
[2017-07-23T18:54:06.678Z] INFO [2048] : Running 2 of 7 actions: HealthdHTTPDLogging...
[2017-07-23T18:54:06.680Z] INFO [2048] : Running 3 of 7 actions: HealthdNginxLogging...
[2017-07-23T18:54:06.681Z] INFO [2048] : Running 4 of 7 actions: EbExtensionPreBuild...
[2017-07-23T18:54:07.163Z] INFO [2048] : Running 5 of 7 actions: AppDeployPreHook...
[2017-07-23T18:54:09.688Z] INFO [2048] : Running 6 of 7 actions: EbExtensionPostBuild...
[2017-07-23T18:54:10.176Z] INFO [2048] : Running 7 of 7 actions: InfraCleanEbExtension...
[2017-07-23T18:54:10.181Z] INFO [2048] : Running stage 1 of command CMD-Startup...
[2017-07-23T18:54:10.181Z] DEBUG [2048] : Loaded 3 actions for stage 1.
[2017-07-23T18:54:10.181Z] INFO [2048] : Running 1 of 3 actions: AppDeployEnactHook...
[2017-07-23T18:54:19.022Z] INFO [2048] : Running 2 of 3 actions: AppDeployPostHook...
[2017-07-23T18:54:20.047Z] INFO [2048] : Running 3 of 3 actions: PostInitHook...
[2017-07-23T18:54:20.097Z] INFO [2048] : Running AddonsAfter for command CMD-Startup...
[2017-07-23T18:54:20.510Z] INFO [2048] : Command CMD-Startup succeeded!
[2017-07-23T18:54:20.511Z] INFO [2048] : Command processor returning results:
{"status":"SUCCESS","api_version":"1.0","results":[{"status":"SUCCESS","msg":"","returncode":0,"events":[]}]}
[2017-07-23T18:55:36.353Z] DEBUG [2808] : Reading config file: /etc/elasticbeanstalk/.aws-eb-stack.properties
[2017-07-23T18:55:36.354Z] DEBUG [2808] : Checking if the command processor should execute...
[2017-07-23T18:55:36.356Z] DEBUG [2808] : Checking whether the command is applicable to instance (i-04e322b065f1ab8d7)..
[2017-07-23T18:55:36.356Z] INFO [2808] : Command is applicable to this instance (i-04e322b065f1ab8d7)..
[2017-07-23T18:55:36.356Z] DEBUG [2808] : Checking if the received command stage is valid..
[2017-07-23T18:55:36.356Z] INFO [2808] : No stage_num in command. Valid stage..
[2017-07-23T18:55:36.356Z] INFO [2808] : Received command CMD-TailLogs: {"execution_data":"{\"aws_access_key_id\":\"ASIAJSUYLCIZFOIKPO2A\",\"signature\":\"RPf86lrs\\\/c0114+EODhe8jxRJhs=\",\"security_token\":\"FQoDYXdzENz\\\/\\\/\\\/\\\/\\\/\\\/\\\/\\\/\\\/\\\/wEaDCSrox2Xx3QiuRUnziLcAxEo3H8dpDcz3tKFZriPXlqq595Xpcm6LsBYoPAwWWcm7bDE38KE8kwDhnSMHttNJl1yNd5kofzZ9J5pf9gRSQdXGHWXghfw8+Bt3IVKutzn7tni2NaXFMlZxSxOpkvVxRYUph9et1kFsDlX2ml2ONCPDGqGYFBatI1mMPbvdTVViz7YbMiGDx88kQQF9W9wghJ63FkxG0JGscE1ugXc840xjzTmSIT7bNPmlkaLI4iBLor9Whn4a1fiDuZq2EB8lDxKMd+hjWmMSbMYjPvdGusVbuvLu1KC8mvFMx29BVLoo+xvxMc2JzO03\\\/WVo50oWnM8nSG04UtfkNGapLnbVO1NWoMWD107qHSeyWqAi1HO83KmxW4E5gvtF5IGNd98yJkcSmwDv0BNJDZnP8DTZNP+AHrCW\\\/mC6ybEjNxkh\\\/La\\\/YpPmfWAcbOG61IKqIyZHrhGO65nvYRxsz5TJ9B5sbGvDmhlGEJ1thAP\\\/xcaTOAUn006DxGlO+aVrz6ie9uU6Mt4wNos4qdftSce5mszp4Gc3gYOpfzqq4lIpnB2GUY9ImVMclLI60VtaOMkzMNsNJTRtl1X1NuiUa7sefP8Rsod\\\/yeev3ueDLJsfhJozF\\\/w4MtijFfP547w1KfxKOeb08sF\",\"policy\":\"eyJleHBpcmF0aW9uIjoiMjAxNy0wNy0yM1QxOToyNTozNC41ODNaIiwiY29uZGl0aW9ucyI6W1sic3RhcnRzLXdpdGgiLCIkeC1hbXotbWV0YS10aW1lX3N0YW1wIiwiIl0sWyJzdGFydHMtd2l0aCIsIiR4LWFtei1tZXRhLXB1Ymxpc2hfbWVjaGFuaXNtIiwiIl0sWyJzdGFydHMtd2l0aCIsIiRrZXkiLCJyZXNvdXJjZXNcL2Vudmlyb25tZW50c1wvbG9nc1wvIl0sWyJzdGFydHMtd2l0aCIsIiR4LWFtei1tZXRhLWJhdGNoX2lkIiwiIl0sWyJzdGFydHMtd2l0aCIsIiR4LWFtei1tZXRhLWZpbGVfbmFtZSIsIiJdLFsic3RhcnRzLXdpdGgiLCIkeC1hbXotc2VjdXJpdHktdG9rZW4iLCIiXSxbInN0YXJ0cy13aXRoIiwiJENvbnRlbnQtVHlwZSIsIiJdLFsiZXEiLCIkYnVja2V0IiwiZWxhc3RpY2JlYW5zdGFsay11cy13ZXN0LTItMjUzNzQzMzI4ODQ5Il0sWyJlcSIsIiRhY2wiLCJwcml2YXRlIl1dfQ==\"}","instance_ids":["i-04e322b065f1ab8d7"],"data":"822325d4-6fd8-11e7-8e3e-c7d19f2d4234","command_name":"CMD-TailLogs","api_version":"1.0","resource_name":"AWSEBAutoScalingGroup","request_id":"822325d4-6fd8-11e7-8e3e-c7d19f2d4234"}
[2017-07-23T18:55:36.356Z] INFO [2808] : Command processor should execute command.
[2017-07-23T18:55:36.356Z] DEBUG [2808] : Storing current stage..
[2017-07-23T18:55:36.356Z] DEBUG [2808] : Stage_num does not exist. Not saving null stage. Returning..
[2017-07-23T18:55:36.356Z] DEBUG [2808] : Reading config file: /etc/elasticbeanstalk/.aws-eb-stack.properties
[2017-07-23T18:55:36.356Z] DEBUG [2808] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_ContainerConfigFileContent||commands..
[2017-07-23T18:55:36.358Z] DEBUG [2808] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_API||_Commands..
[2017-07-23T18:55:36.359Z] INFO [2808] : Found enabled addons: ["logpublish", "logstreaming"].
[2017-07-23T18:55:36.362Z] INFO [2808] : Updating Command definition of addon logpublish.
[2017-07-23T18:55:36.362Z] INFO [2808] : Updating Command definition of addon logstreaming.
[2017-07-23T18:55:36.362Z] DEBUG [2808] : Loaded definition of Command CMD-TailLogs.
[2017-07-23T18:55:36.362Z] INFO [2808] : Executing CMD-TailLogs
[2017-07-23T18:55:36.363Z] INFO [2808] : Executing command: CMD-TailLogs...
[2017-07-23T18:55:36.363Z] INFO [2808] : Executing command CMD-TailLogs activities...
[2017-07-23T18:55:36.363Z] DEBUG [2808] : Setting environment variables..
[2017-07-23T18:55:36.363Z] INFO [2808] : Running AddonsBefore for command CMD-TailLogs...
[2017-07-23T18:55:36.364Z] DEBUG [2808] : Running stages of Command CMD-TailLogs from stage 0 to stage 0...
[2017-07-23T18:55:36.364Z] INFO [2808] : Running stage 0 of command CMD-TailLogs...
[2017-07-23T18:55:36.364Z] DEBUG [2808] : Loaded 1 actions for stage 0.
[2017-07-23T18:55:36.364Z] INFO [2808] : Running 1 of 1 actions: TailLogs...
-------------------------------------
/var/log/httpd/elasticbeanstalk-error_log
-------------------------------------
-------------------------------------
/var/log/tomcat8/localhost.2017-07-23.log
-------------------------------------
23-Jul-2017 18:54:27.562 INFO [localhost-startStop-1] org.apache.catalina.core.ApplicationContext.log Spring WebApplicationInitializers detected on classpath: [org.springframework.boot.autoconfigure.jersey.JerseyAutoConfiguration$JerseyWebApplicationInitializer@161183dc]
-------------------------------------
/var/log/tomcat8/catalina.out
-------------------------------------
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=64m; support was removed in 8.0
23-Jul-2017 18:54:17.868 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version: Apache Tomcat/8.0.44
23-Jul-2017 18:54:17.886 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built: Jul 5 2017 19:02:51 UTC
23-Jul-2017 18:54:17.886 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server number: 8.0.44.0
23-Jul-2017 18:54:17.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name: Linux
23-Jul-2017 18:54:17.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version: 4.9.32-15.41.amzn1.x86_64
23-Jul-2017 18:54:17.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture: amd64
Check your logs for the reason: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.logging.html
Perhaps you are missing a JDBC driver? ;D
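We never see the application's own code in this dump, but a minimal, hypothetical sketch may make that hint concrete. The catalina log above shows Tomcat starting with an empty -DJDBC_CONNECTION_STRING=, and when the matching driver jar is absent from WEB-INF/lib, DriverManager fails with "No suitable driver found" in catalina.out. The class name and property handling below are illustrative assumptions, not the poster's code:
// Hypothetical sketch: reading Elastic Beanstalk's JDBC_CONNECTION_STRING
// system property (passed above on the Tomcat command line, empty in this log).
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class JdbcSmokeTest {
    public static void main(String[] args) throws SQLException {
        String url = System.getProperty("JDBC_CONNECTION_STRING");
        if (url == null || url.isEmpty()) {
            // Matches what the catalina log shows: the property is passed
            // with no value, so no datasource can be built from it.
            throw new IllegalStateException("JDBC_CONNECTION_STRING is empty");
        }
        // Throws "No suitable driver found for <url>" when the JDBC driver
        // jar is not on the webapp classpath (e.g. missing from WEB-INF/lib).
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("Connected to " + conn.getMetaData().getURL());
        }
    }
}
If a stack trace like that shows up in /var/log/tomcat8/catalina.out, bundling the driver jar into the WAR (or setting the connection string in the environment configuration) is the usual fix.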

Flume - sink as Maprfs. Files not writing

My source type is spooldir and my sink type is hdfs. There are no errors, but the files are not being copied.
By the way, I am aware of the NFS mount feature for copying data, but I am learning Flume and want to try this feature. Once this is working, I would like to try writing data using log4j, with avro as the source and hdfs as the sink.
Any help is greatly appreciated.
Regards,
Mani
# Name the components of this agents
maprfs-agent.sources = spool-collect
maprfs-agent.sinks = maprfs-write
maprfs-agent.channels = memory-channel
# Describe/ Configure the sources
maprfs-agent.sources.spool-collect.type = spooldir
maprfs-agent.sources.spool-collect.spoolDir = /home/appdata/mani
maprfs-agent.sources.spool-collect.fileHeader = true
maprfs-agent.sources.spool-collect.bufferMaxLineLength = 500
maprfs-agent.sources.spool-collect.bufferMaxLines = 10000
maprfs-agent.sources.spool-collect.batchSize = 100000
# Describe/ Configure sink
maprfs-agent.sinks.maprfs-write.type = hdfs
maprfs-agent.sinks.maprfs-write.hdfs.fileType = DataStream
maprfs-agent.sinks.maprfs-write.hdfs.path = maprfs:///sample.node.com/user/hive/test
maprfs-agent.sinks.maprfs-write.writeFormat = Text
maprfs-agent.sinks.maprfs-write.hdfs.proxyUser = root
maprfs-agent.sinks.maprfs-write.hdfs.kerberosPrincipal = mapr
maprfs-agent.sinks.maprfs-write.hdfs.kerberosKeytab = /opt/mapr/conf/flume.keytab
maprfs-agent.sinks.maprfs-write.hdfs.filePrefix = %{file}
maprfs-agent.sinks.maprfs-write.hdfs.fileSuffix = .csv
maprfs-agent.sinks.maprfs-write.hdfs.rollInterval = 0
maprfs-agent.sinks.maprfs-write.hdfs.rollCount = 0
maprfs-agent.sinks.maprfs-write.hdfs.rollSize = 0
maprfs-agent.sinks.maprfs-write.hdfs.batchSize = 100
maprfs-agent.sinks.maprfs-write.hdfs.idleTimeout = 0
maprfs-agent.sinks.maprfs-write.hdfs.maxOpenFiles = 5
# Configure channel buffer
maprfs-agent.channels.memory-channel.type = memory
maprfs-agent.channels.memory-channel.capacity = 1000
# Bind the source and the sink to the channel
maprfs-agent.sources.spool-collect.channels = memory-channel
maprfs-agent.sinks.maprfs-write.channel = memory-channel
I am getting the messages below: no errors, but no files are copied when I check with the following command.
hadoop mfs -ls /user/hive/test
15/05/26 13:55:45 INFO node.PollingPropertiesFileConfigurationProvider: Configuration provider starting
15/05/26 13:55:45 INFO node.PollingPropertiesFileConfigurationProvider: Reloading configuration file:mapr-spool.conf
15/05/26 13:55:45 INFO conf.FlumeConfiguration: Added sinks: maprfs-write Agent: maprfs-agent
15/05/26 13:55:45 INFO conf.FlumeConfiguration: Processing:maprfs-write
15/05/26 13:55:45 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [maprfs-agent]
15/05/26 13:55:45 INFO node.AbstractConfigurationProvider: Creating channels
15/05/26 13:55:45 INFO channel.DefaultChannelFactory: Creating instance of channel memory-channel type memory
15/05/26 13:55:45 INFO node.AbstractConfigurationProvider: Created channel memory-channel
15/05/26 13:55:45 INFO source.DefaultSourceFactory: Creating instance of source spool-collect, type spooldir
15/05/26 13:55:45 INFO sink.DefaultSinkFactory: Creating instance of sink: maprfs-write, type: hdfs
15/05/26 13:55:47 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false
15/05/26 13:55:47 INFO hdfs.HDFSEventSink: Auth method: PROXY
15/05/26 13:55:47 INFO hdfs.HDFSEventSink: User name: root
15/05/26 13:55:47 INFO hdfs.HDFSEventSink: Using keytab: false
15/05/26 13:55:47 INFO hdfs.HDFSEventSink: Superuser auth: SIMPLE
15/05/26 13:55:47 INFO hdfs.HDFSEventSink: Superuser name: root
15/05/26 13:55:47 INFO hdfs.HDFSEventSink: Superuser using keytab: false
15/05/26 13:55:47 INFO hdfs.HDFSEventSink: Logged in as user root
15/05/26 13:55:47 INFO node.AbstractConfigurationProvider: Channel memory-channel connected to [spool-collect, maprfs-write]
15/05/26 13:55:47 INFO node.Application: Starting new configuration:{ sourceRunners:{spool-collect=EventDrivenSourceRunner: { source:Spool Directory source spool-collect: { spoolDir: /home/appdata/mani } }} sinkRunners:{maprfs-write=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@7fc7efa0 counterGroup:{ name:null counters:{} } }} channels:{memory-channel=org.apache.flume.channel.MemoryChannel{name: memory-channel}} }
15/05/26 13:55:47 INFO node.Application: Starting Channel memory-channel
15/05/26 13:55:47 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: memory-channel: Successfully registered new MBean.
15/05/26 13:55:47 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: memory-channel started
15/05/26 13:55:47 INFO node.Application: Starting Sink maprfs-write
15/05/26 13:55:47 INFO node.Application: Starting Source spool-collect
15/05/26 13:55:47 INFO source.SpoolDirectorySource: SpoolDirectorySource source starting with directory: /home/appdata/mani
15/05/26 13:55:47 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: maprfs-write: Successfully registered new MBean.
15/05/26 13:55:47 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: maprfs-write started
15/05/26 13:55:47 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SOURCE, name: spool-collect: Successfully registered new MBean.
15/05/26 13:55:47 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: spool-collect started
15/05/26 13:55:47 INFO avro.ReliableSpoolingFileEventReader: Preparing to move file /home/appdata/mani/cron-s3.log to /home/appdata/mani/cron-s3.log.COMPLETED
15/05/26 13:55:47 INFO hdfs.HDFSDataStream: Serializer = TEXT, UseRawLocalFileSystem = false
15/05/26 13:55:48 INFO hdfs.BucketWriter: Creating maprfs:///sample.node.com/user/hive/test/.1432644947885.csv.tmp
15/05/26 13:57:08 INFO avro.ReliableSpoolingFileEventReader: Preparing to move file /home/appdata/mani/network-usage.log to /home/appdata/mani/network-usage.log.COMPLETED
15/05/26 13:57:08 INFO avro.ReliableSpoolingFileEventReader: Preparing to move file /home/appdata/mani/processor-usage-2014-10-17.log to /home/appdata/mani/processor-usage-2014-10-17.log.COMPLETED
15/05/26 13:57:25 INFO avro.ReliableSpoolingFileEventReader: Preparing to move file /home/appdata/mani/total-processor-usage.log to /home/appdata/mani/total-processor-usage.log.COMPLETED
15/05/26 13:57:25 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
15/05/26 13:57:26 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
15/05/26 13:57:26 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
15/05/26 13:57:27 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
15/05/26 13:57:27 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
15/05/26 13:57:28 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
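For what it is worth, the log above suggests the data is arriving: the sink creates maprfs:///sample.node.com/user/hive/test/.1432644947885.csv.tmp but never closes it. With hdfs.rollInterval, hdfs.rollCount, hdfs.rollSize, and hdfs.idleTimeout all set to 0, every roll condition is disabled, so the output stays an in-progress .tmp file until the agent shuts down (and the writeFormat line appears to be missing its hdfs. prefix, so the sink will not pick it up). Here is a hedged sketch of sink settings that give the file a reason to close; the thresholds are illustrative assumptions, not values from the original config:
# Hedged sketch: roll thresholds below are illustrative, not from the original.
maprfs-agent.sinks.maprfs-write.type = hdfs
maprfs-agent.sinks.maprfs-write.hdfs.path = maprfs:///sample.node.com/user/hive/test
maprfs-agent.sinks.maprfs-write.hdfs.fileType = DataStream
# Note the hdfs. prefix -- without it this property is ignored by the sink.
maprfs-agent.sinks.maprfs-write.hdfs.writeFormat = Text
maprfs-agent.sinks.maprfs-write.hdfs.filePrefix = %{file}
maprfs-agent.sinks.maprfs-write.hdfs.fileSuffix = .csv
# Close and rename the .tmp after 5 minutes, 10000 events, or ~64 MB,
# whichever comes first; 0 means "never" for each of these.
maprfs-agent.sinks.maprfs-write.hdfs.rollInterval = 300
maprfs-agent.sinks.maprfs-write.hdfs.rollCount = 10000
maprfs-agent.sinks.maprfs-write.hdfs.rollSize = 67108864
# Also close files that receive no events for 60 seconds.
maprfs-agent.sinks.maprfs-write.hdfs.idleTimeout = 60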