Error installing Flink in DCOS on DigitalOcean

I've deployed DCOS on DigitalOcean with the following configuration:
digitalocean_token = "***"
region = "fra1"
master_size = "4GB"
agent_size = "4GB"
boot_size = "4GB"
dcos_cluster_name = "digitalocean-dcos"
dcos_master_count = "1"
dcos_agent_count = "4"
dcos_public_agent_count = "1"
dcos_installer_url = "https://downloads.dcos.io/dcos/stable/dcos_generate_config.sh"
dcos_ssh_key_path = "./do-key"
dcos_ssh_public_key_path = "./do-key.pub"
ssh_key_fingerprint = "***"
Installing Flink fails both through the GUI and through the DCOS CLI.
The relevant error appears to be this:
2017-07-29 17:10:05,553 ERROR org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - Mesos JobManager initialization failed
java.net.UnknownHostException: digitalocean-dcos-agent-00: digitalocean-dcos-agent-00: Name or service not known
at java.net.InetAddress.getLocalHost(InetAddress.java:1505)
Copied from flink--mesos-appmaster-digitalocean-dcos-agent-00.log:
2017-07-29 17:10:04,930 WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-07-29 17:10:05,223 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - --------------------------------------------------------------------------------
2017-07-29 17:10:05,224 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - Starting Mesos AppMaster (Version: 1.3.1, Rev:1ca6e5b, Date:20.06.2017 @ 10:08:43 PDT)
2017-07-29 17:10:05,224 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - Current user: root
2017-07-29 17:10:05,224 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - JVM: OpenJDK 64-Bit Server VM - Oracle Corporation - 1.8/25.111-b14
2017-07-29 17:10:05,224 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - Maximum heap size: 880 MiBytes
2017-07-29 17:10:05,224 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - JAVA_HOME: /usr/lib/jvm/java-8-openjdk-amd64/jre
2017-07-29 17:10:05,229 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - Hadoop version: 2.3.0
2017-07-29 17:10:05,229 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - JVM Options:
2017-07-29 17:10:05,229 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - -Dlog.file=/mnt/mesos/sandbox/flink--mesos-appmaster-digitalocean-dcos-agent-00.log
2017-07-29 17:10:05,230 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - -Dlog4j.configuration=file:/flink-1.3.1/conf/log4j.properties
2017-07-29 17:10:05,230 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - -Dlogback.configurationFile=file:/flink-1.3.1/conf/logback.xml
2017-07-29 17:10:05,230 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - Program Arguments:
2017-07-29 17:10:05,230 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - -Dblob.server.port=20262
2017-07-29 17:10:05,230 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - -Djobmanager.heap.mb=256
2017-07-29 17:10:05,230 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - -Djobmanager.rpc.port=20261
2017-07-29 17:10:05,230 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - -Djobmanager.web.port=20260
2017-07-29 17:10:05,230 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - -Dmesos.artifact-server.port=20263
2017-07-29 17:10:05,230 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - -Dmesos.initial-tasks=1
2017-07-29 17:10:05,231 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - -Dmesos.resourcemanager.tasks.cpus=1
2017-07-29 17:10:05,231 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - -Dmesos.resourcemanager.tasks.mem=1024
2017-07-29 17:10:05,231 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - -Dtaskmanager.heap.mb=512
2017-07-29 17:10:05,231 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - -Dtaskmanager.memory.preallocate=true
2017-07-29 17:10:05,231 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - -Dtaskmanager.numberOfTaskSlots=1
2017-07-29 17:10:05,231 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - -Dparallelism.default=1
2017-07-29 17:10:05,231 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - -Dmesos.resourcemanager.framework.role=*
2017-07-29 17:10:05,231 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - -Dsecurity.kerberos.login.use-ticket-cache=true
2017-07-29 17:10:05,231 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - Classpath: /flink-1.3.1/lib/flink-python_2.10-1.3.1.jar:/flink-1.3.1/lib/flink-shaded-hadoop2-uber-1.3.1.jar:/flink-1.3.1/lib/log4j-1.2.17.jar:/flink-1.3.1/lib/slf4j-log4j12-1.7.7.jar:/flink-1.3.1/lib/flink-dist_2.10-1.3.1.jar::/etc/hadoop/conf/:
2017-07-29 17:10:05,231 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - --------------------------------------------------------------------------------
2017-07-29 17:10:05,234 INFO org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - Registered UNIX signal handlers for [TERM, HUP, INT]
2017-07-29 17:10:05,252 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: mesos.master, zk://leader.mesos:2181/mesos
2017-07-29 17:10:05,252 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: mesos.failover-timeout, 60
2017-07-29 17:10:05,254 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: mesos.initial-tasks, 1
2017-07-29 17:10:05,254 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: mesos.resourcemanager.tasks.container.type, mesos
2017-07-29 17:10:05,254 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: mesos.resourcemanager.tasks.container.image.name, openjdk:8-jre
2017-07-29 17:10:05,255 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: mesos.resourcemanager.tasks.cpus, 1
2017-07-29 17:10:05,255 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: mesos.resourcemanager.tasks.mem, 1024
2017-07-29 17:10:05,257 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.port, 6123
2017-07-29 17:10:05,258 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.heap.mb, 256
2017-07-29 17:10:05,258 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.heap.mb, 512
2017-07-29 17:10:05,258 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.numberOfTaskSlots, 1
2017-07-29 17:10:05,258 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.memory.preallocate, false
2017-07-29 17:10:05,258 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: parallelism.default, 1
2017-07-29 17:10:05,259 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.web.port, 8081
2017-07-29 17:10:05,306 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: mesos.master, zk://leader.mesos:2181/mesos
2017-07-29 17:10:05,307 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: mesos.failover-timeout, 60
2017-07-29 17:10:05,307 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: mesos.initial-tasks, 1
2017-07-29 17:10:05,307 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: mesos.resourcemanager.tasks.container.type, mesos
2017-07-29 17:10:05,307 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: mesos.resourcemanager.tasks.container.image.name, openjdk:8-jre
2017-07-29 17:10:05,307 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: mesos.resourcemanager.tasks.cpus, 1
2017-07-29 17:10:05,308 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: mesos.resourcemanager.tasks.mem, 1024
2017-07-29 17:10:05,308 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.port, 6123
2017-07-29 17:10:05,308 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.heap.mb, 256
2017-07-29 17:10:05,308 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.heap.mb, 512
2017-07-29 17:10:05,308 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.numberOfTaskSlots, 1
2017-07-29 17:10:05,308 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.memory.preallocate, false
2017-07-29 17:10:05,308 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: parallelism.default, 1
2017-07-29 17:10:05,309 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.web.port, 8081
2017-07-29 17:10:05,402 INFO org.apache.flink.runtime.security.modules.HadoopModule - Hadoop user set to root (auth:SIMPLE)
2017-07-29 17:10:05,553 ERROR org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - Mesos JobManager initialization failed
java.net.UnknownHostException: digitalocean-dcos-agent-00: digitalocean-dcos-agent-00: Name or service not known
at java.net.InetAddress.getLocalHost(InetAddress.java:1505)
at org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner.runPrivileged(MesosApplicationMasterRunner.java:216)
at org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner$1.call(MesosApplicationMasterRunner.java:181)
at org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner$1.call(MesosApplicationMasterRunner.java:178)
at org.apache.flink.runtime.security.HadoopSecurityContext$1.run(HadoopSecurityContext.java:43)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:40)
at org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner.run(MesosApplicationMasterRunner.java:178)
at org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner.main(MesosApplicationMasterRunner.java:139)
Caused by: java.net.UnknownHostException: digitalocean-dcos-agent-00: Name or service not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getLocalHost(InetAddress.java:1500)
... 10 more
Launch log (this loops indefinitely):
+ '[' '' '!=' '' ']'
+ add_mesos_configurations
++ hostname -f
hostname: Name or service not known
+ add_if_non_empty jobmanager.rpc.address
+ '[' -n '' ']'
+ add_if_non_empty mesos.resourcemanager.framework.role '*'
+ '[' -n '*' ']'
+ export 'FLINK_JAVA_OPTS= -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263 -Dmesos.initial-tasks=1 -Dmesos.resourcemanager.tasks.cpus=1 -Dmesos.resourcemanager.tasks.mem=1024 -Dtaskmanager.heap.mb=512 -Dtaskmanager.memory.preallocate=true -Dtaskmanager.numberOfTaskSlots=1 -Dparallelism.default=1 -Dmesos.resourcemanager.framework.role=*'
+ FLINK_JAVA_OPTS=' -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263 -Dmesos.initial-tasks=1 -Dmesos.resourcemanager.tasks.cpus=1 -Dmesos.resourcemanager.tasks.mem=1024 -Dtaskmanager.heap.mb=512 -Dtaskmanager.memory.preallocate=true -Dtaskmanager.numberOfTaskSlots=1 -Dparallelism.default=1 -Dmesos.resourcemanager.framework.role=*'
+ add_if_non_empty mesos.resourcemanager.framework.principal ''
+ '[' -n '' ']'
+ add_if_non_empty mesos.resourcemanager.framework.secret ''
+ '[' -n '' ']'
+ add_ssl_configurations
+ [[ '' == true ]]
+ add_kerberos_configurations
+ add_if_non_empty security.kerberos.login.use-ticket-cache true
+ '[' -n true ']'
+ export 'FLINK_JAVA_OPTS= -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263 -Dmesos.initial-tasks=1 -Dmesos.resourcemanager.tasks.cpus=1 -Dmesos.resourcemanager.tasks.mem=1024 -Dtaskmanager.heap.mb=512 -Dtaskmanager.memory.preallocate=true -Dtaskmanager.numberOfTaskSlots=1 -Dparallelism.default=1 -Dmesos.resourcemanager.framework.role=* -Dsecurity.kerberos.login.use-ticket-cache=true'
+ FLINK_JAVA_OPTS=' -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263 -Dmesos.initial-tasks=1 -Dmesos.resourcemanager.tasks.cpus=1 -Dmesos.resourcemanager.tasks.mem=1024 -Dtaskmanager.heap.mb=512 -Dtaskmanager.memory.preallocate=true -Dtaskmanager.numberOfTaskSlots=1 -Dparallelism.default=1 -Dmesos.resourcemanager.framework.role=* -Dsecurity.kerberos.login.use-ticket-cache=true'
+ '[' '' '!=' '' ']'
+ add_if_non_empty security.kerberos.login.principal ''
+ '[' -n '' ']'
+ [[ '' != '' ]]
+ update_log_level
+ [[ INFO != '' ]]
+ sed -ie 's/log4j.rootLogger=INFO, file/log4j.rootLogger=INFO, file/g' /flink-1.3.1/conf/log4j.properties
+ exec /flink-1.3.1/bin/mesos-appmaster.sh -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263 -Dmesos.initial-tasks=1 -Dmesos.resourcemanager.tasks.cpus=1 -Dmesos.resourcemanager.tasks.mem=1024 -Dtaskmanager.heap.mb=512 -Dtaskmanager.memory.preallocate=true -Dtaskmanager.numberOfTaskSlots=1 -Dparallelism.default=1 '-Dmesos.resourcemanager.framework.role=*' -Dsecurity.kerberos.login.use-ticket-cache=true
+ FLINK_SECURITY_DIR=/etc/security/flink
+ mkdir -p /etc/security/flink
+ export APPLICATION_WEB_PROXY_BASE=/service/flink
+ APPLICATION_WEB_PROXY_BASE=/service/flink
+ add_flink_configurations
+ export FLINK_JAVA_OPTS=
+ FLINK_JAVA_OPTS=
+ export 'FLINK_JAVA_OPTS= -Dblob.server.port=20262'
+ FLINK_JAVA_OPTS=' -Dblob.server.port=20262'
+ export 'FLINK_JAVA_OPTS= -Dblob.server.port=20262 -Djobmanager.heap.mb=256'
+ FLINK_JAVA_OPTS=' -Dblob.server.port=20262 -Djobmanager.heap.mb=256'
+ export 'FLINK_JAVA_OPTS= -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261'
+ FLINK_JAVA_OPTS=' -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261'
+ export 'FLINK_JAVA_OPTS= -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260'
+ FLINK_JAVA_OPTS=' -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260'
+ export 'FLINK_JAVA_OPTS= -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263'
+ FLINK_JAVA_OPTS=' -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263'
+ export 'FLINK_JAVA_OPTS= -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263 -Dmesos.initial-tasks=1'
+ FLINK_JAVA_OPTS=' -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263 -Dmesos.initial-tasks=1'
+ export 'FLINK_JAVA_OPTS= -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263 -Dmesos.initial-tasks=1 -Dmesos.resourcemanager.tasks.cpus=1'
+ FLINK_JAVA_OPTS=' -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263 -Dmesos.initial-tasks=1 -Dmesos.resourcemanager.tasks.cpus=1'
+ export 'FLINK_JAVA_OPTS= -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263 -Dmesos.initial-tasks=1 -Dmesos.resourcemanager.tasks.cpus=1 -Dmesos.resourcemanager.tasks.mem=1024'
+ FLINK_JAVA_OPTS=' -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263 -Dmesos.initial-tasks=1 -Dmesos.resourcemanager.tasks.cpus=1 -Dmesos.resourcemanager.tasks.mem=1024'
+ export 'FLINK_JAVA_OPTS= -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263 -Dmesos.initial-tasks=1 -Dmesos.resourcemanager.tasks.cpus=1 -Dmesos.resourcemanager.tasks.mem=1024 -Dtaskmanager.heap.mb=512'
+ FLINK_JAVA_OPTS=' -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263 -Dmesos.initial-tasks=1 -Dmesos.resourcemanager.tasks.cpus=1 -Dmesos.resourcemanager.tasks.mem=1024 -Dtaskmanager.heap.mb=512'
+ export 'FLINK_JAVA_OPTS= -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263 -Dmesos.initial-tasks=1 -Dmesos.resourcemanager.tasks.cpus=1 -Dmesos.resourcemanager.tasks.mem=1024 -Dtaskmanager.heap.mb=512 -Dtaskmanager.memory.preallocate=true'
+ FLINK_JAVA_OPTS=' -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263 -Dmesos.initial-tasks=1 -Dmesos.resourcemanager.tasks.cpus=1 -Dmesos.resourcemanager.tasks.mem=1024 -Dtaskmanager.heap.mb=512 -Dtaskmanager.memory.preallocate=true'
+ export 'FLINK_JAVA_OPTS= -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263 -Dmesos.initial-tasks=1 -Dmesos.resourcemanager.tasks.cpus=1 -Dmesos.resourcemanager.tasks.mem=1024 -Dtaskmanager.heap.mb=512 -Dtaskmanager.memory.preallocate=true -Dtaskmanager.numberOfTaskSlots=1'
+ FLINK_JAVA_OPTS=' -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263 -Dmesos.initial-tasks=1 -Dmesos.resourcemanager.tasks.cpus=1 -Dmesos.resourcemanager.tasks.mem=1024 -Dtaskmanager.heap.mb=512 -Dtaskmanager.memory.preallocate=true -Dtaskmanager.numberOfTaskSlots=1'
+ export 'FLINK_JAVA_OPTS= -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263 -Dmesos.initial-tasks=1 -Dmesos.resourcemanager.tasks.cpus=1 -Dmesos.resourcemanager.tasks.mem=1024 -Dtaskmanager.heap.mb=512 -Dtaskmanager.memory.preallocate=true -Dtaskmanager.numberOfTaskSlots=1 -Dparallelism.default=1'
+ FLINK_JAVA_OPTS=' -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263 -Dmesos.initial-tasks=1 -Dmesos.resourcemanager.tasks.cpus=1 -Dmesos.resourcemanager.tasks.mem=1024 -Dtaskmanager.heap.mb=512 -Dtaskmanager.memory.preallocate=true -Dtaskmanager.numberOfTaskSlots=1 -Dparallelism.default=1'
+ '[' '' '!=' '' ']'
+ add_mesos_configurations
++ hostname -f
hostname: Name or service not known
+ add_if_non_empty jobmanager.rpc.address
+ '[' -n '' ']'
+ add_if_non_empty mesos.resourcemanager.framework.role '*'
+ '[' -n '*' ']'
+ export 'FLINK_JAVA_OPTS= -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263 -Dmesos.initial-tasks=1 -Dmesos.resourcemanager.tasks.cpus=1 -Dmesos.resourcemanager.tasks.mem=1024 -Dtaskmanager.heap.mb=512 -Dtaskmanager.memory.preallocate=true -Dtaskmanager.numberOfTaskSlots=1 -Dparallelism.default=1 -Dmesos.resourcemanager.framework.role=*'
+ FLINK_JAVA_OPTS=' -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263 -Dmesos.initial-tasks=1 -Dmesos.resourcemanager.tasks.cpus=1 -Dmesos.resourcemanager.tasks.mem=1024 -Dtaskmanager.heap.mb=512 -Dtaskmanager.memory.preallocate=true -Dtaskmanager.numberOfTaskSlots=1 -Dparallelism.default=1 -Dmesos.resourcemanager.framework.role=*'
+ add_if_non_empty mesos.resourcemanager.framework.principal ''
+ '[' -n '' ']'
+ add_if_non_empty mesos.resourcemanager.framework.secret ''
+ '[' -n '' ']'
+ add_ssl_configurations
+ [[ '' == true ]]
+ add_kerberos_configurations
+ add_if_non_empty security.kerberos.login.use-ticket-cache true
+ '[' -n true ']'
+ export 'FLINK_JAVA_OPTS= -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263 -Dmesos.initial-tasks=1 -Dmesos.resourcemanager.tasks.cpus=1 -Dmesos.resourcemanager.tasks.mem=1024 -Dtaskmanager.heap.mb=512 -Dtaskmanager.memory.preallocate=true -Dtaskmanager.numberOfTaskSlots=1 -Dparallelism.default=1 -Dmesos.resourcemanager.framework.role=* -Dsecurity.kerberos.login.use-ticket-cache=true'
+ FLINK_JAVA_OPTS=' -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263 -Dmesos.initial-tasks=1 -Dmesos.resourcemanager.tasks.cpus=1 -Dmesos.resourcemanager.tasks.mem=1024 -Dtaskmanager.heap.mb=512 -Dtaskmanager.memory.preallocate=true -Dtaskmanager.numberOfTaskSlots=1 -Dparallelism.default=1 -Dmesos.resourcemanager.framework.role=* -Dsecurity.kerberos.login.use-ticket-cache=true'
+ '[' '' '!=' '' ']'
+ add_if_non_empty security.kerberos.login.principal ''
+ '[' -n '' ']'
+ [[ '' != '' ]]
+ update_log_level
+ [[ INFO != '' ]]
+ sed -ie 's/log4j.rootLogger=INFO, file/log4j.rootLogger=INFO, file/g' /flink-1.3.1/conf/log4j.properties
+ exec /flink-1.3.1/bin/mesos-appmaster.sh -Dblob.server.port=20262 -Djobmanager.heap.mb=256 -Djobmanager.rpc.port=20261 -Djobmanager.web.port=20260 -Dmesos.artifact-server.port=20263 -Dmesos.initial-tasks=1 -Dmesos.resourcemanager.tasks.cpus=1 -Dmesos.resourcemanager.tasks.mem=1024 -Dtaskmanager.heap.mb=512 -Dtaskmanager.memory.preallocate=true -Dtaskmanager.numberOfTaskSlots=1 -Dparallelism.default=1 '-Dmesos.resourcemanager.framework.role=*' -Dsecurity.kerberos.login.use-ticket-cache=true
The service stays stuck at "Deploying 1 of 1" with status Unhealthy.
On the other hand, installing other packages like Kafka and Redis succeeds.

I was working on an on-premises installation and got similar errors. It turns out CentOS 7 doesn't add the machine's hostname to /etc/hosts during installation. I simply had to add the line
127.0.0.1 myhostname
(or append the hostname to that line if it already exists), and it started working straight away.
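To check and apply this on a failing agent, a minimal sketch (the hostname below comes from the logs in the question; run it on each affected agent):
# Fails with "Name or service not known" while the problem is present
hostname -f
# Map the hostname to loopback if it is not already in /etc/hosts
grep -q "$(hostname)" /etc/hosts || echo "127.0.0.1 $(hostname)" | sudo tee -a /etc/hosts
# Should now print digitalocean-dcos-agent-00 instead of failing
hostname -f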

The issue is explained here under Troubleshooting for AWS, although it can be adapted to DigitalOcean.
Quoting from the link above:
There is a situation which can occur where the JobMaster is not able to resolve its hostname. This causes the TaskManager container that launches to never communicate with the JobManager, and the cluster never enters the ready state. The logs will contain something similar to:
2017-07-29 17:10:05,553 ERROR org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner - Mesos JobManager initialization failed
java.net.UnknownHostException: agentname: agentname: Name or service not known
at java.net.InetAddress.getLocalHost(InetAddress.java:1505)
This can be resolved by enabling "DNS Hostname" support in the
VPC for the agents.
aws ec2 modify-vpc-attribute --vpc-id vpc-a01106c2 --enable-dns-hostnames "{\"Value\":true}"
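DigitalOcean has no equivalent VPC setting, so my assumption for adapting this is to give each agent a local hosts entry instead (./do-key is the SSH key from the configuration in the question; the agent IPs are placeholders):
for ip in AGENT_IP_1 AGENT_IP_2; do
  # Make each agent resolve its own hostname via /etc/hosts
  ssh -i ./do-key root@"$ip" 'grep -q "$(hostname)" /etc/hosts || echo "127.0.0.1 $(hostname)" >> /etc/hosts'
done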

I believe this is due to a bug in the dcos-flink package that was fixed a few days ago:
fix: Set jobmanger.rpc.address to current host
Presumably the fix hasn't been deployed to your DCOS Universe yet.
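Until the patched package reaches your Universe, a possible workaround (untested, and the "flink" option key below is an assumption about the package schema -- verify it with dcos package describe flink --config) is to pin jobmanager.rpc.address at install time:
# Hypothetical options file; the key path is assumed, not confirmed
cat > flink-options.json <<'EOF'
{
  "flink": {
    "jobmanager.rpc.address": "AGENT_IP"
  }
}
EOF
dcos package install flink --options=flink-options.json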

Related

AWS Glue error when converting a dynamic dataframe to Spark

I am using an AWS Glue crawler to read some data from S3 into a table.
I would then like to use AWS Glue jobs to do some transformations. I am able to modify and run the script on a small file, but when I try to run it on larger data, I get the following error, which seems to complain about converting a DynamicFrame to a Spark dataframe. I am not even sure how to start debugging it.
I didn't see many posts on this here -- only about the sparkDF -> DynamicFrame direction.
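One way to start debugging (a sketch, assuming the input is gzip-compressed, which the ZlibDecompressor frames in the trace below suggest; bucket and prefix are placeholders) is to test every object for corruption before Glue reads it:
aws s3 ls s3://my-bucket/my-prefix/ --recursive | awk '{print $4}' | while read -r key; do
  # gzip -t returns non-zero for a truncated or corrupt member,
  # the same class of corruption behind "too many length or distance symbols"
  aws s3 cp "s3://my-bucket/$key" - 2>/dev/null | gzip -t 2>/dev/null || echo "corrupt or not gzip: $key"
done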

at com.amazonaws.services.glue.readers.GrokReader$$anonfun$init$1.apply(GrokReader.scala:62)
at scala.collection.Iterator$$anon$9.next(Iterator.scala:162)
at scala.collection.Iterator$$anon$16.hasNext(Iterator.scala:599)
at com.amazonaws.services.glue.readers.GrokReader.hasNext(GrokReader.scala:117)
at com.amazonaws.services.glue.hadoop.TapeHadoopRecordReader.nextKeyValue(TapeHadoopRecordReader.scala:73)
at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:230)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.aggregate(TraversableOnce.scala:214)
at scala.collection.AbstractIterator.aggregate(Iterator.scala:1334)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1145)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1145)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$25.apply(RDD.scala:1146)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$25.apply(RDD.scala:1146)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
2020-05-12 18:49:22,434 INFO [Thread-9] scheduler.DAGScheduler (Logging.scala:logInfo(54)) - Job 0 failed: fromRDD at DynamicFrame.scala:241, took 8327.404883 s
2020-05-12 18:49:22,450 WARN [task-result-getter-1] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9983.0 in stage 0.0 (TID 9986, ip-172-32-50-149.us-west-2.compute.internal, executor 2): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,451 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 10000.0 in stage 0.0 (TID 10003, ip-172-32-50-149.us-west-2.compute.internal, executor 2): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,451 WARN [task-result-getter-3] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9986.0 in stage 0.0 (TID 9989, ip-172-32-50-149.us-west-2.compute.internal, executor 2): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,451 WARN [task-result-getter-0] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9985.0 in stage 0.0 (TID 9988, ip-172-32-50-149.us-west-2.compute.internal, executor 2): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,451 WARN [task-result-getter-1] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9864.0 in stage 0.0 (TID 9864, ip-172-32-62-222.us-west-2.compute.internal, executor 5): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,454 INFO [dispatcher-event-loop-3] storage.BlockManagerInfo (Logging.scala:logInfo(54)) - Added broadcast_25_piece0 in memory on ip-172-32-56-53.us-west-2.compute.internal:34837 (size: 32.1 KB, free: 2.8 GB)
2020-05-12 18:49:22,455 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9900.0 in stage 0.0 (TID 9900, ip-172-32-62-222.us-west-2.compute.internal, executor 5): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,456 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9991.0 in stage 0.0 (TID 9994, ip-172-32-56-53.us-west-2.compute.internal, executor 4): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,456 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9949.0 in stage 0.0 (TID 9949, ip-172-32-56-53.us-west-2.compute.internal, executor 4): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,456 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9975.0 in stage 0.0 (TID 9977, ip-172-32-62-222.us-west-2.compute.internal, executor 5): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,456 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9995.0 in stage 0.0 (TID 9998, ip-172-32-62-222.us-west-2.compute.internal, executor 7): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,456 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 10001.0 in stage 0.0 (TID 10004, ip-172-32-62-222.us-west-2.compute.internal, executor 5): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,457 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9993.0 in stage 0.0 (TID 9996, ip-172-32-62-222.us-west-2.compute.internal, executor 7): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,457 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9939.0 in stage 0.0 (TID 9939, ip-172-32-62-222.us-west-2.compute.internal, executor 7): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,457 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9930.0 in stage 0.0 (TID 9930, ip-172-32-62-222.us-west-2.compute.internal, executor 7): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,457 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9998.0 in stage 0.0 (TID 10001, ip-172-32-54-163.us-west-2.compute.internal, executor 6): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,462 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9965.0 in stage 0.0 (TID 9967, ip-172-32-56-53.us-west-2.compute.internal, executor 1): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,463 WARN [task-result-getter-3] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9934.0 in stage 0.0 (TID 9934, ip-172-32-56-53.us-west-2.compute.internal, executor 1): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,463 WARN [task-result-getter-0] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9990.0 in stage 0.0 (TID 9993, ip-172-32-56-53.us-west-2.compute.internal, executor 1): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,464 WARN [task-result-getter-1] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9992.0 in stage 0.0 (TID 9995, ip-172-32-56-53.us-west-2.compute.internal, executor 4): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,464 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9967.0 in stage 0.0 (TID 9969, ip-172-32-56-53.us-west-2.compute.internal, executor 4): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,464 WARN [task-result-getter-0] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9982.0 in stage 0.0 (TID 9985, ip-172-32-54-163.us-west-2.compute.internal, executor 3): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,464 WARN [task-result-getter-3] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9999.0 in stage 0.0 (TID 10002, ip-172-32-54-163.us-west-2.compute.internal, executor 3): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,464 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9984.0 in stage 0.0 (TID 9987, ip-172-32-54-163.us-west-2.compute.internal, executor 3): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,464 WARN [task-result-getter-0] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9966.0 in stage 0.0 (TID 9968, ip-172-32-54-163.us-west-2.compute.internal, executor 3): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,467 WARN [task-result-getter-3] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9996.0 in stage 0.0 (TID 9999, ip-172-32-54-163.us-west-2.compute.internal, executor 6): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,474 WARN [task-result-getter-1] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9960.0 in stage 0.0 (TID 9962, ip-172-32-54-163.us-west-2.compute.internal, executor 6): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,474 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9980.0 in stage 0.0 (TID 9982, ip-172-32-54-163.us-west-2.compute.internal, executor 6): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,514 INFO [dispatcher-event-loop-0] yarn.YarnAllocator (Logging.scala:logInfo(54)) - Driver requested a total number of 1 executor(s).
Traceback (most recent call last):
File "script_2020-05-12-16-29-01.py", line 30, in <module>
dns = datasource0.toDF()
File "/mnt/yarn/usercache/root/appcache/application_1589300850182_0001/container_1589300850182_0001_01_000001/PyGlue.zip/awsglue/dynamicframe.py", line 147, in toDF
return DataFrame(self._jdf.toDF(self.glue_ctx._jvm.PythonUtils.toSeq(scala_options)), self.glue_ctx)
File "/mnt/yarn/usercache/root/appcache/application_1589300850182_0001/container_1589300850182_0001_01_000001/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/mnt/yarn/usercache/root/appcache/application_1589300850182_0001/container_1589300850182_0001_01_000001/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
return f(*a, **kw)
File "/mnt/yarn/usercache/root/appcache/application_1589300850182_0001/container_1589300850182_0001_01_000001/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o66.toDF.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 9929 in stage 0.0 failed 4 times, most recent failure: Lost task 9929.3 in stage 0.0 (TID 9983, ip-172-32-56-53.us-west-2.compute.internal, executor 1): java.io.IOException: too many length or distance symbols
at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:225)
at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:111)
at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:105)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at com.amazonaws.services.glue.readers.BufferedStream.read(DynamicRecordReader.scala:91)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.BufferedReader.fill(BufferedReader.java:161)
at java.io.BufferedReader.readLine(BufferedReader.java:324)
at java.io.BufferedReader.readLine(BufferedReader.java:389)
at com.amazonaws.services.glue.readers.GrokReader$$anonfun$init$1$$anonfun$apply$1.apply$mcV$sp(GrokReader.scala:68)
at scala.util.control.Breaks.breakable(Breaks.scala:38)
at com.amazonaws.services.glue.readers.GrokReader$$anonfun$init$1.apply(GrokReader.scala:66)
at com.amazonaws.services.glue.readers.GrokReader$$anonfun$init$1.apply(GrokReader.scala:62)
at scala.collection.Iterator$$anon$9.next(Iterator.scala:162)
at scala.collection.Iterator$$anon$16.hasNext(Iterator.scala:599)
at com.amazonaws.services.glue.readers.GrokReader.hasNext(GrokReader.scala:117)
at com.amazonaws.services.glue.hadoop.TapeHadoopRecordReader.nextKeyValue(TapeHadoopRecordReader.scala:73)
at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:230)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.aggregate(TraversableOnce.scala:214)
at scala.collection.AbstractIterator.aggregate(Iterator.scala:1334)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1145)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1145)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$25.apply(RDD.scala:1146)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$25.apply(RDD.scala:1146)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1889)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1877)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1876)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1876)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2110)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2059)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2048)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2158)
at org.apache.spark.rdd.RDD$$anonfun$fold$1.apply(RDD.scala:1098)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.fold(RDD.scala:1092)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1.apply(RDD.scala:1161)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.treeAggregate(RDD.scala:1137)
at org.apache.spark.sql.glue.util.SchemaUtils$.fromRDD(SchemaUtils.scala:72)
at com.amazonaws.services.glue.DynamicFrame.recomputeSchema(DynamicFrame.scala:241)
at com.amazonaws.services.glue.DynamicFrame.schema(DynamicFrame.scala:227)
at com.amazonaws.services.glue.DynamicFrame.toDF(DynamicFrame.scala:290)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: too many length or distance symbols
at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:225)
at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:111)
at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:105)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at com.amazonaws.services.glue.readers.BufferedStream.read(DynamicRecordReader.scala:91)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.BufferedReader.fill(BufferedReader.java:161)
at java.io.BufferedReader.readLine(BufferedReader.java:324)
at java.io.BufferedReader.readLine(BufferedReader.java:389)
at com.amazonaws.services.glue.readers.GrokReader$$anonfun$init$1$$anonfun$apply$1.apply$mcV$sp(GrokReader.scala:68)
at scala.util.control.Breaks.breakable(Breaks.scala:38)
at com.amazonaws.services.glue.readers.GrokReader$$anonfun$init$1.apply(GrokReader.scala:66)
at com.amazonaws.services.glue.readers.GrokReader$$anonfun$init$1.apply(GrokReader.scala:62)
at scala.collection.Iterator$$anon$9.next(Iterator.scala:162)
at scala.collection.Iterator$$anon$16.hasNext(Iterator.scala:599)
at com.amazonaws.services.glue.readers.GrokReader.hasNext(GrokReader.scala:117)
at com.amazonaws.services.glue.hadoop.TapeHadoopRecordReader.nextKeyValue(TapeHadoopRecordReader.scala:73)
at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:230)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.aggregate(TraversableOnce.scala:214)
at scala.collection.AbstractIterator.aggregate(Iterator.scala:1334)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1145)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1145)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$25.apply(RDD.scala:1146)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$25.apply(RDD.scala:1146)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
2020-05-12 18:49:22,587 ERROR [Driver] yarn.ApplicationMaster (Logging.scala:logError(70)) - User application exited with status 1
2020-05-12 18:49:22,588 INFO [Driver] yarn.ApplicationMaster (Logging.scala:logInfo(54)) - Final app status: FAILED, exitCode: 1, (reason: User application exited with status 1)
2020-05-12 18:49:22,591 INFO [pool-4-thread-1] spark.SparkContext (Logging.scala:logInfo(54)) - Invoking stop() from shutdown hook
2020-05-12 18:49:22,594 INFO [pool-4-thread-1] server.AbstractConnector (AbstractConnector.java:doStop(318)) - Stopped Spark#3a4d5cae{HTTP/1.1,[http/1.1]}{0.0.0.0:0}
2020-05-12 18:49:22,595 INFO [pool-4-thread-1] ui.SparkUI (Logging.scala:logInfo(54)) - Stopped Spark web UI at http://ip-172-32-50-149.us-west-2.compute.internal:40355
2020-05-12 18:49:22,597 INFO [dispatcher-event-loop-2] yarn.YarnAllocator (Logging.scala:logInfo(54)) - Driver requested a total number of 0 executor(s).
2020-05-12 18:49:22,598 INFO [pool-4-thread-1] cluster.YarnClusterSchedulerBackend (Logging.scala:logInfo(54)) - Shutting down all executors
2020-05-12 18:49:22,598 INFO [dispatcher-event-loop-3] cluster.YarnSchedulerBackend$YarnDriverEndpoint (Logging.scala:logInfo(54)) - Asking each executor to shut down
2020-05-12 18:49:22,600 INFO [pool-4-thread-1] cluster.SchedulerExtensionServices (Logging.scala:logInfo(54)) - Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
2020-05-12 18:49:22,604 INFO [dispatcher-event-loop-3] spark.MapOutputTrackerMasterEndpoint (Logging.scala:logInfo(54)) - MapOutputTrackerMasterEndpoint stopped!
2020-05-12 18:49:22,616 INFO [pool-4-thread-1] memory.MemoryStore (Logging.scala:logInfo(54)) - MemoryStore cleared
2020-05-12 18:49:22,616 INFO [pool-4-thread-1] storage.BlockManager (Logging.scala:logInfo(54)) - BlockManager stopped
2020-05-12 18:49:22,617 INFO [pool-4-thread-1] storage.BlockManagerMaster (Logging.scala:logInfo(54)) - BlockManagerMaster stopped
2020-05-12 18:49:22,618 INFO [dispatcher-event-loop-2] scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint (Logging.scala:logInfo(54)) - OutputCommitCoordinator stopped!
2020-05-12 18:49:22,621 INFO [pool-4-thread-1] spark.SparkContext (Logging.scala:logInfo(54)) - Successfully stopped SparkContext
2020-05-12 18:49:22,623 INFO [pool-4-thread-1] yarn.ApplicationMaster (Logging.scala:logInfo(54)) - Unregistering ApplicationMaster with FAILED (diag message: User application exited with status 1)
2020-05-12 18:49:22,631 INFO [pool-4-thread-1] impl.AMRMClientImpl (AMRMClientImpl.java:unregisterApplicationMaster(476)) - Waiting for application to be successfully unregistered.
2020-05-12 18:49:22,733 INFO [pool-4-thread-1] yarn.ApplicationMaster

Pig schema tuple not set. Will not generate code

I ran the following commands in Pig on the Google n-grams dataset:
inp = LOAD 'link to file' AS (ngram:chararray, year:int, occurences:float, books:float);
filter_input = FILTER inp BY (occurences >= 400) AND (books >= 8);
groupinp = GROUP filter_input BY ngram;
sum_occ = FOREACH groupinp GENERATE FLATTEN(group) as ngram, SUM(filter_input.occurences) / SUM(filter_input.books) AS ntry;
roundto = FOREACH sum_occ GENERATE sum_occ.ngram, ROUND_TO( sum_occ.ntry , 2 );
However I get the following error:
DUMP roundto;
601062 [main] WARN org.apache.pig.newplan.BaseOperatorPlan - Encountered Warning IMPLICIT_CAST_TO_FLOAT 2 time(s).
18/04/06 01:46:03 WARN newplan.BaseOperatorPlan: Encountered Warning IMPLICIT_CAST_TO_FLOAT 2 time(s).
601067 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: GROUP_BY,FILTER
18/04/06 01:46:03 INFO pigstats.ScriptState: Pig features used in the script: GROUP_BY,FILTER
601111 [main] INFO org.apache.pig.data.SchemaTupleBackend - Key [pig.schematuple] was not set... will not generate code.
18/04/06 01:46:03 INFO data.SchemaTupleBackend: Key [pig.schematuple] was not set... will not generate code.
601111 [main] INFO org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, ConstantCalculator, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, NestedLimitOptimizer, PartitionFilterOptimizer, PredicatePushdownOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
18/04/06 01:46:03 INFO optimizer.LogicalPlanOptimizer: {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, ConstantCalculator, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, NestedLimitOptimizer, PartitionFilterOptimizer, PredicatePushdownOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
601238 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezLauncher - Tez staging directory is /tmp/temp-336429202 and resources directory is /tmp/temp-336429202
18/04/06 01:46:03 INFO tez.TezLauncher: Tez staging directory is /tmp/temp-336429202 and resources directory is /tmp/temp-336429202
601239 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.plan.TezCompiler - File concatenation threshold: 100 optimistic? false
18/04/06 01:46:03 INFO plan.TezCompiler: File concatenation threshold: 100 optimistic? false
601241 [main] INFO org.apache.pig.backend.hadoop.executionengine.util.CombinerOptimizerUtil - Choosing to move algebraic foreach to combiner
18/04/06 01:46:03 INFO util.CombinerOptimizerUtil: Choosing to move algebraic foreach to combiner
601265 [main] INFO org.apache.pig.builtin.PigStorage - Using PigTextInputFormat
18/04/06 01:46:03 INFO builtin.PigStorage: Using PigTextInputFormat
18/04/06 01:46:03 INFO input.FileInputFormat: Total input files to process : 1
601285 [main] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
18/04/06 01:46:03 INFO util.MapRedUtil: Total input paths to process : 1
601285 [main] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
18/04/06 01:46:03 INFO util.MapRedUtil: Total input paths (combined) to process : 1
18/04/06 01:46:03 INFO hadoop.MRInputHelpers: NumSplits: 1, SerializedSize: 408
601322 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezJobCompiler - Local resource: joda-time-2.9.4.jar
18/04/06 01:46:03 INFO tez.TezJobCompiler: Local resource: joda-time-2.9.4.jar
601322 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezJobCompiler - Local resource: pig-0.17.0-core-h2.jar
18/04/06 01:46:03 INFO tez.TezJobCompiler: Local resource: pig-0.17.0-core-h2.jar
601322 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezJobCompiler - Local resource: antlr-runtime-3.4.jar
18/04/06 01:46:03 INFO tez.TezJobCompiler: Local resource: antlr-runtime-3.4.jar
601322 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezJobCompiler - Local resource: automaton-1.11-8.jar
18/04/06 01:46:03 INFO tez.TezJobCompiler: Local resource: automaton-1.11-8.jar
601402 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezDagBuilder - For vertex - scope-141: parallelism=1, memory=1536, java opts=-Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx1229m -Dlog4j.configuratorClass=org.apache.tez.common.TezLog4jConfigurator -Dlog4j.configuration=tez-container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dtez.root.logger=INFO,CLA
18/04/06 01:46:03 INFO tez.TezDagBuilder: For vertex - scope-141: parallelism=1, memory=1536, java opts=-Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx1229m -Dlog4j.configuratorClass=org.apache.tez.common.TezLog4jConfigurator -Dlog4j.configuration=tez-container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dtez.root.logger=INFO,CLA
601402 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezDagBuilder - Processing aliases: filter_input,groupinp,inp,sum_occ
18/04/06 01:46:03 INFO tez.TezDagBuilder: Processing aliases: filter_input,groupinp,inp,sum_occ
601402 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezDagBuilder - Detailed locations: inp[1,6],inp[-1,-1],filter_input[2,15],sum_occ[4,10],groupinp[3,11]
18/04/06 01:46:03 INFO tez.TezDagBuilder: Detailed locations: inp[1,6],inp[-1,-1],filter_input[2,15],sum_occ[4,10],groupinp[3,11]
601402 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezDagBuilder - Pig features in the vertex:
18/04/06 01:46:03 INFO tez.TezDagBuilder: Pig features in the vertex:
601449 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezDagBuilder - Set auto parallelism for vertex scope-142
18/04/06 01:46:03 INFO tez.TezDagBuilder: Set auto parallelism for vertex scope-142
601450 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezDagBuilder - For vertex - scope-142: parallelism=1, memory=3072, java opts=-Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2458m -Dlog4j.configuratorClass=org.apache.tez.common.TezLog4jConfigurator -Dlog4j.configuration=tez-container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dtez.root.logger=INFO,CLA
18/04/06 01:46:03 INFO tez.TezDagBuilder: For vertex - scope-142: parallelism=1, memory=3072, java opts=-Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2458m -Dlog4j.configuratorClass=org.apache.tez.common.TezLog4jConfigurator -Dlog4j.configuration=tez-container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dtez.root.logger=INFO,CLA
601450 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezDagBuilder - Processing aliases: roundto,sum_occ
18/04/06 01:46:03 INFO tez.TezDagBuilder: Processing aliases: roundto,sum_occ
601450 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezDagBuilder - Detailed locations: sum_occ[4,10],roundto[6,10]
18/04/06 01:46:03 INFO tez.TezDagBuilder: Detailed locations: sum_occ[4,10],roundto[6,10]
601450 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezDagBuilder - Pig features in the vertex: GROUP_BY
18/04/06 01:46:03 INFO tez.TezDagBuilder: Pig features in the vertex: GROUP_BY
601489 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezJobCompiler - Total estimated parallelism is 2
18/04/06 01:46:04 INFO tez.TezJobCompiler: Total estimated parallelism is 2
601531 [PigTezLauncher-0] INFO org.apache.pig.tools.pigstats.tez.TezScriptState - Pig script settings are added to the job
18/04/06 01:46:04 INFO tez.TezScriptState: Pig script settings are added to the job
18/04/06 01:46:04 INFO client.TezClient: Tez Client Version: [ component=tez-api, version=0.8.4, revision=300391394352b074b85b529e870816a72c6f314a, SCM-URL=scm:git:https://git-wip-us.apache.org/repos/asf/tez.git, buildTime=2018-03-21T23:55:28Z ]
18/04/06 01:46:04 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-28-12.ec2.internal/172.31.28.12:8032
18/04/06 01:46:04 INFO client.TezClient: Using org.apache.tez.dag.history.ats.acls.ATSHistoryACLPolicyManager to manage Timeline ACLs
18/04/06 01:46:04 INFO impl.TimelineClientImpl: Timeline service address: http://ip-172-31-28-12.ec2.internal:8188/ws/v1/timeline/
18/04/06 01:46:04 INFO client.TezClient: Session mode. Starting session.
18/04/06 01:46:04 INFO client.TezClientUtils: Using tez.lib.uris value from configuration: hdfs:///apps/tez/tez.tar.gz
18/04/06 01:46:04 INFO client.TezClientUtils: Using tez.lib.uris.classpath value from configuration: null
18/04/06 01:46:04 INFO client.TezClient: Tez system stage directory hdfs://ip-172-31-28-12.ec2.internal:8020/tmp/temp-336429202/.tez/application_1522978297921_0003 doesn't exist and is created
18/04/06 01:46:04 INFO acls.ATSHistoryACLPolicyManager: Created Timeline Domain for History ACLs, domainId=Tez_ATS_application_1522978297921_0003
18/04/06 01:46:04 INFO impl.YarnClientImpl: Submitted application application_1522978297921_0003
18/04/06 01:46:04 INFO client.TezClient: The url to track the Tez Session: http://ip-172-31-28-12.ec2.internal:20888/proxy/application_1522978297921_0003/
607861 [PigTezLauncher-0] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezJob - Submitting DAG PigLatin:DefaultJobName-0_scope-2
18/04/06 01:46:10 INFO tez.TezJob: Submitting DAG PigLatin:DefaultJobName-0_scope-2
18/04/06 01:46:10 INFO client.TezClient: Submitting dag to TezSession, sessionName=PigLatin:DefaultJobName, applicationId=application_1522978297921_0003, dagName=PigLatin:DefaultJobName-0_scope-2, callerContext={ context=PIG, callerType=PIG_SCRIPT_ID, callerId=PIG-default-d73e19dc-5287-4ee2-a85d-e931327011dc }
18/04/06 01:46:10 INFO client.TezClient: Submitted dag to TezSession, sessionName=PigLatin:DefaultJobName, applicationId=application_1522978297921_0003, dagName=PigLatin:DefaultJobName-0_scope-2
18/04/06 01:46:10 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-28-12.ec2.internal/172.31.28.12:8032
608409 [PigTezLauncher-0] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezJob - Submitted DAG PigLatin:DefaultJobName-0_scope-2. Application id: application_1522978297921_0003
18/04/06 01:46:10 INFO tez.TezJob: Submitted DAG PigLatin:DefaultJobName-0_scope-2. Application id: application_1522978297921_0003
608528 [main] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezLauncher - HadoopJobId: job_1522978297921_0003
18/04/06 01:46:11 INFO tez.TezLauncher: HadoopJobId: job_1522978297921_0003
609410 [Timer-1] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezJob - DAG Status: status=RUNNING, progress=TotalTasks: 2 Succeeded: 0 Running: 0 Failed: 0 Killed: 0, diagnostics=, counters=null
18/04/06 01:46:11 INFO tez.TezJob: DAG Status: status=RUNNING, progress=TotalTasks: 2 Succeeded: 0 Running: 0 Failed: 0 Killed: 0, diagnostics=, counters=null
629410 [Timer-1] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezJob - DAG Status: status=RUNNING, progress=TotalTasks: 2 Succeeded: 0 Running: 1 Failed: 0 Killed: 0, diagnostics=, counters=null
18/04/06 01:46:31 INFO tez.TezJob: DAG Status: status=RUNNING, progress=TotalTasks: 2 Succeeded: 0 Running: 1 Failed: 0 Killed: 0, diagnostics=, counters=null
646404 [pool-1-thread-1] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezSessionManager - Shutting down Tez session org.apache.tez.client.TezClient#3a371843
18/04/06 01:46:48 INFO tez.TezSessionManager: Shutting down Tez session org.apache.tez.client.TezClient#3a371843
2018-04-06 01:46:48 Shutting down Tez session , sessionName=PigLatin:DefaultJobName, applicationId=application_1522978297921_0003
18/04/06 01:46:48 INFO client.TezClient: Shutting down Tez Session, sessionName=PigLatin:DefaultJobName, applicationId=application_1522978297921_0003
How do I fix this error? DUMP commands work for the aliases defined on the previous lines, but not for roundto. And what exactly is the Tez client?
I can't replicate your output, because I get an error as soon as I try this line:
roundto = FOREACH sum_occ GENERATE sum_occ.ngram, ROUND_TO( sum_occ.ntry , 2 );
You don't need to use the dot operator to refer to these fields (e.g. sum_occ.ngram) because they are not nested in a tuple or bag. Try the above line without the dot operator:
roundto = FOREACH sum_occ GENERATE ngram, ROUND_TO( ntry , 2 );
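For contrast, the dot operator is what you want when a field really is nested inside a bag, for example right after a GROUP. A minimal sketch (the relation and field names are taken from your aliases, but the exact schema is my assumption, not your actual script):
groupinp = GROUP filter_input BY ngram;
-- ntry is now nested inside the filter_input bag, so the dot operator applies:
sum_occ = FOREACH groupinp GENERATE group AS ngram, SUM(filter_input.ntry) AS ntry;
-- after that FOREACH, ngram and ntry are top-level fields again, so no dot is needed:
roundto = FOREACH sum_occ GENERATE ngram, ROUND_TO(ntry, 2);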
To answer your second question, MapReduce and Tez are both frameworks that can be used to run Pig scripts. Tez can sometimes reduce the time it takes a Pig script to run. You can explicitly choose MapReduce or Tez by starting your Pig shell with pig -x mapreduce or pig -x tez. MapReduce is the default, so if you haven't specified Tez yourself, your Hadoop cluster must be configured to run Pig on Tez.
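For example, from the command line (the script name here is just a placeholder):
pig -x mapreduce myscript.pig    # run on MapReduce (the default)
pig -x tez myscript.pig          # run the same script on Tez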

Deploying war file in Amazon Elastic Beanstalk

I have a WAR file of my application that works fine when executed locally from the command line. I'm uploading it to Amazon's Elastic Beanstalk (Tomcat platform), but when I try to access the URL I receive a 404 error.
Is the problem related to my WAR file, or do I have to change Amazon's configuration?
Many thanks.
Logs:
-------------------------------------
/var/log/httpd/elasticbeanstalk-access_log
-------------------------------------
88.26.90.37 (88.26.90.37) - - [23/Jul/2017:18:55:23 +0000] "GET / HTTP/1.1" 404 1004 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36"
-------------------------------------
/var/log/httpd/error_log
-------------------------------------
[Sun Jul 23 18:54:15 2017] [notice] Apache/2.2.32 (Unix) configured -- resuming normal operations
[Sun Jul 23 18:55:23 2017] [error] server is within MinSpareThreads of MaxClients, consider raising the MaxClients setting
-------------------------------------
/var/log/tomcat8/host-manager.2017-07-23.log
-------------------------------------
-------------------------------------
/var/log/httpd/access_log
-------------------------------------
88.26.90.37 - - [23/Jul/2017:18:55:23 +0000] "GET / HTTP/1.1" 404 1004 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36"
-------------------------------------
/var/log/tomcat8/tomcat8-initd.log
-------------------------------------
-------------------------------------
/var/log/tomcat8/localhost_access_log.txt
-------------------------------------
127.0.0.1 - - [23/Jul/2017:18:55:24 +0000] "GET / HTTP/1.1" 404 1004
-------------------------------------
/var/log/tomcat8/manager.2017-07-23.log
-------------------------------------
-------------------------------------
/var/log/eb-activity.log
-------------------------------------
+ EB_APP_DEPLOY_BASE_DIR=/var/lib/tomcat8/webapps
+ rm -rf /var/lib/tomcat8/webapps/ROOT
+ rm -rf '/usr/share/tomcat8/conf/Catalina/localhost/*'
+ rm -rf '/usr/share/tomcat8/work/Catalina/*'
+ mkdir -p /var/lib/tomcat8/webapps/ROOT
[2017-07-23T18:54:13.069Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/02start_xray.sh] : Starting activity...
[2017-07-23T18:54:13.290Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/02start_xray.sh] : Completed activity.
[2017-07-23T18:54:13.291Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/03_stop_proxy.sh] : Starting activity...
[2017-07-23T18:54:13.629Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/03_stop_proxy.sh] : Completed activity. Result:
Executing: service nginx stop
Executing: service httpd stop
Stopping httpd: [FAILED]
[2017-07-23T18:54:13.629Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/03deploy.sh] : Starting activity...
[2017-07-23T18:54:14.089Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/03deploy.sh] : Completed activity. Result:
++ /opt/elasticbeanstalk/bin/get-config container -k app_staging_dir
+ EB_APP_STAGING_DIR=/tmp/deployment/application/ROOT
++ /opt/elasticbeanstalk/bin/get-config container -k app_deploy_dir
+ EB_APP_DEPLOY_DIR=/var/lib/tomcat8/webapps/ROOT
++ wc -l
++ find /tmp/deployment/application/ROOT -maxdepth 1 -type f
+ FILE_COUNT=0
++ grep -Pi '\.war$'
++ find /tmp/deployment/application/ROOT -maxdepth 1 -type f
++ echo ''
+ WAR_FILES=
+ WAR_FILE_COUNT=0
+ [[ 0 > 0 ]]
++ readlink -f /var/lib/tomcat8/webapps/ROOT/../
+ EB_APP_DEPLOY_BASE=/var/lib/tomcat8/webapps
+ rm -rf /var/lib/tomcat8/webapps/ROOT
+ [[ 0 == 0 ]]
+ [[ 0 > 1 ]]
+ cp -R /tmp/deployment/application/ROOT /var/lib/tomcat8/webapps/ROOT
+ chown -R tomcat:tomcat /var/lib/tomcat8/webapps
[2017-07-23T18:54:14.089Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/04config_deploy.sh] : Starting activity...
[2017-07-23T18:54:14.652Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/04config_deploy.sh] : Completed activity. Result:
++ /opt/elasticbeanstalk/bin/get-config container -k config_staging_dir
+ EB_CONFIG_STAGING_DIR=/tmp/deployment/config
++ /opt/elasticbeanstalk/bin/get-config container -k config_deploy_dir
+ EB_CONFIG_DEPLOY_DIR=/etc/sysconfig
++ /opt/elasticbeanstalk/bin/get-config container -k config_filename
+ EB_CONFIG_FILENAME=tomcat8
+ cp /tmp/deployment/config/tomcat8 /etc/sysconfig/tomcat8
[2017-07-23T18:54:14.653Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/05start.sh] : Starting activity...
[2017-07-23T18:54:15.081Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/05start.sh] : Completed activity. Result:
++ /opt/elasticbeanstalk/bin/get-config container -k tomcat_version
+ TOMCAT_VERSION=8
+ TOMCAT_NAME=tomcat8
+ /etc/init.d/tomcat8 status
tomcat8 is stopped
[ OK ]
+ /etc/init.d/tomcat8 start
Starting tomcat8: [ OK ]
+ /usr/bin/monit monitor tomcat
monit: generated unique Monit id ae33689ef3cf376bf23fa3b09041524e and stored to '/root/.monit.id'
[2017-07-23T18:54:15.082Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/09_start_proxy.sh] : Starting activity...
[2017-07-23T18:54:19.022Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook/09_start_proxy.sh] : Completed activity. Result:
Executing: service httpd stop
Stopping httpd: [FAILED]
Executing: service httpd start
Starting httpd: [ OK ]
Executing: /bin/chmod 755 /var/run/httpd
Executing: /opt/elasticbeanstalk/bin/healthd-track-pidfile --proxy httpd
Executing: /opt/elasticbeanstalk/bin/healthd-configure --appstat-log-path /var/log/httpd/healthd/application.log --appstat-unit usec --appstat-timestamp-on 'arrival'
Executing: /opt/elasticbeanstalk/bin/healthd-restart
[2017-07-23T18:54:19.022Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployEnactHook] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/hooks/appdeploy/enact.
[2017-07-23T18:54:19.022Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployPostHook] : Starting activity...
[2017-07-23T18:54:19.023Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployPostHook/03monitor_pids.sh] : Starting activity...
[2017-07-23T18:54:20.047Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployPostHook/03monitor_pids.sh] : Completed activity.
[2017-07-23T18:54:20.047Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/AppDeployPostHook] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/hooks/appdeploy/post.
[2017-07-23T18:54:20.047Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/PostInitHook] : Starting activity...
[2017-07-23T18:54:20.048Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/PostInitHook/01processmgrstart.sh] : Starting activity...
[2017-07-23T18:54:20.097Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/PostInitHook/01processmgrstart.sh] : Completed activity. Result:
+ /usr/bin/monit
[2017-07-23T18:54:20.097Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1/PostInitHook] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/hooks/postinit.
[2017-07-23T18:54:20.097Z] INFO [2048] - [Application deployment OnFocus2307#1/StartupStage1] : Completed activity. Result:
Application deployment - Command CMD-Startup stage 1 completed
[2017-07-23T18:54:20.098Z] INFO [2048] - [Application deployment OnFocus2307#1/AddonsAfter] : Starting activity...
[2017-07-23T18:54:20.098Z] INFO [2048] - [Application deployment OnFocus2307#1/AddonsAfter/ConfigLogRotation] : Starting activity...
[2017-07-23T18:54:20.098Z] INFO [2048] - [Application deployment OnFocus2307#1/AddonsAfter/ConfigLogRotation/10-config.sh] : Starting activity...
[2017-07-23T18:54:20.509Z] INFO [2048] - [Application deployment OnFocus2307#1/AddonsAfter/ConfigLogRotation/10-config.sh] : Completed activity. Result:
Disabled forced hourly log rotation.
[2017-07-23T18:54:20.510Z] INFO [2048] - [Application deployment OnFocus2307#1/AddonsAfter/ConfigLogRotation] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/addons/logpublish/hooks/config.
[2017-07-23T18:54:20.510Z] INFO [2048] - [Application deployment OnFocus2307#1/AddonsAfter] : Completed activity.
[2017-07-23T18:54:20.510Z] INFO [2048] - [Application deployment OnFocus2307#1] : Completed activity. Result:
Application deployment - Command CMD-Startup succeeded
[2017-07-23T18:55:36.363Z] INFO [2808] - [CMD-TailLogs] : Starting activity...
[2017-07-23T18:55:36.363Z] INFO [2808] - [CMD-TailLogs/AddonsBefore] : Starting activity...
[2017-07-23T18:55:36.364Z] INFO [2808] - [CMD-TailLogs/AddonsBefore] : Completed activity.
[2017-07-23T18:55:36.364Z] INFO [2808] - [CMD-TailLogs/TailLogs] : Starting activity...
[2017-07-23T18:55:36.364Z] INFO [2808] - [CMD-TailLogs/TailLogs/TailLogs] : Starting activity...
-------------------------------------
/var/log/tomcat8/catalina.2017-07-23.log
-------------------------------------
23-Jul-2017 18:54:17.868 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version: Apache Tomcat/8.0.44
23-Jul-2017 18:54:17.886 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built: Jul 5 2017 19:02:51 UTC
23-Jul-2017 18:54:17.886 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server number: 8.0.44.0
23-Jul-2017 18:54:17.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name: Linux
23-Jul-2017 18:54:17.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version: 4.9.32-15.41.amzn1.x86_64
23-Jul-2017 18:54:17.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture: amd64
23-Jul-2017 18:54:17.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Java Home: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.131-2.b11.30.amzn1.x86_64/jre
23-Jul-2017 18:54:17.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Version: 1.8.0_131-b11
23-Jul-2017 18:54:17.888 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Vendor: Oracle Corporation
23-Jul-2017 18:54:17.888 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_BASE: /usr/share/tomcat8
23-Jul-2017 18:54:17.888 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_HOME: /usr/share/tomcat8
23-Jul-2017 18:54:17.888 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -DJDBC_CONNECTION_STRING=
23-Jul-2017 18:54:17.890 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Xms256m
23-Jul-2017 18:54:17.891 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Xmx256m
23-Jul-2017 18:54:17.891 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -XX:MaxPermSize=64m
23-Jul-2017 18:54:17.891 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.base=/usr/share/tomcat8
23-Jul-2017 18:54:17.891 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.home=/usr/share/tomcat8
23-Jul-2017 18:54:17.891 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.awt.headless=true
23-Jul-2017 18:54:17.892 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.endorsed.dirs=
23-Jul-2017 18:54:17.892 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.io.tmpdir=/var/cache/tomcat8/temp
23-Jul-2017 18:54:17.897 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.config.file=/usr/share/tomcat8/conf/logging.properties
23-Jul-2017 18:54:17.897 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
23-Jul-2017 18:54:17.897 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
23-Jul-2017 18:54:18.284 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8080"]
23-Jul-2017 18:54:18.352 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
23-Jul-2017 18:54:18.371 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["ajp-nio-8009"]
23-Jul-2017 18:54:18.374 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
23-Jul-2017 18:54:18.377 INFO [main] org.apache.catalina.startup.Catalina.load Initialization processed in 2288 ms
23-Jul-2017 18:54:18.477 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service Catalina
23-Jul-2017 18:54:18.479 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.0.44
23-Jul-2017 18:54:18.512 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory /var/lib/tomcat8/webapps/ROOT
23-Jul-2017 18:54:27.482 INFO [localhost-startStop-1] org.apache.jasper.servlet.TldScanner.scanJars At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
23-Jul-2017 18:54:27.680 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory /var/lib/tomcat8/webapps/ROOT has finished in 9,162 ms
23-Jul-2017 18:54:27.691 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
23-Jul-2017 18:54:27.724 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]
23-Jul-2017 18:54:27.747 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 9369 ms
-------------------------------------
/var/log/eb-commandprocessor.log
-------------------------------------
[2017-07-23T18:52:59.225Z] INFO [1780] : Found enabled addons: ["logpublish", "logstreaming"].
[2017-07-23T18:52:59.227Z] INFO [1780] : Updating Command definition of addon logpublish.
[2017-07-23T18:52:59.227Z] INFO [1780] : Updating Command definition of addon logstreaming.
[2017-07-23T18:52:59.227Z] DEBUG [1780] : Retrieving metadata for key: AWS::CloudFormation::Init||Infra-WriteApplication2||files..
[2017-07-23T18:52:59.232Z] DEBUG [1780] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||ManifestFileS3Key..
[2017-07-23T18:52:59.515Z] INFO [1780] : Finding latest manifest from bucket 'elasticbeanstalk-us-west-2-253743328849' with prefix 'resources/environments/e-mrdyfipmbp/_runtime/versions/manifest_'.
[2017-07-23T18:52:59.801Z] INFO [1780] : Found manifest with key 'resources/environments/e-mrdyfipmbp/_runtime/versions/manifest_1500835914428'.
[2017-07-23T18:52:59.818Z] INFO [1780] : Updated manifest cache: deployment ID 1 and serial 1.
[2017-07-23T18:52:59.818Z] DEBUG [1780] : Loaded definition of Command CMD-PreInit.
[2017-07-23T18:52:59.818Z] INFO [1780] : Executing Initialization
[2017-07-23T18:52:59.819Z] INFO [1780] : Executing command: CMD-PreInit...
[2017-07-23T18:52:59.819Z] INFO [1780] : Executing command CMD-PreInit activities...
[2017-07-23T18:52:59.819Z] DEBUG [1780] : Setting environment variables..
[2017-07-23T18:52:59.819Z] INFO [1780] : Running AddonsBefore for command CMD-PreInit...
[2017-07-23T18:53:04.333Z] DEBUG [1780] : Running stages of Command CMD-PreInit from stage 0 to stage 0...
[2017-07-23T18:53:04.333Z] INFO [1780] : Running stage 0 of command CMD-PreInit...
[2017-07-23T18:53:04.333Z] DEBUG [1780] : Loaded 3 actions for stage 0.
[2017-07-23T18:53:04.333Z] INFO [1780] : Running 1 of 3 actions: InfraWriteConfig...
[2017-07-23T18:53:04.345Z] INFO [1780] : Running 2 of 3 actions: DownloadSourceBundle...
[2017-07-23T18:53:05.730Z] INFO [1780] : Running 3 of 3 actions: PreInitHook...
[2017-07-23T18:53:07.650Z] INFO [1780] : Running AddonsAfter for command CMD-PreInit...
[2017-07-23T18:53:07.650Z] INFO [1780] : Command CMD-PreInit succeeded!
[2017-07-23T18:53:07.651Z] INFO [1780] : Command processor returning results:
{"status":"SUCCESS","api_version":"1.0","results":[{"status":"SUCCESS","msg":"","returncode":0,"events":[]}]}
[2017-07-23T18:54:05.518Z] DEBUG [2048] : Reading config file: /etc/elasticbeanstalk/.aws-eb-stack.properties
[2017-07-23T18:54:05.518Z] DEBUG [2048] : Checking if the command processor should execute...
[2017-07-23T18:54:05.520Z] DEBUG [2048] : Checking whether the command is applicable to instance (i-04e322b065f1ab8d7)..
[2017-07-23T18:54:05.520Z] INFO [2048] : Command is applicable to this instance (i-04e322b065f1ab8d7)..
[2017-07-23T18:54:05.520Z] DEBUG [2048] : Checking if the received command stage is valid..
[2017-07-23T18:54:05.520Z] INFO [2048] : No stage_num in command. Valid stage..
[2017-07-23T18:54:05.520Z] INFO [2048] : Received command CMD-Startup: {"execution_data":"{\"leader_election\":\"true\"}","instance_ids":["i-04e322b065f1ab8d7"],"command_name":"CMD-Startup","api_version":"1.0","resource_name":"AWSEBAutoScalingGroup","request_id":"fb2b25c7-6fd7-11e7-87da-0d1616730116"}
[2017-07-23T18:54:05.520Z] INFO [2048] : Command processor should execute command.
[2017-07-23T18:54:05.520Z] DEBUG [2048] : Storing current stage..
[2017-07-23T18:54:05.520Z] DEBUG [2048] : Stage_num does not exist. Not saving null stage. Returning..
[2017-07-23T18:54:05.520Z] DEBUG [2048] : Reading config file: /etc/elasticbeanstalk/.aws-eb-stack.properties
[2017-07-23T18:54:05.520Z] DEBUG [2048] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_ContainerConfigFileContent||commands..
[2017-07-23T18:54:05.521Z] DEBUG [2048] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_API||_Commands..
[2017-07-23T18:54:05.522Z] INFO [2048] : Found enabled addons: ["logpublish", "logstreaming"].
[2017-07-23T18:54:05.524Z] INFO [2048] : Updating Command definition of addon logpublish.
[2017-07-23T18:54:05.525Z] INFO [2048] : Updating Command definition of addon logstreaming.
[2017-07-23T18:54:05.525Z] DEBUG [2048] : Refreshing metadata...
[2017-07-23T18:54:05.954Z] DEBUG [2048] : Refreshed environment metadata.
[2017-07-23T18:54:05.954Z] DEBUG [2048] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_ContainerConfigFileContent||commands..
[2017-07-23T18:54:05.956Z] DEBUG [2048] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_API||_Commands..
[2017-07-23T18:54:05.957Z] INFO [2048] : Found enabled addons: ["logpublish", "logstreaming"].
[2017-07-23T18:54:05.961Z] INFO [2048] : Updating Command definition of addon logpublish.
[2017-07-23T18:54:05.961Z] INFO [2048] : Updating Command definition of addon logstreaming.
[2017-07-23T18:54:05.962Z] DEBUG [2048] : Loaded definition of Command CMD-Startup.
[2017-07-23T18:54:05.963Z] INFO [2048] : Executing Application deployment
[2017-07-23T18:54:05.964Z] INFO [2048] : Executing command: CMD-Startup...
[2017-07-23T18:54:05.964Z] INFO [2048] : Executing command CMD-Startup activities...
[2017-07-23T18:54:05.964Z] DEBUG [2048] : Setting environment variables..
[2017-07-23T18:54:05.964Z] INFO [2048] : Running AddonsBefore for command CMD-Startup...
[2017-07-23T18:54:06.242Z] DEBUG [2048] : Running stages of Command CMD-Startup from stage 0 to stage 1...
[2017-07-23T18:54:06.242Z] INFO [2048] : Running stage 0 of command CMD-Startup...
[2017-07-23T18:54:06.242Z] INFO [2048] : Running leader election...
[2017-07-23T18:54:06.665Z] INFO [2048] : Instance is Leader.
[2017-07-23T18:54:06.666Z] DEBUG [2048] : Loaded 7 actions for stage 0.
[2017-07-23T18:54:06.666Z] INFO [2048] : Running 1 of 7 actions: HealthdLogRotation...
[2017-07-23T18:54:06.678Z] INFO [2048] : Running 2 of 7 actions: HealthdHTTPDLogging...
[2017-07-23T18:54:06.680Z] INFO [2048] : Running 3 of 7 actions: HealthdNginxLogging...
[2017-07-23T18:54:06.681Z] INFO [2048] : Running 4 of 7 actions: EbExtensionPreBuild...
[2017-07-23T18:54:07.163Z] INFO [2048] : Running 5 of 7 actions: AppDeployPreHook...
[2017-07-23T18:54:09.688Z] INFO [2048] : Running 6 of 7 actions: EbExtensionPostBuild...
[2017-07-23T18:54:10.176Z] INFO [2048] : Running 7 of 7 actions: InfraCleanEbExtension...
[2017-07-23T18:54:10.181Z] INFO [2048] : Running stage 1 of command CMD-Startup...
[2017-07-23T18:54:10.181Z] DEBUG [2048] : Loaded 3 actions for stage 1.
[2017-07-23T18:54:10.181Z] INFO [2048] : Running 1 of 3 actions: AppDeployEnactHook...
[2017-07-23T18:54:19.022Z] INFO [2048] : Running 2 of 3 actions: AppDeployPostHook...
[2017-07-23T18:54:20.047Z] INFO [2048] : Running 3 of 3 actions: PostInitHook...
[2017-07-23T18:54:20.097Z] INFO [2048] : Running AddonsAfter for command CMD-Startup...
[2017-07-23T18:54:20.510Z] INFO [2048] : Command CMD-Startup succeeded!
[2017-07-23T18:54:20.511Z] INFO [2048] : Command processor returning results:
{"status":"SUCCESS","api_version":"1.0","results":[{"status":"SUCCESS","msg":"","returncode":0,"events":[]}]}
[2017-07-23T18:55:36.353Z] DEBUG [2808] : Reading config file: /etc/elasticbeanstalk/.aws-eb-stack.properties
[2017-07-23T18:55:36.354Z] DEBUG [2808] : Checking if the command processor should execute...
[2017-07-23T18:55:36.356Z] DEBUG [2808] : Checking whether the command is applicable to instance (i-04e322b065f1ab8d7)..
[2017-07-23T18:55:36.356Z] INFO [2808] : Command is applicable to this instance (i-04e322b065f1ab8d7)..
[2017-07-23T18:55:36.356Z] DEBUG [2808] : Checking if the received command stage is valid..
[2017-07-23T18:55:36.356Z] INFO [2808] : No stage_num in command. Valid stage..
[2017-07-23T18:55:36.356Z] INFO [2808] : Received command CMD-TailLogs: {"execution_data":"{\"aws_access_key_id\":\"ASIAJSUYLCIZFOIKPO2A\",\"signature\":\"RPf86lrs\\\/c0114+EODhe8jxRJhs=\",\"security_token\":\"FQoDYXdzENz\\\/\\\/\\\/\\\/\\\/\\\/\\\/\\\/\\\/\\\/wEaDCSrox2Xx3QiuRUnziLcAxEo3H8dpDcz3tKFZriPXlqq595Xpcm6LsBYoPAwWWcm7bDE38KE8kwDhnSMHttNJl1yNd5kofzZ9J5pf9gRSQdXGHWXghfw8+Bt3IVKutzn7tni2NaXFMlZxSxOpkvVxRYUph9et1kFsDlX2ml2ONCPDGqGYFBatI1mMPbvdTVViz7YbMiGDx88kQQF9W9wghJ63FkxG0JGscE1ugXc840xjzTmSIT7bNPmlkaLI4iBLor9Whn4a1fiDuZq2EB8lDxKMd+hjWmMSbMYjPvdGusVbuvLu1KC8mvFMx29BVLoo+xvxMc2JzO03\\\/WVo50oWnM8nSG04UtfkNGapLnbVO1NWoMWD107qHSeyWqAi1HO83KmxW4E5gvtF5IGNd98yJkcSmwDv0BNJDZnP8DTZNP+AHrCW\\\/mC6ybEjNxkh\\\/La\\\/YpPmfWAcbOG61IKqIyZHrhGO65nvYRxsz5TJ9B5sbGvDmhlGEJ1thAP\\\/xcaTOAUn006DxGlO+aVrz6ie9uU6Mt4wNos4qdftSce5mszp4Gc3gYOpfzqq4lIpnB2GUY9ImVMclLI60VtaOMkzMNsNJTRtl1X1NuiUa7sefP8Rsod\\\/yeev3ueDLJsfhJozF\\\/w4MtijFfP547w1KfxKOeb08sF\",\"policy\":\"eyJleHBpcmF0aW9uIjoiMjAxNy0wNy0yM1QxOToyNTozNC41ODNaIiwiY29uZGl0aW9ucyI6W1sic3RhcnRzLXdpdGgiLCIkeC1hbXotbWV0YS10aW1lX3N0YW1wIiwiIl0sWyJzdGFydHMtd2l0aCIsIiR4LWFtei1tZXRhLXB1Ymxpc2hfbWVjaGFuaXNtIiwiIl0sWyJzdGFydHMtd2l0aCIsIiRrZXkiLCJyZXNvdXJjZXNcL2Vudmlyb25tZW50c1wvbG9nc1wvIl0sWyJzdGFydHMtd2l0aCIsIiR4LWFtei1tZXRhLWJhdGNoX2lkIiwiIl0sWyJzdGFydHMtd2l0aCIsIiR4LWFtei1tZXRhLWZpbGVfbmFtZSIsIiJdLFsic3RhcnRzLXdpdGgiLCIkeC1hbXotc2VjdXJpdHktdG9rZW4iLCIiXSxbInN0YXJ0cy13aXRoIiwiJENvbnRlbnQtVHlwZSIsIiJdLFsiZXEiLCIkYnVja2V0IiwiZWxhc3RpY2JlYW5zdGFsay11cy13ZXN0LTItMjUzNzQzMzI4ODQ5Il0sWyJlcSIsIiRhY2wiLCJwcml2YXRlIl1dfQ==\"}","instance_ids":["i-04e322b065f1ab8d7"],"data":"822325d4-6fd8-11e7-8e3e-c7d19f2d4234","command_name":"CMD-TailLogs","api_version":"1.0","resource_name":"AWSEBAutoScalingGroup","request_id":"822325d4-6fd8-11e7-8e3e-c7d19f2d4234"}
[2017-07-23T18:55:36.356Z] INFO [2808] : Command processor should execute command.
[2017-07-23T18:55:36.356Z] DEBUG [2808] : Storing current stage..
[2017-07-23T18:55:36.356Z] DEBUG [2808] : Stage_num does not exist. Not saving null stage. Returning..
[2017-07-23T18:55:36.356Z] DEBUG [2808] : Reading config file: /etc/elasticbeanstalk/.aws-eb-stack.properties
[2017-07-23T18:55:36.356Z] DEBUG [2808] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_ContainerConfigFileContent||commands..
[2017-07-23T18:55:36.358Z] DEBUG [2808] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_API||_Commands..
[2017-07-23T18:55:36.359Z] INFO [2808] : Found enabled addons: ["logpublish", "logstreaming"].
[2017-07-23T18:55:36.362Z] INFO [2808] : Updating Command definition of addon logpublish.
[2017-07-23T18:55:36.362Z] INFO [2808] : Updating Command definition of addon logstreaming.
[2017-07-23T18:55:36.362Z] DEBUG [2808] : Loaded definition of Command CMD-TailLogs.
[2017-07-23T18:55:36.362Z] INFO [2808] : Executing CMD-TailLogs
[2017-07-23T18:55:36.363Z] INFO [2808] : Executing command: CMD-TailLogs...
[2017-07-23T18:55:36.363Z] INFO [2808] : Executing command CMD-TailLogs activities...
[2017-07-23T18:55:36.363Z] DEBUG [2808] : Setting environment variables..
[2017-07-23T18:55:36.363Z] INFO [2808] : Running AddonsBefore for command CMD-TailLogs...
[2017-07-23T18:55:36.364Z] DEBUG [2808] : Running stages of Command CMD-TailLogs from stage 0 to stage 0...
[2017-07-23T18:55:36.364Z] INFO [2808] : Running stage 0 of command CMD-TailLogs...
[2017-07-23T18:55:36.364Z] DEBUG [2808] : Loaded 1 actions for stage 0.
[2017-07-23T18:55:36.364Z] INFO [2808] : Running 1 of 1 actions: TailLogs...
-------------------------------------
/var/log/httpd/elasticbeanstalk-error_log
-------------------------------------
-------------------------------------
/var/log/tomcat8/localhost.2017-07-23.log
-------------------------------------
23-Jul-2017 18:54:27.562 INFO [localhost-startStop-1] org.apache.catalina.core.ApplicationContext.log Spring WebApplicationInitializers detected on classpath: [org.springframework.boot.autoconfigure.jersey.JerseyAutoConfiguration$JerseyWebApplicationInitializer#161183dc]
-------------------------------------
/var/log/tomcat8/catalina.out
-------------------------------------
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=64m; support was removed in 8.0
23-Jul-2017 18:54:17.868 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version: Apache Tomcat/8.0.44
23-Jul-2017 18:54:17.886 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built: Jul 5 2017 19:02:51 UTC
23-Jul-2017 18:54:17.886 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server number: 8.0.44.0
23-Jul-2017 18:54:17.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name: Linux
23-Jul-2017 18:54:17.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version: 4.9.32-15.41.amzn1.x86_64
23-Jul-2017 18:54:17.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture: amd64
Check your logs for the reason: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.logging.html
Perhaps you are missing a JDBC driver? ;D
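If you have the EB CLI installed (and the project initialized with eb init), a rough sketch of how to pull the logs without going through the console:
eb logs          # tail the most relevant log files
eb logs --all    # download the complete log bundle for offline inspection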

Flink JobManager disconnected from TaskManager

I run a standalone Flink cluster, and it disconnects a TaskManager from the JobManager after a few hours.
I followed the basic standalone-cluster setup steps from flink.apache.org, with 1 master node and 2 workers, all running Mac OS X.
Only 1 of the 2 workers is disconnected from the JobManager.
I'm sharing the log files; please suggest a solution.
Below is the JobManager's log:
2017-01-28 16:58:32,092 WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-01-28 16:58:32,195 INFO org.apache.flink.runtime.jobmanager.JobManager - --------------------------------------------------------------------------------
2017-01-28 16:58:32,195 INFO org.apache.flink.runtime.jobmanager.JobManager - Starting JobManager (Version: 1.1.4, Rev:8fb0fc8, Date:19.12.2016 # 10:42:50 UTC)
2017-01-28 16:58:32,195 INFO org.apache.flink.runtime.jobmanager.JobManager - Current user: macmini
2017-01-28 16:58:32,195 INFO org.apache.flink.runtime.jobmanager.JobManager - JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.121-b13
2017-01-28 16:58:32,195 INFO org.apache.flink.runtime.jobmanager.JobManager - Maximum heap size: 491 MiBytes
2017-01-28 16:58:32,195 INFO org.apache.flink.runtime.jobmanager.JobManager - JAVA_HOME: (not set)
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - Hadoop version: 2.7.2
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - JVM Options:
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - -Xms512m
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - -Xmx512m
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - -Dlog.file=/server/flink-1.1.4/log/flink-macmini-jobmanager-0-macmini.local.log
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - -Dlog4j.configuration=file:/server/flink-1.1.4/conf/log4j.properties
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - -Dlogback.configurationFile=file:/server/flink-1.1.4/conf/logback.xml
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - Program Arguments:
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - --configDir
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - /server/flink-1.1.4/conf
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - --executionMode
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - cluster
2017-01-28 16:58:32,198 INFO org.apache.flink.runtime.jobmanager.JobManager - Classpath: /server/flink-1.1.4/lib/flink-dist_2.11-1.1.4.jar:/server/flink-1.1.4/lib/flink-python_2.11-1.1.4.jar:/server/flink-1.1.4/lib/log4j-1.2.17.jar:/server/flink-1.1.4/lib/slf4j-log4j12-1.7.7.jar:::
2017-01-28 16:58:32,199 INFO org.apache.flink.runtime.jobmanager.JobManager - --------------------------------------------------------------------------------
2017-01-28 16:58:32,199 INFO org.apache.flink.runtime.jobmanager.JobManager - Registered UNIX signal handlers for [TERM, HUP, INT]
2017-01-28 16:58:32,354 INFO org.apache.flink.runtime.jobmanager.JobManager - Loading configuration from /server/flink-1.1.4/conf
2017-01-28 16:58:32,367 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.address, 192.168.0.15
2017-01-28 16:58:32,367 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.port, 6123
2017-01-28 16:58:32,368 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.heap.mb, 512
2017-01-28 16:58:32,368 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.heap.mb, 1024
2017-01-28 16:58:32,368 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.numberOfTaskSlots, 8
2017-01-28 16:58:32,368 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.memory.preallocate, false
2017-01-28 16:58:32,368 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: parallelism.default, 2
2017-01-28 16:58:32,368 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.web.port, 8081
2017-01-28 16:58:32,372 INFO org.apache.flink.runtime.jobmanager.JobManager - Starting JobManager without high-availability
2017-01-28 16:58:32,380 INFO org.apache.flink.runtime.jobmanager.JobManager - Starting JobManager on 192.168.0.15:6123 with execution mode CLUSTER
2017-01-28 16:58:32,416 INFO org.apache.flink.runtime.jobmanager.JobManager - Security is not enabled. Starting non-authenticated JobManager.
2017-01-28 16:58:32,467 INFO org.apache.flink.runtime.jobmanager.JobManager - Starting JobManager
2017-01-28 16:58:32,468 INFO org.apache.flink.runtime.jobmanager.JobManager - Starting JobManager actor system at 192.168.0.15:6123
2017-01-28 16:58:32,774 INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
2017-01-28 16:58:32,814 INFO Remoting - Starting remoting
2017-01-28 16:58:32,999 INFO Remoting - Remoting started; listening on addresses :[akka.tcp://flink#192.168.0.15:6123]
2017-01-28 16:58:33,006 INFO org.apache.flink.runtime.jobmanager.JobManager - Starting JobManager web frontend
2017-01-28 16:58:33,020 INFO org.apache.flink.runtime.webmonitor.WebMonitorUtils - Determined location of JobManager log file: /server/flink-1.1.4/log/flink-macmini-jobmanager-0-macmini.local.log
2017-01-28 16:58:33,020 INFO org.apache.flink.runtime.webmonitor.WebMonitorUtils - Determined location of JobManager stdout file: /server/flink-1.1.4/log/flink-macmini-jobmanager-0-macmini.local.out
2017-01-28 16:58:33,088 INFO org.apache.flink.runtime.webmonitor.WebRuntimeMonitor - Using directory /var/folders/8m/qw1dq5sx68d38w7xg2pqhnz80000gn/T/flink-web-e2793d07-860b-4854-8c16-5805320cbd62 for the web interface files
2017-01-28 16:58:33,089 INFO org.apache.flink.runtime.webmonitor.WebRuntimeMonitor - Using directory /var/folders/8m/qw1dq5sx68d38w7xg2pqhnz80000gn/T/flink-web-upload-b98128d7-8572-47e8-bdfb-b30d287d0868 for web frontend JAR file uploads
2017-01-28 16:58:33,371 INFO org.apache.flink.runtime.webmonitor.WebRuntimeMonitor - Web frontend listening at 0:0:0:0:0:0:0:0:8081
2017-01-28 16:58:33,371 INFO org.apache.flink.runtime.jobmanager.JobManager - Starting JobManager actor
2017-01-28 16:58:33,379 INFO org.apache.flink.runtime.blob.BlobServer - Created BLOB server storage directory /var/folders/8m/qw1dq5sx68d38w7xg2pqhnz80000gn/T/blobStore-1b966770-440f-45e0-97cc-018357bdefb1
2017-01-28 16:58:33,384 INFO org.apache.flink.runtime.blob.BlobServer - Started BLOB server at 0.0.0.0:52505 - max concurrent requests: 50 - max backlog: 1000
2017-01-28 16:58:33,399 INFO org.apache.flink.runtime.checkpoint.savepoint.SavepointStoreFactory - Using job manager savepoint state backend.
2017-01-28 16:58:33,406 INFO org.apache.flink.runtime.metrics.MetricRegistry - No metrics reporter configured, no metrics will be exposed/reported.
2017-01-28 16:58:33,417 INFO org.apache.flink.runtime.jobmanager.MemoryArchivist - Started memory archivist akka://flink/user/archive
2017-01-28 16:58:33,424 INFO org.apache.flink.runtime.jobmanager.JobManager - Starting JobManager at akka.tcp://flink#192.168.0.15:6123/user/jobmanager.
2017-01-28 16:58:33,434 INFO org.apache.flink.runtime.webmonitor.WebRuntimeMonitor - Starting with JobManager akka.tcp://flink#192.168.0.15:6123/user/jobmanager on port 8081
2017-01-28 16:58:33,435 INFO org.apache.flink.runtime.webmonitor.JobManagerRetriever - New leader reachable under akka.tcp://flink#192.168.0.15:6123/user/jobmanager:null.
2017-01-28 16:58:33,448 INFO org.apache.flink.runtime.clusterframework.standalone.StandaloneResourceManager - Trying to associate with JobManager leader akka.tcp://flink#192.168.0.15:6123/user/jobmanager
2017-01-28 16:58:33,532 INFO org.apache.flink.runtime.jobmanager.JobManager - JobManager akka.tcp://flink#192.168.0.15:6123/user/jobmanager was granted leadership with leader session ID None.
2017-01-28 16:58:33,549 INFO org.apache.flink.runtime.clusterframework.standalone.StandaloneResourceManager - Resource Manager associating with leading JobManager Actor[akka://flink/user/jobmanager#-320621701] - leader session null
2017-01-28 16:58:34,996 INFO org.apache.flink.runtime.clusterframework.standalone.StandaloneResourceManager - TaskManager ResourceID{resourceId='6817357d50e00e3879507d8a0ecd9f42'} has started.
2017-01-28 16:58:34,998 INFO org.apache.flink.runtime.instance.InstanceManager - Registered TaskManager at 192.168.0.17 (akka.tcp://flink#192.168.0.17:49684/user/taskmanager) as 0495d83fc1767f41770ba9feb3c7b8a6. Current number of registered hosts is 1. Current number of alive task slots is 8.
2017-01-28 16:58:37,464 INFO org.apache.flink.runtime.clusterframework.standalone.StandaloneResourceManager - TaskManager ResourceID{resourceId='31c1ebbfdbd41ae798718575aff2fb8c'} has started.
2017-01-28 16:58:37,465 INFO org.apache.flink.runtime.instance.InstanceManager - Registered TaskManager at 192.168.0.16 (akka.tcp://flink#192.168.0.16:50620/user/taskmanager) as 794bb699ab58aa3eab324924a8b8409e. Current number of registered hosts is 2. Current number of alive task slots is 16.
2017-01-28 17:05:04,426 INFO org.apache.flink.runtime.blob.BlobCache - Created BLOB cache storage directory /var/folders/8m/qw1dq5sx68d38w7xg2pqhnz80000gn/T/blobStore-8d18ed4c-6a33-4d62-a63b-0aa2a82569eb
2017-01-28 17:05:04,464 INFO org.apache.flink.runtime.blob.BlobCache - Downloading 8b816b83aa254a2853a32e51b37ee1b3c14ef82a from localhost/127.0.0.1:52505
2017-01-28 17:07:45,153 INFO org.apache.flink.runtime.blob.BlobCache - Downloading 8808acee6a1b7c225a9b471a1429a75ec75d4c16 from localhost/127.0.0.1:52505
2017-01-29 09:54:01,082 WARN akka.remote.RemoteWatcher - Detected unreachable: [akka.tcp://flink#192.168.0.17:49684]
2017-01-29 09:54:01,089 INFO org.apache.flink.runtime.jobmanager.JobManager - Task manager akka.tcp://flink#192.168.0.17:49684/user/taskmanager terminated.
2017-01-29 09:54:01,090 INFO org.apache.flink.runtime.instance.InstanceManager - Unregistered task manager akka.tcp://flink#192.168.0.17:49684/user/taskmanager. Number of registered task managers 1. Number of available slots 8.
The TaskManager's log:
2017-01-28 16:58:33,162 WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-01-28 16:58:33,268 INFO org.apache.flink.runtime.taskmanager.TaskManager - --------------------------------------------------------------------------------
2017-01-28 16:58:33,268 INFO org.apache.flink.runtime.taskmanager.TaskManager - Starting TaskManager (Version: 1.1.4, Rev:8fb0fc8, Date:19.12.2016 # 10:42:50 UTC)
2017-01-28 16:58:33,268 INFO org.apache.flink.runtime.taskmanager.TaskManager - Current user: macmini3
2017-01-28 16:58:33,268 INFO org.apache.flink.runtime.taskmanager.TaskManager - JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.121-b13
2017-01-28 16:58:33,268 INFO org.apache.flink.runtime.taskmanager.TaskManager - Maximum heap size: 1024 MiBytes
2017-01-28 16:58:33,268 INFO org.apache.flink.runtime.taskmanager.TaskManager - JAVA_HOME: (not set)
2017-01-28 16:58:33,271 INFO org.apache.flink.runtime.taskmanager.TaskManager - Hadoop version: 2.7.2
2017-01-28 16:58:33,271 INFO org.apache.flink.runtime.taskmanager.TaskManager - JVM Options:
2017-01-28 16:58:33,271 INFO org.apache.flink.runtime.taskmanager.TaskManager - -XX:+UseG1GC
2017-01-28 16:58:33,271 INFO org.apache.flink.runtime.taskmanager.TaskManager - -Xms1024M
2017-01-28 16:58:33,271 INFO org.apache.flink.runtime.taskmanager.TaskManager - -Xmx1024M
2017-01-28 16:58:33,272 INFO org.apache.flink.runtime.taskmanager.TaskManager - -XX:MaxDirectMemorySize=8388607T
2017-01-28 16:58:33,272 INFO org.apache.flink.runtime.taskmanager.TaskManager - -Dlog.file=/server/flink-1.1.4/log/flink-macmini3-taskmanager-0-macmini3.local.log
2017-01-28 16:58:33,272 INFO org.apache.flink.runtime.taskmanager.TaskManager - -Dlog4j.configuration=file:/server/flink-1.1.4/conf/log4j.properties
2017-01-28 16:58:33,272 INFO org.apache.flink.runtime.taskmanager.TaskManager - -Dlogback.configurationFile=file:/server/flink-1.1.4/conf/logback.xml
2017-01-28 16:58:33,272 INFO org.apache.flink.runtime.taskmanager.TaskManager - Program Arguments:
2017-01-28 16:58:33,272 INFO org.apache.flink.runtime.taskmanager.TaskManager - --configDir
2017-01-28 16:58:33,272 INFO org.apache.flink.runtime.taskmanager.TaskManager - /server/flink-1.1.4/conf
2017-01-28 16:58:33,272 INFO org.apache.flink.runtime.taskmanager.TaskManager - Classpath: /server/flink-1.1.4/lib/flink-dist_2.11-1.1.4.jar:/server/flink-1.1.4/lib/flink-python_2.11-1.1.4.jar:/server/flink-1.1.4/lib/log4j-1.2.17.jar:/server/flink-1.1.4/lib/slf4j-log4j12-1.7.7.jar:::
2017-01-28 16:58:33,272 INFO org.apache.flink.runtime.taskmanager.TaskManager - --------------------------------------------------------------------------------
2017-01-28 16:58:33,273 INFO org.apache.flink.runtime.taskmanager.TaskManager - Registered UNIX signal handlers for [TERM, HUP, INT]
2017-01-28 16:58:33,277 INFO org.apache.flink.runtime.taskmanager.TaskManager - Maximum number of open file descriptors is 10240
2017-01-28 16:58:33,295 INFO org.apache.flink.runtime.taskmanager.TaskManager - Loading configuration from /server/flink-1.1.4/conf
2017-01-28 16:58:33,305 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.address, 192.168.0.15
2017-01-28 16:58:33,305 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.port, 6123
2017-01-28 16:58:33,305 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.heap.mb, 512
2017-01-28 16:58:33,306 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.heap.mb, 1024
2017-01-28 16:58:33,306 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.numberOfTaskSlots, 8
2017-01-28 16:58:33,306 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.memory.preallocate, false
2017-01-28 16:58:33,306 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: parallelism.default, 2
2017-01-28 16:58:33,306 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.web.port, 8081
2017-01-28 16:58:33,342 INFO org.apache.flink.runtime.taskmanager.TaskManager - Security is not enabled. Starting non-authenticated TaskManager.
2017-01-28 16:58:33,368 INFO org.apache.flink.runtime.util.LeaderRetrievalUtils - Trying to select the network interface and address to use by connecting to the leading JobManager.
2017-01-28 16:58:33,369 INFO org.apache.flink.runtime.util.LeaderRetrievalUtils - TaskManager will try to connect for 10000 milliseconds before falling back to heuristics
2017-01-28 16:58:33,371 INFO org.apache.flink.runtime.net.ConnectionUtils - Retrieved new target address /192.168.0.15:6123.
2017-01-28 16:58:33,405 INFO org.apache.flink.runtime.taskmanager.TaskManager - TaskManager will use hostname/address '192.168.0.17' (192.168.0.17) for communication.
2017-01-28 16:58:33,406 INFO org.apache.flink.runtime.taskmanager.TaskManager - Starting TaskManager
2017-01-28 16:58:33,406 INFO org.apache.flink.runtime.taskmanager.TaskManager - Starting TaskManager actor system at 192.168.0.17:0
2017-01-28 16:58:33,745 INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
2017-01-28 16:58:33,813 INFO Remoting - Starting remoting
2017-01-28 16:58:33,963 INFO Remoting - Remoting started; listening on addresses :[akka.tcp://flink#192.168.0.17:49684]
2017-01-28 16:58:33,969 INFO org.apache.flink.runtime.taskmanager.TaskManager - Starting TaskManager actor
2017-01-28 16:58:33,971 WARN org.apache.flink.runtime.instance.InstanceConnectionInfo - No hostname could be resolved for the IP address 192.168.0.17, using IP address as host name. Local input split assignment (such as for HDFS files) may be impacted.
2017-01-28 16:58:33,976 INFO org.apache.flink.runtime.io.network.netty.NettyConfig - NettyConfig [server address: /192.168.0.17, server port: 49685, memory segment size (bytes): 32768, transport type: NIO, number of server threads: 8 (manual), number of client threads: 8 (manual), server connect backlog: 0 (use Netty's default), client connect timeout (sec): 120, send/receive buffer size (bytes): 0 (use Netty's default)]
2017-01-28 16:58:33,977 INFO org.apache.flink.runtime.taskmanager.TaskManager - Messages between TaskManager and JobManager have a max timeout of 10000 milliseconds
2017-01-28 16:58:33,980 INFO org.apache.flink.runtime.taskmanager.TaskManager - Temporary file directory '/var/folders/9n/3xqj9xzx67sd5dx1z59506840000gn/T': total 118 GB, usable 105 GB (88.98% usable)
2017-01-28 16:58:34,041 INFO org.apache.flink.runtime.io.network.buffer.NetworkBufferPool - Allocated 64 MB for network buffer pool (number of memory segments: 2048, bytes per segment: 32768).
2017-01-28 16:58:34,080 INFO org.apache.flink.runtime.taskmanager.TaskManager - Limiting managed memory to 0.7 of the currently free heap space (669 MB), memory will be allocated lazily.
2017-01-28 16:58:34,090 INFO org.apache.flink.runtime.io.disk.iomanager.IOManager - I/O manager uses directory /var/folders/9n/3xqj9xzx67sd5dx1z59506840000gn/T/flink-io-8fe2058a-8e2b-4fa6-8c5c-7e1a5edd8993 for spill files.
2017-01-28 16:58:34,103 INFO org.apache.flink.runtime.filecache.FileCache - User file cache uses directory /var/folders/9n/3xqj9xzx67sd5dx1z59506840000gn/T/flink-dist-cache-a663c4dc-6c17-43ba-af73-6d798b0f5196
2017-01-28 16:58:34,294 INFO org.apache.flink.runtime.taskmanager.TaskManager - Starting TaskManager actor at akka://flink/user/taskmanager#-1292864950.
2017-01-28 16:58:34,295 INFO org.apache.flink.runtime.taskmanager.TaskManager - TaskManager data connection information: 192.168.0.17 (dataPort=49685)
2017-01-28 16:58:34,296 INFO org.apache.flink.runtime.taskmanager.TaskManager - TaskManager has 8 task slot(s).
2017-01-28 16:58:34,298 INFO org.apache.flink.runtime.taskmanager.TaskManager - Memory usage stats: [HEAP: 79/1024/1024 MB, NON HEAP: 32/33/-1 MB (used/committed/max)]
2017-01-28 16:58:34,304 INFO org.apache.flink.runtime.taskmanager.TaskManager - Trying to register at JobManager akka.tcp://flink#192.168.0.15:6123/user/jobmanager (attempt 1, timeout: 500 milliseconds)
2017-01-28 16:58:34,825 INFO org.apache.flink.runtime.taskmanager.TaskManager - Trying to register at JobManager akka.tcp://flink#192.168.0.15:6123/user/jobmanager (attempt 2, timeout: 1000 milliseconds)
2017-01-28 16:58:35,077 INFO org.apache.flink.runtime.taskmanager.TaskManager - Successful registration at JobManager (akka.tcp://flink#192.168.0.15:6123/user/jobmanager), starting network stack and library cache.
2017-01-28 16:58:35,248 INFO org.apache.flink.runtime.io.network.netty.NettyClient - Successful initialization (took 40 ms).
2017-01-28 16:58:35,285 INFO org.apache.flink.runtime.io.network.netty.NettyServer - Successful initialization (took 37 ms). Listening on SocketAddress /192.168.0.17:49685.
2017-01-28 16:58:35,286 INFO org.apache.flink.runtime.taskmanager.TaskManager - Determined BLOB server address to be /192.168.0.15:52505. Starting BLOB cache.
2017-01-28 16:58:35,289 INFO org.apache.flink.runtime.blob.BlobCache - Created BLOB cache storage directory /var/folders/9n/3xqj9xzx67sd5dx1z59506840000gn/T/blobStore-941b8481-5f25-4baf-9739-1ff7f28f9262
2017-01-28 16:58:35,298 INFO org.apache.flink.runtime.metrics.MetricRegistry - No metrics reporter configured, no metrics will be exposed/reported.
2017-01-29 09:54:01,399 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink#192.168.0.15:6123] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
2017-01-29 09:54:04,625 WARN akka.remote.RemoteWatcher - Detected unreachable: [akka.tcp://flink#192.168.0.15:6123]
2017-01-29 09:54:04,631 INFO org.apache.flink.runtime.taskmanager.TaskManager - TaskManager akka://flink/user/taskmanager disconnects from JobManager akka.tcp://flink#192.168.0.15:6123/user/jobmanager: JobManager is no longer reachable
2017-01-29 09:54:04,631 INFO org.apache.flink.runtime.taskmanager.TaskManager - Disassociating from JobManager
2017-01-29 09:54:04,633 INFO org.apache.flink.runtime.blob.BlobCache - Shutting down BlobCache
2017-01-29 09:54:04,642 INFO org.apache.flink.runtime.io.network.netty.NettyClient - Successful shutdown (took 2 ms).
2017-01-29 09:54:04,644 INFO org.apache.flink.runtime.io.network.netty.NettyServer - Successful shutdown (took 2 ms).
2017-01-29 09:54:04,651 INFO org.apache.flink.runtime.taskmanager.TaskManager - Trying to register at JobManager akka.tcp://flink@192.168.0.15:6123/user/jobmanager (attempt 1, timeout: 500 milliseconds)
2017-01-29 09:54:05,163 INFO org.apache.flink.runtime.taskmanager.TaskManager - Trying to register at JobManager akka.tcp://flink@192.168.0.15:6123/user/jobmanager (attempt 2, timeout: 1000 milliseconds)
2017-01-29 09:54:06,183 INFO org.apache.flink.runtime.taskmanager.TaskManager - Trying to register at JobManager akka.tcp://flink@192.168.0.15:6123/user/jobmanager (attempt 3, timeout: 2000 milliseconds)
2017-01-29 09:54:06,316 INFO Remoting - Quarantined address [akka.tcp://flink@192.168.0.15:6123] is still unreachable or has not been restarted. Keeping it quarantined.
2017-01-29 09:54:08,202 INFO org.apache.flink.runtime.taskmanager.TaskManager - Trying to register at JobManager akka.tcp://flink@192.168.0.15:6123/user/jobmanager (attempt 4, timeout: 4000 milliseconds)
2017-01-29 09:54:10,295 INFO Remoting - Quarantined address [akka.tcp://flink@192.168.0.15:6123] is still unreachable or has not been restarted. Keeping it quarantined.
2017-01-29 09:54:12,222 INFO org.apache.flink.runtime.taskmanager.TaskManager - Trying to register at JobManager akka.tcp://flink@192.168.0.15:6123/user/jobmanager (attempt 5, timeout: 8000 milliseconds)
2017-01-29 09:54:14,461 INFO Remoting - Quarantined address [akka.tcp://flink@192.168.0.15:6123] is still unreachable or has not been restarted. Keeping it quarantined.
2017-01-29 09:54:20,242 INFO org.apache.flink.runtime.taskmanager.TaskManager - Trying to register at JobManager akka.tcp://flink@192.168.0.15:6123/user/jobmanager (attempt 6, timeout: 16000 milliseconds)
2017-01-29 09:54:20,438 INFO Remoting - Quarantined address [akka.tcp://flink@192.168.0.15:6123] is still unreachable or has not been restarted. Keeping it quarantined.
2017-01-29 09:54:36,263 INFO org.apache.flink.runtime.taskmanager.TaskManager - Trying to register at JobManager akka.tcp://flink@192.168.0.15:6123/user/jobmanager (attempt 7, timeout: 30 seconds)

prefix cannot be "null" when creating a QName

I'm using WSO2 ESB version 4.8.1. I tested sample no. 658 (Smooks Mediator transformation, XML -> XML). The only thing I changed in the configuration was the folder paths, since I don't have a '/home/lakmali/...' folder :-). The sample does not work. Error from the log:
java.lang.IllegalArgumentException: prefix cannot be "null" when creating a QName.
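For context, this message comes straight from the JDK's javax.xml.namespace.QName constructor, which rejects a null prefix outright; the StAX event allocator in the stack trace below hits it when the underlying reader reports a null prefix for an element. A minimal reproduction of the JDK behaviour (not the ESB code path itself; the namespace URI and local name are placeholders):

import javax.xml.namespace.QName;

public class QNameNullPrefix {
    public static void main(String[] args) {
        // The two-argument constructor defaults the prefix to "" and works fine:
        QName ok = new QName("http://example.org/ns", "element");
        System.out.println("ok: " + ok);

        // An explicit null prefix throws exactly the exception seen in the log:
        // java.lang.IllegalArgumentException: prefix cannot be "null" when creating a QName
        QName fails = new QName("http://example.org/ns", "element", null);
        System.out.println(fails); // never reached
    }
}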
Full log with error:
TID: [0] [ESB] [2014-02-12 10:33:13,589] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Starting WSO2 Carbon... {org.wso2.carbon.core.internal.CarbonCoreActivator}
TID: [0] [ESB] [2014-02-12 10:33:13,605] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Operating System : Windows 7 6.1, amd64 {org.wso2.carbon.core.internal.CarbonCoreActivator}
TID: [0] [ESB] [2014-02-12 10:33:13,605] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Java Home : C:\Java\jdk1.7.0_40\jre {org.wso2.carbon.core.internal.CarbonCoreActivator}
TID: [0] [ESB] [2014-02-12 10:33:13,605] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Java Version : 1.7.0_40 {org.wso2.carbon.core.internal.CarbonCoreActivator}
TID: [0] [ESB] [2014-02-12 10:33:13,605] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Java VM : Java HotSpot(TM) 64-Bit Server VM 24.0-b56,Oracle Corporation {org.wso2.carbon.core.internal.CarbonCoreActivator}
TID: [0] [ESB] [2014-02-12 10:33:13,605] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Carbon Home : C:\Java\WSO2ES~1.1\bin\.. {org.wso2.carbon.core.internal.CarbonCoreActivator}
TID: [0] [ESB] [2014-02-12 10:33:13,605] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Java Temp Dir : C:\Java\WSO2ES~1.1\bin\..\tmp {org.wso2.carbon.core.internal.CarbonCoreActivator}
TID: [0] [ESB] [2014-02-12 10:33:13,605] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - User : ******* {org.wso2.carbon.core.internal.CarbonCoreActivator}
TID: [0] [ESB] [2014-02-12 10:33:13,745] WARN {org.wso2.carbon.core.bootup.validator.util.ValidationResultPrinter} - The default keystore (wso2carbon.jks) is currently being used. To maximize security when deploying to a production environment, configure a new keystore with a unique password in the production server profile. {org.wso2.carbon.core.bootup.validator.util.ValidationResultPrinter}
TID: [0] [ESB] [2014-02-12 10:33:13,761] INFO {org.wso2.carbon.databridge.agent.thrift.AgentHolder} - Agent created ! {org.wso2.carbon.databridge.agent.thrift.AgentHolder}
TID: [0] [ESB] [2014-02-12 10:33:13,776] INFO {org.wso2.carbon.databridge.agent.thrift.internal.AgentDS} - Successfully deployed Agent Client {org.wso2.carbon.databridge.agent.thrift.internal.AgentDS}
TID: [0] [ESB] [2014-02-12 10:33:19,051] INFO {org.wso2.carbon.registry.core.jdbc.EmbeddedRegistryService} - Configured Registry in 39ms {org.wso2.carbon.registry.core.jdbc.EmbeddedRegistryService}
TID: [0] [ESB] [2014-02-12 10:33:19,113] INFO {org.wso2.carbon.registry.core.internal.RegistryCoreServiceComponent} - Registry Mode : READ-WRITE {org.wso2.carbon.registry.core.internal.RegistryCoreServiceComponent}
TID: [0] [ESB] [2014-02-12 10:33:19,769] INFO {org.wso2.carbon.user.core.internal.UserStoreMgtDSComponent} - Carbon UserStoreMgtDSComponent activated successfully. {org.wso2.carbon.user.core.internal.UserStoreMgtDSComponent}
TID: [0] [ESB] [2014-02-12 10:33:21,563] INFO {org.apache.catalina.startup.TaglibUriRule} - TLD skipped. URI: http://tiles.apache.org/tags-tiles is already defined {org.apache.catalina.startup.TaglibUriRule}
TID: [0] [ESB] [2014-02-12 10:33:22,530] INFO {org.apache.axis2.deployment.ClusterBuilder} - Clustering has been disabled {org.apache.axis2.deployment.ClusterBuilder}
TID: [0] [ESB] [2014-02-12 10:33:22,826] INFO {org.wso2.carbon.stratos.landing.page.deployer.LandingPageWebappDeployer} - Deployed product landing page webapp: StandardEngine[Catalina].StandardHost[localhost].StandardContext[/home] {org.wso2.carbon.stratos.landing.page.deployer.LandingPageWebappDeployer}
TID: [0] [ESB] [2014-02-12 10:33:22,826] INFO {org.wso2.carbon.identity.user.store.configuration.deployer.UserStoreConfigurationDeployer} - User Store Configuration Deployer initiated. {org.wso2.carbon.identity.user.store.configuration.deployer.UserStoreConfigurationDeployer}
TID: [0] [ESB] [2014-02-12 10:33:22,873] INFO {org.apache.synapse.transport.passthru.PassThroughHttpSSLSender} - Initializing Pass-through HTTP/S Sender... {org.apache.synapse.transport.passthru.PassThroughHttpSSLSender}
TID: [0] [ESB] [2014-02-12 10:33:22,889] INFO {org.apache.synapse.transport.nhttp.config.ClientConnFactoryBuilder} - HTTPS Loading Identity Keystore from : repository/resources/security/wso2carbon.jks {org.apache.synapse.transport.nhttp.config.ClientConnFactoryBuilder}
TID: [0] [ESB] [2014-02-12 10:33:22,889] INFO {org.apache.synapse.transport.nhttp.config.ClientConnFactoryBuilder} - HTTPS Loading Trust Keystore from : repository/resources/security/client-truststore.jks {org.apache.synapse.transport.nhttp.config.ClientConnFactoryBuilder}
TID: [0] [ESB] [2014-02-12 10:33:22,920] INFO {org.apache.synapse.transport.passthru.PassThroughHttpSSLSender} - Pass-through HTTPS Sender started... {org.apache.synapse.transport.passthru.PassThroughHttpSSLSender}
TID: [0] [ESB] [2014-02-12 10:33:22,920] INFO {org.apache.synapse.transport.passthru.PassThroughHttpSender} - Initializing Pass-through HTTP/S Sender... {org.apache.synapse.transport.passthru.PassThroughHttpSender}
TID: [0] [ESB] [2014-02-12 10:33:22,920] INFO {org.apache.synapse.transport.passthru.PassThroughHttpSender} - Pass-through HTTP Sender started... {org.apache.synapse.transport.passthru.PassThroughHttpSender}
TID: [0] [ESB] [2014-02-12 10:33:22,935] INFO {org.apache.synapse.transport.vfs.VFSTransportSender} - VFS Sender started {org.apache.synapse.transport.vfs.VFSTransportSender}
TID: [0] [ESB] [2014-02-12 10:33:23,045] INFO {org.wso2.carbon.core.deployment.DeploymentInterceptor} - Deploying Axis2 service: echo {super-tenant} {org.wso2.carbon.core.deployment.DeploymentInterceptor}
TID: [0] [ESB] [2014-02-12 10:33:23,310] INFO {org.apache.axis2.deployment.DeploymentEngine} - Deploying Web service: Echo.aar - file:/C:/Java/WSO2ES~1.1/bin/../repository/deployment/server/axis2services/Echo.aar {org.apache.axis2.deployment.DeploymentEngine}
TID: [0] [ESB] [2014-02-12 10:33:23,575] INFO {org.wso2.carbon.core.deployment.DeploymentInterceptor} - Deploying Axis2 service: echo {super-tenant} {org.wso2.carbon.core.deployment.DeploymentInterceptor}
TID: [0] [ESB] [2014-02-12 10:33:23,825] INFO {org.wso2.carbon.core.deployment.DeploymentInterceptor} - Deploying Axis2 service: Version {super-tenant} {org.wso2.carbon.core.deployment.DeploymentInterceptor}
TID: [0] [ESB] [2014-02-12 10:33:23,934] INFO {org.apache.axis2.deployment.DeploymentEngine} - Deploying Web service: Version.aar - file:/C:/Java/WSO2ES~1.1/bin/../repository/deployment/server/axis2services/Version.aar {org.apache.axis2.deployment.DeploymentEngine}
TID: [0] [ESB] [2014-02-12 10:33:24,043] INFO {org.wso2.carbon.core.deployment.DeploymentInterceptor} - Deploying Axis2 service: Version {super-tenant} {org.wso2.carbon.core.deployment.DeploymentInterceptor}
TID: [0] [ESB] [2014-02-12 10:33:24,199] INFO {org.apache.synapse.transport.passthru.PassThroughHttpSSLListener} - Initializing Pass-through HTTP/S Listener... {org.apache.synapse.transport.passthru.PassThroughHttpSSLListener}
TID: [0] [ESB] [2014-02-12 10:33:24,667] INFO {org.apache.synapse.transport.passthru.PassThroughHttpListener} - Initializing Pass-through HTTP/S Listener... {org.apache.synapse.transport.passthru.PassThroughHttpListener}
TID: [0] [ESB] [2014-02-12 10:33:24,683] WARN {org.apache.synapse.transport.vfs.PollTableEntry} - transport.vfs.FileURI parameter is missing in the proxy service configuration {org.apache.synapse.transport.vfs.PollTableEntry}
TID: [0] [ESB] [2014-02-12 10:33:24,870] INFO {org.apache.axis2.deployment.ModuleDeployer} - Deploying module: addressing-1.6.1-wso2v10 - file:/C:/Java/WSO2ES~1.1/bin/../repository/deployment/client/modules/addressing-1.6.1-wso2v10.mar {org.apache.axis2.deployment.ModuleDeployer}
TID: [0] [ESB] [2014-02-12 10:33:24,885] INFO {org.apache.axis2.deployment.ModuleDeployer} - Deploying module: rampart-1.6.1-wso2v8 - file:/C:/Java/WSO2ES~1.1/bin/../repository/deployment/client/modules/rampart-1.6.1-wso2v8.mar {org.apache.axis2.deployment.ModuleDeployer}
TID: [0] [ESB] [2014-02-12 10:33:24,901] INFO {org.apache.axis2.transport.tcp.TCPTransportSender} - TCP Sender started {org.apache.axis2.transport.tcp.TCPTransportSender}
TID: [0] [ESB] [2014-02-12 10:33:25,712] INFO {org.apache.axis2.deployment.DeploymentEngine} - Deploying Web service: org.wso2.carbon.message.processor - {org.apache.axis2.deployment.DeploymentEngine}
TID: [0] [ESB] [2014-02-12 10:33:25,712] INFO {org.apache.axis2.deployment.DeploymentEngine} - Deploying Web service: org.wso2.carbon.message.store - {org.apache.axis2.deployment.DeploymentEngine}
TID: [0] [ESB] [2014-02-12 10:33:26,321] INFO {org.wso2.carbon.core.deployment.DeploymentInterceptor} - Deploying Axis2 service: wso2carbon-sts {super-tenant} {org.wso2.carbon.core.deployment.DeploymentInterceptor}
TID: [0] [ESB] [2014-02-12 10:33:26,430] INFO {org.apache.axis2.deployment.DeploymentEngine} - Deploying Web service: org.wso2.carbon.sts - {org.apache.axis2.deployment.DeploymentEngine}
TID: [0] [ESB] [2014-02-12 10:33:26,617] INFO {org.apache.axis2.deployment.DeploymentEngine} - Deploying Web service: org.wso2.carbon.tryit - {org.apache.axis2.deployment.DeploymentEngine}
TID: [0] [ESB] [2014-02-12 10:33:26,820] INFO {org.wso2.carbon.core.init.CarbonServerManager} - Repository : C:\Java\WSO2ES~1.1\bin\../repository/deployment/server/ {org.wso2.carbon.core.init.CarbonServerManager}
TID: [0] [ESB] [2014-02-12 10:33:26,960] INFO {org.wso2.carbon.core.internal.permission.update.PermissionUpdater} - Permission cache updated for tenant -1234 {org.wso2.carbon.core.internal.permission.update.PermissionUpdater}
TID: [0] [ESB] [2014-02-12 10:33:26,991] INFO {org.wso2.carbon.mediation.initializer.ServiceBusInitializer} - Starting ESB... {org.wso2.carbon.mediation.initializer.ServiceBusInitializer}
TID: [0] [ESB] [2014-02-12 10:33:27,007] INFO {org.wso2.carbon.mediation.initializer.ServiceBusInitializer} - Initializing Apache Synapse... {org.wso2.carbon.mediation.initializer.ServiceBusInitializer}
TID: [0] [ESB] [2014-02-12 10:33:27,007] INFO {org.apache.synapse.SynapseControllerFactory} - Using Synapse home : C:\Java\WSO2ES~1.1\. {org.apache.synapse.SynapseControllerFactory}
TID: [0] [ESB] [2014-02-12 10:33:27,007] INFO {org.apache.synapse.SynapseControllerFactory} - Using synapse.xml location : C:\Java\WSO2ES~1.1\.\.\repository\deployment\server\synapse-configs\default {org.apache.synapse.SynapseControllerFactory}
TID: [0] [ESB] [2014-02-12 10:33:27,007] INFO {org.apache.synapse.SynapseControllerFactory} - Using server name : localhost {org.apache.synapse.SynapseControllerFactory}
TID: [0] [ESB] [2014-02-12 10:33:27,023] INFO {org.apache.synapse.SynapseControllerFactory} - The timeout handler will run every : 15s {org.apache.synapse.SynapseControllerFactory}
TID: [0] [ESB] [2014-02-12 10:33:27,023] INFO {org.apache.synapse.Axis2SynapseController} - Initializing Synapse at : Wed Feb 12 10:33:27 CET 2014 {org.apache.synapse.Axis2SynapseController}
TID: [0] [ESB] [2014-02-12 10:33:27,023] INFO {org.wso2.carbon.mediation.initializer.CarbonSynapseController} - Loading the mediation configuration from the file system {org.wso2.carbon.mediation.initializer.CarbonSynapseController}
TID: [0] [ESB] [2014-02-12 10:33:27,023] INFO {org.apache.synapse.config.xml.MultiXMLConfigurationBuilder} - Building synapse configuration from the synapse artifact repository at : .\.\repository/deployment/server/synapse-configs\default {org.apache.synapse.config.xml.MultiXMLConfigurationBuilder}
TID: [0] [ESB] [2014-02-12 10:33:27,023] INFO {org.apache.synapse.config.xml.XMLConfigurationBuilder} - Generating the Synapse configuration model by parsing the XML configuration {org.apache.synapse.config.xml.XMLConfigurationBuilder}
TID: [0] [ESB] [2014-02-12 10:33:27,101] INFO {org.apache.synapse.config.SynapseConfigurationBuilder} - Loaded Synapse configuration from the artifact repository at : .\.\repository/deployment/server/synapse-configs\default {org.apache.synapse.config.SynapseConfigurationBuilder}
TID: [0] [ESB] [2014-02-12 10:33:27,116] INFO {org.apache.synapse.Axis2SynapseController} - Loading mediator extensions... {org.apache.synapse.Axis2SynapseController}
TID: [0] [ESB] [2014-02-12 10:33:27,116] INFO {org.apache.synapse.Axis2SynapseController} - Deploying the Synapse service... {org.apache.synapse.Axis2SynapseController}
TID: [0] [ESB] [2014-02-12 10:33:27,116] INFO {org.apache.synapse.Axis2SynapseController} - Deploying Proxy services... {org.apache.synapse.Axis2SynapseController}
TID: [0] [ESB] [2014-02-12 10:33:27,116] INFO {org.apache.synapse.core.axis2.ProxyService} - Building Axis service for Proxy service : ssXMLProxy {org.apache.synapse.core.axis2.ProxyService}
TID: [0] [ESB] [2014-02-12 10:33:27,116] INFO {org.apache.synapse.core.axis2.ProxyService} - Adding service ssXMLProxy to the Axis2 configuration {org.apache.synapse.core.axis2.ProxyService}
TID: [0] [ESB] [2014-02-12 10:33:27,132] INFO {org.wso2.carbon.core.deployment.DeploymentInterceptor} - Deploying Axis2 service: ssXMLProxy {super-tenant} {org.wso2.carbon.core.deployment.DeploymentInterceptor}
TID: [0] [ESB] [2014-02-12 10:33:27,225] INFO {org.apache.synapse.core.axis2.ProxyService} - Successfully created the Axis2 service for Proxy service : ssXMLProxy {org.apache.synapse.core.axis2.ProxyService}
TID: [0] [ESB] [2014-02-12 10:33:27,225] INFO {org.apache.synapse.Axis2SynapseController} - Deployed Proxy service : ssXMLProxy {org.apache.synapse.Axis2SynapseController}
TID: [0] [ESB] [2014-02-12 10:33:27,225] INFO {org.apache.synapse.core.axis2.ProxyService} - Building Axis service for Proxy service : FirmaXMLProxy {org.apache.synapse.core.axis2.ProxyService}
TID: [0] [ESB] [2014-02-12 10:33:27,241] INFO {org.apache.synapse.core.axis2.ProxyService} - Adding service FirmaXMLProxy to the Axis2 configuration {org.apache.synapse.core.axis2.ProxyService}
TID: [0] [ESB] [2014-02-12 10:33:27,241] INFO {org.wso2.carbon.core.deployment.DeploymentInterceptor} - Deploying Axis2 service: FirmaXMLProxy {super-tenant} {org.wso2.carbon.core.deployment.DeploymentInterceptor}
TID: [0] [ESB] [2014-02-12 10:33:27,303] INFO {org.apache.synapse.core.axis2.ProxyService} - Successfully created the Axis2 service for Proxy service : FirmaXMLProxy {org.apache.synapse.core.axis2.ProxyService}
TID: [0] [ESB] [2014-02-12 10:33:27,303] INFO {org.apache.synapse.Axis2SynapseController} - Deployed Proxy service : FirmaXMLProxy {org.apache.synapse.Axis2SynapseController}
TID: [0] [ESB] [2014-02-12 10:33:27,303] INFO {org.apache.synapse.Axis2SynapseController} - Deploying EventSources... {org.apache.synapse.Axis2SynapseController}
TID: [0] [ESB] [2014-02-12 10:33:27,319] INFO {org.apache.synapse.ServerManager} - Server ready for processing... {org.apache.synapse.ServerManager}
TID: [0] [ESB] [2014-02-12 10:33:27,350] INFO {org.wso2.carbon.bam.mediationstats.data.publisher.internal.MediationStatisticsComponent} - Statistic Reporter is Disabled {org.wso2.carbon.bam.mediationstats.data.publisher.internal.MediationStatisticsComponent}
TID: [0] [ESB] [2014-02-12 10:33:27,350] INFO {org.wso2.carbon.bam.mediationstats.data.publisher.internal.MediationStatisticsComponent} - Can't register an observer for mediationStatisticsStore. If you have disabled StatisticsReporter, please enable it in the Carbon.xml {org.wso2.carbon.bam.mediationstats.data.publisher.internal.MediationStatisticsComponent}
TID: [0] [ESB] [2014-02-12 10:33:27,397] INFO {org.wso2.carbon.rule.kernel.internal.ds.RuleEngineConfigDS} - Successfully registered the Rule Config service {org.wso2.carbon.rule.kernel.internal.ds.RuleEngineConfigDS}
TID: [0] [ESB] [2014-02-12 10:33:27,865] INFO {org.apache.synapse.transport.passthru.PassThroughHttpSSLListener} - Starting Pass-through HTTPS Listener... {org.apache.synapse.transport.passthru.PassThroughHttpSSLListener}
TID: [0] [ESB] [2014-02-12 10:33:27,865] INFO {org.apache.synapse.transport.passthru.PassThroughHttpSSLListener} - Pass-through HTTPS Listener started on 0:0:0:0:0:0:0:0:8243 {org.apache.synapse.transport.passthru.PassThroughHttpSSLListener}
TID: [0] [ESB] [2014-02-12 10:33:27,865] INFO {org.apache.synapse.transport.passthru.PassThroughHttpListener} - Starting Pass-through HTTP Listener... {org.apache.synapse.transport.passthru.PassThroughHttpListener}
TID: [0] [ESB] [2014-02-12 10:33:27,881] INFO {org.apache.synapse.transport.passthru.PassThroughHttpListener} - Pass-through HTTP Listener started on 0:0:0:0:0:0:0:0:8280 {org.apache.synapse.transport.passthru.PassThroughHttpListener}
TID: [0] [ESB] [2014-02-12 10:33:27,881] INFO {org.apache.synapse.transport.vfs.VFSTransportListener} - VFS listener started {org.apache.synapse.transport.vfs.VFSTransportListener}
TID: [0] [ESB] [2014-02-12 10:33:27,881] INFO {org.apache.tomcat.util.net.NioSelectorPool} - Using a shared selector for servlet write/read {org.apache.tomcat.util.net.NioSelectorPool}
TID: [0] [ESB] [2014-02-12 10:33:28,130] INFO {org.apache.tomcat.util.net.NioSelectorPool} - Using a shared selector for servlet write/read {org.apache.tomcat.util.net.NioSelectorPool}
TID: [0] [ESB] [2014-02-12 10:33:28,146] INFO {org.wso2.carbon.registry.eventing.internal.RegistryEventingServiceComponent} - Successfully Initialized Eventing on Registry {org.wso2.carbon.registry.eventing.internal.RegistryEventingServiceComponent}
TID: [0] [ESB] [2014-02-12 10:33:28,520] INFO {org.wso2.carbon.core.init.JMXServerManager} - JMX Service URL : service:jmx:rmi://localhost:11111/jndi/rmi://localhost:9999/jmxrmi {org.wso2.carbon.core.init.JMXServerManager}
TID: [0] [ESB] [2014-02-12 10:33:28,520] INFO {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent} - Server : WSO2 Enterprise Service Bus-4.8.1 {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent}
TID: [0] [ESB] [2014-02-12 10:33:28,520] INFO {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent} - WSO2 Carbon started in 22 sec {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent}
TID: [0] [ESB] [2014-02-12 10:33:28,910] INFO {org.wso2.carbon.ui.internal.CarbonUIServiceComponent} - Mgt Console URL : https://10.104.0.44:9443/carbon/ {org.wso2.carbon.ui.internal.CarbonUIServiceComponent}
TID: [0] [ESB] [2014-02-12 10:35:38,588] ERROR {org.wso2.carbon.mediator.transform.SmooksMediator} - Failed to filter source. {org.wso2.carbon.mediator.transform.SmooksMediator}
org.milyn.SmooksException: Failed to filter source.
at org.milyn.delivery.sax.SmooksSAXFilter.doFilter(SmooksSAXFilter.java:86)
at org.milyn.delivery.sax.SmooksSAXFilter.doFilter(SmooksSAXFilter.java:61)
at org.milyn.Smooks._filter(Smooks.java:516)
at org.milyn.Smooks.filterSource(Smooks.java:475)
at org.wso2.carbon.mediator.transform.SmooksMediator.mediate(SmooksMediator.java:123)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:77)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:47)
at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:131)
at org.apache.synapse.core.axis2.ProxyServiceMessageReceiver.receive(ProxyServiceMessageReceiver.java:166)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
at org.apache.axis2.transport.base.AbstractTransportListener.handleIncomingMessage(AbstractTransportListener.java:328)
at org.apache.synapse.transport.vfs.VFSTransportListener.processFile(VFSTransportListener.java:597)
at org.apache.synapse.transport.vfs.VFSTransportListener.scanFileOrDirectory(VFSTransportListener.java:328)
at org.apache.synapse.transport.vfs.VFSTransportListener.poll(VFSTransportListener.java:158)
at org.apache.synapse.transport.vfs.VFSTransportListener.poll(VFSTransportListener.java:107)
at org.apache.axis2.transport.base.AbstractPollingTransportListener$1$1.run(AbstractPollingTransportListener.java:67)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.lang.IllegalArgumentException: prefix cannot be "null" when creating a QName
at javax.xml.namespace.QName.<init>(QName.java:251)
at com.sun.xml.internal.stream.events.XMLEventAllocatorImpl.getQName(XMLEventAllocatorImpl.java:254)
at com.sun.xml.internal.stream.events.XMLEventAllocatorImpl.getXMLEvent(XMLEventAllocatorImpl.java:76)
at com.sun.xml.internal.stream.events.XMLEventAllocatorImpl.allocate(XMLEventAllocatorImpl.java:53)
at com.sun.xml.internal.stream.XMLEventReaderImpl.nextEvent(XMLEventReaderImpl.java:84)
at com.sun.xml.internal.stream.XMLEventReaderImpl.next(XMLEventReaderImpl.java:248)
at org.wso2.carbon.mediator.transform.stream.IOElementPipe.populateEvents(IOElementPipe.java:90)
at org.wso2.carbon.mediator.transform.stream.IOElementPipe.getData(IOElementPipe.java:68)
at org.wso2.carbon.mediator.transform.stream.ElementInputStream.read(ElementInputStream.java:61)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at org.apache.xerces.impl.XMLEntityScanner.load(Unknown Source)
at org.apache.xerces.impl.XMLEntityScanner.skipString(Unknown Source)
at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
at org.milyn.delivery.sax.SAXParser.parse(SAXParser.java:70)
at org.milyn.delivery.sax.SmooksSAXFilter.doFilter(SmooksSAXFilter.java:75)
... 19 more
TID: [0] [ESB] [2014-02-12 10:35:38,604] ERROR {org.apache.synapse.transport.vfs.VFSTransportListener} - Error processing File URI : file:///c:/java/test/toconvert/s481/input-message-658.xml {org.apache.synapse.transport.vfs.VFSTransportListener}
org.wso2.carbon.mediator.service.MediatorException: Failed to filter source. Caused by Failed to filter source.
at org.wso2.carbon.mediator.transform.SmooksMediator.handleException(SmooksMediator.java:242)
at org.wso2.carbon.mediator.transform.SmooksMediator.mediate(SmooksMediator.java:137)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:77)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:47)
at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:131)
at org.apache.synapse.core.axis2.ProxyServiceMessageReceiver.receive(ProxyServiceMessageReceiver.java:166)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
at org.apache.axis2.transport.base.AbstractTransportListener.handleIncomingMessage(AbstractTransportListener.java:328)
at org.apache.synapse.transport.vfs.VFSTransportListener.processFile(VFSTransportListener.java:597)
at org.apache.synapse.transport.vfs.VFSTransportListener.scanFileOrDirectory(VFSTransportListener.java:328)
at org.apache.synapse.transport.vfs.VFSTransportListener.poll(VFSTransportListener.java:158)
at org.apache.synapse.transport.vfs.VFSTransportListener.poll(VFSTransportListener.java:107)
at org.apache.axis2.transport.base.AbstractPollingTransportListener$1$1.run(AbstractPollingTransportListener.java:67)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Thanks in advance for any response, Grzegorz
The issue has been fixed in WSO2 Enterprise Service Bus 4.9.0 for sample 658. Once the server starts in sample mode, kindly update the folder paths from /home/lakmali to a folder location relevant to you, both in [1] and in the proxy service defined in synapse_sample_658.xml [2], available under /wso2esb-4.9.0/repository/samples.
[1] repository/samples/resources/smooks/smooks-config-658.xml, inside the <file:destinationDirectoryPattern> element
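For orientation, the element to edit in [1] sits inside a Smooks file-routing <file:outputStream> block and looks roughly like the sketch below. The openOnElement/resourceName values, the file name pattern, and the output directory are all illustrative assumptions here, so keep whatever your copy of smooks-config-658.xml actually contains and change only the directory:

<file:outputStream openOnElement="order-item" resourceName="orderItemSplitStream">
    <file:fileNamePattern>order-item-${order.orderItem.itemId}.xml</file:fileNamePattern>
    <!-- illustrative: point this at a directory that exists on your machine -->
    <file:destinationDirectoryPattern>/home/you/dev/test/smooks/out</file:destinationDirectoryPattern>
</file:outputStream>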
[2]
<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="http://ws.apache.org/ns/synapse">
    <proxy name="SmooksSample" transports="vfs" startOnLoad="true" trace="disable">
        <description />
        <target>
            <inSequence>
                <smooks config-key="smooks-key">
                    <input type="xml" />
                    <output type="xml" />
                </smooks>
            </inSequence>
        </target>
        <parameter name="transport.vfs.Streaming">true</parameter>
        <parameter name="transport.PollInterval">5</parameter>
        <parameter name="transport.vfs.ActionAfterProcess">MOVE</parameter>
        <parameter name="transport.vfs.FileURI">file:///home/shavantha/dev/test/smooks/in</parameter>
        <parameter name="transport.vfs.MoveAfterProcess">file:///home/shavantha/dev/test/smooks/original</parameter>
        <parameter name="transport.vfs.MoveAfterFailure">file:///home/shavantha/dev/test/smooks/original</parameter>
        <parameter name="transport.vfs.FileNamePattern">.*\.xml</parameter>
        <parameter name="transport.vfs.ContentType">application/xml</parameter>
        <parameter name="transport.vfs.ActionAfterFailure">MOVE</parameter>
    </proxy>
    <localEntry key="smooks-key" src="file:repository/samples/resources/smooks/smooks-config-658.xml" />
    <sequence name="fault">
        <log level="full">
            <property name="MESSAGE" value="Executing default &quot;fault&quot; sequence" />
            <property name="ERROR_CODE" expression="get-property('ERROR_CODE')" />
            <property name="ERROR_MESSAGE" expression="get-property('ERROR_MESSAGE')" />
        </log>
        <drop />
    </sequence>
    <sequence name="main">
        <log />
        <drop />
    </sequence>
</definitions>
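In the configuration above, replace the three file:///home/shavantha/... URIs with directories that exist on your machine and that the ESB process can write to. The proxy polls the transport.vfs.FileURI directory every 5 seconds (transport.PollInterval), runs each file matching .*\.xml through the Smooks mediator, and, because both ActionAfterProcess and ActionAfterFailure are MOVE, relocates the original file to the MoveAfterProcess or MoveAfterFailure folder afterwards.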
The public JIRA for the issue you have raised is available at https://wso2.org/jira/browse/ESBJAVA-3608
Regards, Shavantha