GStreamer H264 video and KLV mpegtsmux error

I need your help! I have a .ts video file that carries an H264 video stream and a KLV metadata stream. I need to demux the video, overlay an image on it, and create a new .ts file containing that video together with the same KLV stream.
I’ve tried different approaches using GStreamer without success. The following pipeline is the simplest version I’ve tried:
gst-launch-1.0 -t -v --gst-debug=3 filesrc location=/home/ubuntu/videoTeste.ts ! tsdemux name=demux demux. ! h264parse ! "video/x-h264" ! avdec_h264 ! x264enc bitrate=10000 ! video/x-h264, profile=baseline ! queue ! mpegtsmux name=encod encod. ! filesink location=/home/ubuntu/teste.ts async=false demux. ! "meta/x-klv" ! queue ! encod.
This is the output:
Setting pipeline to PAUSED ...
0:00:00.053580180 19705 0x5647f4e61c00 WARN basesrc gstbasesrc.c:3600:gst_base_src_start_complete: pad not activated yet
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstCapsFilter:capsfilter3: caps = meta/x-klv
/GstPipeline:pipeline0/GstCapsFilter:capsfilter4: caps = meta/x-klv
/GstPipeline:pipeline0/GstCapsFilter:capsfilter4.GstPad:src: caps = meta/x-klv, parsed=(boolean)true
/GstPipeline:pipeline0/GstQueue:queue1.GstPad:sink: caps = meta/x-klv, parsed=(boolean)true
/GstPipeline:pipeline0/GstQueue:queue1.GstPad:src: caps = meta/x-klv, parsed=(boolean)true
/GstPipeline:pipeline0/MpegTsMux:encod.GstPad:sink_66: caps = meta/x-klv, parsed=(boolean)true
(gst-launch-1.0:19705): GStreamer-CRITICAL **: 11:56:27.551: gst_segment_to_running_time: assertion 'segment->format == format' failed
/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:sink: caps = video/x-h264, stream-format=(string)byte-stream, alignment=(string)nal
0:00:00.054452886 19705 0x5647f4dfcea0 WARN h264parse gsth264parse.c:1349:gst_h264_parse_handle_frame: broken/invalid nal Type: 1 Slice, Size: 15508 will be dropped
0:00:00.054506922 19705 0x5647f4dfcea0 WARN h264parse gsth264parse.c:1349:gst_h264_parse_handle_frame: broken/invalid nal Type: 1 Slice, Size: 24322 will be dropped
0:00:00.054553198 19705 0x5647f4dfcea0 WARN h264parse gsth264parse.c:1349:gst_h264_parse_handle_frame: broken/invalid nal Type: 1 Slice, Size: 21270 will be dropped
[... the same "broken/invalid nal Type: 1 Slice ... will be dropped" warning repeats for every subsequent slice, only the reported size changing ...]
Seems to be a sync problem between the H264 frames and the KLV chunks, I think…
I've tried changing the placement of the queues, using a tee after filesrc, and using separate demuxers and a multiqueue…
I'm able to create a .ts file with the H264 video only, and I'm also able to extract the KLV data and save it to a file. I'm even able to do both in the same pipeline:
gst-launch-1.0 -t -v --gst-debug=0 filesrc location=/home/ubuntu/videoTeste.ts ! tee name=t t. ! queue ! tsdemux name=demux demux. ! h264parse ! "video/x-h264" ! avdec_h264 ! x264enc bitrate=10000 ! video/x-h264, profile=baseline ! mpegtsmux name=encod encod. ! filesink location=/home/ubuntu/teste.ts async=false t. ! queue ! tsdemux ! "meta/x-klv" ! filesink location=/home/ubuntu/klv.dat async=false
Do you have any suggestions to solve this?
Thank you!
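One variant worth trying (a sketch only, not verified against this stream): give every tsdemux branch its own queue directly on the demuxer pad, and run the re-encoded video through h264parse again before muxing:

gst-launch-1.0 -v filesrc location=/home/ubuntu/videoTeste.ts ! tsdemux name=demux \
  demux. ! queue ! h264parse ! avdec_h264 ! videoconvert ! x264enc bitrate=10000 \
  ! video/x-h264,profile=baseline ! h264parse ! queue ! mux. \
  demux. ! queue ! "meta/x-klv" ! mux. \
  mpegtsmux name=mux ! filesink location=/home/ubuntu/teste.ts

A demuxer generally needs a queue immediately downstream of each source pad; without one, one branch can block the other, and the muxer can end up receiving buffers whose segment it cannot convert to running time, which is what the gst_segment_to_running_time assertion above is complaining about. The image overlay (for example gdkpixbufoverlay) would then slot in between avdec_h264 and x264enc.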

Related

What does `failed to download database [GeoLite2-ASN.mmdb]` mean when launching an Elasticsearch cluster?

I deployed an Elasticsearch cluster in AWS EKS with 3 nodes. After launching the cluster, I can see 3 pods running, but only 2 of them run fine; one keeps failing, terminating, and restarting.
Below is the error log on the failed pod.
{"type": "server", "timestamp": "2021-12-26T08:17:33,061Z", "level": "INFO", "component": "o.e.i.g.DatabaseRegistry", "cluster.name": "elk", "node.name": "elk-es-node-1", "message": "downloading geoip database [GeoLite2-ASN.mmdb] to [/tmp/elasticsearch-9470345091343635510/geoip-databases/HoGUMQ9ISsCjQ4KhIL2IFA/GeoLite2-ASN.mmdb.tmp.gz]" }
{"type": "server", "timestamp": "2021-12-26T08:17:33,070Z", "level": "ERROR", "component": "o.e.i.g.DatabaseRegistry", "cluster.name": "elk", "node.name": "elk-es-node-1", "message": "failed to download database [GeoLite2-ASN.mmdb]",
"stacktrace": ["org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];",
"at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:179) ~[elasticsearch-7.15.2.jar:7.15.2]",
"at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(ClusterBlocks.java:165) ~[elasticsearch-7.15.2.jar:7.15.2]",
"at org.elasticsearch.action.search.TransportSearchAction.executeSearch(TransportSearchAction.java:605) ~[elasticsearch-7.15.2.jar:7.15.2]",
"at org.elasticsearch.action.search.TransportSearchAction.executeLocalSearch(TransportSearchAction.java:494) ~[elasticsearch-7.15.2.jar:7.15.2]",
"at org.elasticsearch.action.search.TransportSearchAction.lambda$executeRequest$3(TransportSearchAction.java:288) ~[elasticsearch-7.15.2.jar:7.15.2]",
"at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:134) ~[elasticsearch-7.15.2.jar:7.15.2]",
"at org.elasticsearch.index.query.Rewriteable.rewriteAndFetch(Rewriteable.java:103) ~[elasticsearch-7.15.2.jar:7.15.2]",
"at org.elasticsearch.index.query.Rewriteable.rewriteAndFetch(Rewriteable.java:76) ~[elasticsearch-7.15.2.jar:7.15.2]",
"at org.elasticsearch.action.search.TransportSearchAction.executeRequest(TransportSearchAction.java:329) ~[elasticsearch-7.15.2.jar:7.15.2]",
"at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:217) ~[elasticsearch-7.15.2.jar:7.15.2]",
"at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:93) ~[elasticsearch-7.15.2.jar:7.15.2]",
"at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:173) ~[elasticsearch-7.15.2.jar:7.15.2]",
"at org.elasticsearch.action.support.ActionFilter$Simple.apply(ActionFilter.java:42) ~[elasticsearch-7.15.2.jar:7.15.2]",
"at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:171) ~[elasticsearch-7.15.2.jar:7.15.2]",
"at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:149) ~[elasticsearch-7.15.2.jar:7.15.2]",
"at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:77) ~[elasticsearch-7.15.2.jar:7.15.2]",
"at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:90) ~[elasticsearch-7.15.2.jar:7.15.2]",
"at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:70) ~[elasticsearch-7.15.2.jar:7.15.2]",
"at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:402) ~[elasticsearch-7.15.2.jar:7.15.2]",
"at org.elasticsearch.client.FilterClient.doExecute(FilterClient.java:54) ~[elasticsearch-7.15.2.jar:7.15.2]",
"at org.elasticsearch.client.OriginSettingClient.doExecute(OriginSettingClient.java:40) ~[elasticsearch-7.15.2.jar:7.15.2]",
"at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:402) ~[elasticsearch-7.15.2.jar:7.15.2]",
"at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:390) ~[elasticsearch-7.15.2.jar:7.15.2]",
"at org.elasticsearch.client.support.AbstractClient.search(AbstractClient.java:534) ~[elasticsearch-7.15.2.jar:7.15.2]",
"at org.elasticsearch.ingest.geoip.DatabaseRegistry.lambda$retrieveDatabase$11(DatabaseRegistry.java:359) [ingest-geoip-7.15.2.jar:7.15.2]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.15.2.jar:7.15.2]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]",
"at java.lang.Thread.run(Thread.java:833) [?:?]"] }
{"type": "server", "timestamp": "2021-12-26T08:17:33,295Z", "level": "INFO", "component": "o.e.l.LicenseService", "cluster.name": "elk", "node.name": "elk-es-node-1", "message": "license [8a88ef40-3b0b-439e-9f46-32e911999b7d] mode [basic] - valid" }
{"type": "server", "timestamp": "2021-12-26T08:17:33,309Z", "level": "INFO", "component": "o.e.h.AbstractHttpServerTransport", "cluster.name": "elk", "node.name": "elk-es-node-1", "message": "publish_address {elk-es-node-1.elk-es-node.default.svc/10.0.1.182:9200}, bound_addresses {0.0.0.0:9200}", "cluster.uuid": "hqRP62pNTze1IWQ0sOOR2Q", "node.id": "HoGUMQ9ISsCjQ4KhIL2IFA" }
{"type": "server", "timestamp": "2021-12-26T08:17:33,310Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elk", "node.name": "elk-es-node-1", "message": "started", "cluster.uuid": "hqRP62pNTze1IWQ0sOOR2Q", "node.id": "HoGUMQ9ISsCjQ4KhIL2IFA" }
The error message says failed to download database [GeoLite2-ASN.mmdb], but I don't know what this means.
Below is my Elasticsearch K8S spec file.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elk
spec:
  version: 7.15.2
  serviceAccountName: docker-sa
  http:
    tls:
      selfSignedCertificate:
        disabled: true
  nodeSets:
  - name: node
    count: 3
    config:
      network.host: 0.0.0.0
      xpack.security.enabled: false
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        containers:
        - name: elasticsearch
          readinessProbe:
            exec:
              command:
              - bash
              - -c
              - /mnt/elastic-internal/scripts/readiness-probe-script.sh
            failureThreshold: 3
            initialDelaySeconds: 10
            periodSeconds: 12
            successThreshold: 1
            timeoutSeconds: 12
          env:
          - name: READINESS_PROBE_TIMEOUT
            value: "120"
          resources:
            requests:
              cpu: 1
              memory: 4Gi
          volumeMounts:
          - name: elasticsearch-data
            mountPath: /usr/share/elasticsearch/data
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        storageClassName: ebs-sc
        resources:
          requests:
            storage: 1024Gi
Any idea why this happens?
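For context, the stack trace above shows the download failing with ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized], i.e. the node tried to fetch the GeoIP database before the cluster state had been recovered, which can happen transiently while a node is (re)starting. If GeoIP enrichment is not needed, the downloader can be switched off; a sketch against the nodeSet config above (assuming it is acceptable to disable it; the setting exists from Elasticsearch 7.14 onwards):

    config:
      network.host: 0.0.0.0
      xpack.security.enabled: false
      ingest.geoip.downloader.enabled: false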
I have provisioned a cluster of nodes (EC2 instances) with a desired count of 2 and a maximum of 3. At the moment, only 2 nodes are launched.
Below is the output of nodes information:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-1-216.ap-southeast-2.compute.internal Ready <none> 6d16h v1.21.5-eks-bc4871b
ip-10-0-2-24.ap-southeast-2.compute.internal Ready <none> 6d17h v1.21.5-eks-bc4871b
$ kubectl describe node ip-10-0-1-216.ap-southeast-2.compute.internal
Name: ip-10-0-1-216.ap-southeast-2.compute.internal
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=t3.xlarge
beta.kubernetes.io/os=linux
eks.amazonaws.com/capacityType=ON_DEMAND
eks.amazonaws.com/nodegroup=elk
eks.amazonaws.com/nodegroup-image=ami-045371401c5f70a1e
failure-domain.beta.kubernetes.io/region=ap-southeast-2
failure-domain.beta.kubernetes.io/zone=ap-southeast-2a
kubernetes.io/arch=amd64
kubernetes.io/hostname=ip-10-0-1-216.ap-southeast-2.compute.internal
kubernetes.io/os=linux
node.kubernetes.io/instance-type=t3.xlarge
topology.ebs.csi.aws.com/zone=ap-southeast-2a
topology.kubernetes.io/region=ap-southeast-2
topology.kubernetes.io/zone=ap-southeast-2a
Annotations: csi.volume.kubernetes.io/nodeid: {"ebs.csi.aws.com":"i-0fd067997eddeaf86"}
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 24 Dec 2021 20:29:13 +1100
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ip-10-0-1-216.ap-southeast-2.compute.internal
AcquireTime: <unset>
RenewTime: Fri, 31 Dec 2021 13:25:51 +1100
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 31 Dec 2021 13:25:52 +1100 Mon, 27 Dec 2021 20:44:02 +1100 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 31 Dec 2021 13:25:52 +1100 Fri, 24 Dec 2021 20:29:13 +1100 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 31 Dec 2021 13:25:52 +1100 Fri, 24 Dec 2021 20:29:13 +1100 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 31 Dec 2021 13:25:52 +1100 Fri, 24 Dec 2021 20:29:34 +1100 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.0.1.216
Hostname: ip-10-0-1-216.ap-southeast-2.compute.internal
InternalDNS: ip-10-0-1-216.ap-southeast-2.compute.internal
Capacity:
attachable-volumes-aws-ebs: 25
cpu: 4
ephemeral-storage: 20959212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 16205904Ki
pods: 58
Allocatable:
attachable-volumes-aws-ebs: 25
cpu: 3920m
ephemeral-storage: 18242267924
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 15189072Ki
pods: 58
System Info:
Machine ID: ec2ac34dd2e6a84bdc340dd6a62c3514
System UUID: ec2ac34d-d2e6-a84b-dc34-0dd6a62c3514
Boot ID: 26c6dde2-1131-4068-9265-c0c4e12d54d3
Kernel Version: 5.4.156-83.273.amzn2.x86_64
OS Image: Amazon Linux 2
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.7
Kubelet Version: v1.21.5-eks-bc4871b
Kube-Proxy Version: v1.21.5-eks-bc4871b
ProviderID: aws:///ap-southeast-2a/i-0fd067997eddeaf86
Non-terminated Pods: (14 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
cert-manager cert-manager-68ff46b886-tzfqw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d16h
cert-manager cert-manager-cainjector-7cdbb9c945-w99rf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d16h
cert-manager cert-manager-webhook-58d45d56b8-wsmwp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d16h
default elk-es-node-0 1 (25%) 100m (2%) 4Gi (27%) 50Mi (0%) 3d16h
default sidecar-66f887c666-pbbcv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d19h
elastic-system elastic-operator-0 100m (2%) 1 (25%) 150Mi (1%) 512Mi (3%) 3d16h
kube-system aws-load-balancer-controller-9c59c86d8-jd6j9 100m (2%) 200m (5%) 200Mi (1%) 500Mi (3%) 3d16h
kube-system aws-node-hpnxj 10m (0%) 0 (0%) 0 (0%) 0 (0%) 4d14h
kube-system cluster-autoscaler-76fd4db4c-lcff5 100m (2%) 100m (2%) 600Mi (4%) 600Mi (4%) 3d16h
kube-system coredns-68f7974869-x4kxs 100m (2%) 0 (0%) 70Mi (0%) 170Mi (1%) 3d16h
kube-system ebs-csi-controller-55b9f85d5c-c94m5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d16h
kube-system ebs-csi-controller-55b9f85d5c-mt57k 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d16h
kube-system ebs-csi-node-692n6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d13h
kube-system kube-proxy-cqll5 100m (2%) 0 (0%) 0 (0%) 0 (0%) 4d14h
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1510m (38%) 1400m (35%)
memory 5116Mi (34%) 1832Mi (12%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
attachable-volumes-aws-ebs 0 0
Events: <none>
Name: ip-10-0-2-24.ap-southeast-2.compute.internal
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=t3.xlarge
beta.kubernetes.io/os=linux
eks.amazonaws.com/capacityType=ON_DEMAND
eks.amazonaws.com/nodegroup=elk
eks.amazonaws.com/nodegroup-image=ami-045371401c5f70a1e
failure-domain.beta.kubernetes.io/region=ap-southeast-2
failure-domain.beta.kubernetes.io/zone=ap-southeast-2b
kubernetes.io/arch=amd64
kubernetes.io/hostname=ip-10-0-2-24.ap-southeast-2.compute.internal
kubernetes.io/os=linux
node.kubernetes.io/instance-type=t3.xlarge
topology.ebs.csi.aws.com/zone=ap-southeast-2b
topology.kubernetes.io/region=ap-southeast-2
topology.kubernetes.io/zone=ap-southeast-2b
Annotations: csi.volume.kubernetes.io/nodeid: {"ebs.csi.aws.com":"i-0bac62ee2ae10a59a"}
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 24 Dec 2021 20:23:12 +1100
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ip-10-0-2-24.ap-southeast-2.compute.internal
AcquireTime: <unset>
RenewTime: Fri, 31 Dec 2021 13:26:12 +1100
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 31 Dec 2021 13:25:52 +1100 Mon, 27 Dec 2021 20:23:13 +1100 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 31 Dec 2021 13:25:52 +1100 Fri, 24 Dec 2021 20:23:12 +1100 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 31 Dec 2021 13:25:52 +1100 Fri, 24 Dec 2021 20:23:12 +1100 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 31 Dec 2021 13:25:52 +1100 Fri, 24 Dec 2021 20:23:32 +1100 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.0.2.24
Hostname: ip-10-0-2-24.ap-southeast-2.compute.internal
InternalDNS: ip-10-0-2-24.ap-southeast-2.compute.internal
Capacity:
attachable-volumes-aws-ebs: 25
cpu: 4
ephemeral-storage: 20959212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 16205904Ki
pods: 58
Allocatable:
attachable-volumes-aws-ebs: 25
cpu: 3920m
ephemeral-storage: 18242267924
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 15189072Ki
pods: 58
System Info:
Machine ID: ec29ef8b81ba9ab1229c8348fa8f6c97
System UUID: ec29ef8b-81ba-9ab1-229c-8348fa8f6c97
Boot ID: 8bf36296-625d-4749-8e81-abe0d7dd85c8
Kernel Version: 5.4.156-83.273.amzn2.x86_64
OS Image: Amazon Linux 2
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.7
Kubelet Version: v1.21.5-eks-bc4871b
Kube-Proxy Version: v1.21.5-eks-bc4871b
ProviderID: aws:///ap-southeast-2b/i-0bac62ee2ae10a59a
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default elk-es-node-1 1 (25%) 100m (2%) 4Gi (27%) 50Mi (0%) 3d16h
default kibana-kb-7f66d6978d-6knrx 100m (2%) 100m (2%) 1Gi (6%) 1Gi (6%) 3d16h
default transform-798f5758cd-4x94w 1 (25%) 0 (0%) 2Gi (13%) 0 (0%) 6d16h
kube-system aws-node-qmfg7 10m (0%) 0 (0%) 0 (0%) 0 (0%) 6d17h
kube-system coredns-68f7974869-wqs2k 100m (2%) 0 (0%) 70Mi (0%) 170Mi (1%) 3d16h
kube-system ebs-csi-node-48h2n 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6d17h
kube-system kube-proxy-f99dh 100m (2%) 0 (0%) 0 (0%) 0 (0%) 6d17h
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 2310m (58%) 200m (5%)
memory 7238Mi (48%) 1244Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
attachable-volumes-aws-ebs 0 0
Events: <none>

AWS glue error in converting dynamic dataframe to spark

I am using AWS Glue crawler to read some data from S3 into a table.
I would like to then use AWS Glue jobs to do some transformations. I am able to modify and run the script on a small file, but when I try to run it on larger data, I get the following error, which seems to be complaining about converting a DynamicFrame to a Spark DataFrame. I am not even sure how to start debugging it.
I didn't see many posts on this here, only ones about the Spark DataFrame -> DynamicFrame direction.
er$$anonfun$init$1.apply(GrokReader.scala:62)
at scala.collection.Iterator$$anon$9.next(Iterator.scala:162)
at scala.collection.Iterator$$anon$16.hasNext(Iterator.scala:599)
at com.amazonaws.services.glue.readers.GrokReader.hasNext(GrokReader.scala:117)
at com.amazonaws.services.glue.hadoop.TapeHadoopRecordReader.nextKeyValue(TapeHadoopRecordReader.scala:73)
at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:230)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.aggregate(TraversableOnce.scala:214)
at scala.collection.AbstractIterator.aggregate(Iterator.scala:1334)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1145)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1145)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$25.apply(RDD.scala:1146)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$25.apply(RDD.scala:1146)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
2020-05-12 18:49:22,434 INFO [Thread-9] scheduler.DAGScheduler (Logging.scala:logInfo(54)) - Job 0 failed: fromRDD at DynamicFrame.scala:241, took 8327.404883 s
2020-05-12 18:49:22,450 WARN [task-result-getter-1] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9983.0 in stage 0.0 (TID 9986, ip-172-32-50-149.us-west-2.compute.internal, executor 2): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,451 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 10000.0 in stage 0.0 (TID 10003, ip-172-32-50-149.us-west-2.compute.internal, executor 2): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,451 WARN [task-result-getter-3] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9986.0 in stage 0.0 (TID 9989, ip-172-32-50-149.us-west-2.compute.internal, executor 2): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,451 WARN [task-result-getter-0] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9985.0 in stage 0.0 (TID 9988, ip-172-32-50-149.us-west-2.compute.internal, executor 2): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,451 WARN [task-result-getter-1] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9864.0 in stage 0.0 (TID 9864, ip-172-32-62-222.us-west-2.compute.internal, executor 5): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,454 INFO [dispatcher-event-loop-3] storage.BlockManagerInfo (Logging.scala:logInfo(54)) - Added broadcast_25_piece0 in memory on ip-172-32-56-53.us-west-2.compute.internal:34837 (size: 32.1 KB, free: 2.8 GB)
2020-05-12 18:49:22,455 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9900.0 in stage 0.0 (TID 9900, ip-172-32-62-222.us-west-2.compute.internal, executor 5): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,456 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9991.0 in stage 0.0 (TID 9994, ip-172-32-56-53.us-west-2.compute.internal, executor 4): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,456 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9949.0 in stage 0.0 (TID 9949, ip-172-32-56-53.us-west-2.compute.internal, executor 4): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,456 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9975.0 in stage 0.0 (TID 9977, ip-172-32-62-222.us-west-2.compute.internal, executor 5): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,456 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9995.0 in stage 0.0 (TID 9998, ip-172-32-62-222.us-west-2.compute.internal, executor 7): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,456 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 10001.0 in stage 0.0 (TID 10004, ip-172-32-62-222.us-west-2.compute.internal, executor 5): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,457 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9993.0 in stage 0.0 (TID 9996, ip-172-32-62-222.us-west-2.compute.internal, executor 7): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,457 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9939.0 in stage 0.0 (TID 9939, ip-172-32-62-222.us-west-2.compute.internal, executor 7): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,457 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9930.0 in stage 0.0 (TID 9930, ip-172-32-62-222.us-west-2.compute.internal, executor 7): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,457 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9998.0 in stage 0.0 (TID 10001, ip-172-32-54-163.us-west-2.compute.internal, executor 6): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,462 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9965.0 in stage 0.0 (TID 9967, ip-172-32-56-53.us-west-2.compute.internal, executor 1): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,463 WARN [task-result-getter-3] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9934.0 in stage 0.0 (TID 9934, ip-172-32-56-53.us-west-2.compute.internal, executor 1): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,463 WARN [task-result-getter-0] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9990.0 in stage 0.0 (TID 9993, ip-172-32-56-53.us-west-2.compute.internal, executor 1): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,464 WARN [task-result-getter-1] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9992.0 in stage 0.0 (TID 9995, ip-172-32-56-53.us-west-2.compute.internal, executor 4): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,464 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9967.0 in stage 0.0 (TID 9969, ip-172-32-56-53.us-west-2.compute.internal, executor 4): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,464 WARN [task-result-getter-0] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9982.0 in stage 0.0 (TID 9985, ip-172-32-54-163.us-west-2.compute.internal, executor 3): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,464 WARN [task-result-getter-3] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9999.0 in stage 0.0 (TID 10002, ip-172-32-54-163.us-west-2.compute.internal, executor 3): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,464 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9984.0 in stage 0.0 (TID 9987, ip-172-32-54-163.us-west-2.compute.internal, executor 3): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,464 WARN [task-result-getter-0] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9966.0 in stage 0.0 (TID 9968, ip-172-32-54-163.us-west-2.compute.internal, executor 3): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,467 WARN [task-result-getter-3] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9996.0 in stage 0.0 (TID 9999, ip-172-32-54-163.us-west-2.compute.internal, executor 6): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,474 WARN [task-result-getter-1] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9960.0 in stage 0.0 (TID 9962, ip-172-32-54-163.us-west-2.compute.internal, executor 6): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,474 WARN [task-result-getter-2] scheduler.TaskSetManager (Logging.scala:logWarning(66)) - Lost task 9980.0 in stage 0.0 (TID 9982, ip-172-32-54-163.us-west-2.compute.internal, executor 6): TaskKilled (Stage cancelled)
2020-05-12 18:49:22,514 INFO [dispatcher-event-loop-0] yarn.YarnAllocator (Logging.scala:logInfo(54)) - Driver requested a total number of 1 executor(s).
Traceback (most recent call last):
File "script_2020-05-12-16-29-01.py", line 30, in <module>
dns = datasource0.toDF()
File "/mnt/yarn/usercache/root/appcache/application_1589300850182_0001/container_1589300850182_0001_01_000001/PyGlue.zip/awsglue/dynamicframe.py", line 147, in toDF
return DataFrame(self._jdf.toDF(self.glue_ctx._jvm.PythonUtils.toSeq(scala_options)), self.glue_ctx)
File "/mnt/yarn/usercache/root/appcache/application_1589300850182_0001/container_1589300850182_0001_01_000001/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/mnt/yarn/usercache/root/appcache/application_1589300850182_0001/container_1589300850182_0001_01_000001/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
return f(*a, **kw)
File "/mnt/yarn/usercache/root/appcache/application_1589300850182_0001/container_1589300850182_0001_01_000001/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o66.toDF.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 9929 in stage 0.0 failed 4 times, most recent failure: Lost task 9929.3 in stage 0.0 (TID 9983, ip-172-32-56-53.us-west-2.compute.internal, executor 1): java.io.IOException: too many length or distance symbols
at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:225)
at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:111)
at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:105)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at com.amazonaws.services.glue.readers.BufferedStream.read(DynamicRecordReader.scala:91)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.BufferedReader.fill(BufferedReader.java:161)
at java.io.BufferedReader.readLine(BufferedReader.java:324)
at java.io.BufferedReader.readLine(BufferedReader.java:389)
at com.amazonaws.services.glue.readers.GrokReader$$anonfun$init$1$$anonfun$apply$1.apply$mcV$sp(GrokReader.scala:68)
at scala.util.control.Breaks.breakable(Breaks.scala:38)
at com.amazonaws.services.glue.readers.GrokReader$$anonfun$init$1.apply(GrokReader.scala:66)
at com.amazonaws.services.glue.readers.GrokReader$$anonfun$init$1.apply(GrokReader.scala:62)
at scala.collection.Iterator$$anon$9.next(Iterator.scala:162)
at scala.collection.Iterator$$anon$16.hasNext(Iterator.scala:599)
at com.amazonaws.services.glue.readers.GrokReader.hasNext(GrokReader.scala:117)
at com.amazonaws.services.glue.hadoop.TapeHadoopRecordReader.nextKeyValue(TapeHadoopRecordReader.scala:73)
at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:230)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.aggregate(TraversableOnce.scala:214)
at scala.collection.AbstractIterator.aggregate(Iterator.scala:1334)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1145)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1145)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$25.apply(RDD.scala:1146)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$25.apply(RDD.scala:1146)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1889)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1877)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1876)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1876)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2110)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2059)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2048)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2158)
at org.apache.spark.rdd.RDD$$anonfun$fold$1.apply(RDD.scala:1098)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.fold(RDD.scala:1092)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1.apply(RDD.scala:1161)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.treeAggregate(RDD.scala:1137)
at org.apache.spark.sql.glue.util.SchemaUtils$.fromRDD(SchemaUtils.scala:72)
at com.amazonaws.services.glue.DynamicFrame.recomputeSchema(DynamicFrame.scala:241)
at com.amazonaws.services.glue.DynamicFrame.schema(DynamicFrame.scala:227)
at com.amazonaws.services.glue.DynamicFrame.toDF(DynamicFrame.scala:290)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: too many length or distance symbols
at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:225)
at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:111)
at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:105)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at com.amazonaws.services.glue.readers.BufferedStream.read(DynamicRecordReader.scala:91)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.BufferedReader.fill(BufferedReader.java:161)
at java.io.BufferedReader.readLine(BufferedReader.java:324)
at java.io.BufferedReader.readLine(BufferedReader.java:389)
at com.amazonaws.services.glue.readers.GrokReader$$anonfun$init$1$$anonfun$apply$1.apply$mcV$sp(GrokReader.scala:68)
at scala.util.control.Breaks.breakable(Breaks.scala:38)
at com.amazonaws.services.glue.readers.GrokReader$$anonfun$init$1.apply(GrokReader.scala:66)
at com.amazonaws.services.glue.readers.GrokReader$$anonfun$init$1.apply(GrokReader.scala:62)
at scala.collection.Iterator$$anon$9.next(Iterator.scala:162)
at scala.collection.Iterator$$anon$16.hasNext(Iterator.scala:599)
at com.amazonaws.services.glue.readers.GrokReader.hasNext(GrokReader.scala:117)
at com.amazonaws.services.glue.hadoop.TapeHadoopRecordReader.nextKeyValue(TapeHadoopRecordReader.scala:73)
at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:230)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.aggregate(TraversableOnce.scala:214)
at scala.collection.AbstractIterator.aggregate(Iterator.scala:1334)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1145)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1145)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$25.apply(RDD.scala:1146)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$25.apply(RDD.scala:1146)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
2020-05-12 18:49:22,587 ERROR [Driver] yarn.ApplicationMaster (Logging.scala:logError(70)) - User application exited with status 1
2020-05-12 18:49:22,588 INFO [Driver] yarn.ApplicationMaster (Logging.scala:logInfo(54)) - Final app status: FAILED, exitCode: 1, (reason: User application exited with status 1)
2020-05-12 18:49:22,591 INFO [pool-4-thread-1] spark.SparkContext (Logging.scala:logInfo(54)) - Invoking stop() from shutdown hook
2020-05-12 18:49:22,594 INFO [pool-4-thread-1] server.AbstractConnector (AbstractConnector.java:doStop(318)) - Stopped Spark#3a4d5cae{HTTP/1.1,[http/1.1]}{0.0.0.0:0}
2020-05-12 18:49:22,595 INFO [pool-4-thread-1] ui.SparkUI (Logging.scala:logInfo(54)) - Stopped Spark web UI at http://ip-172-32-50-149.us-west-2.compute.internal:40355
2020-05-12 18:49:22,597 INFO [dispatcher-event-loop-2] yarn.YarnAllocator (Logging.scala:logInfo(54)) - Driver requested a total number of 0 executor(s).
2020-05-12 18:49:22,598 INFO [pool-4-thread-1] cluster.YarnClusterSchedulerBackend (Logging.scala:logInfo(54)) - Shutting down all executors
2020-05-12 18:49:22,598 INFO [dispatcher-event-loop-3] cluster.YarnSchedulerBackend$YarnDriverEndpoint (Logging.scala:logInfo(54)) - Asking each executor to shut down
2020-05-12 18:49:22,600 INFO [pool-4-thread-1] cluster.SchedulerExtensionServices (Logging.scala:logInfo(54)) - Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
2020-05-12 18:49:22,604 INFO [dispatcher-event-loop-3] spark.MapOutputTrackerMasterEndpoint (Logging.scala:logInfo(54)) - MapOutputTrackerMasterEndpoint stopped!
2020-05-12 18:49:22,616 INFO [pool-4-thread-1] memory.MemoryStore (Logging.scala:logInfo(54)) - MemoryStore cleared
2020-05-12 18:49:22,616 INFO [pool-4-thread-1] storage.BlockManager (Logging.scala:logInfo(54)) - BlockManager stopped
2020-05-12 18:49:22,617 INFO [pool-4-thread-1] storage.BlockManagerMaster (Logging.scala:logInfo(54)) - BlockManagerMaster stopped
2020-05-12 18:49:22,618 INFO [dispatcher-event-loop-2] scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint (Logging.scala:logInfo(54)) - OutputCommitCoordinator stopped!
2020-05-12 18:49:22,621 INFO [pool-4-thread-1] spark.SparkContext (Logging.scala:logInfo(54)) - Successfully stopped SparkContext
2020-05-12 18:49:22,623 INFO [pool-4-thread-1] yarn.ApplicationMaster (Logging.scala:logInfo(54)) - Unregistering ApplicationMaster with FAILED (diag message: User application exited with status 1)
2020-05-12 18:49:22,631 INFO [pool-4-thread-1] impl.AMRMClientImpl (AMRMClientImpl.java:unregisterApplicationMaster(476)) - Waiting for application to be successfully unregistered.
2020-05-12 18:49:22,733 INFO [pool-4-thread-1] yarn.ApplicationMaster
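The root cause buried at the bottom of the trace is java.io.IOException: too many length or distance symbols, thrown by ZlibDecompressor, which points at a corrupt or truncated gzip object in the S3 input rather than at the DynamicFrame-to-DataFrame conversion itself; toDF() is just the call that triggers the full scan (schema recomputation) that reads the bad file. One way to start debugging is to try decompressing each input object outside of Glue; a rough sketch, with the bucket and prefix as placeholders:

import gzip

import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

# Walk every object under the crawler's input prefix and try to gunzip it in full.
for page in paginator.paginate(Bucket="my-bucket", Prefix="my/prefix/"):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if not key.endswith(".gz"):
            continue
        body = s3.get_object(Bucket="my-bucket", Key=key)["Body"].read()
        try:
            gzip.decompress(body)
        except OSError as exc:  # corrupt gzip input raises OSError (BadGzipFile)
            print(f"corrupt: s3://my-bucket/{key}: {exc}")

Whatever this flags can then be re-uploaded, repaired, or excluded from the crawler's input path.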

Unable to create/read document to HDFS deployed with AWS EBS in EKS cluster

I have EKS cluster with EBS storage class/volume.
I am able to deploy hdfs namenode and datanode images (bde2020/hadoop-xxx) using statefulset successfully.
When I try to put a file to HDFS from my machine using hdfs://:, it reports success, but the file does not get written on the datanodes.
In the namenode log, I see the error below.
Could it be something to do with the EBS volume? I cannot even upload/download files from the namenode GUI. Could it be because the datanode hostname hdfs-data-X.hdfs-data.pulse.svc.cluster.local is not resolvable from my local machine?
Please help.
2020-05-12 17:38:51,360 INFO hdfs.StateChange: BLOCK* allocate blk_1073741825_1001, replicas=10.8.29.112:9866, 10.8.29.176:9866, 10.8.29.188:9866 for /vault/a.json
2020-05-12 17:39:13,036 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2020-05-12 17:39:13,036 WARN protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2020-05-12 17:39:13,036 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2020-05-12 17:39:13,036 INFO hdfs.StateChange: BLOCK* allocate blk_1073741826_1002, replicas=10.8.29.176:9866, 10.8.29.188:9866 for /vault/a.json
2020-05-12 17:39:34,607 INFO namenode.FSEditLog: Number of transactions: 11 Total time for transactions(ms): 23 Number of transactions batched in Syncs: 3 Number of syncs: 8 SyncTimes(ms): 23
2020-05-12 17:39:35,146 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 2 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2020-05-12 17:39:35,146 WARN protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 2 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK, DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2020-05-12 17:39:35,146 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 2 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2020-05-12 17:39:35,147 INFO hdfs.StateChange: BLOCK* allocate blk_1073741827_1003, replicas=10.8.29.188:9866 for /vault/a.json
2020-05-12 17:39:57,319 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 3 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2020-05-12 17:39:57,319 WARN protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 3 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK, DISK, DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2020-05-12 17:39:57,319 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 3 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2020-05-12 17:39:57,320 INFO ipc.Server: IPC Server handler 5 on default port 8020, call Call#12 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 10.254.40.95:59328
java.io.IOException: File /vault/a.json could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2219)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2789)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:574)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:999)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:927)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2915)
My namenode web page shows below:
Node                                                Http Address                                                Last contact  Last Block Report  Capacity  Blocks  Block pool used  Version
hdfs-data-0.hdfs-data.pulse.svc.cluster.local:9866  http://hdfs-data-0.hdfs-data.pulse.svc.cluster.local:9864  1s            0m                 975.9 MB  0       24 KB (0%)       3.2.1
hdfs-data-1.hdfs-data.pulse.svc.cluster.local:9866  http://hdfs-data-1.hdfs-data.pulse.svc.cluster.local:9864  2s            0m                 975.9 MB  0       24 KB (0%)       3.2.1
hdfs-data-2.hdfs-data.pulse.svc.cluster.local:9866  http://hdfs-data-2.hdfs-data.pulse.svc.cluster.local:9864  1s            0m                 975.9 MB  0       24 KB (0%)       3.2.1
My deployment:
NameNode:
#clusterIP service of namenode
apiVersion: v1
kind: Service
metadata:
  name: hdfs-name
  namespace: pulse
  labels:
    component: hdfs-name
spec:
  ports:
  - port: 8020
    protocol: TCP
    name: nn-rpc
  - port: 9870
    protocol: TCP
    name: nn-web
  selector:
    component: hdfs-name
  type: ClusterIP
---
#namenode stateful deployment
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: hdfs-name
  namespace: pulse
  labels:
    component: hdfs-name
spec:
  serviceName: hdfs-name
  replicas: 1
  selector:
    matchLabels:
      component: hdfs-name
  template:
    metadata:
      labels:
        component: hdfs-name
    spec:
      initContainers:
      - name: delete-lost-found
        image: busybox
        command: ["sh", "-c", "rm -rf /hadoop/dfs/name/lost+found"]
        volumeMounts:
        - name: hdfs-name-pv-claim
          mountPath: /hadoop/dfs/name
      containers:
      - name: hdfs-name
        image: bde2020/hadoop-namenode
        env:
        - name: CLUSTER_NAME
          value: hdfs-k8s
        - name: HDFS_CONF_dfs_permissions_enabled
          value: "false"
        ports:
        - containerPort: 8020
          name: nn-rpc
        - containerPort: 9870
          name: nn-web
        volumeMounts:
        - name: hdfs-name-pv-claim
          mountPath: /hadoop/dfs/name
          #subPath: data #subPath required as on root level, lost+found folder is created which does not cause to run namenode --format
  volumeClaimTemplates:
  - metadata:
      name: hdfs-name-pv-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: ebs
      resources:
        requests:
          storage: 1Gi
Datanode:
#headless service of datanode
apiVersion: v1
kind: Service
metadata:
  name: hdfs-data
  namespace: pulse
  labels:
    component: hdfs-data
spec:
  ports:
    - port: 80
      protocol: TCP
  selector:
    component: hdfs-data
  clusterIP: None
  type: ClusterIP
---
#datanode stateful deployment
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: hdfs-data
  namespace: pulse
  labels:
    component: hdfs-data
spec:
  serviceName: hdfs-data
  replicas: 3
  selector:
    matchLabels:
      component: hdfs-data
  template:
    metadata:
      labels:
        component: hdfs-data
    spec:
      containers:
        - name: hdfs-data
          image: bde2020/hadoop-datanode
          env:
            - name: CORE_CONF_fs_defaultFS
              value: hdfs://hdfs-name:8020
          volumeMounts:
            - name: hdfs-data-pv-claim
              mountPath: /hadoop/dfs/data
  volumeClaimTemplates:
    - metadata:
        name: hdfs-data-pv-claim
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: ebs
        resources:
          requests:
            storage: 1Gi
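One detail worth noting in passing (an observation, not necessarily the fix): the headless service only declares port 80, while the web UI above shows the datanodes serving on 9866 (data transfer) and 9864 (HTTP). DNS for a headless service works either way, but declaring the real ports keeps the spec honest, e.g.:
  ports:
    - port: 9866
      protocol: TCP
      name: dn-data
    - port: 9864
      protocol: TCP
      name: dn-http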
It seems the issue was that the datanodes were not reachable over their RPC port from my client machine; only their HTTP port was reachable. After mapping the datanode pod names to their IPs in my hosts file and switching from hdfs:// to webhdfs://, it worked out.
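For reference, a minimal sketch of that workaround (the IPs below are illustrative; take the real pod IPs from kubectl get pods -o wide -n pulse, and this assumes the namenode address resolves from the client):
# /etc/hosts on the client machine (illustrative IPs):
10.254.40.101  hdfs-data-0.hdfs-data.pulse.svc.cluster.local
10.254.40.102  hdfs-data-1.hdfs-data.pulse.svc.cluster.local
10.254.40.103  hdfs-data-2.hdfs-data.pulse.svc.cluster.local
# then write over HTTP via WebHDFS instead of the hdfs:// RPC protocol
# (9870 is the namenode web port exposed by the hdfs-name service):
hdfs dfs -put a.json webhdfs://hdfs-name:9870/vault/a.json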

RegistryEventingServiceComponent Error Instantiating Registry Event Source and CarbonServerManager WSO2 Carbon initialization Failed

I'm having trouble deploying wso2-apim with Helm on Kubernetes.
I pulled the latest code from 'https://github.com/wso2/kubernetes-apim' (image version 2.6.0) and edited the config as follows:
deleted the file kubernetes-apim/helm/pattern-1/apim-with-analytics/templates/persistent-volumes.yaml, because we already have NFS in our Kubernetes environment;
edited the file kubernetes-apim/helm/pattern-1/apim-with-analytics/templates/wso2apim-volume-claim.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wso2apim-with-analytics-apim-deployment-volume-claim
  namespace: {{ .Values.namespace }}
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: "nfs-client-provisioner"
#  selector:
#    matchLabels:
#      purpose: apim-shared-deployment
and edited the file /kubernetes-apim/helm/pattern-1/mysql/values.yaml:
persistence:
  enabled: true
  storageClass: "nfs-client-provisioner"
  accessMode: ReadWriteOnce
  size: 8Gi
However, the pod wso2apim-with-analytics-apim-deployment-8876c9c55-6xf4m cannot start up and keeps restarting.
The error info:
1. warnings:
WARN - ApplicationManagementServiceComponent Templates directory not found at /home/wso2carbon/wso2am-2.6.0/repository/resources/identity/authntemplates
WARN - BlockingConditionRetriever Failed retrieving Blocking Conditions from remote endpoint: Connection refused (Connection refused). Retrying after 15 seconds...
WARN - KeyTemplateRetriever Failed retrieving throttling data from remote endpoint: Connection refused (Connection refused). Retrying after 15 seconds...
WARN - AppDeployerServiceComponent Waiting for required OSGi services: org.wso2.carbon.application.deployer.synapse.service.SynapseAppDeployerService,org.wso2.carbon.mediation.initializer.services.SynapseEnvironmentService,
WARN - StartupFinalizerServiceComponent Waiting for required OSGi services: org.wso2.carbon.application.deployer.service.CappDeploymentService,org.wso2.carbon.server.admin.common.IServerAdmin,org.wso2.carbon.throttling.agent.ThrottlingAgent,
2. errors:
ERROR - RegistryEventingServiceComponent Error Instantiating Registry Event Source
INFO - AndesConfigurationManager Main andes configuration located at : /home/wso2carbon/wso2am-2.6.0/repository/conf/broker.xml
INFO - RegistryEventingServiceComponent Successfully Initialized Eventing on Registry
FATAL - CarbonServerManager WSO2 Carbon initialization Failed
java.lang.ExceptionInInitializerError
at org.wso2.carbon.core.init.CarbonServerManager.initializeCarbon(CarbonServerManager.java:520)
at org.wso2.carbon.core.init.CarbonServerManager.start(CarbonServerManager.java:220)
at org.wso2.carbon.core.internal.CarbonCoreServiceComponent.activate(CarbonCoreServiceComponent.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260)
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146)
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345)
at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620)
at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197)
at org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343)
at org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222)
at org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:107)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.dispatchEvent(BundleContextImpl.java:861)
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
at org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:148)
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:819)
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:771)
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.register(ServiceRegistrationImpl.java:130)
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.registerService(ServiceRegistry.java:214)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.registerService(BundleContextImpl.java:433)
at org.eclipse.equinox.http.servlet.internal.Activator.registerHttpService(Activator.java:81)
at org.eclipse.equinox.http.servlet.internal.Activator.addProxyServlet(Activator.java:60)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.init(ProxyServlet.java:40)
at org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.init(DelegationServlet.java:38)
at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1230)
at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1174)
at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1066)
at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:5370)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5668)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:145)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1700)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1690)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: Shutdown in progress
at java.lang.ApplicationShutdownHooks.add(ApplicationShutdownHooks.java:66)
at java.lang.Runtime.addShutdownHook(Runtime.java:211)
at org.wso2.carbon.core.multitenancy.MultitenantServerManager.<clinit>(MultitenantServerManager.java:43)
... 39 more
ERROR - CarbonCoreServiceComponent Failed clean up Carbon core
java.lang.NullPointerException
at org.wso2.carbon.core.init.CarbonServerManager.stop(CarbonServerManager.java:947)
at org.wso2.carbon.core.internal.CarbonCoreServiceComponent.deactivate(CarbonCoreServiceComponent.java:118)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.eclipse.equinox.internal.ds.model.ServiceComponent.deactivate(ServiceComponent.java:387)
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.deactivate(ServiceComponentProp.java:161)
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.dispose(ServiceComponentProp.java:387)
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.dispose(ServiceComponentProp.java:102)
at org.eclipse.equinox.internal.ds.InstanceProcess.disposeInstances(InstanceProcess.java:344)
at org.eclipse.equinox.internal.ds.InstanceProcess.disposeInstances(InstanceProcess.java:306)
at org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:368)
at org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222)
at org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:107)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.dispatchEvent(BundleContextImpl.java:861)
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
at org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:148)
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:819)
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:771)
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.unregister(ServiceRegistrationImpl.java:225)
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.unregisterServices(ServiceRegistry.java:635)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.close(BundleContextImpl.java:88)
at org.eclipse.osgi.framework.internal.core.BundleHost.stopWorker(BundleHost.java:514)
at org.eclipse.osgi.framework.internal.core.AbstractBundle.suspend(AbstractBundle.java:566)
at org.eclipse.osgi.framework.internal.core.Framework.suspendBundle(Framework.java:1206)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.decFWSL(StartLevelManager.java:592)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.doSetStartLevel(StartLevelManager.java:257)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.shutdown(StartLevelManager.java:215)
at org.eclipse.osgi.framework.internal.core.InternalSystemBundle.suspend(InternalSystemBundle.java:284)
at org.eclipse.osgi.framework.internal.core.Framework.shutdown(Framework.java:692)
at org.eclipse.osgi.framework.internal.core.Framework.close(Framework.java:600)
at org.eclipse.core.runtime.adaptor.EclipseStarter.shutdown(EclipseStarter.java:400)
at org.wso2.carbon.core.init.CarbonServerManager.shutdown(CarbonServerManager.java:851)
at org.wso2.carbon.core.init.CarbonServerManager.shutdownGracefully(CarbonServerManager.java:885)
at org.wso2.carbon.core.init.CarbonServerManager$4.run(CarbonServerManager.java:903)
...
ERROR - RegistryEventingServiceComponent Failed obtaining server configuration
INFO - CarbonServerManager Shutdown complete
INFO - CarbonServerManager Halting JVM
So, what is wrong with it? Thanks!
I deleted all the volumes and re-deployed wso2-apim, and found that wso2apim-with-analytics-apim always exits abnormally on the first startup; the error above appears when it restarts. The 'Caused by: java.lang.IllegalStateException: Shutdown in progress' suggests the JVM was already being shut down while Carbon was still initializing, i.e. Kubernetes was killing the container mid-startup. I guessed this might be caused by the livenessProbe in kubernetes-apim/helm/pattern-1/apim-with-analytics/templates/wso2apim-deployment.yaml, so I commented out this block:
livenessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - nc -z localhost 9443
  initialDelaySeconds: 150
  periodSeconds: 10
and then deleted all the PVs and re-deployed; now it works well.
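An alternative to deleting the probe outright (a sketch; the numbers below are assumptions to tune against your actual startup time) would be to keep it but give the slow first boot more headroom, so the kubelet stops killing Carbon mid-initialization:
livenessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - nc -z localhost 9443
  initialDelaySeconds: 300   # assumed value: allow the first Carbon startup several minutes
  periodSeconds: 10
  failureThreshold: 6        # assumed value: tolerate a few failed checks before a restart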

Gstreamer: display and record H264 stream simultaneously

Using gst-launch-1.0 I am able to record an H.264 stream to a file with:
gst-launch-1.0 -e v4l2src device=/dev/video3 do-timestamp=true \
! video/x-h264, width=$WIDTH, height=$HEIGHT, framerate=$FRAMERATE/1 \
! h264parse ! mp4mux ! queue ! filesink location=video.mp4
and I am able to display the stream with:
gst-launch-1.0 -e v4l2src device=/dev/video3 \
! video/x-h264, width=$WIDTH,height=$HEIGHT,framerate=$FRAMERATE/1 ! tee name=t \
t. ! queue ! h264parse ! decodebin ! xvimagesink sync=false
However, doing both at the same time fails: nothing happens. Command:
gst-launch-1.0 -e v4l2src device=/dev/video3 \
! video/x-h264,width=$WIDTH,height=$HEIGHT,framerate=$FRAMERATE/1 ! tee name=t \
t. ! queue ! h264parse ! decodebin ! xvimagesink sync=false \
t. ! queue ! h264parse ! mp4mux ! filesink location=video.mp4
Output: (no error - but no display and filesize = 0kB)
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstV4l2Src:v4l2src0.GstPad:src: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1
/GstPipeline:pipeline0/GstTee:t.GstTeePad:src_0: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:src: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1
/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:src: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1, parsed=(boolean)true
/GstPipeline:pipeline0/GstDecodeBin:decodebin0.GstGhostPad:sink.GstProxyPad:proxypad0: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1, parsed=(boolean)true
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind.GstPad:src: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1, parsed=(boolean)true
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:sink: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1
/GstPipeline:pipeline0/GstTee:t.GstTeePad:src_1: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1
/GstPipeline:pipeline0/GstQueue:queue1.GstPad:src: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1
/GstPipeline:pipeline0/GstH264Parse:h264parse1.GstPad:sink: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1
/GstPipeline:pipeline0/GstQueue:queue1.GstPad:sink: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1
/GstPipeline:pipeline0/GstTee:t.GstPad:sink: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:sink: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstH264Parse:h264parse2.GstPad:src: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1, parsed=(boolean)true
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstOMXH264Dec-omxh264dec:omxh264dec-omxh264dec0.GstPad:sink: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1, parsed=(boolean)true
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstCapsFilter:capsfilter1.GstPad:src: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1, parsed=(boolean)true
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstCapsFilter:capsfilter1.GstPad:sink: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1, parsed=(boolean)true
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstH264Parse:h264parse2.GstPad:sink: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1, parsed=(boolean)true
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind.GstPad:sink: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1, parsed=(boolean)true
/GstPipeline:pipeline0/GstDecodeBin:decodebin0.GstGhostPad:sink: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1, parsed=(boolean)true
/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:sink: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1
/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:src: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1, parsed=(boolean)true, profile=(string)main, level=(string)4.1
/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:src: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1, parsed=(boolean)true, profile=(string)main, level=(string)4.1
/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:src: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1, parsed=(boolean)true, profile=(string)main, level=(string)4.1
/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:src: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1, parsed=(boolean)true, profile=(string)main, level=(string)4.1
/GstPipeline:pipeline0/GstDecodeBin:decodebin0.GstGhostPad:sink.GstProxyPad:proxypad0: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1, parsed=(boolean)true, profile=(string)main, level=(string)4.1
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind.GstPad:src: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1, parsed=(boolean)true, profile=(string)main, level=(string)4.1
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstH264Parse:h264parse2.GstPad:sink: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1, parsed=(boolean)true, profile=(string)main, level=(string)4.1
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind.GstPad:sink: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1, parsed=(boolean)true, profile=(string)main, level=(string)4.1
/GstPipeline:pipeline0/GstDecodeBin:decodebin0.GstGhostPad:sink: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1, parsed=(boolean)true, profile=(string)main, level=(string)4.1
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstH264Parse:h264parse2.GstPad:src: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1, parsed=(boolean)true, profile=(string)main, level=(string)4.1
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstCapsFilter:capsfilter1.GstPad:src: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1, parsed=(boolean)true, profile=(string)main, level=(string)4.1
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstOMXH264Dec-omxh264dec:omxh264dec-omxh264dec0.GstPad:sink: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1, parsed=(boolean)true, profile=(string)main, level=(string)4.1
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstCapsFilter:capsfilter1.GstPad:sink: caps = video/x-h264, width=(int)640, height=(int)480, framerate=(fraction)15/1, stream-format=(string)byte-stream, alignment=(string)au, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:7:1, parsed=(boolean)true, profile=(string)main, level=(string)4.1
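A variant worth trying (a sketch, not verified on this hardware): a tee stalls every branch as soon as one branch stops accepting data and its queue fills, so making the recording-branch queue unbounded rules that failure mode out:
gst-launch-1.0 -e v4l2src device=/dev/video3 \
! video/x-h264,width=$WIDTH,height=$HEIGHT,framerate=$FRAMERATE/1 ! tee name=t \
t. ! queue ! h264parse ! decodebin ! xvimagesink sync=false \
t. ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
! h264parse ! mp4mux ! filesink location=video.mp4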