gstreamer won't play rtsp

I have a GStreamer command that plays a stream on one Ubuntu 16.04 box but not on two others.
As far as I know, I have the same GStreamer packages installed on all boxes, because I ran sudo apt-get install gstreamer1.0* so that all GStreamer 1.0 packages are installed. I find it strange that it does not work on the others.
The following commands were used:
gst-launch-1.0 rtspsrc location=rtsp://<user>:<password>@<IP>/axis-media/media.amp user-id=root user-pw=xxxxxxxxxxc latency=150 ! decodebin max-size-time=30000000000 ! videoconvert ! autovideosink
or
gst-launch-1.0 playbin uri=rtsp://<user>:<password>@<IP>/axis-media/media.amp
This opens a stream from an Axis camera with H.264. I don't understand why it does not work on two Ubuntu 16.04 boxes but works on one. All of them run the same Ubuntu 16.04 with the same GStreamer packages installed.
Could there maybe be another package besides GStreamer that is necessary for GStreamer to stream RTSP?
The error I get when it is not playing:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Got context from element 'autovideosink0': gst.gl.GLDisplay=context, gst.gl.GLDisplay=(GstGLDisplay)"\(GstGLDisplayX11\)\ gldisplayx11-0";
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://root:pass#172.26.134.166/axis-media/media.amp
(gst-launch-1.0:4036): GLib-GIO-WARNING **: Ignoring invalid ignore_hosts value '*]'
ERROR: from element /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0: Could not open resource for reading and writing.
Additional debug info:
gstrtspsrc.c(6795): gst_rtspsrc_retrieve_sdp (): /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0:
Failed to connect. (Generic error)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
I found the command on this webpage:
http://gstreamer-devel.966125.n4.nabble.com/gstreamer-client-pipeline-to-view-video-from-AXIS-M1054-Network-Camera-td4667092.html

It turned out that the proxy caused the issue. When I disabled the proxy, I was able to stream with GStreamer on all boxes.
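For anyone hitting the same thing, a minimal sketch of the workaround, assuming the proxy comes from the usual environment variables: clear them in the launching shell and run the pipeline again. The GLib-GIO warning about ignore_hosts in the log above is a hint that the system proxy settings were being picked up.
# clear any proxy variables inherited by this shell before launching
unset http_proxy https_proxy all_proxy HTTP_PROXY HTTPS_PROXY ALL_PROXY
gst-launch-1.0 playbin uri=rtsp://<user>:<password>@<IP>/axis-media/media.amp
If you do need to go through a proxy, rtspsrc also exposes a proxy property that can be set explicitly on the element instead.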
Best regards

Related

How do I install exiftool on Heroku

I have tried the buildpack, but none worked. I got this error when trying to install https://github.com/velizarn/heroku-buildpack-exiftool:
Installing exiftool 11.36
Fetching https://123456.mycdn.org/downl/Image-ExifTool-11.36.tar.gz
gzip: stdin: unexpected end of file
tar: Child returned status 1
tar: Error is not recoverable: exiting now
! Push rejected, failed to compile exiftool app.
! Push failed
Please, can someone assist me? My Django app needs exiftool before it can work.
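The gzip: stdin: unexpected end of file line means the fetched tarball was truncated or is not a gzip archive at all, so the buildpack's download step is the first thing to verify. A hedged local check against the URL shown in the build output:
# inspect the response headers; -f makes curl fail loudly on HTTP errors
curl -fsSLI https://123456.mycdn.org/downl/Image-ExifTool-11.36.tar.gz
# download the payload and inspect it; file should report "gzip compressed data"
curl -fsSL https://123456.mycdn.org/downl/Image-ExifTool-11.36.tar.gz | file -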

Logstash Google Pubsub Input Plugin fails to load file and pull messages

I'm getting this error when trying to run a Logstash pipeline with a configuration that uses google_pubsub, in a Docker container running in my production environment:
2021-09-16 19:13:25 FATAL runner:135 - The given configuration is invalid. Reason: Unable to configure plugins: (PluginLoadingError) Couldn't find any input plugin named 'google_pubsub'. Are you sure this is correct? Trying to load the google_pubsub input plugin resulted in this error: Problems loading the requested plugin named google_pubsub of type input. Error: RuntimeError
you might need to reinstall the gem which depends on the missing jar or in case there is Jars.lock then resolve the jars with `lock_jars` command
no such file to load -- com/google/cloud/google-cloud-pubsub/1.37.1/google-cloud-pubsub-1.37.1 (LoadError)
2021-09-16 19:13:25 ERROR Logstash:96 - java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
This seems to happen randomly when re-installing the plugin. I thought it was a proxy issue, but I have the Google domain enabled in the whitelist. It might be the wrong one, or I might be missing something; still, that doesn't explain the random failures.
Also, when I run the pipeline on my machine I get GCP events, but when I do it on a VM, no Pub/Sub messages are pulled. Could it be a firewall rule blocking them?
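One hedged way to test the firewall theory from the VM is to check plain HTTPS reachability of the public Pub/Sub endpoint, pubsub.googleapis.com:
# any HTTP status line back (even 404) means HTTPS egress to Pub/Sub works;
# a timeout or connection refused points at a firewall or proxy rule instead
curl -sSI https://pubsub.googleapis.com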
The error message suggests there is a problem loading the 'google_pubsub' input plugin. This error generally occurs when the Pub/Sub input plugin is not installed properly. Kindly ensure that you are installing the Logstash plugin for Pub/Sub correctly.
For example, installing the Logstash plugin for Pub/Sub on a VM:
sudo -u root sudo -u logstash bin/logstash-plugin install logstash-input-google_pubsub
For a detailed demo refer to this community tutorial.
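After installing, it is worth confirming that Logstash actually sees the plugin; a minimal check, assuming a standard package layout under /usr/share/logstash:
# lists installed plugins with their versions; google_pubsub should appear
sudo -u logstash /usr/share/logstash/bin/logstash-plugin list --verbose | grep google_pubsub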

Gstreamer is unable to play videos

I have Ubuntu 18 running on arm64 on an NVIDIA Jetson Xavier, with an external monitor connected. I ran this command:
gst-launch-1.0 videotestsrc ! videoconvert ! autovideosink
However, it doesn't do anything, nor does it play any video. The terminal returns the following lines:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
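Since the pipeline reaches PLAYING but no window appears, a hedged next step is to raise the log level and to try a concrete sink instead of autovideosink; which sink works depends on the display session, so the elements below are educated guesses for this setup:
# raise the log level to surface warnings from whatever sink was auto-selected
GST_DEBUG=3 gst-launch-1.0 videotestsrc ! videoconvert ! autovideosink
# ximagesink assumes an X11 session; nveglglessink is NVIDIA's sink on Jetson boards
gst-launch-1.0 videotestsrc ! videoconvert ! ximagesink
gst-launch-1.0 videotestsrc ! videoconvert ! nveglglessink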

Put the webcam data to kinesis video stream

I'm very new to APIs, and our use case is to stream live data from a webcam to Kinesis Video Streams (KVS).
Steps taken: created an Ubuntu server on AWS and installed the C++ SDK.
Created the Kinesis video stream in AWS.
Downloaded and installed GStreamer on my local machine.
To push the RTSP sample data through GStreamer on the EC2 Ubuntu server, I ran the command below:
$ gst-launch-1.0 rtspsrc location="rtsp://YourCameraRtspUrl" short-header=TRUE ! rtph264depay ! video/x-h264,stream-format=avc,alignment=au ! kvssink stream-name="YourStreamName" storage-size=512 access-key="YourAccessKey" secret-key="YourSecretKey" aws-region="YourAWSRegion"
I'm getting the attached error. Maybe we need to open some port on EC2?
Suggestion required: how can I put my local webcam video into Kinesis?
thanks
If you use macOS, you can stream your webcam into Kinesis Video Streams with a command like the one below, setting the frame rate to 1 fps:
AWS_ACCESS_KEY_ID=<YOUR_AWS_ACCESS_KEY_ID>
AWS_SECRET_ACCESS_KEY=<YOUR_AWS_SECRET_ACCESS_KEY>
AWS_REGION=<YOUR_AWS_REGION>
STREAM_NAME=<YOUR_STREAM_NAME>
gst-launch-1.0 -v avfvideosrc \
! clockoverlay font-desc="Sans bold 60px" \
! videorate \
! video/x-raw,framerate=1/1 \
! vtenc_h264_hw allow-frame-reordering=FALSE realtime=TRUE max-keyframe-interval=2 bitrate=512 \
! h264parse \
! video/x-h264,stream-format=avc,alignment=au \
! kvssink stream-name="${STREAM_NAME}" storage-size=512 \
access-key="${AWS_ACCESS_KEY_ID}" \
secret-key="${AWS_SECRET_ACCESS_KEY}" \
aws-region="${AWS_REGION}" \
frame-timecodes=true \
framerate=1
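If you are on Ubuntu rather than macOS, a hedged Linux equivalent of the pipeline above would look like this; it assumes a V4L2 webcam at /dev/video0, the x264enc software encoder, and the same kvssink element built from the C++ producer SDK:
gst-launch-1.0 v4l2src device=/dev/video0 \
! videoconvert \
! x264enc tune=zerolatency bitrate=512 key-int-max=30 \
! h264parse \
! video/x-h264,stream-format=avc,alignment=au \
! kvssink stream-name="${STREAM_NAME}" storage-size=512 \
access-key="${AWS_ACCESS_KEY_ID}" \
secret-key="${AWS_SECRET_ACCESS_KEY}" \
aws-region="${AWS_REGION}"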
It's not immediately clear from your question what your scenario is. I assume that if you refer to a webcam, it's attached to your computer. Depending on your use case, you could check out the producer libraries that will help with the integration.
Java: https://github.com/awslabs/amazon-kinesis-video-streams-producer-sdk-java
C++: https://github.com/awslabs/amazon-kinesis-video-streams-producer-sdk-cpp
C: https://github.com/awslabs/amazon-kinesis-video-streams-producer-c
You could ask relevant questions and get support on GitHub by opening an issue in the appropriate repository.

Spring Cloud Data Flow : Cannot run program "docker"

I want to deploy Spring Boot applications using Kinesis streams on a Kubernetes cluster on AWS.
I used kops in an AWS EC2 (Amazon Linux) instance to create my cluster and deploy it using Terraform.
I installed Spring Cloud Data Flow for Kubernetes using the Helm chart. All my pods are up and running, and I can access the Spring Cloud Data Flow interface to register my dockerized apps. I am using ECR repositories to upload my Docker images.
When I want to deploy the stream (composed of a time-source and a log-sink), a big red error message pops up. I checked the log of the Skipper pod, and I have the following error message, starting with:
org.springframework.cloud.skipper.SkipperException: Could not install AppDeployRequest
and finishing with:
Caused by: java.io.IOException: Cannot run program "docker" (in directory "/tmp/spring-cloud-deployer-5769885450333766520/time-log-kinesis-stream-1539963209716/time-log-kinesis-stream.log-sink-kinesis-app-v1"): error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048) ~[na:1.8.0_111-internal]
at org.springframework.cloud.deployer.spi.local.LocalAppDeployer$AppInstance.start(LocalAppDeployer.java:386) ~[spring-cloud-deployer-local-1.3.7.RELEASE.jar!/:1.3.7.RELEASE]
at org.springframework.cloud.deployer.spi.local.LocalAppDeployer$AppInstance.start(LocalAppDeployer.java:414) ~[spring-cloud-deployer-local-1.3.7.RELEASE.jar!/:1.3.7.RELEASE]
at org.springframework.cloud.deployer.spi.local.LocalAppDeployer$AppInstance.access$200(LocalAppDeployer.java:296) ~[spring-cloud-deployer-local-1.3.7.RELEASE.jar!/:1.3.7.RELEASE]
at org.springframework.cloud.deployer.spi.local.LocalAppDeployer.deploy(LocalAppDeployer.java:199) ~[spring-cloud-deployer-local-1.3.7.RELEASE.jar!/:1.3.7.RELEASE]
... 54 common frames omitted
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method) ~[na:1.8.0_111-internal]
at java.lang.UNIXProcess.<init>(UNIXProcess.java:247) ~[na:1.8.0_111-internal]
at java.lang.ProcessImpl.start(ProcessImpl.java:134) ~[na:1.8.0_111-internal]
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029) ~[na:1.8.0_111-internal]
... 58 common frames omitted
I already had this error when I tried to deploy on a local k8s cluster on Windows 10, and I thought it was linked to the Windows 10 platform.
I am using spring-cloud-dataflow-server-kubernetes at version 1.6.2.RELEASE.
I really have no clue why this error is appearing. Thanks!
It looks like the docker command is not found by the SCDF local deployer's ProcessBuilder when it tries to run docker from this path:
/tmp/spring-cloud-deployer-5769885450333766520/time-log-kinesis-stream-1539963209716/time-log-kinesis-stream.log-sink-kinesis-app-v1
SCDF sets the above path as its working directory before running the docker command, hence docker is expected to be runnable from that location.
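As a hedged sanity check of that theory, you can look inside the Skipper pod to see whether a docker binary is on the PATH at all (the pod name below is a placeholder):
# prints the docker path if present, otherwise reports that it is missing
kubectl exec -it <skipper-pod> -- sh -c 'command -v docker || echo "docker not on PATH"'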
I have found the issue. My bad, the problem is always between the keyboard and the chair!
I wanted to remove all the metrics processing in the skipper-config.yaml file, and I introduced a typo into the configuration file. The JSON environment variable data.spring.application.json for the Skipper launch was not valid, hence the DeployerInitializationService never saw the properties it needed to add Kubernetes into the repository!
Now in the logs and in the Data Flow shell I have the default and the minikube accounts. Thanks for your help anyway :)
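A cheap way to catch this class of typo before deploying is to run the embedded JSON through a validator; a minimal sketch, assuming the data.spring.application.json value has been copied out of skipper-config.yaml into a scratch file:
# jq parses the file and reports the exact position of any syntax error
jq . scratch.json
# the same check with Python's json module, if jq is not available
python -c 'import json,sys; json.load(open(sys.argv[1]))' scratch.json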