I am trying to stream local MP4 files via Amazon Kinesis Video Streams. When I try with their provided example here, it works fine. But when I put my own file and try to push it, I get this error on the AWS console screen:
The type of the media is not supported or could not be determined from the media codec ids: (track 1: V_MPEG4/ISO/AVC), (track 2: A_VORBIS).
I tried to convert the file with the provided command here, but on the local terminal I am getting this error:
onAckEvent AckEvent{ackEventType=ERROR, fragmentTimecode=109963, fragmentNumber='91343852333183234942317985614720708962629140175', errorCode=FRAGMENT_TIMECODE_LESSER_THAN_PREVIOUS, errorId=4004}
Then I tried adding -profile:v baseline to the command, but I am still getting the same error.
Any help would be appreciated.
I have created a Lambda function that extracts the audio stream from a video file using ffmpeg. I have also configured API Gateway as a trigger, and I am passing the file to the Lambda function in the request body.
The Lambda function works perfectly well with small files, but bigger files need a bit more time, and then I run into the API Gateway timeout, which to my understanding is capped at 29 seconds.
So when I trigger audio extraction from a bigger file, I hit this timeout and my API request fails to return any result, even though the transcoding still runs in the background and the file does get extracted. I was wondering what the best approach is to handle cases where the Lambda function takes longer to execute.
I was thinking of starting the transcoding in the background and simply returning a JSON message saying that the transcoding might take a couple of minutes, depending on the input file duration. But if I try to push ffmpeg to the background, I get an error that the destination file doesn't exist.
os.system(f"{ffmpeg} -loglevel panic -nostdin -i {in_video} -vn -c:a aac -ar 48000 -b:a 192K {out_audio} 2> /dev/null &")
This is the ffmpeg command that extracts the audio and transcodes it to AAC.
If I remove the 2> /dev/null & part of the command, it runs just fine, but if I keep it, I get an error:
"errorMessage": "[Errno 2] No such file or directory: 'output_audio.aac'"
"errorType": "FileNotFoundError"
So I was wondering: what is the preferred way to run processes in the background?
There are many options that can be considered.
But first, since you already have the whole flow working with Lambda behind API Gateway, you can use a Lambda function URL.
Function URLs are a good way to trigger Lambda via HTTPS. They support multiple authorization mechanisms, such as IAM.
The interesting point is the timeout. When using a Lambda function URL, the maximum timeout you can have is 15 minutes, which is definitely better than the 29 seconds you get with API Gateway.
Function URLs are free of charge and can be enabled on an existing Lambda function.
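If it helps, a minimal sketch of enabling one programmatically with boto3 (the function name here is just a placeholder) would be:

```python
import boto3

lambda_client = boto3.client("lambda")

# Attach a function URL to an existing function; "extract-audio" is a placeholder name.
# AuthType "AWS_IAM" requires SigV4-signed requests; "NONE" makes the URL public.
response = lambda_client.create_function_url_config(
    FunctionName="extract-audio",
    AuthType="AWS_IAM",
)

print(response["FunctionUrl"])  # the HTTPS endpoint you can call instead of API Gateway
```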
Increasing the timeout might just push the problem back until you have a very big file to convert. In the long run, it may be worth exploring other solutions, like uploading the file to S3 and using AWS Batch or spinning up an EC2 instance to process the file. That would require more architecture design and implementation work, though.
For longer processing, it is recommended to use asynchronous invocations, where the Lambda function is triggered, runs until completion, and does not block the caller. One option would be to upload the file to S3, configure the Lambda function to react to the S3 event, download the file from S3, process it, and upload the result to another S3 bucket once processing completes.
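A minimal sketch of such an S3-triggered handler, assuming ffmpeg is available in the runtime (for example via a Lambda layer) and using placeholder bucket names, could look like this:

```python
import os
import subprocess

import boto3

s3 = boto3.client("s3")
OUTPUT_BUCKET = "my-extracted-audio"  # placeholder destination bucket


def handler(event, context):
    # The S3 "ObjectCreated" event tells us which file was just uploaded.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    in_video = f"/tmp/{os.path.basename(key)}"
    out_audio = "/tmp/output_audio.aac"

    # Download the video, extract/transcode the audio to AAC, then upload the result.
    s3.download_file(bucket, key, in_video)
    subprocess.run(
        ["ffmpeg", "-nostdin", "-i", in_video, "-vn",
         "-c:a", "aac", "-ar", "48000", "-b:a", "192k", out_audio],
        check=True,
    )
    s3.upload_file(out_audio, OUTPUT_BUCKET, os.path.basename(key) + ".aac")
```

Since no HTTP caller is waiting on the response anymore, the function can simply run until ffmpeg finishes, up to the 15-minute Lambda limit.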
I've been using Dataflow and Pub/Sub for streaming for over a year, and today, without me changing anything, Dataflow stopped reading from Pub/Sub. At first I was getting the error below in my logging, but it stopped appearing once I updated Pub/Sub to the latest version and the Apache Beam SDK from 2.10.0 to 2.17.0:
20 streaming Windmill RPC errors for a stream, last was: org.apache.beam.vendor.grpc.v1p13p1.io.grpc.StatusRuntimeException: NOT_FOUND: Requested entity was not found.
I see the link below, but at the end it just says GCP is working on it and does not say whether the author did anything to fix the issue. How does this get fixed, and what is causing it?
Dataflow: streaming Windmill RPC errors for a stream
I'm using rtp_forward from the videoroom plugin in Janus-Gateway to stream WebRTC.
My target pipeline looks like this:
WebRTC --> Janus-Gateway --> (RTP_Forward) MediaLive RTP_Push Input
I've achieved this:
WebRTC --> Janus-Gateway --> (RTP-Forward) Janus-Gateway [Streaming Plugin]
I've tried multiple rtp_forward requests, like:
register = {"request": "rtp_forward", "publisher_id": 8097546391494614, "room": 1234, "video_port": 5000, "video_ptype": 100, "host": "medialive_rtp_input", "secret": "adminpwd"}
But MediaLive just doesn't receive any stream. Is there anything I'm missing?
I'm not familiar with AWS MediaLive: initially I thought that, since most media servers like this expect RTMP and not RTP, that was the cause of the issue, but it looks like it does indeed support a plain RTP input mode. At this point this is very likely a codec issue: probably MediaLive doesn't support the codecs your browser is sending (opus and vp8?). Looking at the supported codecs, this seems to be the issue: https://docs.aws.amazon.com/medialive/latest/ug/inputs-supported-containers-and-codecs.html
You can probably get video working if you use H.264 in the browser, but audio is always Opus and definitely not AAC, so you'll need an intermediate node to do transcoding.
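As a rough sketch, assuming the browser is already sending H.264, that rtp_forward points at a host where ffmpeg is listening on the ports described in a hand-written SDP file, and that your MediaLive input takes a transport stream over RTP (the SDP file name and endpoint are placeholders), the intermediate node could be as simple as:

```python
import subprocess

# Receive the RTP forwarded by Janus (described by a local SDP file),
# keep the H.264 video as-is, transcode the Opus audio to AAC, and push the
# result as MPEG-TS over RTP to the MediaLive input endpoint.
subprocess.run([
    "ffmpeg",
    "-protocol_whitelist", "file,udp,rtp",
    "-i", "janus_forward.sdp",      # ports here must match the rtp_forward request
    "-c:v", "copy",                 # assumes the browser is sending H.264
    "-c:a", "aac", "-ar", "48000",
    "-f", "rtp_mpegts",
    "rtp://<medialive-rtp-endpoint>:5000",
], check=True)
```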
Since you're using RTP push, are you pushing the stream to the correct RTP endpoint provided by AWS? If so, you can check the health alerts to see whether MediaLive received the stream but failed to read it or found it corrupted. You'll see an error on whichever of the pipelines you're pushing the stream to; if you don't see anything, that suggests a network problem. Try RTMP instead, since it runs over TCP, and you should at least see something in a packet capture.
https://docs.aws.amazon.com/medialive/latest/ug/monitoring-console.html
I am trying to implement a producer as described here (https://github.com/awslabs/amazon-kinesis-video-streams-producer-sdk-java/blob/master/src/main/demo/com/amazonaws/kinesisvideo/demoapp/PutMediaDemo.java).
I have an MKV file which I want to upload in a loop to act as a producer for a Kinesis video stream. But the program hangs on line 122 (latch.await()). It gets stuck at this line without giving any error, and I am not able to see anything on the Amazon video preview tab.
What am I doing wrong?
Line 122 (latch.await()) is waiting for an acknowledgement or a connection-close event. A firewall or network condition could be causing it to wait forever. Before you try your own MKV file, were you able to get the demo running with the sample MKV files and see playback in the console? Let us know if that succeeds in your environment.
I have faced this issue many times. While uploading or editing any file from FileZilla, I get this error message:
Error: error while writing: received failure with description 'Failure'.
After the upload, the file size is 0 bytes.
My server is AWS EC2 with a minimal instance type.
The "Failure" is an error message for error code 4, returned by the OpenSSH SFTP server for various problems, for which there's no more specific code in the SFTP protocol version 3. While the server should at least return a specific plain-text error message, it fails to do so.
Common reasons you may get the generic "Failure" error message while uploading are:
Uploading a file to a full filesystem (HDD).
Exceeding a user disk quota.
Both of these reasons would also explain the empty file left behind when the error occurs.
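If you want to quickly check the first case on the EC2 instance itself (assuming you can SSH in and run Python there; the path is a placeholder for the directory you upload into), something like this reports the remaining space:

```python
import shutil

# Point this at the directory FileZilla is uploading into.
usage = shutil.disk_usage("/home/ec2-user")
print(f"free: {usage.free / 1024**3:.2f} GiB of {usage.total / 1024**3:.2f} GiB")
```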
For details, see SFTP Status/Error Code 4 (Failure).
I created a file (without an extension) instead of a folder by mistake. When I tried to create a folder with the same name, I had the same error as yours.
To fix the issue, I removed the file and created the folder again.