How can I get server and stream key from AWS MediaLive - amazon-web-services

I need to configure live streaming using OBS Studio. Live streaming has already been set up on AWS using Elemental MediaLive (via the auto wizard), but I am unable to figure out how to find the server address and stream key that are required to configure OBS Studio.
Can someone please guide me to where I can find this information in the AWS console?
Thanks

That is the "stream name" in the RTMP ingest endpoint URL. The format would be something like
rtmp://domain:1935/live/streamname
The stream key in OBS is streamname; the server URL is the part before it (rtmp://domain:1935/live).
For details, please refer to pages 7 and 8 of this PDF: https://d2908q01vomqb2.cloudfront.net/fb644351560d8296fe6da332236b1f8d61b2828a/2020/04/14/Connecting-OBS-Studio-to-AWS-Media-Services-in-the-Cloud-v2.pdf
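If you prefer to pull this out of the API instead of the console, here is a minimal sketch (assuming boto3 credentials are configured and a hypothetical input ID) that reads the RTMP push input's destination URLs and splits each into the OBS server URL and stream key:

# Minimal sketch: derive the OBS "Server" and "Stream Key" from a MediaLive RTMP push input.
# INPUT_ID is a hypothetical placeholder for your input's ID.
import boto3

INPUT_ID = "1234567"

medialive = boto3.client("medialive")
input_info = medialive.describe_input(InputId=INPUT_ID)

for destination in input_info["Destinations"]:
    url = destination["Url"]                 # e.g. rtmp://1.2.3.4:1935/live/streamname
    server, stream_key = url.rsplit("/", 1)  # drop the trailing <stream_name>
    print("OBS Server:    ", server)         # rtmp://1.2.3.4:1935/live
    print("OBS Stream Key:", stream_key)     # streamname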

Have a look here:
https://aws.amazon.com/de/blogs/media/connecting-obs-studio-to-aws-media-services-in-the-cloud/
Especially this document:
https://d2908q01vomqb2.cloudfront.net/fb644351560d8296fe6da332236b1f8d61b2828a/2020/04/14/Connecting-OBS-Studio-to-AWS-Media-Services-in-the-Cloud-v2.pdf
Where it says:
STEP C: CONFIGURE THE APPLIANCE
Launch OBS Studio on the source system. Choose Settings to open the settings window. Choose Stream to access the streaming settings.
Complete the fields:
For Stream Type, choose Custom Streaming Server.
For URL, copy one of the endpoint URLs from the input you created in Step B. Remove the /<stream_name> at the end of the URL.
For Stream key, type the stream name.
Leave the Use authentication box unchecked.
Choose Apply to save your changes.

What OBS refers to as the "stream key" is the application instance (the stream name) of the MediaLive input.
Regards,

Related

How to access the S3 prefix when auto-recording with Amazon IVS?

Today I tried to use Amazon Interactive Video Service with auto-record to an Amazon S3 bucket. The problem is that after the live stream ends, I want to get the recorded video from S3. Following the documentation, I get a path/prefix like the one below:
/ivs/v1/<aws_account_id>/<channel_id>/<year>/<month>/<day>/<hours>/<minutes>/<recording_id>
I can find every part of the path in the response JSON except
<recording_id>
FYI, the recording_id is created when I start the live stream, but I cannot get it in any response. So how can I get the recording ID from a response JSON so that I can access the path of the video recorded in S3?
According to the AWS IVS documentation, you'll have to subscribe to the IVS Recording State Change event. (see link)
In the event data, there's a key called recording_s3_key_prefix that will be in the following format:
"recording_s3_key_prefix": "ivs/v1/123456789012/AbCdef1G2hij/2020/6/23/20/12/j8Z9O91ndcVs"
You'll be able to get the full path from there or if you just want the recording_id you may extract it from this key as well.
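For example, a minimal Lambda handler (just a sketch, assuming an EventBridge rule for the IVS recording-state-change event targets it and the detail payload matches the example above) could extract both the prefix and the recording_id:

def handler(event, context):
    # EventBridge delivers the IVS event; the S3 prefix sits in the "detail" section.
    detail = event.get("detail", {})
    prefix = detail["recording_s3_key_prefix"]
    # e.g. "ivs/v1/123456789012/AbCdef1G2hij/2020/6/23/20/12/j8Z9O91ndcVs"
    recording_id = prefix.rstrip("/").split("/")[-1]  # the last path segment is the recording_id
    print("S3 prefix:   ", prefix)
    print("recording_id:", recording_id)
    return {"prefix": prefix, "recording_id": recording_id}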

Recover EFS with aws start-restore-job in OneZone

I didn't find the AvailabilityZoneName parameter in the startRestoreJob SDK
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Backup.html#startRestoreJob-property
For this reason, when I restore the snapshot, it is created as REGIONAL.
The AWS console itself allows you to select this when you restore. Does anyone know a solution?
I was confronted with the same problem; the documentation seems out of sync. I checked CloudTrail, but the request parameters are replaced with a HIDDEN_DUE_TO_SECURITY_REASONS placeholder...
But with Chrome's developer tools you can see the metadata attributes the console sends to the server: you need to use the availabilityZoneName and singleAzFilesystem parameters.
You can pass the file system type information in the startRestoreJob API in the Metadata property.
To see the values allowed, you can call the GetRecoveryPointRestoreMetadata API to get the Metadata value for your recovery point, and then pass the values you get to the StartRestoreJob API.
Docs for the GetRecoveryPointRestoreMetadata API: https://docs.aws.amazon.com/aws-backup/latest/devguide/API_GetRecoveryPointRestoreMetadata.html
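Putting the two answers together, a sketch with boto3 might look like the following. The singleAzFilesystem and availabilityZoneName keys are the ones observed in the console request above (not documented parameters), and the vault name, ARNs, and AZ are placeholders; check GetRecoveryPointRestoreMetadata for what your recovery point actually stores.

import boto3

backup = boto3.client("backup")

# 1) Inspect the restore metadata stored with the recovery point.
meta = backup.get_recovery_point_restore_metadata(
    BackupVaultName="my-vault",                                                     # placeholder
    RecoveryPointArn="arn:aws:backup:region:111122223333:recovery-point:example",   # placeholder
)
restore_metadata = meta["RestoreMetadata"]

# 2) Add the One Zone parameters the console sends, then start the restore job.
restore_metadata.update({
    "singleAzFilesystem": "true",           # key taken from the console request
    "availabilityZoneName": "eu-west-1a",   # placeholder AZ
})
job = backup.start_restore_job(
    RecoveryPointArn=meta["RecoveryPointArn"],
    Metadata=restore_metadata,
    IamRoleArn="arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",  # placeholder
)
print(job["RestoreJobId"])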

Connect BigQuery as a source to Data Fusion in another GCP project

I am trying to connect BigQuery in ProjectA to Data Fusion in ProjectB, and it's asking me to enter a service key file. I have tried uploading the service key file to Cloud Storage in ProjectB and providing the link, but it's asking me for a local file path.
Can someone help me with this?
Thanks in advance.
Can you try this: grant the BigQuery permissions of project A to the Data Fusion service accounts of project B:
service-<project_number>@gcp-sa-datafusion.iam.gserviceaccount.com
<project_number>-compute@developer.gserviceaccount.com
Steps:
1. Navigate to the customer project that contains the CDF instance and copy the project number (found on the Home page in the Project Info card).
2. Navigate to the project that contains the resources you would like to interact with.
3. In the sidebar, click on 'IAM & Admin'.
4. Click on 'Add' at the top of the page.
5. Provide the first service account name from the list above; be sure to replace <project_number> with the actual number you obtained in step 1.
6. Grant the Admin role for the resource you would like to interact with, e.g. BigQuery Admin for reading/writing to BigQuery. For BigQuery, you will also need to grant the BigQuery Data Owner role.
7. Repeat steps 5 and 6 for the second service account in the list above.
8. In your pipeline, ensure you define the correct Project ID for the sources/sinks. Using 'auto-detect' will default to the customer project that contains the CDF instance.
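If you'd rather script these grants than click through the console, here is a minimal sketch (assuming the google-api-python-client package, Application Default Credentials, and placeholder project identifiers) that adds both service accounts to the resource project's IAM policy:

from googleapiclient import discovery

RESOURCE_PROJECT = "project-a"       # project that owns the BigQuery datasets (placeholder)
CDF_PROJECT_NUMBER = "123456789012"  # project number of the Data Fusion project (placeholder)

members = [
    f"serviceAccount:service-{CDF_PROJECT_NUMBER}@gcp-sa-datafusion.iam.gserviceaccount.com",
    f"serviceAccount:{CDF_PROJECT_NUMBER}-compute@developer.gserviceaccount.com",
]
roles = ["roles/bigquery.admin", "roles/bigquery.dataOwner"]

crm = discovery.build("cloudresourcemanager", "v1")
policy = crm.projects().getIamPolicy(resource=RESOURCE_PROJECT, body={}).execute()
bindings = policy.setdefault("bindings", [])
for role in roles:
    # Extend an existing binding for the role if there is one, otherwise create it.
    binding = next((b for b in bindings if b["role"] == role), None)
    if binding is None:
        binding = {"role": role, "members": []}
        bindings.append(binding)
    binding["members"] = sorted(set(binding["members"]) | set(members))
crm.projects().setIamPolicy(resource=RESOURCE_PROJECT, body={"policy": policy}).execute()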
Can you try downloading the service key JSON file to your local computer, putting it into some folder, and providing the full path to that service key file in the BigQuery properties?

Stream Analytics reading csv from Event Hub

I am running into this issue, and I don't know how to get rid of this error:
Some possible reasons: 1) Malformed events 2) Input source configured...
I have developed a C# console app that writes a CSV like this into an Azure Event Hub:
datacol1;datacol2;datacol3
It's working fine; I developed a reader to check that the data is written correctly.
I tried to configure an Azure Stream Analytics job to read data from that Event Hub, but nothing arrives in the input. The logs from the Stream Analytics job say:
Some possible reasons: 1) Malformed events 2) Input source configured...
The Event Hub is working, but Stream Analytics is not... why?
Thanks a lot!
Looking at your input, it doesn't look like comma-separated CSV but rather ";"-separated values.
Please ensure the console app generates input that matches the job's input specification.
You can try your job with a pre-defined input through the portal - click on the input -> Test and see how it reacts.
You can also sample some data from the source in a similar way.
I just found the way out: every CSV input to Stream Analytics must have a header row!
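A minimal sketch of the producer side in Python (assuming the azure-eventhub v5 package instead of the original C# app, and placeholder connection details) that sends semicolon-delimited CSV with a header row:

from azure.eventhub import EventData, EventHubProducerClient

CONNECTION_STR = "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=..."  # placeholder
EVENTHUB_NAME = "my-eventhub"  # placeholder

# Header row first, then the data rows, all ";"-delimited.
payload = "datacol1;datacol2;datacol3\nvalue1;value2;value3\n"

producer = EventHubProducerClient.from_connection_string(
    conn_str=CONNECTION_STR, eventhub_name=EVENTHUB_NAME
)
with producer:
    batch = producer.create_batch()
    batch.add(EventData(payload))
    producer.send_batch(batch)

# In the Stream Analytics input, set the event serialization format to CSV and
# the delimiter to semicolon so the header and columns are parsed correctly.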

WOWZA LiveAutoRecord

I am stuck on a problem, so please help me get things clear.
Please read the following three points and help me out.
(1)
I have simply followed https://www.wowza.com/docs/how-to-start-and-stop-live-stream-recordings-programmatically-livestreamrecordautorecord-example#documentation
I have attached my Application.xml. When I publish the live stream named "test1" via FMLE, it gets recorded on the server, but when I run a different instance of FMLE on a different PC and publish the live stream named "test2", it does not get recorded; I think it goes into the previously recorded file "test1" (i.e. no separate file is recorded, although there should be two recorded files, test1 and test2).
Why is this happening?
Is com.wowza.wms.plugin.livestreamrecord.module.ModuleAutoRecordAdvancedExample meant for single-stream recording? That is, if I publish streams A, B, C, and D, will it record them into one single file? (Probably the output file would be A.mp4, since A was the first published stream?)
(2) What is the module at https://www.wowza.com/docs/how-to-start-and-stop-live-stream-recordings-programmatically-imediastreamactionnotify3#comments for?
I have implemented this code in Eclipse, successfully put the JAR in the lib folder, and configured everything. Again I am not able to record different streams under their corresponding names. If I publish stream1 and stream2, the desired output should be two different files (in the content folder), but again I see only a single file being recorded.
(3) Can I use ModuleLiveStreamRecord.java? This was in an older version of Wowza, but I have properly imported the required JAR and tested it.
My requirement is very simple:
As soon as users start publishing, Wowza should start live recording. If 10 users are publishing live, 10 files should be generated.
Don't make things more difficult than necessary (assuming you have Wowza 4.x; if you still have 3.x, then I highly recommend upgrading for free).
Open the Engine Manager (http://your.server.com:8088)
Go to "Applications" from the top menu
Select your application from the left menu (e.g. "live")
In the setup window for this application, click the blue Edit button
Enable "Record all incoming streams"
Click "Save"
Click the orange "Restart now" button at the top
Done
Every stream that is published via this application will now automatically be recorded. The default folder for recordings is the /content folder in your Wowza installation. You can change this on the same page under "Streaming File Directory" (make sure it's a directory on your local system, unless you understand very well how Wowza works).
The filename is always the streamname + ".mp4", but when you start a new recording while the file already exists, the old file will be renamed first.
Want to control recording manually? Start publishing first, then select "Incoming streams" from the left menu and use the big red dot button behind a stream name to start recording.
If your server produces any different behavior with regards to the file (re)naming or recording, then you may need to review your Wowza setup.
I appreciate your response, KBoek.
I sorted out the issue, but real debugging was needed since I was writing a custom module. I had to write a custom module for live auto-recording because I wanted HTTP authentication and a custom name for each live recording.
Thanks again