I created a scheduled live video with Facebook Live Producer. However, when I try to fetch it using the me/live_videos?&source=owner API, I get an empty list.
I am only able to get the video when it is currently live or after the broadcast has ended.
How do I get an unpublished live video? I tried filtering it with broadcast_status, but I get the same result.
Can I only see a scheduled live video if it was created through the Facebook Live Video API?
I had the same issue; this is how I solved it.
You need to append &source=owner when requesting scheduled live streams.
Example
node-id/live_videos?broadcast_status=["UNPUBLISHED"]&source=owner
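For illustration, a minimal sketch of making that request from Python with the requests library; the Graph API version, node ID and access token are placeholders you would substitute with your own values:

import requests

# Placeholders: substitute your own page/user ID and access token.
NODE_ID = "your-page-or-user-id"
ACCESS_TOKEN = "your-page-access-token"

resp = requests.get(
    f"https://graph.facebook.com/v12.0/{NODE_ID}/live_videos",
    params={
        # JSON-style array of statuses; UNPUBLISHED covers scheduled broadcasts.
        "broadcast_status": '["UNPUBLISHED"]',
        # Without source=owner, scheduled (not-yet-live) videos are not returned.
        "source": "owner",
        "access_token": ACCESS_TOKEN,
    },
)
resp.raise_for_status()
for video in resp.json().get("data", []):
    print(video.get("id"), video.get("status"))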
I am trying to get a returns report from Amazon, but my request is always cancelled. I have a working report request using
'ReportType' => 'GET_MERCHANT_LISTINGS_DATA',
'ReportOptions' => 'ShowSalesChannel=true'
I modified it by changing the ReportType and removing ReportOptions. MWS accepts the request, but it is always cancelled. I also tried to find a working example on Google, but without success. Maybe someone has a working example? I can download the report when I send the request from the Amazon web page. I suppose it requires ReportOptions, but I don't know what to put there (the only information I get is ReportProcessingStatus CANCELLED). Normally I choose Day, Week, or Month. I checked the Amazon docs, but there isn't much information: https://docs.developer.amazonservices.com/en_US/reports/Reports_RequestReport.html
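For context, this is roughly how the parameters of such a RequestReport call are assembled (sketched in Python; the seller ID and AWS access key are placeholders, and the request signing and HTTP POST that MWS requires are omitted). It reuses the known-working listings report type from above, since the failing report type is not shown:

from datetime import datetime, timedelta, timezone

# Placeholders: substitute your own MWS credentials and marketplace endpoint.
params = {
    "Action": "RequestReport",
    "SellerId": "YOUR_SELLER_ID",
    "AWSAccessKeyId": "YOUR_AWS_ACCESS_KEY_ID",
    "Version": "2009-01-01",                     # Reports API version
    "ReportType": "GET_MERCHANT_LISTINGS_DATA",  # the working report type from above
    "ReportOptions": "ShowSalesChannel=true",
    # Optional date range for the requested report.
    "StartDate": (datetime.now(timezone.utc) - timedelta(days=7)).isoformat(),
    "EndDate": datetime.now(timezone.utc).isoformat(),
}
# These parameters would then be signed and POSTed to the Reports
# endpoint for your marketplace, as required by MWS.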
Any ideas?
When getting the call log data using the URL below:
https://platform.ringcentral.com/restapi/v1.0/account/~/call-log?view=Simple&dateFrom='+Datetime.now().format('yyyy-MM-dd')+'&page=1&perPage=10000
this data does not match the inbound and outbound counts in Live Reports.
Is there any way to get Live Reports data using an API call?
Live Reports provides graphical representations built from internal metrics and metadata. It is an independent system, and I'm not sure whether it uses data from the RingCentral call log.
The API you are using is the call log API with parameters; its output will not be the same as the Live Reports data, and there will be some differences between the two.
RingCentral Live Reports uses Call Session Notification (CSN) events as the underlying data for its metrics, so you can replicate the results by subscribing to the following CSN event filters (a subscription sketch follows the links below):
[
"/restapi/v1.0/account/{accountId}/telephony/sessions",
"/restapi/v1.0/account/{accountId}/extension/~/telephony/sessions"
]
Read more here:
Account Telephony Sessions Event
Extension Telephony Sessions Event
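For reference, a minimal sketch of creating such a subscription over the REST API, shown in Python with the requests library; the OAuth access token and webhook address are placeholders, and error handling is omitted:

import requests

ACCESS_TOKEN = "your-oauth-access-token"                  # placeholder
WEBHOOK_URL = "https://example.com/ringcentral-webhook"   # placeholder, must be publicly reachable

resp = requests.post(
    "https://platform.ringcentral.com/restapi/v1.0/subscription",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={
        # The CSN event filters from above; "~" refers to the current account.
        "eventFilters": [
            "/restapi/v1.0/account/~/telephony/sessions",
            "/restapi/v1.0/account/~/extension/~/telephony/sessions",
        ],
        "deliveryMode": {
            "transportType": "WebHook",
            "address": WEBHOOK_URL,
        },
    },
)
resp.raise_for_status()
print(resp.json().get("id"), resp.json().get("status"))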
I tried to create a response card using the console, but it doesn't show up. Previously there was an option in the slot (prompt), and now it is not showing up.
I'm building a chatbot with Amazon Lex, and I want a response card in Facebook Messenger. I have been doing this without a Lambda function, and there used to be an option to display a card in the prompt (slot). However, when I tried to enable a response card yesterday, the prompt no longer had that option.
According to the Amazon Lex documentation the card should work, but in my case there is not even an option to enable a card from the prompt.
Just enable the message inside Response,
then put in any message;
after that you can enable Response cards.
Maybe this can solve your problem.
I’ve created a Lex bot that is integrated with an Amazon Connect workflow. The bot is invoked when the user calls the phone number specified in the Connect instance, and the bot itself invokes a Lambda function for initialisation & validation and for fulfilment. The bot asks several questions that require the caller to provide simple responses. It all works OK; so far so good. I would like to add a final question that asks the caller for their comments. This could be any spoken text, including non-English words. I would like to be able to capture this Comment slot value as an audio stream or file, perhaps for storage in S3, with the goal of emailing a call centre administrator and providing the audio file as an MP3 or WAV attachment. Is there any way of doing this in Lex?
I’ve seen mention of ‘User utterance storage’ here: https://aws.amazon.com/blogs/contact-center/amazon-connect-with-amazon-lex-press-or-say-input/, but there’s no such setting visible in my Lex console.
I’m aware that Connect can be configured to store a recording in S3, but I need to be able to access the recording for the current phone call from within the Lambda function in order to attach it to an email. Any advice on how to achieve this, or suggestions for a workaround, would be much appreciated.
Thanks
Amazon Connect call recording can only record conversations once an agent accepts the call. Currently, Connect cannot record voice in the Contact Flows. So, as far as getting the raw audio from Connect goes, that is not possible.
However, it looks like you can get it from Lex if you develop an external application (which could be a Lambda function) that retrieves utterances: https://docs.aws.amazon.com/lex/latest/dg/API_GetUtterancesView.html
I also do not see an option to enable or disable user utterance storage in Lex, which makes me think that, by default, all utterances are recorded: https://docs.aws.amazon.com/lex/latest/dg/API_DeleteUtterances.html
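As a rough sketch, here is how you might call GetUtterancesView from Python with boto3; the bot name is a placeholder, and note that this returns aggregated utterance text and counts rather than raw audio:

import boto3

# Lex Model Building Service client (for bots built in the Lex V1 console).
lex_models = boto3.client("lex-models")

# Placeholder bot name; only a handful of bot versions can be queried per call.
response = lex_models.get_utterances_view(
    botName="MyConnectBot",
    botVersions=["$LATEST"],
    statusType="Detected",   # or "Missed" for utterances Lex could not map to an intent
)

# Each entry groups utterance statistics by bot version.
for version_data in response.get("utterances", []):
    for utterance in version_data.get("utterances", []):
        print(utterance["utteranceString"], utterance["count"])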
I am using a microphone which records sound through a browser, converts it into a file, and sends the file to a Java server. My Java server then sends the file to the Cloud Speech API and gives me the transcription. The problem is that the transcription takes very long (around 3.7 s for 2 s of dialogue).
So I would like to speed up the transcription. The first thing to do is to stream the data (i.e. start the transcription at the beginning of the recording). The problem is that I don't really understand the API. For instance, if I want to transcribe my audio stream from the source (browser/microphone), I need to use some kind of JS API, but I can't find anything I can use in a browser (we can't use Node like this, can we?).
Otherwise I need to stream my data from my JS to my Java server (not sure how to do that without corrupting the data...) and then push it through streamingRecognizeFile from here: https://github.com/GoogleCloudPlatform/java-docs-samples/blob/master/speech/cloud-client/src/main/java/com/example/speech/Recognize.java
But it takes a file as input, so how am I supposed to use it? I can't really tell the system whether or not I have finished recording... How will it know it has reached the end of the transcription?
I would like to create something in my web browser just like the Google demo here:
https://cloud.google.com/speech/
I think there is something fundamental I don't understand about how to use the streaming API. If someone could explain a bit how I should proceed, that would be awesome.
Thank you.
Google "Speech-to-Text typically processes audio faster than real-time, processing 30 seconds of audio in 15 seconds on average" [1]. You can use Google APIs Explorer to test exactly how long your each request would take [2].
To speed up the transcribing you may try to add recognition metadata to your request [3]. You can provide phrase hints if you are aware of the context of the speech [4]. Or use enhanced models to use special set of machine learning models [5]. All these suggestions would improve the accuracy and might have effects on transcribing speed.
When using the streaming recognition, in config you can set singleUtterance option to True. This will detect if user pause speaking and cease the recognition. If not streaming request will continue until to the content limit, which is 1 minute of audio length for streaming request [6].
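As an illustration, a minimal sketch of a streaming request with single_utterance enabled, using the Google Cloud Speech Python client; the encoding, sample rate and the source of the audio chunks are assumptions you would adapt to your own stream:

from google.cloud import speech

client = speech.SpeechClient()

# Assumed audio format: 16 kHz, 16-bit linear PCM; adapt to your recorder.
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)
streaming_config = speech.StreamingRecognitionConfig(
    config=config,
    single_utterance=True,   # stop recognizing once the speaker pauses
)

def audio_requests(chunks):
    """Wrap raw audio chunks (bytes) from the client in streaming requests."""
    for chunk in chunks:
        yield speech.StreamingRecognizeRequest(audio_content=chunk)

def transcribe(audio_chunks):
    # audio_chunks would be an iterator over the bytes received from the browser.
    responses = client.streaming_recognize(streaming_config, audio_requests(audio_chunks))
    for response in responses:
        for result in response.results:
            if result.is_final:
                print(result.alternatives[0].transcript)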