Error while calling the start POST call in Agora API - Postman

I am trying to make API calls to the Agora Cloud Recording API through their Postman Environment, but I am getting a 404 error during the query and stop calls. The acquire call returns a 200 response with the ResourceId and the start call also returns a 200 response with the sid.
I have enabled Cloud Recording functions from the Agora dashboard. I have also double-checked my bucket credentials.
This is what the start API body looks like:
{
    "cname": "bhavya",
    "uid": "123",
    "clientRequest": {
        "token": "{{token}}",
        "recordingConfig": {
            "maxIdleTime": 120,
            "streamTypes": 2,
            "audioProfile": 1,
            "channelType": 1,
            "videoStreamType": 0,
            "transcodingConfig": {
                "width": 360,
                "height": 640,
                "fps": 30,
                "bitrate": 600,
                "mixedVideoLayout": 1,
                "maxResolutionUid": "1"
            }
        },
        "storageConfig": {
            "vendor": {{Vendor}},
            "region": {{Region}},
            "bucket": "{{Bucket}}",
            "accessKey": "{{AccessKey}}",
            "secretKey": "{{SecretKey}}"
        }
    }
}
Moreover, using their interactive documentation gives me a 400 Bad Request error in the start step. This is the error received:
{
    "code": 2,
    "reason": "response detail error:2,errMsg:uid inside the List can't be convert to uint32_t!"
}
Am I missing some step while setting up the project? What could the solution be?

An expired or invalid token will still return 200 OK for the start request, but the recording does not actually start. Querying the status or calling stop afterwards then returns a 404 error, because no recording is in progress.
You must use a valid token when acquiring the resource, when starting the recording, and in every other request where a token is required.
Also, don't forget to add the authorization token in the request header.
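For reference, here is a minimal sketch of an authenticated acquire call in Python. It assumes Agora's RESTful API Basic HTTP authentication (Customer ID and Customer Secret from the Agora console); the app ID and credentials below are placeholders.
import requests
from requests.auth import HTTPBasicAuth

# Placeholder values -- substitute your own app ID and RESTful API credentials.
APP_ID = "your_app_id"
CUSTOMER_ID = "your_customer_id"
CUSTOMER_SECRET = "your_customer_secret"

resp = requests.post(
    f"https://api.agora.io/v1/apps/{APP_ID}/cloud_recording/acquire",
    json={"cname": "bhavya", "uid": "123", "clientRequest": {}},
    # HTTPBasicAuth sets the Authorization header Agora expects
    auth=HTTPBasicAuth(CUSTOMER_ID, CUSTOMER_SECRET),
)
print(resp.status_code, resp.json())  # expect 200 and a resourceId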

The main reason for me was the region of the cloud storage (I was using Amazon S3).
So here are the main things to check:
Remember the region you set during the acquire call. For example, I set it to AP, which means Asia Pacific.
Create the bucket (in AWS S3 or elsewhere) in the same region used in the acquire call. For example, I used AP in the acquire call, so my bucket should be in Asia Pacific (Mumbai), ap-south-1.
Now, in the storage configuration of the start call, the region must be set to match the bucket's region; in my case 14 (AP_SOUTH_1).
Make sure to check the documentation here for the required region codes:
https://docs.agora.io/en/cloud-recording/cloud_recording_api_start?platform=RESTful#cloud-storage-configuration
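As a concrete illustration with hypothetical values, a storageConfig for a bucket in ap-south-1 would look like this. Vendor 1 is Amazon S3 and region 14 is AP_SOUTH_1 in Agora's numbering, but verify both codes against the table linked above:
storage_config = {
    "vendor": 1,        # 1 = Amazon S3 (per Agora's vendor table; verify)
    "region": 14,       # 14 = AP_SOUTH_1, i.e. ap-south-1 (Mumbai)
    "bucket": "my-recordings-bucket",   # must actually live in ap-south-1
    "accessKey": "...",
    "secretKey": "...",
}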

Related

How can I manually specify an X-Cloud-Trace-Context header value to correlate and trace logs in separate Cloud Run requests?

I'm using Cloud Run and Cloud Tasks to do some async processing of webhooks. When I get a request to my Cloud Run service, I queue up a task in my Cloud Tasks queue and return a response from my service immediately. Cloud Tasks will then trigger my service again (different endpoint) and do some processing. I want to correlate all the logs in these steps by using the same trace id, but it is not working.
When creating a task in Cloud Tasks, I request it to send the X-Cloud-Trace-Context header and I fill it with the original request's X-Cloud-Trace-Context header value. Theoretically, when the request comes to my Cloud Run service from Cloud Tasks, it should have this header value, and all my logs will be correlated correctly. However, when this second request comes, it looks like Cloud Run is overriding the header with a new trace id.
Is there a way to prevent this from happening? If not, what is the recommended solution to correlate all the logs (generated by service code and also the logs auto generated by GCP) in the steps described above?
Thanks for the help!
We found that passing the traceparent header along into the Cloud Task works. The trace id is preserved, and a new span/parent id is automatically assigned by Cloud Run.
# Forward the incoming request's traceparent header on the queued task
task = {
    "http_request": {
        "url": url,
        "headers": {
            "traceparent": request.headers.get('traceparent', "")
        }
    }
}
Note it also appears to work with "X-Cloud-Trace-Context", but you have to split the value and only pass along the trace id (the Cloud Run header value looks like "trace_id/span_id;flags" -- you have to split out just the trace_id and set that as the task header value). Otherwise Cloud Run seems to consider the header invalid and, as you mentioned, sets a whole new trace context.
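A sketch of that split, continuing the task dict above:
# X-Cloud-Trace-Context arrives as "TRACE_ID/SPAN_ID;o=FLAGS";
# keep only the trace id before forwarding it.
raw = request.headers.get("X-Cloud-Trace-Context", "")
task["http_request"]["headers"]["X-Cloud-Trace-Context"] = raw.split("/")[0]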
As a related note: while this gets the right header into place, you still need to actually log the trace_id in some fashion for your logs to correlate. It looks to me like the logs generated by Cloud Run itself do this, but I had to configure my logger so that my own logs would also be correlated.
I don't think you can override the HTTP headers set by Cloud Tasks, but you can override the trace member in the log records sent to Stackdriver.
So you could include the original trace ID in the task payload and then override the trace in the logs produced by your Cloud Run endpoint that performs the real work.
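A minimal sketch of that approach, assuming the trace id travels in the task payload and the service writes structured JSON logs to stdout (Cloud Logging correlates entries through the logging.googleapis.com/trace field):
import json
import os

def log_with_trace(message, trace_id):
    # Cloud Logging parses JSON lines on stdout; the special
    # "logging.googleapis.com/trace" key correlates entries to a trace.
    project = os.environ.get("GOOGLE_CLOUD_PROJECT", "my-project")  # placeholder fallback
    print(json.dumps({
        "severity": "INFO",
        "message": message,
        "logging.googleapis.com/trace": f"projects/{project}/traces/{trace_id}",
    }))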

Media Tailor ad returning 504 error in AWS

I'm using AWS Media Tailor to test an ad inserting demo. The demo page is this one: https://github.com/aws-samples/aws-media-services-simple-vod-workflow/tree/master/12-AdMarkerInsertion.
When I place my manifest into a TheoPlayer I always get a 504 error. My manifest URL is: https://ebf348c58b834d189af82777f4f742a6.mediatailor.us-west-2.amazonaws.com/v1/master/3c879a81c14534e13d0b39aac4479d6d57e7c462/MyTestCampaign/llama.m3u8.
I have also tried with: https://ebf348c58b834d189af82777f4f742a6.mediatailor.us-west-2.amazonaws.com/v1/master/3c879a81c14534e13d0b39aac4479d6d57e7c462/MyTestCampaign/llama_with_slates.m3u8.
The specific error is:
{"message":"failed to generate manifest: Unable to obtain template playlist. sessionId:[c915d529-3527-4e37-89e0-087e393e75de]"}
I have read about this error: https://docs.aws.amazon.com/mediatailor/latest/ug/playback-errors-examples.html
But don't know how to fix it.
Maybe I did something wrong, or do I need a quota increase in AWS?
Any idea?
Thanks for the inquiry!
The example in the linked docs shows the result when a timeout occurs between AWS Elemental MediaTailor and either the ad decision server (ADS) or the origin server.
An HTTP 504 error is known as a Gateway Timeout, meaning that a resource was unresponsive and prevented the request from completing successfully. In this case, since MediaTailor is returning an HTTP 504, either the ADS or the origin failed to respond within the timeout period.
To troubleshoot this, you will need to determine which dependency is failing to respond to MediaTailor and correct it. Typically the issue is the ADS failing to respond to a VAST request performed by MediaTailor, which you can confirm by reviewing your CloudWatch logs.
https://docs.aws.amazon.com/mediatailor/latest/ug/monitor-cloudwatch-ads-logs.html
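For example, a quick way to scan those logs with boto3 (the log group name below is the one MediaTailor typically uses for ADS interactions; confirm it exists in your account and region):
import boto3

logs = boto3.client("logs")

# MediaTailor writes ad-decision-server interaction logs here (verify the name).
resp = logs.filter_log_events(
    logGroupName="MediaTailor/AdDecisionServerInteractions",
    filterPattern="ERROR",  # narrow to failures; adjust as needed
)
for event in resp["events"]:
    print(event["message"])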
Make sure that your ADS follows the guidelines listed below for integrating with MediaTailor.
https://docs.aws.amazon.com/mediatailor/latest/ug/vast-integration.html

AWS Storage gateway : refresh cache Too many requests have been sent to server

I am calling the AWS Storage Gateway refreshCache method too frequently, I guess (as the message suggests), but I am not sure how long I need to wait before I can call it again. Any help will be appreciated.
// Build a Storage Gateway client and request a cache refresh for the file share
AWSStorageGateway gatewayClient = AWSStorageGatewayClientBuilder.standard().build();
RefreshCacheRequest cacheRequest = new RefreshCacheRequest();
cacheRequest.setFileShareARN(this.fileShareArn);
gatewayClient.refreshCache(cacheRequest);
com.amazonaws.services.storagegateway.model.InvalidGatewayRequestException: Too many requests have been sent to server. (Service: AWSStorageGateway; Status Code: 400; Error Code: InvalidGatewayRequestException; Request ID: f1ffa249-6908-4ae1-9f71-93fe7f26b2af)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1113)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770)
I think you can refer to the official documentation: https://docs.aws.amazon.com/storagegateway/latest/APIReference/API_RefreshCache.html
As it says,
When this API is called, it only initiates the refresh operation. When the API call completes and returns a success code, it doesn't necessarily mean that the file refresh has completed. You should use the refresh-complete notification to determine that the operation has completed before you check for new files on the gateway file share.
So I guess that after you call the AWS Storage Gateway refreshCache method, you must wait until the refresh action has completed. If you call the method again during this period, an exception like the one above is raised.
For the solution, you can refer to Monitoring Your File Share to set a notification.
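Since the API itself doesn't tell you how long to wait, one pragmatic option is to retry with a backoff when this particular error comes back. A sketch in Python with boto3 (the retry count and delays are arbitrary assumptions):
import time
import boto3
from botocore.exceptions import ClientError

client = boto3.client("storagegateway")

def refresh_with_backoff(file_share_arn, retries=5):
    delay = 30  # seconds; arbitrary starting point
    for _ in range(retries):
        try:
            return client.refresh_cache(FileShareARN=file_share_arn)
        except ClientError as e:
            if e.response["Error"]["Code"] != "InvalidGatewayRequestException":
                raise
            time.sleep(delay)  # give the in-flight refresh time to finish
            delay *= 2         # exponential backoff between attempts
    raise RuntimeError("refresh_cache still rejected after retries")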

AWS API Gateway fails with "Unable to invoke" when query string contains key without value

There is a resource /{myvar} defined in API Gateway, with a GET method. The integration request points to a Lambda function, with Lambda proxy integration enabled.
When I invoke a test execution from the API Gateway Resource Editor for this resource and method, it works for queries like
/abc
/abc?def=ghi
but it fails to execute a query like
/abc?def
with following response body visible in test console:
{
    "cause": "Unable to invoke. Please try again later.",
    "logref": "f6c905bd-cc71-11e8-a731-37e05a411010",
    "message": ""
}
and also Response Headers and Logs boxes below are empty.
When I publish such a resource to a stage, accessing it over HTTPS in a browser results in {"message": "Internal server error"}. See edit below.
How do I deal with that? How can I capture the whole resource path, with or without a query, without the Gateway failing? It fails the same way for the greedy resource /{myvar+}.
EDIT
After redeployment, the problem no longer occurs on the stage. It still occurs within the Method Execution window in the API Gateway Resource Editor.
You can capture the query string params through the "event" object sent to your function handler; the field is called queryStringParameters. You can just log it and look through the output in CloudWatch to see what exactly is failing.
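A minimal sketch of such a handler (Python, proxy integration assumed; how an empty-valued key shows up -- empty string or null -- is worth checking in the logged output):
import json

def handler(event, context):
    # With Lambda proxy integration, API Gateway puts the raw query string
    # parameters on the event; log them to see what "?def" arrives as.
    print("queryStringParameters:", json.dumps(event.get("queryStringParameters")))
    return {
        "statusCode": 200,
        "body": json.dumps({
            "path": event.get("path"),
            "query": event.get("queryStringParameters"),
        }),
    }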
P.S. Sorry for posting as an answer, don't have rep for comment ^^

Api gateway get output results from step function?

I followed a tutorial on creating and invoking Step Functions.
I'm getting this output from my API's GET request:
{
    "executionArn": "arn:aws:states:ap-northeast-1:123456789012:execution:HelloWorld:MyExecution",
    "startDate": 1.486772644911E9
}
But instead of the above response, I want my step function's output, which is given by the end state, as below:
{
    "name": "Hello World"
}
How to achieve this?
Update: You can now use Express Step Functions for synchronous requests.
AWS Step Functions are asynchronous and do not immediately return their results. API Gateway methods are synchronous and have a maximum timeout of 29 seconds.
To get the function output from a Step Function, you have to add a second method in API Gateway which will call the Step Function with the DescribeExecution action. The API Gateway client will have to call this periodically (poll) until the returned status is no longer "RUNNING".
Here's the DescribeExecution documentation
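A sketch of that polling loop with boto3 (field names per the DescribeExecution API; the two-second interval is an arbitrary choice):
import time
import boto3

sfn = boto3.client("stepfunctions")

def wait_for_result(execution_arn, poll_seconds=2):
    # Poll DescribeExecution until the execution leaves RUNNING,
    # then return its output (present only on success).
    while True:
        desc = sfn.describe_execution(executionArn=execution_arn)
        if desc["status"] != "RUNNING":
            return desc.get("output")
        time.sleep(poll_seconds)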
Use Express Step Functions instead. This type of Step Function can be called synchronously. Go to your API Gateway and, in the Integration Request section, make sure the action is set to StartSyncExecution.
After that, go a bit lower on the same page, to the Mapping Templates section, and include the following template for the application/json Content-Type:
#set($input = $input.json('$'))
{
    "input": "$util.escapeJavaScript($input)",
    "stateMachineArn": "arn:aws:states:us-east-1:your_aws_account_id:stateMachine:your_step_machine_name"
}
After that, go back to Method Execution, open the Integration Response, and then its Mapping Templates section. Use the following template to get a custom response from your state machine:
#set ($parsedPayload = $util.parseJson($input.json('$.output')))
$parsedPayload
Deploy your API Gateway stage.
Now, if you go to Postman and send a POST request with any JSON body, you will get the state machine's output back as the response.
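The same request from code, for illustration (the invoke URL is a placeholder):
import requests

# Placeholder stage URL -- use your API's invoke URL.
resp = requests.post(
    "https://abc123.execute-api.us-east-1.amazonaws.com/prod/execute",
    json={"name": "Hello World"},
)
print(resp.json())  # the state machine output, per the response mapping above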
New Synchronous Express Workflows for AWS Step Functions is the answer:
https://aws.amazon.com/blogs/compute/new-synchronous-express-workflows-for-aws-step-functions/
Amazon API Gateway now supports integration with Step Functions StartSyncExecution for HTTP APIs:
https://aws.amazon.com/about-aws/whats-new/2020/12/amazon-api-gateway-supports-integration-with-step-functions-startsyncexecution-http-apis/
First of all, Step Functions execute asynchronously, and API Gateway is only capable of invoking the step function (starting a flow).
If you are waiting for the results of a Step Function invocation from a web application, you can use AWS IoT WebSockets for this. The steps are as follows.
Set up an AWS IoT topic with WebSockets.
Configure the API Gateway and Step Functions invocation.
From the web frontend, subscribe to the IoT topic as a WebSocket listener.
At the last step (and in error steps) of the Step Functions workflow, use the AWS SDK to publish to the IoT topic, which broadcasts the results to the web app running in the browser over WebSockets (see the sketch below).
For more details on WebSockets with AWS IoT, refer to the Medium article Receiving AWS IoT messages in your browser using websockets.
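The publish step from the workflow side might look like this (Python/boto3; the topic name and payload are placeholders):
import json
import boto3

iot = boto3.client("iot-data")

# Called from the final (or error) state of the workflow; the topic must be
# the one the frontend subscribed to.
iot.publish(
    topic="stepfunctions/results",  # placeholder topic name
    qos=1,
    payload=json.dumps({"name": "Hello World"}),
)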
Expanding on what @MikeD at AWS says: if you're certain that the Step Function won't exceed the 29-second API Gateway timeout, you could create a Lambda that executes the step function and then blocks as it polls for the result. Once it has the result, it can return it.
It is a better idea to have the first call return immediately with the execution id, and then pass that id into a second call to retrieve the result, once it's finished.