Cloud Recording issue - amazon-web-services

I am trying out the Agora Cloud Recording API and attempting to record into an AWS S3 bucket. The calls appear to go through fine, and when I stop the recording I get a success message. I have reproduced part of it here:
{
  insertId: "5d66423d00012ad9d6d02f2b"
  labels: {
    clone_id: "00c61b117c803f45c35dbd46759dc85f8607177c3234b870987ba6be86fec0380c162a"
  }
  textPayload: "Stop cloud recording success. FileList : 01ce51a4a640ecrrrrhxabd9e9d823f08_tdeaon_20197121758.m3u8, uploading status: backuped"
  timestamp: "2019-08-28T08:58:37.076505Z"
}
It shows the status 'backuped'. As per the Agora documentation, this means the files were uploaded to Agora's cloud, and within 5 minutes they are supposed to be uploaded to my AWS S3 bucket.
I am not seeing this file in my AWS bucket. I have tested the bucket secret key; the same key works fine for another application. I have also verified the CORS settings.
Please suggest how I could debug further.

Make sure you are entering your S3 credentials correctly within the storageConfig settings:
"storageConfig":{
"vendor":{{StorageVendor}},
"region":{{StorageRegion}},
"bucket":"{{Bucket}}",
"accessKey":"{{AccessKey}}",
"secretKey":"{{SecretKey}}"
}
Agora offers a Postman collection to make testing easier: https://documenter.getpostman.com/view/6319646/SVSLr9AM?version=latest

I had faced this issue due to a wrong uid.
The recording uid needs to be a separate, random id that the recording client (which 'resides' in the cloud) uses to join the channel. I had passed my main client's uid instead.
Two other causes I ran into:
S3 credentials
S3 CORS settings: go to the AWS S3 permissions tab and set the allowed CORS headers.
EDIT:
On the S3 side it could look something like this:
[
  {
    "AllowedHeaders": [
      "Authorization",
      "*"
    ],
    "AllowedMethods": [
      "HEAD",
      "POST"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": [
      "ETag",
      "x-amz-meta-custom-header",
      "x-amz-storage-class"
    ],
    "MaxAgeSeconds": 5000
  }
]
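If you prefer the CLI to the console, the same rules can be applied with the s3api commands. A minimal sketch, assuming the JSON above is saved as cors.json and your bucket is named my-bucket:
aws s3api put-bucket-cors --bucket my-bucket --cors-configuration file://cors.json
aws s3api get-bucket-cors --bucket my-bucket
The second command simply echoes the configuration back so you can confirm it was applied.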

Related

AWS S3 Bucket Policy for CORS

I am trying to figure out how to follow these instructions to set up an S3 bucket on AWS. I have been trying most of the year but still can't make sense of the AWS documentation. I think this GitHub repo readme may have been prepared at a time when the AWS S3 interface looked different (there is no CORS setting in the bucket permissions form now).
I asked this question earlier this year, and I have tried using the upvoted answer to make a bucket policy, entered precisely as shown in that answer:
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "POST",
      "GET",
      "PUT",
      "DELETE",
      "HEAD"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": []
  }
]
When I try this, I get an error that says:
Ln 1, Col 0: Data Type Mismatch: The text does not match the expected JSON data type Object. Learn more
The Learn More link goes to a page that suggests I can resolve this error by updating text to use the supported data type. I do not know what that means in this context. I don't know how to find out which condition keys require which type of data.
Resolving the error
Update the text to use the supported data type.
For example, the Version global condition key requires a String data
type. If you provide a date or an integer, the data type won't match.
Can anyone help with current instructions for how to create the CORS permissions that are consistent with the spirit of the instructions in the github readme for the repo?
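For what it's worth, the JSON above is a set of CORS rules (an array), while the bucket policy editor only accepts a single JSON object with Version and Statement keys, which is why it reports that it expected the JSON data type Object. CORS rules are configured separately, in the "Cross-origin resource sharing (CORS)" box on the bucket's Permissions tab, or via the CLI. A sketch, assuming the array is saved as cors.json and the bucket is named example-bucket:
aws s3api put-bucket-cors --bucket example-bucket --cors-configuration file://cors.json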

Is it possible to download the contents of a public Lambda layer from AWS given the ARN?

I want to download the public layer (a more compact version of spaCy) referenced by the ARN below, from this GitHub repository:
"arn:aws:lambda:us-west-2:113088814899:layer:Klayers-python37-spacy:27"
How can I achieve this?
You can get it from an ARN using the get-layer-version-by-arn command in the AWS CLI.
You can run the command below to get the source of the Lambda layer you requested:
aws lambda get-layer-version-by-arn \
--arn "arn:aws:lambda:us-west-2:113088814899:layer:Klayers-python37-spacy:27"
An example of the response you will receive is below
{
  "LayerVersionArn": "arn:aws:lambda:us-west-2:123456789012:layer:AWSLambda-Python37-SciPy1x:2",
  "Description": "AWS Lambda SciPy layer for Python 3.7 (scipy-1.1.0, numpy-1.15.4) https://github.com/scipy/scipy/releases/tag/v1.1.0 https://github.com/numpy/numpy/releases/tag/v1.15.4",
  "CreatedDate": "2018-11-12T10:09:38.398+0000",
  "LayerArn": "arn:aws:lambda:us-west-2:123456789012:layer:AWSLambda-Python37-SciPy1x",
  "Content": {
    "CodeSize": 41784542,
    "CodeSha256": "GGmv8ocUw4cly0T8HL0Vx/f5V4RmSCGNjDIslY4VskM=",
    "Location": "https://awslambda-us-west-2-layers.s3.us-west-2.amazonaws.com/snapshots/123456789012/..."
  },
  "Version": 2,
  "CompatibleRuntimes": [
    "python3.7"
  ],
  "LicenseInfo": "SciPy: https://github.com/scipy/scipy/blob/master/LICENSE.txt, NumPy: https://github.com/numpy/numpy/blob/master/LICENSE.txt"
}
Once you run this, you will get a response with a "Content" key containing a "Location" subkey, which is the S3 URL from which to download the layer contents.
You can download the archive from that URL; you will then need to configure it as a Lambda layer again in your own account, after removing any dependencies you don't need.
Please ensure in this process that you only remove dependencies that are genuinely unnecessary.
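To save a trip through the console, the download can also be scripted. A rough sketch, assuming the AWS CLI and curl are available; the output file name is arbitrary:
URL=$(aws lambda get-layer-version-by-arn \
  --arn "arn:aws:lambda:us-west-2:113088814899:layer:Klayers-python37-spacy:27" \
  --query 'Content.Location' --output text)
curl -o spacy-layer.zip "$URL"
The Location URL is pre-signed, so the curl call needs no additional credentials.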

linking to a google cloud bucket file in a terminal command?

I'm trying to find my way with Google Cloud.
I have a Debian VM instance that I am running a server on. It is installed and working via an SSH connection in a browser window. The command to start the server is "./ninjamsrv config-file-path.cfg".
I have the config file in my default Google Firebase storage bucket, as I will need to update it regularly.
I want to start the server referencing the cfg file in the bucket, e.g:
"./ninjamsrv gs://my-bucket/ninjam-config.cfg"
But the file is not found:
error opening configfile 'gs://my-bucket/ninjam-config.cfg'
Error loading config file!
However if I run:
"gsutil acl get gs://my-bucket/"
I see:
[
  {
    "entity": "project-editors-XXXXX",
    "projectTeam": {
      "projectNumber": "XXXXX",
      "team": "editors"
    },
    "role": "OWNER"
  },
  {
    "entity": "project-owners-XXXXX",
    "projectTeam": {
      "projectNumber": "XXXXX",
      "team": "owners"
    },
    "role": "OWNER"
  },
  {
    "entity": "project-viewers-XXXXX",
    "projectTeam": {
      "projectNumber": "XXXXX",
      "team": "viewers"
    },
    "role": "READER"
  }
]
Can anyone advise what I am doing wrong here? Thanks
The first thing to verify is whether the error thrown is indeed a permission one. Checking the logs related to the VM’s operations will provide more detail there, and a 403 error code would confirm that this is a permission issue. If the VM is a Compute Engine instance, you can refer to this documentation about logging.
If the error is indeed a permission one, then you should verify if the permissions for this object are set as “fine-grained” access. This would mean that each object would have its own set of permissions, regardless of the bucket-level access set. You can read more about this here. You could either change the level of access to “uniform” which would grant access to all objects in the relevant bucket, or make the appropriate permissions change for this particular object.
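One quick way to check whether the object carries its own, more restrictive ACL (as opposed to the bucket-level ACL shown in the question) is to query the object directly. A sketch, assuming the path from the question:
gsutil acl get gs://my-bucket/ninjam-config.cfg
If this returns a narrower ACL than the bucket, or fails outright, object-level permissions are the likely culprit.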
If the issue is not a permission one, then I would recommend trying to start the server from the same .cfg file hosted on the local directory of the VM. This might point the error at the file itself, and not its hosting on Cloud Storage. In case the server starts successfully from there, you may want to re-upload the file to GCS in case the file got corrupted during the initial upload.
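If starting from a local copy works, as suggested above, a simple way to keep using the bucket as the source of truth is to copy the file down before each start. A sketch, assuming the bucket and file names from the question:
gsutil cp gs://my-bucket/ninjam-config.cfg ./ninjam-config.cfg
./ninjamsrv ./ninjam-config.cfg
Re-running the gsutil cp step whenever the file changes keeps the local copy in sync with the bucket.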

API Gateway not importing exported definition

I am testing my backup procedure for an API in API Gateway.
So, I export my API from the API console within my AWS account, then go into API Gateway and create a new API via "Import from Swagger".
I paste my exported definition in and create the API, which throws tons of errors.
From my reading, it seems this is a known issue / pain point.
I suspect the reason for the error(s) is that I use a custom authorizer:
"security" : [ {
"TestAuthorizer" : [ ]
}, {
"api_key" : [ ]
} ]
I use this on each method, hence I get a lot of errors.
The weird thing is that I can clone this API perfectly fine, so I assumed that I could take an exported definition and re-import it without issues.
Any ideas on how I can correct these errors (preferably within API Gateway, so that I can export / import with no issues)?
An example of one of my GET methods using this authorizer is:
"/api/example" : {
"get" : {
"produces" : [ "application/json" ],
"parameters" : [ {
"name" : "Authorization",
"in" : "header",
"required" : true,
"type" : "string"
} ],
"responses" : {
"200" : {
"description" : "200 response",
"schema" : {
"$ref" : "#/definitions/exampleModel"
},
"headers" : {
"Access-Control-Allow-Origin" : {
"type" : "string"
}
}
}
},
"security" : [ {
"TestAuthorizer" : [ ]
}, {
"api_key" : [ ]
} ]
}
Thanks in advance
UPDATE
The error(s) that I get when importing a definition I had just exported are:
Your API was not imported due to errors in the Swagger file.
Unable to put method 'GET' on resource at path '/api/v1/MethodName': Invalid authorizer ID specified. Setting the authorization type to CUSTOM or COGNITO_USER_POOLS requires a valid authorizer.
I get the message for each method in my API, so there are a lot of them.
Additionally, right at the end of the message, I get this:
Additionally, these warnings were found:
Unable to create authorizer from security definition: 'TestAuthorizer'. Extension x-amazon-apigateway-authorizer is required. Any methods with security: 'TestAuthorizer' will not be created. If this security definition is not a configured authorizer, remove the x-amazon-apigateway-authtype extension and it will be ignored.
I have tried importing while ignoring the errors; same result.
Make sure you are exporting your Swagger definition with both the integrations and authorizers extensions.
Try exporting your Swagger file using the AWS CLI:
aws apigateway get-export \
--parameters '{"extensions":"integrations,authorizers"}' \
--rest-api-id {api_id} \
--stage-name {stage_name} \
--export-type swagger swagger.json
The output will be written to the swagger.json file.
For more details about the Swagger custom extensions, see this.
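To complete the round trip from the CLI as well, the exported file can be re-imported into a new API. A sketch, assuming the export above was written to swagger.json:
aws apigateway import-rest-api \
  --fail-on-warnings \
  --body fileb://swagger.json
Omitting --fail-on-warnings makes the import behave like the console's option to ignore warnings.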
For anyone who may come across this issue:
After LOTS of troubleshooting, and eventually involving the AWS Support Team, this has been resolved and identified as an AWS CLI client bug (confirmed by the AWS Support Team).
Final response:
Thank you for providing the details requested. After going through the AWS CLI version and error details, I can confirm the error is because of a known issue with the PowerShell AWS CLI. I apologize for the inconvenience caused due to the error. To get around the error I recommend going through the following steps:
1. Create a file named data.json in the current directory where the PowerShell command is to be executed.
2. Save the following contents to the file: {"extensions":"authorizers,integrations"}
3. In the PowerShell console, ensure the current working directory is the same as the location where data.json is present.
4. Execute the following command: aws apigateway get-export --parameters file://data.json --rest-api-id APIID --stage-name dev --export-type swagger C:\temp\export.json
Using this finally resolved my issue. I look forward to the fix in one of the upcoming versions.
PS - this is currently on the latest version:
aws --version
aws-cli/1.11.44 Python/2.7.9 Windows/8 botocore/1.5.7

S3 TVM Issue – getting access denied

I'm trying to let my iOS app upload to S3 using credentials it gets from a slightly modified anonymous token vending machine.
The policy statement my token vending machine returns is:
{"Statement":
[
{"Effect":"Allow",
"Action":"s3:*",
"Resource":"arn:aws:s3:::my-bucket-test",
"Condition": {
"StringLike": {
"s3:prefix": "66-*"
}
}
},
{"Effect":"Deny","Action":"sdb:*","Resource":["arn:aws:sdb:us-east-1:MYACCOUNTIDHERE:domain/__USERS_DOMAIN__","arn:aws:sdb:us-east-1:MYACCOUNTIDHERE:domain/TokenVendingMachine_DEVICES"]},
{"Effect":"Deny","Action":"iam:*","Resource":"*"}
]
}
The object I'm trying to put uses the same bucket name, and its key is 66-3315F11E-84FA-417F-9C32-AC4BE364AD99.natural.mp4.
As far as I understand this should work fine, but it doesn't, and throws an access denied message. Is there anything wrong with my policy statement?
You don't need to use the prefix condition to refer to the resource in the context of object operations; put the key prefix directly in the Resource ARN instead. I'd also recommend restricting the S3 actions. Here is a recommended policy, based on the one from an article on an S3 Personal File Store. Feel free to remove the ListBucket statement if it doesn't make sense for your app.
{"Statement":
[
{"Effect":"Allow",
"Action":["s3:PutObject","s3:GetObject","s3:DeleteObject"],
"Resource":"arn:aws:s3:::my-bucket-test/66-*",
},
{"Effect":"Allow",
"Action":"s3:ListBucket",
"Resource":"arn:aws:s3:::my-bucket-test",
"Condition":{
"StringLike":{
"s3:prefix":"66-*"
}
}
},
{"Effect":"Deny","Action":"sdb:*","Resource":["arn:aws:sdb:us-east-1:MYACCOUNTIDHERE:domain/__USERS_DOMAIN__","arn:aws:sdb:us-east-1:MYACCOUNTIDHERE:domain/TokenVendingMachine_DEVICES"]},
{"Effect":"Deny","Action":"iam:*","Resource":"*"}
]
}
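A quick way to test a policy like this outside the app is to export the temporary credentials the TVM returns (the values below are placeholders) and try the same operations with the AWS CLI. A sketch, assuming a key under the 66- prefix:
export AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=... AWS_SESSION_TOKEN=...
aws s3 cp ./test.mp4 s3://my-bucket-test/66-test.mp4
aws s3 ls s3://my-bucket-test/66-
If the cp succeeds but the ls fails, only the ListBucket statement needs attention, and vice versa.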