How can I troubleshoot Dockerrun parsing errors? - amazon-web-services

I'm banging my head against a wall trying to figure out the source of the following error when I deploy this Dockerrun file to EB:
Error: parse Dockerrun.aws.json file failed with error json: invalid use of ,string struct tag, trying to unmarshal unquoted value into int
Here is the file in question:
{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "mybucket",
    "Key": "myconfig.json"
  },
  "Image": {
    "Name": "1234567890.dkr.ecr.us-east-2.amazonaws.com/myimage:tag",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "3001",
      "HostPort": "80"
    }
  ]
}
I've read over the documentation here: https://docs.amazonaws.cn/en_us/elasticbeanstalk/latest/dg/single-container-docker-configuration.html
I can't seem to find any issues with the file. I know that AWS has validators for CloudFormation templates, does something similar exist for Dockerrun files? How would one go about troubleshooting this error?

It turns out the error was unrelated to the actual parsing of the file. I dug through the logs and realized that my ECR authentication token had supposedly expired. This was strange since I was using the same ECR authentication for other Elastic Beanstalk environments without issue. The solution was to generate a new authentication token for ECR, upload a new config file to S3, and point the Dockerrun authentication bucket and key fields to the new file.
If you run into a similar error, look further back in your eb-engine logs for other errors that may be the root of the problem.
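For reference, ECR authorization tokens are only valid for 12 hours, which is why a static authentication file in S3 keeps expiring. A rough sketch of that refresh, assuming the legacy dockercfg-style format the Dockerrun v1 Authentication file uses (registry URL, bucket, and key below match the example file above):
# Fetch a fresh ECR authorization token (base64 of "AWS:<password>").
TOKEN=$(aws ecr get-authorization-token --output text \
  --query 'authorizationData[0].authorizationToken')
# Write a dockercfg-style file keyed by the registry URL.
cat > myconfig.json <<EOF
{
  "https://1234567890.dkr.ecr.us-east-2.amazonaws.com": {
    "auth": "$TOKEN",
    "email": "none"
  }
}
EOF
# Replace the object the Dockerrun Authentication block points at.
aws s3 cp myconfig.json s3://mybucket/myconfig.json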

Related

Error publishing ASP.NET Core Web API to AWS Serverless Lambda: 'AWSLambdaFullAccess' at 'policyArn' ... Member must have length greater than

For over a year I have been able to publish an ASP.NET Core Web API application using Visual Studio 2019 by selecting "Publish to AWS Lambda..." (via a right-click on the project) without incident. Until yesterday. Now it consistently fails to publish and rolls back.
The following two reasons are given as to why it has failed.
1 validation error detected: Value 'AWSLambdaFullAccess' at 'policyArn' failed to satisfy constraint: Member must have length greater than or equal to 20 (Service: AmazonIdentityManagement; Status Code: 400; Error Code: ValidationError; Request ID: ...; Proxy: null)
The following resource(s) failed to create: [AspNetCoreFunctionRole, Bucket]. Rollback requested by user.
I have looked at AWSLambdaFullAccess, AWSLambda_FullAccess, and related policies, but I have no model to follow and don't even know what the error is referring to, so I can't imagine a fruitful path to proceed. What exactly is the "Member" it is referring to? Extensive research has yielded nothing of use.
I want to successfully publish my Web API. What can I look into to proceed?
This may not be the correct or ideal solution, but I tried this approach and it worked.
Step 1:
Changed the policy from "AWSLambdaFullAccess" to "AWSLambda_FullAccess" in serverless.template:
"Resources": {
"AspNetCoreFunction": {
"Type": "AWS::Serverless::Function",
"Properties": {
"Handler": "SampleAPI::SampleAPI.LambdaEntryPoint::FunctionHandlerAsync",
"Runtime": "dotnetcore3.1",
"CodeUri": "",
"MemorySize": 256,
"Timeout": 30,
"Role": null,
"Policies": [
"AWSLambda_FullAccess"
],
"Environment": {
"Variables": {
"AppS3Bucket": {
Lambda publishing was successful after this step.
Step 2:
Then I faced an issue accessing the DynamoDB table. I went to the IAM role and added DynamoDB execution permissions. (Previously I don't remember adding these explicitly.)
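If you prefer the CLI for that second step, attaching a DynamoDB managed policy to the function's role looks roughly like this (the role name is a placeholder, and AmazonDynamoDBFullAccess is just one of the DynamoDB managed policies you could pick):
# Sketch: attach a DynamoDB managed policy to the Lambda function's execution role.
aws iam attach-role-policy \
  --role-name my-aspnet-lambda-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess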
According to https://docs.aws.amazon.com/lambda/latest/dg/access-control-identity-based.html, the AWSLambdaFullAccess policy has been deprecated, and as a result the stack I tried to update got stuck in UPDATE_ROLLBACK_FAILED.
To fix this I had to take the following steps (the CLI equivalent of the first step is sketched after this list):
Manually continue the rollback of the stack from the CloudFormation page, making sure to skip the role that was referencing AWSLambdaFullAccess.
Change my AWSLambdaFullAccess reference to AWSLambda_FullAccess in the CloudFormation template.
Update the stack using my newly updated CloudFormation template.
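A minimal sketch of that first step with the AWS CLI, assuming a hypothetical stack name (the role logical ID comes from the rollback error above):
# Continue the stuck rollback while skipping the role resource that references the deprecated policy.
aws cloudformation continue-update-rollback \
  --stack-name my-serverless-stack \
  --resources-to-skip AspNetCoreFunctionRole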
Hope this is able to help someone!

Linking to a Google Cloud bucket file in a terminal command?

I'm trying to find my way with Google Cloud.
I have a Debian VM instance that I am running a server on. It is installed and working when I connect over SSH in a browser window. The command to start the server is "./ninjamsrv config-file-path.cfg".
I have the config file in my default Google Firebase storage bucket, as I will need to update it regularly.
I want to start the server referencing the cfg file in the bucket, e.g:
"./ninjamsrv gs://my-bucket/ninjam-config.cfg"
But the file is not found:
error opening configfile 'gs://my-bucket/ninjam-config.cfg'
Error loading config file!
However if I run:
"gsutil acl get gs://my-bucket/"
I see:
[
  {
    "entity": "project-editors-XXXXX",
    "projectTeam": {
      "projectNumber": "XXXXX",
      "team": "editors"
    },
    "role": "OWNER"
  },
  {
    "entity": "project-owners-XXXXX",
    "projectTeam": {
      "projectNumber": "XXXXX",
      "team": "owners"
    },
    "role": "OWNER"
  },
  {
    "entity": "project-viewers-XXXXX",
    "projectTeam": {
      "projectNumber": "XXXXX",
      "team": "viewers"
    },
    "role": "READER"
  }
]
Can anyone advise what I am doing wrong here? Thanks
The first thing to verify is if indeed the error thrown is a permission one. Checking the logs related to the VM’s operations will certainly provide more details in that aspect, and a 403 error code would confirm if this is a permission issue. If the VM is a Compute Engine one, you can refer to this documentation about logging.
If the error is indeed a permission one, then you should verify if the permissions for this object are set as “fine-grained” access. This would mean that each object would have its own set of permissions, regardless of the bucket-level access set. You can read more about this here. You could either change the level of access to “uniform” which would grant access to all objects in the relevant bucket, or make the appropriate permissions change for this particular object.
If the issue is not a permission one, then I would recommend trying to start the server from the same .cfg file hosted on the local directory of the VM. This might point the error at the file itself, and not its hosting on Cloud Storage. In case the server starts successfully from there, you may want to re-upload the file to GCS in case the file got corrupted during the initial upload.
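If it turns out the server binary simply cannot open gs:// URLs at all (the error message suggests it tried to open that string as a local path), a simple workaround is to copy the object onto the VM before starting the server. A sketch, assuming gsutil is available on the VM and its service account can read the object:
# Pull the config down from Cloud Storage, then point the server at the local copy.
gsutil cp gs://my-bucket/ninjam-config.cfg ./ninjam-config.cfg
./ninjamsrv ./ninjam-config.cfg
You would need to re-run the copy whenever the file in the bucket changes.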

Cloud Recording issue

I am trying the Agora Cloud Recording API to record into an AWS S3 bucket. The calls appear to go through fine. When I stop the recording, I get a success message. I have reproduced part of it here:
{
  insertId: "5d66423d00012ad9d6d02f2b"
  labels: {
    clone_id: "00c61b117c803f45c35dbd46759dc85f8607177c3234b870987ba6be86fec0380c162a"
  }
  textPayload: "Stop cloud recording success. FileList : 01ce51a4a640ecrrrrhxabd9e9d823f08_tdeaon_20197121758.m3u8, uploading status: backuped"
  timestamp: "2019-08-28T08:58:37.076505Z"
}
It shows the status 'backuped'. As per the Agora documentation, it uploaded the files into the Agora cloud; within 5 minutes it is then supposed to upload them into my AWS S3 bucket.
I am not seeing this file in my AWS bucket. I have tested the bucket secret key; the same key works fine for another application. I have also verified the CORS settings.
Please suggest how I could debug further.
Make sure you are inputting your S3 credentials correctly within the storageConfig settings:
"storageConfig":{
"vendor":{{StorageVendor}},
"region":{{StorageRegion}},
"bucket":"{{Bucket}}",
"accessKey":"{{AccessKey}}",
"secretKey":"{{SecretKey}}"
}
Agora offers a Postman collection to make testing easier: https://documenter.getpostman.com/view/6319646/SVSLr9AM?version=latest
I had faced this issue due to a wrong uid.
The recording uid needs to be a random id that is used for joining by the recording client which 'resides' in the cloud. I had passed my main client's id.
Two other causes I have faced:
S3 credentials
S3 CORS settings: go to the AWS S3 permissions and set the allowed CORS headers.
EDIT:
It could be something like this on the S3 side:
[
  {
    "AllowedHeaders": [
      "Authorization",
      "*"
    ],
    "AllowedMethods": [
      "HEAD",
      "POST"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": [
      "ETag",
      "x-amz-meta-custom-header",
      "x-amz-storage-class"
    ],
    "MaxAgeSeconds": 5000
  }
]

API Gateway not importing exported definition

I am testing my backup procedure for an API, in my API Gateway.
So, I export my API from the API Gateway console within my AWS account, then go into API Gateway and create a new API with "Import from Swagger".
I paste my exported definition in and create, which throws tons of errors.
From my reading, it seems this is a known issue / pain.
I suspect the reason for the error(s) is that I use a custom authorizer:
"security" : [ {
"TestAuthorizer" : [ ]
}, {
"api_key" : [ ]
} ]
I use this on each method, hence I get a lot of errors.
The weird thing is that I can clone this API perfectly fine, so I assumed that I could take an exported definition and re-import it without issues.
Any ideas on how I can correct these errors (preferably within my API Gateway, so that I can export / import with no issues)?
An example of one of my GET methods using this authorizer is:
"/api/example" : {
"get" : {
"produces" : [ "application/json" ],
"parameters" : [ {
"name" : "Authorization",
"in" : "header",
"required" : true,
"type" : "string"
} ],
"responses" : {
"200" : {
"description" : "200 response",
"schema" : {
"$ref" : "#/definitions/exampleModel"
},
"headers" : {
"Access-Control-Allow-Origin" : {
"type" : "string"
}
}
}
},
"security" : [ {
"TestAuthorizer" : [ ]
}, {
"api_key" : [ ]
} ]
}
Thanks in advance
UPDATE
The error(s) that I get when importing a definition I had just exported are:
Your API was not imported due to errors in the Swagger file.
Unable to put method 'GET' on resource at path '/api/v1/MethodName': Invalid authorizer ID specified. Setting the authorization type to CUSTOM or COGNITO_USER_POOLS requires a valid authorizer.
I get the message for each method in my API - so there is a lot.
Additionally, right at the end of the message, I get this:
Additionally, these warnings were found:
Unable to create authorizer from security definition: 'TestAuthorizer'. Extension x-amazon-apigateway-authorizer is required. Any methods with security: 'TestAuthorizer' will not be created. If this security definition is not a configured authorizer, remove the x-amazon-apigateway-authtype extension and it will be ignored.
I have tried importing while ignoring the errors; same result.
Make sure you are exporting your swagger with both the integrations and authorizers extensions.
Try exporting your swagger using the AWS CLI:
aws apigateway get-export \
--parameters '{"extensions":"integrations,authorizers"}' \
--rest-api-id {api_id} \
--stage-name {stage_name} \
--export-type swagger swagger.json
The output will be written to the swagger.json file.
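When the export includes the authorizers extension, the securityDefinitions entry for TestAuthorizer should carry the x-amazon-apigateway-authorizer extension that the import error complains about. A rough sketch (the region, account ID, and Lambda ARN below are placeholders):
"securityDefinitions" : {
  "TestAuthorizer" : {
    "type" : "apiKey",
    "name" : "Authorization",
    "in" : "header",
    "x-amazon-apigateway-authtype" : "custom",
    "x-amazon-apigateway-authorizer" : {
      "type" : "token",
      "authorizerUri" : "arn:aws:apigateway:us-east-2:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-2:123456789012:function:TestAuthorizer/invocations",
      "authorizerResultTtlInSeconds" : 300
    }
  }
}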
For more details about swagger custom extensions see this.
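To re-import on the other side, the CLI equivalent of the console's "Import from Swagger" is roughly as follows (the file name matches the export above; drop --fail-on-warnings if you only want warnings reported):
aws apigateway import-rest-api \
  --fail-on-warnings \
  --body fileb://swagger.json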
For anyone who may come across this issue:
After LOTS of troubleshooting and eventually involving the AWS Support team, this has been resolved and identified as an AWS CLI client bug (confirmed by the AWS Support team).
Their final response:
Thank you for providing the details requested. After going through the AWS CLI version and error details, I can confirm the error is because of known issue with Powershell AWS CLI. I apologize for inconvenience caused due to the error. To get around the error I recommend going through the following steps
1. Create a file named data.json in the current directory where the PowerShell command is to be executed.
2. Save the following contents to the file: {"extensions":"authorizers,integrations"}
3. In the PowerShell console, ensure the current working directory is the same as the location where data.json is present.
4. Execute the following command:
aws apigateway get-export --parameters file://data.json --rest-api-id APIID --stage-name dev --export-type swagger C:\temp\export.json
Using this finally resolved my issue; I look forward to the fix in one of the upcoming versions.
PS - this is currently on the latest version:
aws --version
aws-cli/1.11.44 Python/2.7.9 Windows/8 botocore/1.5.7

Want to server-side encrypt S3 data node file created by ShellCommandActivity

I created a ShellCommandActivity with stage = "true". The shell command creates a new file and stores it in ${OUTPUT1_STAGING_DIR}. I want this new file to be server-side encrypted in S3.
According to the documentation, all files created on an S3 data node are server-side encrypted by default. But after my pipeline completes, an unencrypted file is created in S3. I tried setting s3EncryptionType to SERVER_SIDE_ENCRYPTION explicitly on the S3DataNode, but that doesn't help either. I want to encrypt this new file.
Here is relevant part of pipeline:
{
  "id": "DataNodeId_Fdcnk",
  "schedule": {
    "ref": "DefaultSchedule"
  },
  "directoryPath": "s3://my-bucket/test-pipeline",
  "name": "s3DataNode",
  "s3EncryptionType": "SERVER_SIDE_ENCRYPTION",
  "type": "S3DataNode"
},
{
  "id": "ActivityId_V1NOE",
  "schedule": {
    "ref": "DefaultSchedule"
  },
  "name": "FileGenerate",
  "command": "echo 'This is a test' > ${OUTPUT1_STAGING_DIR}/foo.txt",
  "workerGroup": "my-worker-group",
  "output": {
    "ref": "DataNodeId_Fdcnk"
  },
  "type": "ShellCommandActivity",
  "stage": "true"
}
Short answer: Your pipeline definition looks correct. You need to ensure you're running the latest version of the Task Runner. I will try to reproduce your issue and let you know.
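In the meantime, a quick way to confirm whether the staged object actually received SSE is to head the object; if it did, the response includes a ServerSideEncryption field (bucket and key below follow the directoryPath in the definition above, though the exact key may differ depending on how staging lays out the output):
# Check the encryption status of the staged output file.
aws s3api head-object \
  --bucket my-bucket \
  --key test-pipeline/foo.txt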
P.S. Let's keep conversation within a single thread here or in AWS Data Pipeline forums to avoid confusion.
Answer on official AWS Data Pipeline Forum page
This issue was resolved when I downloaded the new TaskRunner-1.0.jar. I was running an older version.