I have deployed my Express.js code to Lambda using claudiajs. When I hit the generated API endpoint, every alternate request gives me an internal server error. I checked the logs and found this:
"errorType": "Error",
"errorMessage": "EROFS: read-only file system, mkdir '/var/task/logs'",
"code": "EROFS",
"stack": [
"Error: EROFS: read-only file system, mkdir '/var/task/logs'"
],
"cause": {
"errorType": "Error",
"errorMessage": "EROFS: read-only file system, mkdir '/var/task/logs'",
"code": "EROFS",
"stack": [
"Error: EROFS: read-only file system, mkdir '/var/task/logs'"
],
"errno": -30,
"syscall": "mkdir",
"path": "/var/task/logs"
},
"isOperational": true,
"errno": -30,
"syscall": "mkdir",
"path": "/var/task/logs"
}
I am not able to figure out what the issue could be, and why it occurs only on alternate requests and not on every request. How do I go about it?
Any help would be appreciated.
Lambda doesn't allow you to write to the space where your code is deployed: /var/task is a read-only file system. Since you are trying to create a directory there, presumably to write logs, you get this error. (The every-other-request pattern is most likely just requests alternating between function instances in different states.)
To write application logs from Lambda you can use AWS CloudWatch Logs streaming; a helper package is https://www.npmjs.com/package/lambda-log
You can use the /tmp directory for temporary files, but there is a limit there too: 512 MB.
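As an illustration of both points, here is a minimal sketch in Python (the question's stack is Node, where the same rules apply): log to stdout so CloudWatch collects it, and keep any file writes under /tmp.

import logging
import os

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    logger.info("request received: %s", event)  # captured by CloudWatch Logs
    # Scratch files must live under /tmp (capped at 512 MB).
    os.makedirs("/tmp/logs", exist_ok=True)
    with open("/tmp/logs/app.log", "a") as f:
        f.write("handled one request\n")
    return {"statusCode": 200, "body": "ok"}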
Check these links:
AWS Lambda Limits
Accessing Amazon CloudWatch Logs for AWS Lambda
I've been trying to make API calls to create a snapshot of a GCP disk through my code, but it keeps giving me this error:
Failed to check snapshot status before creating GCPDisk swift-snapshot-gkeos-dkdwl-dynamic-pvc-1b13097f-7-1630579922. HTTP Error: GET request to the remote host failed [HTTP-Code: 404]: {
"error": {
"code": 404,
"message": "The resource 'projects/rw-migration-dev/global/snapshots/swift-snapshot-gkeos-dkdwl-dynamic-pvc-1b13097f-7-1630579922' was not found",
"errors": [
{
"message": "The resource 'projects/rw-migration-dev/global/snapshots/swift-snapshot-gkeos-dkdwl-dynamic-pvc-1b13097f-7-1630579922' was not found",
"domain": "global",
"reason": "notFound"
}
]
}
}
My program worked fine for a considerable amount of time, but now it sometimes gives errors.
I tried passing the same query through Postman and it works fine. Sometimes it works fine through code as well.
The main problem is with the snapshot creation API:
https://compute.googleapis.com/compute/v1/projects/{projectName}/zones/{diskLocation}/disks/{diskName}/createSnapshot
This URL works fine in Postman; after creation you can see the snapshot when you list them. But through code, once this API is called it returns 200 OK, yet no snapshot is created.
Can someone tell me why this is happening?
I think you are trying to use this operation:
https://cloud.google.com/compute/docs/reference/rest/v1/disks/createSnapshot
This is a long-running operation: a 200 response indicates that the snapshot operation started, but it does not indicate that it has finished.
The documentation points to:
https://cloud.google.com/compute/docs/api/how-tos/api-requests-responses#handling_api_responses
You may need to poll the operation until it completes before trying to use the snapshot.
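As a rough sketch of what that polling can look like (Python with plain HTTP; PROJECT, ZONE, and the token handling are placeholders, not the asker's code):

import time
import requests

PROJECT = "my-project"   # placeholder
ZONE = "us-central1-a"   # placeholder
TOKEN = "..."            # an OAuth2 access token, e.g. obtained via google-auth

def wait_for_zone_operation(operation_name):
    # createSnapshot returns an Operation resource; poll it until status is DONE.
    url = (f"https://compute.googleapis.com/compute/v1/projects/{PROJECT}"
           f"/zones/{ZONE}/operations/{operation_name}")
    while True:
        op = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}).json()
        if op.get("status") == "DONE":
            if "error" in op:
                raise RuntimeError(op["error"])  # the operation itself failed
            return op
        time.sleep(2)  # simple fixed delay between polls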
I am using an AWS Lambda function to serve my Node.js codebase for an Alexa skill.
The skill makes external API calls to a custom API as well as the Amazon GameOn API; it also uses URLs that serve audio files and images from an S3 bucket.
The issue I am having is intermittent and affects about 20% of users. At random points in the skill, a user request will produce an invalid response from the skill, with the following error:
{
  "Request": {
    "type": "System.ExceptionEncountered",
    "requestId": "amzn1.echo-api.request.ab35c3f1-b8e6-4478-945c-16f644359556",
    "timestamp": "2020-05-16T19:54:24Z",
    "locale": "en-US",
    "error": {
      "type": "INVALID_RESPONSE",
      "message": "Read timed out for requestId amzn1.echo-api.request.323b1fbb-b4e8-4cdf-8f31-30c9b67e4a5d"
    },
    "cause": {
      "requestId": "amzn1.echo-api.request.323b1fbb-b4e8-4cdf-8f31-30c9b67e4a5d"
    }
  },
I have looked up this issue; I believe it's something wrong with the Lambda function configuration, but I can't figure out where!
I've tried increasing the Memory the function uses (now 256MB).
It should be noted that the function timeout is 8000ms, since this is the max time you are allowed for an Alexa response.
What causes this Read timeout issue, and what measures can I take to debug and resolve it?
Take a look at AWS X-Ray. By using it with your Lambda function, you should be able to identify the source of these timeouts.
The X-Ray developer guide for Lambda should help you understand how to apply it.
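For example, with the Python SDK the instrumentation is roughly this (a sketch; the skill in the question is Node.js, which has an equivalent aws-xray-sdk package, and call_game_on_api is a hypothetical helper):

from aws_xray_sdk.core import xray_recorder, patch_all

patch_all()  # auto-instrument supported clients (boto3, requests, etc.)
# Note: active tracing must also be enabled on the Lambda function itself.

def handler(event, context):
    # Wrap the external calls you suspect in a subsegment so their latency
    # shows up as a separate node in the X-Ray trace.
    with xray_recorder.in_subsegment("game-on-api"):
        result = call_game_on_api(event)  # hypothetical helper
    return result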
We found that this was occurring when the skill was trying to access a resource stored on our Azure website.
The CPU and memory allocation for the Azure site was too low, and it would fail when facing a large number of requests.
To fix it, we upgraded the plan the App Service was running on.
I have one simple Cloud Tasks queue and have successfully submitted a task to it. The task is supposed to deliver a JSON payload to my API to perform a basic database update. It is created at the end of a process in a .NET Core 3.1 app running locally on my desktop, triggered via Postman; the API is a Golang app running in Cloud Run. However, the task never seems to fire and never registers an error.
The tasks-in-queue count is always 0 and the tasks-running count is always blank. I have hit the "Run Now" button dozens of times, but it never changes anything, and no log entries or failed attempts are ever registered.
The task is created with an OIDCToken whose service account and audience are set to a service account that is authorized to create tokens and invoke the Cloud Run instance.
[Screenshot: the Tasks queue in the Google Cloud console]
Task creation log entry shows that it was created OK:
{
  "insertId": "efq7sxb14",
  "jsonPayload": {
    "taskCreationLog": {
      "targetAddress": "PUT https://{redacted}",
      "targetType": "HTTP",
      "scheduleTime": "2020-04-25T01:15:48.434808Z",
      "status": "OK"
    },
    "@type": "type.googleapis.com/google.cloud.tasks.logging.v1.TaskActivityLog",
    "task": "projects/{redacted}/locations/us-central1/queues/database-updates/tasks/0998892809207251757"
  },
  "resource": {
    "type": "cloud_tasks_queue",
    "labels": {
      "target_type": "HTTP",
      "project_id": "{redacted}",
      "queue_id": "database-updates"
    }
  },
  "timestamp": "2020-04-25T01:15:48.435878120Z",
  "severity": "INFO",
  "logName": "projects/{redacted}/logs/cloudtasks.googleapis.com%2Ftask_operations_log",
  "receiveTimestamp": "2020-04-25T01:15:49.469544393Z"
}
Any ideas as to why the tasks are not running? This is my first time using Cloud Tasks so don't rule out the idiot between the keyboard and the chair.
Thanks!
You might be using a non-default service. See "Configuring Cloud Tasks queues".
Try creating a task from the command line and watch the logs, e.g.:
gcloud tasks create-app-engine-task --queue=default \
--method=POST --relative-uri=/update_counter --routing=service:worker \
--body-content=10
In my own case, I used --routing=service:api and it worked straight away. Then I added AppEngineRouting to the AppEngineHttpRequest.
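In code, that routing looks roughly like this with the Python client library (a sketch; the question's app is .NET Core, where AppEngineRouting has the same shape, and the project/location/queue names are placeholders):

from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()
parent = client.queue_path("my-project", "us-central1", "default")  # placeholders

task = {
    "app_engine_http_request": {
        "http_method": tasks_v2.HttpMethod.POST,
        "relative_uri": "/update_counter",
        # Explicit routing, equivalent to --routing=service:api above.
        "app_engine_routing": {"service": "api"},
        "body": b"10",
    }
}
response = client.create_task(request={"parent": parent, "task": task})
print("created", response.name)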
I am writing a script that will create an EFS file system with a name from input. I am using the AWS SDK for PHP Version 3.
I am able to create the file system using the createFileSystem command. This new file system is not usable until a mount target has been created for it. If I run the CreateMountTarget command right after createFileSystem, I receive an error that the file system's life cycle state is not 'available'.
I have tried using createFileSystemAsync to create a promise and calling the wait function on that promise to force the script to run synchronously. However, the promise is always fulfilled while the file system is still in the 'creating' life cycle state.
Is there a way to force the script to wait for the file system to be in the available state using the AWS SDK?
One way is to check the status of the file system using the DescribeFileSystems API. In the response, look at LifeCycleState; if it is 'available', fire the CreateMountTarget API. You can keep checking DescribeFileSystems in a loop with a few seconds' delay until LifeCycleState is 'available'.
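The question uses the PHP SDK, but as an illustration the same loop looks like this in Python with boto3 (the file system ID is a placeholder):

import time
import boto3

efs = boto3.client("efs")

def wait_until_available(file_system_id, delay=5, max_attempts=60):
    # Poll DescribeFileSystems until LifeCycleState reaches 'available'.
    for _ in range(max_attempts):
        resp = efs.describe_file_systems(FileSystemId=file_system_id)
        if resp["FileSystems"][0]["LifeCycleState"] == "available":
            return
        time.sleep(delay)
    raise TimeoutError(f"{file_system_id} did not become available in time")

wait_until_available("fs-12345678")  # placeholder ID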
It looks like you want a waiter for FileSystemAvailable, but the elasticfilesystem service files don't specify one. I'd file an issue on GitHub asking for one. You'd need to wait for DescribeFileSystems to report a LifeCycleState of 'available'.
In the meantime, you can probably write your own with something like the following, following the waiters guide.
{
  "version": 2,
  "waiters": {
    "FileSystemAvailable": {
      "delay": 15,
      "operation": "DescribeFileSystems",
      "maxAttempts": 40,
      "acceptors": [
        {
          "expected": "available",
          "matcher": "pathAll",
          "state": "success",
          "argument": "FileSystems[].LifeCycleState"
        },
        {
          "expected": "deleted",
          "matcher": "pathAny",
          "state": "failure",
          "argument": "FileSystems[].LifeCycleState"
        },
        {
          "expected": "deleting",
          "matcher": "pathAny",
          "state": "failure",
          "argument": "FileSystems[].LifeCycleState"
        }
      ]
    }
  }
}
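If it helps as a reference, botocore (Python) can load a hand-written definition like the one above at runtime; the PHP SDK's waiters guide covers the equivalent there. The file system ID is a placeholder, and the config is trimmed to the success acceptor:

import boto3
from botocore.waiter import WaiterModel, create_waiter_with_client

waiter_config = {
    "version": 2,
    "waiters": {
        "FileSystemAvailable": {
            "delay": 15,
            "operation": "DescribeFileSystems",
            "maxAttempts": 40,
            "acceptors": [
                {
                    "expected": "available",
                    "matcher": "pathAll",
                    "state": "success",
                    "argument": "FileSystems[].LifeCycleState",
                }
            ],
        }
    },
}

efs = boto3.client("efs")
waiter = create_waiter_with_client("FileSystemAvailable", WaiterModel(waiter_config), efs)
waiter.wait(FileSystemId="fs-12345678")  # blocks until 'available'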
Promises in the AWS SDK for PHP are used for making HTTP requests concurrently. They don't help in this case, because the behavior of the API call is to start an asynchronous task in EFS.
In AWS API Gateway, I have a GET method that invokes a lambda function.
When I test the method in the API Gateway dashboard, the Lambda function executes successfully, but API Gateway does not map the context.success() call to a 200 result, despite the default mapping being set to yes.
Instead I get this error:
Execution failed due to configuration error: No match for output mapping and no default output mapping configured
This is my Integration Response setup: [screenshot]
And this is my Method Response setup: [screenshot]
Basically, I would expect API Gateway to recognize the successful Lambda execution and map it by default to a 200 response, but that doesn't happen.
Does anyone know why this isn't working?
I had the same issue while deploying an API using the Serverless Framework. You can simply follow the steps below, which resolved my issue:
1. Navigate to AWS API Gateway.
2. Find your API and click on the method (POST, GET, ANY, etc.).
3. Click on Method Response.
4. Add a method response with status 200.
5. Save it and test.
I had a similar issue and got it resolved by adding the 200 method response.
This is a 'check-the-basics' type of answer. In my scenario, CORS and the bug mentioned above were not at issue; however, the error message given in the title is exactly what I saw and what led me to this thread.
Instead, as an API Gateway newb, I had failed to redeploy the deployment. Once I did that, everything worked.
As a bonus for Terraform 0.12 users: the magic you need is the triggers parameter on your aws_api_gateway_deployment resource. This will automatically redeploy for you when other related API Gateway resources change. See the TF documentation for details.
There was an issue when saving the default integration response mapping, which has been resolved. The bug caused requests to API methods that were saved incorrectly to return a 500 error; the CloudWatch logs would contain:
Execution failed due to configuration error: No match for output mapping and no default output mapping configured.
Since 'Enable CORS' saves the default integration response, this issue also appeared in your scenario.
For more information, please refer to the AWS forums entry: https://forums.aws.amazon.com/thread.jspa?threadID=221197&tstart=0
Best,
Jurgen
What worked for me:
1. In the API Gateway console, created the OPTIONS method manually.
2. In the Method Response section under the created OPTIONS method, added 200 OK.
3. Selected the OPTIONS method and enabled CORS from the menu.
I found the problem:
Amazon had added a new button in the API Gateway resource configuration titled 'Enable CORS'. I had clicked this earlier; however, once enabled, there doesn't seem to be a way to disable it.
Enabling CORS using this button (instead of doing it manually, which is what I ended up doing) seems to cause an internal server error even on a successful Lambda execution.
SOLUTION: I deleted the resource and created it again without clicking 'Enable CORS' this time, and everything worked fine.
This seems to be a bug in that feature, but perhaps I just don't understand it well enough. Comment if you have any further information.
Thanks.
Check the box which says "Use Lambda Proxy integration".
This works fine for me. For reference, my lambda function looks like this...
import json

def lambda_handler(event, context):
    # Get params from the proxy integration event
    param1 = event['queryStringParameters']['param1']
    param2 = event['queryStringParameters']['param2']
    # Do some stuff with params to get body
    body = some_fn(param1, param2)
    # Return the response object shape that proxy integration expects
    response_object = {}
    response_object['statusCode'] = 200
    response_object['headers'] = {}
    response_object['headers']['Content-Type'] = 'application/json'
    response_object['body'] = json.dumps(body)
    return response_object
I'll just drop this here because I faced the same issue today, and in my case it was that we were appending a / at the end of the endpoint. For example, if this is the definition:
{
  "openapi": "3.0.1",
  "info": {
    "title": "some api",
    "version": "2021-04-23T23:59:37Z"
  },
  "servers": [
    {
      "url": "https://api-gw.domain.com"
    }
  ],
  "paths": {
    "/api/{version}/intelligence/topic": {
      "get": {
        "parameters": [
          {
            "name": "username",
            "in": "query",
            "required": true,
            "schema": { "type": "string" }
          },
          {
            "name": "version",
            "in": "path",
            "required": true,
            "schema": { "type": "string" }
          },
          {
            "name": "x-api-key",
            "in": "header",
            "required": true,
            "schema": { "type": "string" }
          },
          {
            "name": "X-AWS-Cognito-Group-ID",
            "in": "header",
            "schema": { "type": "string" }
          }
        ],
        ...
Remove any / at the end of the endpoint, i.e. request /api/{version}/intelligence/topic with no trailing slash. Also do the same in the uri of the x-amazon-apigateway-integration section of the Swagger + API Gateway extensions JSON.
Make sure your ARN for the role is indeed a role (and not, e.g., the policy).