Error when trying to access Lambda logs on CloudWatch?

I created some Lambda@Edge functions, but I'm unable to set up the logs for them. When trying to access the logs I see the error message:
There was an error loading Log Streams. Please try again by refreshing
this page.
I have gone through everything I could find on Google, but as far as I can tell my permissions are set up fine. I've created a custom role for them like this.
The role contains the following permissions:
I can't really figure out what else could cause this error. It has been around two hours since I set up the functions and permissions.

For anyone experiencing the same problem: there is a quirk to Lambda@Edge.
The logs are stored in the AWS region closest to the user who invoked the function.
So even if you've deployed your functions in us-east-1, switch the CloudWatch console to the region closest to where the requests were made.
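If you want to confirm which region received the logs without clicking through the console, a small boto3 sketch like the one below can scan every region for the Lambda@Edge log group. The function name is a placeholder; Lambda@Edge log groups follow the /aws/lambda/us-east-1.<function-name> naming pattern when the function is deployed from us-east-1.

```python
# Scan all regions for the Lambda@Edge log group of a given function.
# FUNCTION_NAME is a placeholder; replace it with your own function's name.
import boto3
from botocore.exceptions import ClientError

FUNCTION_NAME = "my-edge-function"
# Lambda@Edge writes logs to /aws/lambda/<deploy-region>.<function-name>
LOG_GROUP = f"/aws/lambda/us-east-1.{FUNCTION_NAME}"

session = boto3.session.Session()
for region in session.get_available_regions("logs"):
    logs = session.client("logs", region_name=region)
    try:
        groups = logs.describe_log_groups(logGroupNamePrefix=LOG_GROUP)
    except ClientError:
        continue  # region not enabled for this account, skip it
    for group in groups.get("logGroups", []):
        print(f"{region}: {group['logGroupName']}")
```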

Related

Serverless VPC Access connector is in bad shape

Our project uses a Serverless VPC Access connector to allow access to the DB over private IP from Cloud Functions and Cloud Run services. It worked flawlessly for a few months, but today I tried to deploy one of the functions that uses the connector and got this message:
VPC connector
projects/xxxx/locations/us-central1/connectors/vpc-connector is not
ready yet or does not exist. Please visit
https://cloud.google.com/functions/docs/troubleshooting for in-depth
troubleshooting documentation.
I went to the Serverless VPC Access view and found that the connector indeed has a red marking on it. When I hover over it, it says:
Connector is in a bad state, manual deletion recommended
but I don't know why; the link to the logs doesn't show anything for the past 3 months.
I tried to Google the error, but without success.
I also searched through the logs, but didn't find anything relevant.
I'm looking for any hints:
Why did it happen?
How can I fix it? I don't want to recreate the connector; it is used by many Cloud Functions and Cloud Run services.
As the issue was blocking us from deploying Cloud Functions, I was forced to recreate the connector.
But this time the API returned an error:
Error: Error waiting to create Connector: Error waiting for Creating Connector: Error code 7, message: Operation failed: Google APIs Service Agent (<PROJECT_NUMBER>@cloudservices.gserviceaccount.com) needs editor role in the project.
After adding that permission, the old connector started to work again...
There was no such requirement before, but apparently it changed in the meantime.
Spooky: one time something works, another time it doesn't.
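For reference, the role can also be granted without the console. A rough sketch using the Cloud Resource Manager API is below; the project ID and project number are placeholders, and gcloud projects add-iam-policy-binding achieves the same thing in one command.

```python
# Grant roles/editor to the Google APIs Service Agent so the connector
# operations can run again. PROJECT_ID and PROJECT_NUMBER are placeholders.
from googleapiclient import discovery

PROJECT_ID = "my-project"
PROJECT_NUMBER = "123456789012"
MEMBER = f"serviceAccount:{PROJECT_NUMBER}@cloudservices.gserviceaccount.com"

crm = discovery.build("cloudresourcemanager", "v1")
policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()

# Find (or create) the editor binding and add the service agent to it.
binding = next((b for b in policy.get("bindings", []) if b["role"] == "roles/editor"), None)
if binding is None:
    binding = {"role": "roles/editor", "members": []}
    policy.setdefault("bindings", []).append(binding)
if MEMBER not in binding["members"]:
    binding["members"].append(MEMBER)

crm.projects().setIamPolicy(resource=PROJECT_ID, body={"policy": policy}).execute()
```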

AWS S3 - Getting 400 error when trying to access from aws web console; Also getting lots of errors in devtools at AWS home

Just noticed this and am worried. Looking for advice on the cause and possible solutions.
S3 does not open on either the sub-admin or the root account; I'm getting a 400 error.
I haven't noticed any other resources that aren't loading. I can access Lambda, DynamoDB, API Gateway, etc. just fine.
Not sure if it's related, but I noticed a bunch of errors in the dev tools console when I sign in to AWS. Not sure if those are normal, as I've never taken notice before, but I thought it might be worth including.
Included screenshots of both below.
400 Error when attempting to access S3
Errors in devtools at AWS home page
It seems the request you're submitting has a header which is too large.
Have you tried using a different browser?
If that works, you may just need to clear the cache in the current one.
If it still happens, let us know what you tried.
A 413 error means "Request Entity Too Large".
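One quick way to confirm that the problem is the browser request rather than S3 itself is to list the buckets through the SDK; a minimal boto3 sketch, assuming credentials for the same account are configured locally:

```python
# If this works while the console returns 400, the issue is the browser
# request (e.g. oversized cookies/headers), not the S3 service or account.
import boto3

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```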

"internal server error" with API gateway and lambda on AWS

There are tons of similar questions both on this site and on the web, which leads me to believe there is something really wrong with AWS's documentation, given how many people this causes grief.
So, I decided to post the most basic example step by step.
First, we create a new function:
It has the default "everything"; I don't touch a single line of code.
(the red error message is AWS not playing nice with Firefox)
The default code passes the test:
Now I add a trigger:
This gives me the link for the trigger:
I can go to the API endpoint: https://spy3z1jvu8.execute-api.ap-northeast-1.amazonaws.com/default/test
And it works:
Now the problems start. I open the API Gateway that was created:
and try the default link: https://spy3z1jvu8.execute-api.ap-northeast-1.amazonaws.com
And...
Most of the people asking similar questions seem to have an issue with the gateway expecting some JSON content, etc., but this is an untouched AWS sample and the gateway link doesn't work.
The troubleshooting steps say to add logging and troubleshoot it that way, but there is nothing of interest in the logs.
What could be the origin of that problem?
What could be the origin of that problem?
You are correct. This is the AWS console's fault. Specifically, it sets up the wrong permissions in the Lambda's resource-based policy for the default route to work. To fix that, you have to edit the permissions.
Go to your function's resource-based policy (this is different from the execution role). You should find one policy statement there, which you have to edit. Change its Source ARN from something like:
arn:aws:execute-api:ffffff:xxxx:api-id/*/*/function-name
to
arn:aws:execute-api:ffffff:xxxx:api-id/*/*
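If you prefer to apply the fix with a script instead of the console, something like the boto3 sketch below should work; the statement ID, account ID, and function name are placeholders you'd read from your function's actual policy (via get_policy) first.

```python
# Replace the auto-generated API Gateway permission with one whose SourceArn
# is not pinned to the function-name route. All identifiers are placeholders;
# read the real StatementId from get_policy() before running this.
import boto3

lam = boto3.client("lambda")

FUNCTION_NAME = "test"
STATEMENT_ID = "apigateway-invoke-default"   # hypothetical statement id
SOURCE_ARN = "arn:aws:execute-api:ap-northeast-1:123456789012:spy3z1jvu8/*/*"

lam.remove_permission(FunctionName=FUNCTION_NAME, StatementId=STATEMENT_ID)
lam.add_permission(
    FunctionName=FUNCTION_NAME,
    StatementId=STATEMENT_ID,
    Action="lambda:InvokeFunction",
    Principal="apigateway.amazonaws.com",
    SourceArn=SOURCE_ARN,
)
```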

Unable to see any logs after updating Cloud Function

Suddenly I am not getting any logs except deployment logs for my Google Cloud Functions.
It worked fine until now, but after updating the function I haven't seen any logs. So I did some research, deleted the Cloud Functions logs and also the function itself, and created a new function. Even then I am not able to see any logs related to the project except audit logs (i.e. whenever the function gets updated).
Any clues what's wrong? I am not able to understand what the exact problem is.
Any help is appreciated.
I have viewed the Issue Tracker issuetracker.google.com/issues/155215191 and have found that work is still being done to address the scenario.

Serverless Framework AWS 403 Forbidden Error with Domain Only

I am working on a serverless setup for a project and ran into a strange error. This was working fine before I had to delete my old certificates and make a new one.
In short, I am following the tutorial series at serverless-stack.com for reference, and when running the apig-test command I get the following error.
{ status: 403,
statusText: 'Forbidden',
data: { message: 'Forbidden' } }
This screams policy error to me. So I went to check my policy to make sure it allows execution for the AuthRole, and indeed it does. I verified this in the IAM section under Roles and looked at my service's Auth_Role that I created when I set up Cognito.
I don't want to give information overload here, but if anyone has any ideas for where to look next I would much appreciate it, and I'll give any details you want to see here.
One thing I want to note is that if I run the apig-test command with the direct URL to the Lambda function instead of my domain it works perfectly fine.
This suggests that nothing is wrong with my code and that the problem is more likely a policy setting regarding how I set up the domain.
I ran sls create_domain accordingly, I see the entries in Route 53 and API Gateway, and they finished their 40 minutes many hours ago. I ensured it's using the correct certificate since I wiped out the other one.
My custom domains have worked in the past thanks to a plugin I found and this tutorial (https://serverless.com/blog/serverless-api-gateway-domain/); it's only recently that it stopped working, when I realized I needed to add some more domains to my SSL cert.
So I assume the policy error is somewhere around this, but I'm not sure where to look.
OK, I found the answer. In API Gateway, under Custom Domain Names, there is a section called Base Path Mappings. This MUST be set to one of your functions with the default path of / (or just enter nothing for the path), with the destination pointing to your Lambda service. This made it work for me.
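The same mapping can be created programmatically; a rough boto3 sketch, where the domain name, API ID, and stage are placeholders for your own values (omitting basePath maps the domain root):

```python
# Map the custom domain's root path to a deployed API stage.
# Domain name, API id, and stage name below are placeholders.
import boto3

apigw = boto3.client("apigateway")
apigw.create_base_path_mapping(
    domainName="api.example.com",   # hypothetical custom domain
    restApiId="abc123defg",         # hypothetical REST API id
    stage="prod",                   # hypothetical stage name
)
```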