Serverless Framework AWS 403 Forbidden Error with Domain Only - amazon-web-services

I am working on a serverless setup for a project and ran into a strange error. This was working fine before I had to delete my old certificates and make a new one.
In short, I am following the tutorial series at serverless-stack.com for reference, and when running the apig-test command I get the following error:
{ status: 403,
statusText: 'Forbidden',
data: { message: 'Forbidden' } }
This screams policy error to me. So I checked my policy to make sure it allows execution for the AuthRole, and indeed it does. I verified this in the IAM console under Roles by looking at my service's Auth_Role that I created when I set up Cognito.
I don't want to cause information overload here, but if anyone has ideas on where to look next I would much appreciate it, and I'll provide any details you want to see.
One thing I want to note is that if I run the apig-test command with the direct URL to the Lambda function instead of my domain it works perfectly fine.
This proves that nothing is wrong with my code, but rather that the problem is a policy setting related to how I set up the domain.
I ran sls create_domain accordingly, and I can see the entries in Route 53 and API Gateway; they finished their 40-minute setup many hours ago. I made sure it is using the correct certificate, since I wiped out the other one.
My custom domains have worked in the past thanks to a plugin I found and this tutorial (https://serverless.com/blog/serverless-api-gateway-domain/); it only recently stopped working, when I realized I needed to add some more domains to my SSL cert.
So I assume the policy error is somewhere around this, but I'm not sure where to look.

OK, I found the answer. In API Gateway, under Custom Domains, there is a section called Base Path Mappings. This MUST be set, with the default path of / (or just enter nothing for the path) and the destination pointing to your Lambda service's API. This made it work for me.
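If you want to verify this without clicking through the console, here is a minimal sketch using Python/boto3 (the domain name, API ID and stage are placeholders; "(none)" is how API Gateway reports an empty base path):

import boto3

apigw = boto3.client("apigateway")

DOMAIN = "api.example.com"   # placeholder: your custom domain
REST_API_ID = "abc123defg"   # placeholder: the API behind the domain
STAGE = "prod"               # placeholder: the deployed stage

# List the base path mappings currently attached to the custom domain
mappings = apigw.get_base_path_mappings(domainName=DOMAIN).get("items", [])
print("Existing mappings:", mappings)

# If no mapping covers the root path (reported as "(none)"), create one.
# Omitting basePath maps the bare domain, i.e. the "/" path in the console.
if not any(m.get("basePath") in ("", "(none)") for m in mappings):
    apigw.create_base_path_mapping(
        domainName=DOMAIN,
        restApiId=REST_API_ID,
        stage=STAGE,
    )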

Related

"internal server error" with API gateway and lambda on AWS

There are tons of similar questions both on this site and on the web, which leads me to believe there is something really wrong with AWS's documentation, given how many people this causes grief for.
So, I decided to post the most basic example step by step.
First, we create a new function:
It has default "everything", I don't touch a single line of code.
(the red error message is AWS not playing nice with Firefox)
The default code passes the test:
Now I add a trigger:
This gives me the link for the trigger:
I can go to the API endpoint: https://spy3z1jvu8.execute-api.ap-northeast-1.amazonaws.com/default/test
And it works:
Now, the problems will start. I open the API gateway that was created:
and try the default link: https://spy3z1jvu8.execute-api.ap-northeast-1.amazonaws.com
And...
Most people with similar questions seem to have an issue with the gateway expecting some JSON content, etc., but here is an untouched AWS sample, and the gateway link doesn't work.
The troubleshooting steps say to add logging and troubleshoot it that way, but there is nothing of interest in the logs.
What could be the origin of that problem?
What could be the origin of that problem?
You are correct. This is the AWS console's fault. Specifically, it sets up the wrong permissions in the Lambda's resource-based policy for the default route to work. To fix that, you have to edit the permissions.
Specifically, go to your function's resource-based policy (this is different from the execution role). You should find one policy statement there that you have to edit. Then change the Source ARN from something like:
arn:aws:execute-api:ffffff:xxxx:api-id/*/*/function-name
to
arn:aws:execute-api:ffffff:xxxx:api-id/*/*
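If you prefer to script the fix instead of editing the statement in the console, here is a minimal sketch using Python/boto3 (the function name, statement ID, account ID and API ID are placeholders; check the get_policy output for the real Sid before removing anything):

import boto3

lam = boto3.client("lambda")

FUNCTION_NAME = "test"              # placeholder: your function name
STATEMENT_ID = "apigateway-invoke"  # placeholder: the Sid shown in get_policy
REGION = "ap-northeast-1"
ACCOUNT_ID = "123456789012"         # placeholder: your account ID
API_ID = "spy3z1jvu8"               # the API ID from the endpoint URL

# Inspect the current resource-based policy to find the statement Sid
print(lam.get_policy(FunctionName=FUNCTION_NAME)["Policy"])

# Drop the statement whose Source ARN only matches .../*/*/function-name ...
lam.remove_permission(FunctionName=FUNCTION_NAME, StatementId=STATEMENT_ID)

# ...and re-add it with a Source ARN that matches any stage and route
lam.add_permission(
    FunctionName=FUNCTION_NAME,
    StatementId=STATEMENT_ID,
    Action="lambda:InvokeFunction",
    Principal="apigateway.amazonaws.com",
    SourceArn=f"arn:aws:execute-api:{REGION}:{ACCOUNT_ID}:{API_ID}/*/*",
)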

Error when trying to access Lambda logs on CloudWatch?

I created some Lambda@Edge functions, but I'm unable to set up the logs for them. When trying to access them, I am seeing the error message:
There was an error loading Log Streams. Please try again by refreshing
this page.
I have gone through everything I could find on Google, but as far as I can see my permissions are set up fine. I've created a custom role for them like this.
The role contains the following permissions:
I can't really figure out what else could cause this error. It has been around 2 hours since I set up the functions and permissions.
For anyone experiencing the same problem: there is a weird quirk to Lambda@Edge.
The logs are stored in the AWS region closest to the user who invokes the function.
Even if you've deployed your functions in us-east-1, switch the region in the console to the one closest to you.
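If you'd rather not click through every region to find them, here is a small Python/boto3 sketch (the function name is a placeholder; it assumes the usual Lambda@Edge log-group naming of /aws/lambda/us-east-1.<function-name> in each edge region):

import boto3

FUNCTION_NAME = "my-edge-function"   # placeholder: your Lambda@Edge function name

# Lambda@Edge writes logs in the region closest to the viewer, typically under
# a log group named "/aws/lambda/us-east-1.<function-name>" in that region.
ec2 = boto3.client("ec2", region_name="us-east-1")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

for region in regions:
    logs = boto3.client("logs", region_name=region)
    groups = logs.describe_log_groups(
        logGroupNamePrefix=f"/aws/lambda/us-east-1.{FUNCTION_NAME}"
    ).get("logGroups", [])
    if groups:
        print(region, [g["logGroupName"] for g in groups])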

Textract: Response status code does not indicate success: 400 (Bad Request) + IAM keys not found

I have pushed my Textract code to the staging server, and now I am receiving an error.
It works on the development system; I can't understand why this is happening.
I am using .NET Core 3.0.
I am following the code sample provided here: https://github.com/aws-samples/amazon-textract-code-samples/tree/master/src-csharp
I have a doubt regarding the IAM credentials. For this, I installed the AWS SDK tools for Windows and the AWS CLI on the staging server and then ran the configuration commands (described here: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-quick-configuration) from the command prompt. I thought the IAM credentials might be saved into the environment, but no success.
The code that uploads a file to the S3 bucket works, but when making a request to the Textract service, it crashes.
System.Net.Http.HttpRequestException: Response status code does not indicate success: 400 (Bad Request)
I can't understand what the issue is.
On development, it works.
Any help?
Finally we found the solution. It was a very weird issue; we never thought it would come up.
First thing
The message thrown by the API was not clear, so we hosted the application on another server with an upgraded Windows OS. There we learned it was related to the keys generated when the IAM user was created.
Second thing
We established that our application (Amazon's Textract DLL) was not able to read the keys we had configured.
When you configure through the CLI, it creates two files for saving the credentials, and the SDK reads them from there. Refer to the screenshot below.
The files were there, but the application still could not locate them on the staging server. After 4-5 days of searching and talking to AWS support, nothing happened.
Finally we dug into IIS, made a few changes, and found the problem was happening at the IIS level. In IIS there is a setting in the instance's application pool called Load User Profile. By default it is False, but when we turned it to True, it created a user profile just like the one you get when logging into the system.
Refer to the screenshots below for changing this.
It creates a user like this.
Hope it helps someone.
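As a side note, if depending on the per-user credentials file under IIS is fragile, the SDK can be handed credentials explicitly. Here is a minimal sketch in Python/boto3 (the bucket, key, region, and environment-variable wiring are placeholders) showing the same kind of Textract call against an object already uploaded to S3:

import os
import boto3

# The SDK normally looks for ~/.aws/credentials of whichever user the process
# runs as; under IIS that user has no loaded profile unless "Load User Profile"
# is enabled. Passing credentials explicitly sidesteps that dependency.
session = boto3.Session(
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],         # e.g. injected via app settings
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
    region_name="us-east-1",                                    # placeholder region
)

textract = session.client("textract")

# The same kind of request the application makes after putting the file on S3
response = textract.detect_document_text(
    Document={"S3Object": {"Bucket": "my-bucket", "Name": "uploads/invoice.png"}}  # placeholders
)

for block in response["Blocks"]:
    if block["BlockType"] == "LINE":
        print(block["Text"])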

Google Cloud Run - Domain Mapping stuck at Certificate Provisioning

Is anyone getting this issue with Google Cloud Run Domain Mapping? When I add a custom domain to my domain mappings, I get this:
Waiting for certificate provisioning. You must configure your DNS records for certificate issuance to begin.
I know it says it was only added 1 day ago and I should give it time, but I actually let it sit for 5 days, deleted it, and this is my second try.
You can see in the screenshot below that it is added via Cloudflare. I even tried toggling the proxy service on and off, with no luck.
Turning proxying off in Cloudflare (keeping it as DNS only) resolved the issue in my case.
Most likely the Google load balancer needs to receive the request first-hand in order to validate the domain and provision the certificate.
I faced the same issue with exact error:
Waiting for certificate provisioning. You must configure your DNS records for certificate issuance to begin.
After digging a bit more, the error actually made sense. Before generating the cert, Google checks whether your DNS records are properly configured and well propagated across all regions, which was not the case for me due to a glitch at the nameserver level. I raised a ticket with my nameserver vendor, attaching a DNS propagation report from the tools/websites below, which clearly showed that the DNS records were not available in some regions. Once they fixed the propagation issue, all my reports started to show positive results, after which I recreated my domain mapping and it worked within a few minutes.
Tools used to check DNS propagation status (a small scripted spot check follows the list):
https://dnspropagation.net/
https://www.whatsmydns.net/
https://dnschecker.org/
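For a quick scripted spot check alongside those sites, here is a small Python sketch (it needs the third-party dnspython package; the domain and resolver list are just examples):

# Requires: pip install dnspython
import dns.resolver

DOMAIN = "www.example.com"            # placeholder: the mapped domain
RESOLVERS = {                         # a few well-known public resolvers
    "Google": "8.8.8.8",
    "Cloudflare": "1.1.1.1",
    "Quad9": "9.9.9.9",
}

for name, ip in RESOLVERS.items():
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [ip]
    try:
        answers = resolver.resolve(DOMAIN, "CNAME")
        print(name, [str(a.target) for a in answers])
    except Exception as e:
        print(name, "lookup failed:", e)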
At the moment, it seems like Domain Mapping is just a buggy service.
It seems the solution for now is to be patient and try several times until it works. I'd suggest giving it some time between attempts.
The reasons why I feel it's a buggy service:
gcloud beta run domain-mappings create gets stuck at Creating......⠼.
gcloud beta run domain-mappings describe shows messages such as:
"Domain mapping '[...domain_name...]' already exists for this application.
You can modify this domain mapping with DomainMappings.PATCH".
"Waiting for certificate provisioning. You must configure your DNS records
for certificate issuance to begin." - Even though the DNS records are fine.
The user interface isn't any better. It can also get stuck while creating, and the console itself says the operation may fail silently, suggesting the gcloud CLI as a workaround.
Update 2022
It's been a while since I last used this feature, but it is still taking ~2 hours for the domain to become available.
I just tried toggling the proxy off again, and it seemed to work. They must have fixed something internally.
I had the same issue in the past few days: the loading icon was spinning for hours/days even though my DNS records were correct (checked in the Google toolbox). I "resolved" this issue just by repeatedly adding and removing the domain; after about four attempts it suddenly started working. I always waited an hour or more before each attempt. I used the GCR web interface, not the command-line approach. I guess, as was mentioned before, it's because it's still in beta, but maybe this comment will help someone until they resolve the issue.
Adding the domain mapping via the console does not show the correct DNS records to be added, as it is missing the name field. If you run gcloud beta run domain-mappings create, it shows the DNS records with a name field set to the Cloud Run service.
I had a similar error on a domain I bought from GoDaddy. The issue was the result of a domain-parking record whose source I can't tell, unless it was set by the vendor. It mapped my domain to a parking page, and its IP 34.102.136.180 was preventing my service from mapping correctly. After chatting with a GAE assistant I was able to resolve the issue by deleting the IP record, but of course I also sought clarification from the vendor themselves. It was my first time using GoDaddy and for the life of me I couldn't figure out the problem.
I had the same situation. Additionally, I got this error message in Cloud Domains:
Your domain is suspended because the registrant email address has not
yet been verified. Check your email and follow the instructions to
remove the suspension.

AWS Serverless Image Handler - Lambda Error

My Serverless Image Handler was working fine until now, and now I'm getting the following error:
start_thumbor error: pycurl: libcurl link-time ssl backend (openssl) is different from compile-time ssl backend (nss)
This looks like a problem with the version of pycurl.
Please help me resolve it.
I tried changing the Python version to 3.6 in the ServerlessImageHandler Lambda function configuration.
I found a discussion about this issue at https://forums.aws.amazon.com/thread.jspa?messageID=909444, which sent me to https://github.com/awslabs/serverless-image-handler/issues/127#issuecomment-514757029.
GitHub user timkelty has the solution:
go to my CloudFormation Stack
click Update
"replace template"
paste in https://cf-templates-nestrom.s3-eu-west-1.amazonaws.com/serverless-image-handler/1.0/serverless-image-handler.template
So far this has worked for me in us-east-1 and us-west-1.
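The same console steps can also be done with the SDK. A minimal sketch in Python/boto3 (the stack name and capabilities are assumptions about your deployment; the template URL is the one from the steps above):

import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.update_stack(
    StackName="ServerlessImageHandler",   # placeholder: your stack's name
    TemplateURL="https://cf-templates-nestrom.s3-eu-west-1.amazonaws.com/serverless-image-handler/1.0/serverless-image-handler.template",
    Capabilities=["CAPABILITY_IAM"],      # assumption: the template creates IAM resources
    # If your stack has parameters, re-supply them here with UsePreviousValue,
    # e.g. Parameters=[{"ParameterKey": "<your-param>", "UsePreviousValue": True}]
)

# Block until the update finishes (or fails and rolls back)
cfn.get_waiter("stack_update_complete").wait(StackName="ServerlessImageHandler")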
AWS has released a new version of Serverless Image Handler; this is why everybody is suffering now, because the Thumbor functionality fails in the new version.
In the new version, SharpJS is used instead of Thumbor API calls.
You can check the new version and download it from here.
Even though you are able to construct URLs in the old style, images in subfolders cannot be accessed anymore without encoding the URL.
Old way:
abcdef.cloudfront.net/team/team1.png
New way:
abcdef.cloudfront.net/{base64encodedPath}
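A tiny sketch of that encoding in Python (the domain and path are the example values above; depending on the handler version it may expect standard or URL-safe base64):

import base64

path = "team/team1.png"   # the subfolder image from the example above
encoded = base64.b64encode(path.encode("utf-8")).decode("ascii")

url = f"https://abcdef.cloudfront.net/{encoded}"
print(url)   # https://abcdef.cloudfront.net/dGVhbS90ZWFtMS5wbmc=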
Note 1: If your images are in the root directory of the bucket, you can still access them the old way, like this:
abcdef.cloudfront.net/team1.png
Note 2: If you update your existing CloudFormation stack, you will keep your old CloudFront domain (which is a good thing).
You can also follow the current fixes from here.