Getting a 403 when making a POST request to AWS Lightsail Distribution

I'm making an HTTP POST request to an AWS Lightsail distribution and getting a 403 response. I've checked that the distribution's cache settings allow POST requests (see screenshot below); GET requests succeed.
My cache is also set to cache everything:
Could someone please help resolve this?
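To rule out the distribution configuration itself, here is a minimal boto3 sketch (the distribution name is a placeholder) that prints which HTTP methods the distribution's cache behaviour currently allows and, if POST is missing, switches it to the full method set:

```python
# Minimal sketch, assuming boto3 and a distribution named "my-distribution" (placeholder).
# Lightsail distributions are managed through the us-east-1 region.
import boto3

lightsail = boto3.client("lightsail", region_name="us-east-1")

dist = lightsail.get_distributions(distributionName="my-distribution")["distributions"][0]
allowed = dist["cacheBehaviorSettings"].get("allowedHTTPMethods", "")
print("Allowed HTTP methods:", allowed)

if "POST" not in allowed:
    # In Lightsail, POST is typically enabled as part of the full method list.
    lightsail.update_distribution(
        distributionName="my-distribution",
        cacheBehaviorSettings={"allowedHTTPMethods": "GET,HEAD,OPTIONS,PUT,PATCH,POST,DELETE"},
    )
```

If the allowed methods already include POST, the configuration matches the screenshot and the 403 is probably coming from somewhere else in the chain.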

Related

Getting "oversizeFields":["REQUEST_BODY"] on AWS WAF logs when trying to upload document to API POST endpoint

I am getting a 504 error in my app when trying to upload a document (nearly 10 MB) to an API POST endpoint.
I've already tried creating custom rules to allow the URI path of the API, and I also created a size-constraint condition for the body as well as the HTTP method.
The WAF log shows ALLOW for the request, but the document is not uploaded and I get an error.
When I disassociate the API from the WAF, everything works fine.
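A hedged sketch of the piece of a WAFv2 rule that usually matters for "oversizeFields": any statement that inspects the request body can declare how to handle bodies larger than WAF's inspection limit via OversizeHandling. The rule name, size, and action below are illustrative, not taken from the question; the dict would go in the Rules list passed to the wafv2 update_web_acl call.

```python
# Illustrative WAFv2 rule fragment (names and values are placeholders). The key part is
# FieldToMatch.Body.OversizeHandling, which controls what happens when the request body
# is larger than the portion WAF can actually inspect.
upload_rule = {
    "Name": "allow-large-upload",          # placeholder rule name
    "Priority": 0,
    "Statement": {
        "SizeConstraintStatement": {
            "FieldToMatch": {
                # CONTINUE = do not treat an oversized body as a match on its own
                "Body": {"OversizeHandling": "CONTINUE"}
            },
            "ComparisonOperator": "LE",
            "Size": 10 * 1024 * 1024,       # illustrative limit
            "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
        }
    },
    "Action": {"Allow": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "allow-large-upload",
    },
}
```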

EC2 Instance is setting Access-Control-Allow-Origin: * for all responses

I want a same-origin policy that only allows my API to be called from the same origin in the browser; I don't want CORS.
After hours of testing whether nginx or my Node web app was setting Access-Control-Allow-Origin: *, it turns out that AWS EC2 is setting CORS headers without my permission. I can override this by using nginx to remove the response headers and replace them (if necessary)...
However, I do not believe this is how it should be done. Why is AWS putting extra strain on my web server without giving me the option to customise its default "allow all origins" behaviour?
This is such an unnecessary problem for AWS to create, and I was wondering if anyone else is experiencing the same and how we should go about it.
What I've tried:
In local development without AWS, neither nginx nor my Node app adds any Access-Control headers (without my permission) - there is no mention of them. I even disabled CORS in my Node app to make sure!
Turning on CORS in my Node app to see if I can override the response header being set by AWS EC2 downstream.
This results in two separate Access-Control-Allow-Origin headers, with the AWS one taking precedence over mine.
Using nginx to respond to OPTIONS, so AWS knows that I have considered CORS requests and that I want to reject them... However, my nginx response to OPTIONS is once again overridden by AWS downstream! I would also add CORS options to my responses using nginx, but they are still overridden by AWS.
When I say AWS overrides my response, I mean that my response headers are included, but so are AWS's (the sketch below shows a quick way to see both).
[example AWS with Nginx response][1]
[1]: https://i.stack.imgur.com/9xnlr.png
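A small sketch for reproducing that observation, assuming the Python requests library and using https://example.com/api as a stand-in for the API behind nginx: it sends a preflight-style OPTIONS request and prints every Access-Control-* header that comes back.

```python
# Sketch only: example.com/api and the Origin value are placeholders.
import requests

resp = requests.options(
    "https://example.com/api",
    headers={
        "Origin": "https://some-other-site.example",   # a cross-origin caller
        "Access-Control-Request-Method": "POST",
    },
)

print("Status:", resp.status_code)
# requests folds repeated headers into one comma-separated value, so two
# Access-Control-Allow-Origin headers show up as e.g. "https://mysite.example, *".
for name, value in resp.headers.items():
    if name.lower().startswith("access-control-"):
        print(f"{name}: {value}")
```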
Maybe AWS is saying that all APIs should be accessible from all origins? It just doesn't make sense to me!
By the way, here is what Amazon has to say about CORS, namely that it is "standardised": https://docs.aws.amazon.com/AWSEC2/latest/APIReference/cors-support.html
I don't understand the difference between an EC2 instance running MY API and the EC2 API itself. My main concern is changing the AWS CORS headers, which I can't find any help on!
After playing around more and realising that Access-Control-Allow-Origin: * is returned for all GET requests and all successful POST requests (from my own domain - no CORS, as I have it disabled), I thought something had to be up (all my GET requests are allowed cross-domain).
This page from Amazon themselves explains it beautifully. I'm pretty annoyed they have taken the liberty of putting my web security into their own hands, but at least we know what side they are on now!
https://docs.aws.amazon.com/AWSEC2/latest/APIReference/cors-support.html
"For all simple requests such as GET & POST(simple post), the Access-Control-Allow-Origin * is returned" by AMAZON.
"Therefore, Amazon EC2 allows any cross-domain origin, and never allows browser credentials, such as cookies." because " Access-Control-Allow-Credentials is never returned" - so cookies should not be at risk of cross-site attacks from this setup I hope. However this setup also encourages spam botting from any domain, which will make amazon more money - which is the only plausible reason for why they have decided to enabled CORs for all simple requests.. #jeff.
If you prefer images, and so we can quote amazon on this!
Simple Requests Response from EC2
Finally, Amazon EC2 even has the courtesy to accept OPTIONS requests on our behalf for the more dangerous requests, to which it sends a response of:
Access-Control-Allow-Origin: * "This is always returned with a * value."
Access-Control-Allow-Credentials is NOT returned, so the browser defaults it to false and will not send any cookies with the now-permitted cross-origin requests.
Access-Control-Expose-Headers is NOT returned, so EC2 doesn't permit your browser to read response headers?
Access-Control-Allow-Methods: GET, POST, OPTIONS, PUT, DELETE - "this depends on whether you use our Query API or REST". I have no idea what that means, but a cross-site preflight for my REST API returned GET, OPTIONS, POST - thankfully.
Access-Control-Allow-Headers "Amazon EC2 accepts any headers in preflight requests."
The general sense is that Amazon wants us to outsource basic CORS security to the EC2 instance? I would love to test the preflight they provide more thoroughly, but it seems they do most of the boilerplate and let you decide whether to accept any complex/non-simple CORS requests, as detailed here: https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-cors.html
It took me a while and a bit of a shock, but I'm glad we got here!

Display error page for HTTP 400 coming from AWS CloudFront

I have an application hosted in an S3 bucket and exposed to the public via CloudFront. An Origin Access Identity (OAI) is used to access the content in S3.
In this application we have complex URL patterns, and we are getting 400 errors for some URLs due to incorrect patterns (invalid encodings, etc.). Even though CloudFront allows custom error pages to be set, we cannot use one for these requests, because they are blocked at the CloudFront level and are never forwarded through the OAI to the origin. As per the AWS documentation, error pages can be set for HTTP error codes coming back from the origin. Therefore I am trying to find a solution for this issue with the existing architecture. Please let me know if you have a solution.
I need to display a custom page for the 400 errors coming from CloudFront.
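For reference, this is roughly the shape of the custom error response entry the question refers to, as it appears in a boto3 DistributionConfig (the page path is a placeholder); as the question notes, it reportedly does not help for requests CloudFront rejects before ever contacting the origin.

```python
# Illustrative only: a CustomErrorResponses entry for HTTP 400 in a CloudFront
# DistributionConfig. ResponsePagePath is a placeholder and must exist in the origin bucket.
custom_error_responses = {
    "Quantity": 1,
    "Items": [
        {
            "ErrorCode": 400,
            "ResponsePagePath": "/errors/400.html",
            "ResponseCode": "400",
            "ErrorCachingMinTTL": 300,
        }
    ],
}
```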

AWS S3 with Basic Auth with Lambda gives AccessDenied when refreshing the page

I have set up S3 with CloudFront to serve a static site behind Basic HTTP authentication, similar to this setup: Basic User Authentication for Static Site using AWS & S3 Bucket.
Everything seems to work fine, but for some reason when I refresh the site, CloudFront responds with 403 AccessDenied. This only happens when I navigate somewhere within the site, like example.com/somepath, and then refresh. If I stay at the root level (example.com) and hit refresh, everything works fine.
I have configured routing in my React app, so to be clear, navigating the site via application links works normally; only refreshing the page causes the above issue. I have static website hosting disabled on S3, as I don't want anyone accessing my S3 files directly via S3 links.
I have added a custom error page to the CloudFront distribution: for all 403 errors it should fetch the root from the origin (/) and return a 200 HTTP status code.
Any ideas where to look for the issue?
The issue was an incorrectly set up CloudFront distribution error page. Configuring an error page for 403 errors that points to the S3 bucket root and returns HTTP status code 200 solves the issue. It just took some time to take effect, which caused the confusion.
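A minimal boto3 sketch of that fix, assuming a placeholder distribution ID and that /index.html is the SPA entry point the "bucket root" resolves to:

```python
# Sketch only: the distribution ID and response page path are assumptions.
import boto3

cf = boto3.client("cloudfront")
dist_id = "E1234567890ABC"  # placeholder

cfg = cf.get_distribution_config(Id=dist_id)
config, etag = cfg["DistributionConfig"], cfg["ETag"]

# Map 403s back to the SPA entry point with a 200 status,
# so client-side routes survive a hard refresh.
config["CustomErrorResponses"] = {
    "Quantity": 1,
    "Items": [
        {
            "ErrorCode": 403,
            "ResponsePagePath": "/index.html",   # assumed SPA entry point
            "ResponseCode": "200",
            "ErrorCachingMinTTL": 0,
        }
    ],
}

cf.update_distribution(DistributionConfig=config, Id=dist_id, IfMatch=etag)
```

As the answer notes, the change can take a little while to take effect.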

AWS CloudFront Not Following Whitelist?

This is more of a general question to see if anyone has encountered similar behaviour with AWS CloudFront. I've had a distribution running a static website with geo-restrictions applied as follows:
However, when looking at the logs, I see the following:
So my question is: is CloudFront monitoring ALL requests, even restricted ones? I would have thought geo-restriction would implement an ACL and block all requests at the network level, before they ever reach the distribution to request data.
CloudFront does not block geo-restricted requests at the network level. It serves a 403 response, which you can customize.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestrictions.html
CloudFront returns an HTTP status code of 403 (Forbidden) to the user.
You can optionally configure CloudFront to return a custom error message to the user, and you can specify how long you want CloudFront to cache the error response for the requested file; the default value is five minutes.
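A small boto3 sketch (the distribution ID is a placeholder) for confirming what the distribution's geo restriction is actually set to; per the documentation quoted above, restricted viewers still reach CloudFront and receive the 403 that then shows up in the logs.

```python
# Sketch only: the distribution ID is a placeholder.
import boto3

cf = boto3.client("cloudfront")
cfg = cf.get_distribution_config(Id="E1234567890ABC")["DistributionConfig"]

geo = cfg["Restrictions"]["GeoRestriction"]
print("Restriction type:", geo["RestrictionType"])   # "whitelist", "blacklist", or "none"
print("Countries:", geo.get("Items", []))            # two-letter country codes
```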