I use Amazon S3 as storage for my files, and it is connected as a subdomain.
For example: the base domain is site.com and the S3 storage is s3.site.com. Sometimes I want to get part of a file (a range of bytes) with an AJAX GET request:
$.ajax({
    type: 'GET',
    url: "//s3.site.com/1.py",
    headers: { "Range": "bytes=50-100" }
}).done(function(data) {
    alert(data);
});
It works fine and I get the response, but the browser also generates a parasitic OPTIONS request. As far as I know, this happens because of the CORS policy. Can I avoid it?
No, you can't avoid it. It's standard and correct behavior.
Adding the Range header changes the GET request in a way that triggers a CORS pre-flight check, which is what sends the OPTIONS request.
Range is not a CORS-safelisted request header, so the request no longer qualifies as "simple", and because it is cross-origin, a pre-flight is required.
Same-origin (non-cross-origin) requests must have the exact same scheme, host, and port. Subdomains are not exempt from this.
Two origins are "the same" if, and only if, they are identical.
https://www.rfc-editor.org/rfc/rfc6454#page-11
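You can't avoid the pre-flight itself, but you can cut down on repeat pre-flights: allow the Range header in the bucket's CORS configuration and give the rule a MaxAgeSeconds, so the browser caches the pre-flight result and skips the extra OPTIONS on subsequent requests. A sketch of such an S3 CORS rule (the origin and max-age values are examples only):

<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>https://site.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>Range</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>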
We are using AWS CloudFront to distribute our Angular web apps. This is set up and configured, so the apps are live and accessible. However, to allow authentication across these apps we use cookies whose domain is set to the core domain so they are shared across subdomains. For example, we may have two web apps, one at appone.example.com and the other at apptwo.example.com, both reading the same set of cookies shared across subdomains by setting the cookie domain to .example.com.
This setup works great: we don't send the cookies with API requests, just an authentication header, so there is no header-size issue there, and we have no need for these cookies to make their way into the requests to CloudFront. However, those requests are initiated by the browser, so they cannot be manipulated to remove the Cookie header, and this is where the issue arises.
We end up with quite a few cookies (around 30): two sets of Cognito cookies plus a few others containing the information needed to bootstrap the Cognito setup that uses them. The request size then comes to around 22,000 bytes, which exceeds the AWS limit of 20,480 bytes stated here. If my request is below 20,480 bytes it completes successfully.
Since I do not need these cookies in the CloudFront request, I assumed they could be stripped from the header either via an origin request policy or with a viewer-request Lambda@Edge function. However, the request does not seem to get far enough to hit that functionality.
Here is some example code from an AWS template. This Lambda@Edge function does not strip the headers as described above, but it should still log the event if it is hit. If the request is less than 20,480 bytes it does log; if not, it does not:
exports.handler = async (event, context) => {
    console.log(event);
    /*
     * Generate HTTP response using 200 status code with a simple body.
     */
    const response = {
        status: '200',
        statusDescription: 'OK',
        headers: {
            vary: [{
                key: 'Vary',
                value: '*',
            }],
            'last-modified': [{
                key: 'Last-Modified',
                value: '2017-01-13',
            }],
        },
        body: 'Example body generated by Lambda@Edge function.',
    };
    return response;
};
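For reference, the viewer-request variant I had in mind is roughly the sketch below. It simply deletes the Cookie header before CloudFront processes the request further, but as described above it never gets invoked for the oversized requests:

'use strict';

// Viewer-request sketch: drop the Cookie header so it never travels past CloudFront.
// Header keys in the CloudFront event object are lower-cased.
exports.handler = async (event) => {
    const request = event.Records[0].cf.request;
    delete request.headers.cookie;
    return request;
};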
I think I could mitigate the issue by removing one set of the Cognito cookies and the configuration cookies involved in instantiating it. However, that is less than ideal, because each time you switch between the two systems you would need to log in again, which does not fit our rather specialised use case.
The other option is to drop cookies and switch to local storage shared across domains. That, however, brings an XSS security challenge and, from initial research, seems unviable and unacceptable.
So, based on my currently limited understanding of AWS CloudFront, my question is: can the Cookie header be stripped from the request so that the request is accepted without the 494 error page? In our use case we only use cookies as a means of cross-domain storage, so we do not need them to travel with the requests for the static JS files.
494 error image link
If you just want to avoid sending cookies with requests to static assets, set up a different domain and serve them from there.
But I'm really wondering what you're thinking here: if your Angular app can access the token stored in a cookie, then it isn't any safer than localStorage from an XSS standpoint.
In Brief
In order to keep uploaded media (S3 objects) private for all the clients on my multi-tenant system, I deployed a CloudFront CDN and configured it (and its origin S3 bucket) to require signed URLs to GET any of the objects.
The Method
First, the user is authenticated via my system, and then a signed URL is generated and returned to them using the AWS.CloudFront.Signer.getSignedUrl() method provided by the AWS JS SDK, so they can make the call to CF/S3 to download the object (image, PDF, docx, etc.). Pretty standard stuff.
The Problem
The above method works 95% of the time. The user obtains a signed URL from my system and then when they make an XHR to GET the object it's retrieved just fine.
But, 5% of the time a 403 is thrown with a CORS error stating that the client origin is not allowed by Access-Control-Allow-Origin.
This bug (error) has been confirmed across all environments: localhost, dev.myapp.com, prod.myapp.com. And across all platforms/browsers.
There's such a lack of rhyme or reason to it that I'm actually starting to think this is an AWS bug (they do happen, from time-to-time).
The Debugging Checklist So Far
I've been going out of my mind for days now trying to figure this out. Here's what I've attempted so far:
Have you tried a different browser/platform?
Yes. The issue is present across all client origins, browsers (and versions), and all platforms.
Is your S3 Bucket configured for CORS correctly?
Yes. It's wide open, in fact. I've even set <MaxAgeSeconds>0</MaxAgeSeconds> in order to prevent caching of any pre-flight OPTIONS requests by the client:
Is the signed URL expired?
Nope. All of the signed URLs are set to expire 24hrs after generation. This problem has shown up even seconds after any given signed URL is generated.
Is there an issue with the method used to generate the signed URLs?
Unlikely. I'm simply using the AWS.CloudFront.Signer.getSignedUrl() method of their JS SDK. The signed URLs do work most of the time, so it would seem very strange for this to be an issue with the signing process. Also, the error is clearly a CORS error, not a signature-mismatch error.
Is it a timezone/server clock issue?
Nope. The system does serve users across many timezones, but that theory proved to be false given that the signed URLs are all generated on the server side. The timezone of the client doesn't matter; it gets a signed URL good for 24hrs from the time of generation no matter what TZ it's in.
Is your CF distro configured properly?
Yes, as far as I can make out from following several AWS guides, tutorials, and docs.
Here's a screenshot for brevity. You can see that I've disabled caching entirely in an attempt to rule that out as a cause:
Are you seeing this error for all mime-types?
No. This error hasn't been seen for any images, audio, or video files (objects). With much testing already done, this error only seems to show up when attempting to GET a document or PDF file (.doc, .docx, .pdf). This led me to believe it was simply an Accept header mismatch: the client was sending an XHR with the header Accept: pdf, but the signature was really generated for Accept: application/pdf.
I haven't yet been able to fully rule this out as a cause, but it's highly unlikely given that the errors are intermittent; if it were an Accept header mismatch, it should be an error every time.
Also, the XHR is sending Accept: */*, so it's highly unlikely this is where the issue is.
The Question
I've really hit a wall on this one. Can anyone see what I'm missing here? The best I can come up with is that this is some sort of "timing" issue. What sort of timing issue, or if it even is a timing issue, I've yet to figure out.
Thanks in advance for any help.
Found the solution for this on Server Fault:
https://serverfault.com/questions/856904/chrome-s3-cloudfront-no-access-control-allow-origin-header-on-initial-xhr-req
You apparently cannot successfully fetch an object from HTML and then successfully fetch it again as a CORS request with Chrome and S3 (with or without CloudFront), due to peculiarities in the implementations.
Adding the answer from the original post so that it does not get lost.
Workaround:
This behavior can be worked-around with CloudFront and Lambda#Edge, using the following code as an Origin Response trigger.
This adds Vary: Access-Control-Request-Headers, Access-Control-Request-Method, Origin to any response from S3 that has no Vary header. Otherwise, the Vary header in the response is not modified.
'use strict';

// If the response lacks a Vary: header, fix it in a CloudFront Origin Response trigger.
exports.handler = (event, context, callback) => {
    const response = event.Records[0].cf.response;
    const headers = response.headers;

    if (!headers['vary']) {
        headers['vary'] = [
            { key: 'Vary', value: 'Access-Control-Request-Headers' },
            { key: 'Vary', value: 'Access-Control-Request-Method' },
            { key: 'Vary', value: 'Origin' },
        ];
    }

    callback(null, response);
};
I am looking to add Lambda@Edge to one of our services. The goal is to regex the URL for certain values and compare those against a header value to ensure authorization. If the value is present, it is compared; if rejected, a 403 should be returned to the user immediately. If the compared value matches, or the URL doesn't contain the particular value, then the request continues on as an authorized request.
Initially I was thinking this would happen in a "viewer request" event, though some posts and comments on SO suggest that "origin request" is better suited for this check. For now I've been playing around with the examples in the documentation on one of our CF endpoints, but I'm not seeing the expected results. The code is the following:
'use strict';

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    request.headers["edge-test"] = [{
        key: 'edge-test',
        value: Date.now().toString()
    }];

    console.log(require('util').inspect(event, { depth: null }));

    callback(null, request);
};
I would expect a logged value in CloudWatch and a new header value in the request, yet I'm not seeing any logs, nor am I seeing the header value when the request comes in.
Can someone shed some light on why things don't seem to be executing the way I would expect? Is my understanding of the expected output wrong? Is there configuration I may be missing (the distribution ID on the trigger is set to the distribution we want, and the behavior is set to '*')? Any help is appreciated :)
First, a few notes:
CloudFront is (among other things) a web cache.
A web cache's purpose is to serve content directly to the browser instead of sending the request to the origin server.
However, one of the most critical things a cache must do correctly is not return the wrong content. One of the ways a cache can return the wrong content is by not realizing that certain request headers may cause the origin server to vary the response it returns for a given URI.
CloudFront has no perfect way of knowing this, so its solution -- by default -- is to remove almost all of the headers from the request before forwarding it to the origin. Then it caches the received response against exactly the request that it sent to the origin, and will only use that cached response for future identical requests.
Injecting a new header in a Viewer Request trigger will cause that header to be discarded after it passes through the matching Cache Behavior, unless the cache behavior specifically is configured to whitelist that header for forwarding to the origin. This is the same behavior you would see if the header had been injected by the browser, itself.
So, your solution to get this header to pass through to the origin is to whitelist it in the cache behavior settings.
If you tried this same code as an Origin Request trigger, without the header whitelisted, CloudFront would actually throw a 502 Bad Gateway error, because you're trying to inject a header that CloudFront already knows you haven't whitelisted in the matching Cache Behavior. (In Viewer Request, the Cache Behavior match hasn't yet occurred, so CloudFront can't tell if you're doing something with the headers that will not ultimately work. In Origin Request, it knows.) The flow is Viewer Request > Cache Behavior > Cache Check > (if cache miss) Origin Request > send to Origin Server. Whitelisting the header would resolve this, as well.
Any header you want the origin to see, whether it comes from the browser, or a request trigger, must be whitelisted.
Note that some headers are inaccessible or immutable, particularly those that could be used to co-opt CloudFront for fraudulent purposes (such as request forgery and spoofing) and those that simply make no sense to modify.
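As a footnote on the original goal of rejecting unauthorized requests: a viewer-request trigger can also short-circuit and return a generated response instead of forwarding the request, and at that stage all of the client's headers are visible without any whitelisting. A rough sketch, in which the URI pattern and the x-tenant-id header are purely illustrative:

'use strict';

// Viewer-request sketch: extract a value from the URI, compare it against a request
// header, and return 403 immediately when they don't match.
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const match = request.uri.match(/^\/tenants\/([^/]+)\//); // illustrative pattern

    if (match) {
        const header = request.headers['x-tenant-id'];
        const value = header && header[0] && header[0].value;
        if (value !== match[1]) {
            // Returning a response object instead of the request ends processing here;
            // the request never reaches the cache or the origin.
            callback(null, {
                status: '403',
                statusDescription: 'Forbidden',
                body: 'Forbidden',
            });
            return;
        }
    }

    // Otherwise let the request continue as normal.
    callback(null, request);
};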
I want to restrict my Lambda function (created with the Serverless Framework tool) to accept requests only from abc.com and def.com. It should reject all other requests. How can I do this? I tried setting access control origins like this:
cors: true
response:
  headers:
    Access-Control-Allow-Origin: "'beta.leafycode.com leafycode.com'"
and like this in the handler:
headers: {
    "Access-Control-Allow-Origin": "beta.leafycode.com leafycode.com"
},
but nothing worked. Any idea why?
The issue with your code is that Access-Control-Allow-Origin doesn't accept multiple domains.
From this answer:
Sounds like the recommended way to do it is to have your server read the Origin header from the client, compare that to the list of domains you'd like to allow, and if it matches, echo the value of the Origin header back to the client as the Access-Control-Allow-Origin header in the response.
So, when adding support for the OPTIONS verb, which is the verb the browser uses to preflight a request to see if CORS is supported, you need to write your Lambda code to inspect the event object, determine the client's domain, and dynamically set the corresponding Access-Control-Allow-Origin header to that domain.
In your question you have mixed the CORS configuration for two different integration types: Lambda and Lambda Proxy. I recommend using the second option, so you can set the domain dynamically.
headers: {
    "Access-Control-Allow-Origin": myDomainValue
},
See more about CORS configuration in the Serverless Framework here.
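A minimal sketch of that dynamic check in a Lambda Proxy handler (the allowed-origin list, including the schemes, and the response body are assumptions for illustration):

'use strict';

// Lambda Proxy sketch: echo the Origin header back only when it is on the allow-list.
const ALLOWED_ORIGINS = [
    'https://beta.leafycode.com',
    'https://leafycode.com',
];

exports.handler = async (event) => {
    const reqHeaders = event.headers || {};
    const origin = reqHeaders.origin || reqHeaders.Origin || '';
    const headers = {};

    if (ALLOWED_ORIGINS.indexOf(origin) !== -1) {
        headers['Access-Control-Allow-Origin'] = origin;
        headers['Vary'] = 'Origin'; // tell caches the response depends on the Origin header
    }

    return {
        statusCode: 200,
        headers: headers,
        body: JSON.stringify({ message: 'ok' }),
    };
};

Note that the Access-Control-Allow-Origin header has to be present on both the pre-flight (OPTIONS) response and the actual response.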
I'm running parse server behind AWS CloudFront and I'm still trying to figure out what the best configuration would be. Currently I've configured the CloudFront behavior to:
Allowed HTTP Methods: GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE
Cached HTTP Methods: GET, HEAD (cached by default)
Forward Headers: Whitelist
    Accept-Language
    Content-Type
    Host
    Origin
    Referer
Object Caching: Customize
    Minimum TTL: 0
    Maximum TTL: 31536000
    Default TTL: 28800
Forward Cookies: All
My GET requests (using the parse REST API) seem to be cached as expected with this configuration. All requests that are made using the parse JS SDK seem to be called via POST and produce a 504 error in the browser console:
No 'Access-Control-Allow-Origin' header is present on the requested resource.
For some reason those requests are still fulfilled by the Parse Server, because e.g. saving objects still stores them in my MongoDB even though there's this Access-Control-Allow-Origin error.
The fix for this is not in CloudFront; it has to be made on the Parse Server side.
In the file /src/middlewares.js, add the code below and CloudFront will no longer throw that exception.
var allowCrossDomain = function(req, res, next) {
    res.header('Access-Control-Allow-Origin', '*');
    res.header('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE,OPTIONS');
    res.header('Access-Control-Allow-Headers', 'X-Parse-Master-Key, X-Parse-REST-API-Key, X-Parse-Javascript-Key, X-Parse-Application-Id, X-Parse-Client-Version, X-Parse-Session-Token, X-Requested-With, X-Parse-Revocable-Session, Content-Type');
    next();
};
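A middleware defined this way still has to be registered on the Express app ahead of the Parse routes, for example with app.use(allowCrossDomain); until it is wired in, the headers are never added to the responses.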