I have a React app (hosted on Cloudflare Pages) consuming a Flask API which I deployed on DigitalOcean App Platform. I am using custom domains for both, app.example.com and api.example.com respectively.
When I try to use the app through the domain provided by Cloudflare Pages, my-app.pages.dev, I have no issues.
But when I try to use it through my custom domain app.example.com, I see that certain headers get stripped from the response to the preflight OPTIONS request. These are:
access-control-allow-credentials: true
access-control-allow-headers: content-type
access-control-allow-methods: DELETE, GET, HEAD, OPTIONS, PATCH, POST, PUT
allow: POST, OPTIONS
vary: Origin
This causes issues with CORS, as displayed in the browser console:
Access to XMLHttpRequest at 'https://api.example.com/auth/login' from origin 'https://app.example.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: The value of the 'Access-Control-Allow-Credentials' header in the response is '' which must be 'true' when the request's credentials mode is 'include'. The credentials mode of requests initiated by the XMLHttpRequest is controlled by the withCredentials attribute.
Users can login without any issues on Cloudflare provided domain my-app.pages.dev, but whenever they try to login through the custom domain, they receive this error.
Another detail: the only difference between the preflight requests in the two cases is in the headers the browser sets. On app.example.com it sends:
origin: https://app.example.com
referer: https://app.example.com/login
sec-fetch-site: same-site
And on my-app.pages.dev:
origin: https://my-app.pages.dev
referer: https://my-app.pages.dev/login
sec-fetch-site: cross-site
I am using Flask-CORS with supports_credentials=True to handle CORS on the API, and axios with {withCredentials: true} to consume the API on the frontend.
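For reference, when a request is credentialed, the preflight response must echo the exact origin (a wildcard is not allowed) and send Access-Control-Allow-Credentials: true. A minimal sketch of that logic in plain Python, with a hypothetical allowlist and function name:

```python
# Hypothetical sketch of the headers a credentialed preflight response needs.
ALLOWED_ORIGINS = {"https://app.example.com", "https://my-app.pages.dev"}

def build_preflight_headers(origin):
    if origin not in ALLOWED_ORIGINS:
        return {}  # no CORS headers: the browser will block the request
    return {
        # Must be the specific origin, never '*', when credentials are used
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Credentials": "true",
        "Access-Control-Allow-Headers": "content-type",
        "Access-Control-Allow-Methods": "DELETE, GET, HEAD, OPTIONS, PATCH, POST, PUT",
        # Tell caches the response differs per requesting origin
        "Vary": "Origin",
    }
```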
Is this due to a Cloudflare policy that I'm not aware of? Does anyone have a clue?
I just solved this problem. It was due to the App Spec on DigitalOcean: I had a CORS-specific setting in the YAML file.
I changed
- cors:
    allow_headers:
      - '*'
    allow_methods:
      - GET
      - OPTIONS
      - POST
      - PUT
      - PATCH
      - DELETE
    allow_origins:
      - prefix: https://app.example.com # <== I removed this line
      - regex: https://*.example.com
      - regex: http://*.example.com
to
- cors:
    allow_headers:
      - '*'
    allow_methods:
      - GET
      - OPTIONS
      - POST
      - PUT
      - PATCH
      - DELETE
    allow_origins:
      - regex: https://*.example.com
      - regex: http://*.example.com
For reference, this is the cURL command I used to debug the problem:
curl -I -X OPTIONS https://api.example.com \
  -H 'origin: https://app.example.com' \
  -H 'access-control-request-method: GET'
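If curl isn't handy, the same preflight can be reproduced with Python's standard library (the URL and origin are the placeholders from above):

```python
import urllib.request

# Build the same preflight request the browser sends (mirrors the curl above)
req = urllib.request.Request(
    "https://api.example.com",
    method="OPTIONS",
    headers={
        "Origin": "https://app.example.com",
        "Access-Control-Request-Method": "GET",
    },
)

# urllib.request.urlopen(req) would send it; inspect the request here instead
print(req.get_method())          # OPTIONS
print(req.get_header("Origin"))  # https://app.example.com
```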
So it wasn't due to Cloudflare. Funnily enough, DigitalOcean App Platform traffic goes through Cloudflare by default, which added to my confusion.
Related
I have a custom origin that I am trying to cache locally. The default Cache-Control sends no-store, no-cache, must-revalidate, and the pre- and post-checks are set to 0. I actually need the opposite: I need the browser to store and cache, NOT revalidating until my max age (24h / 86400s) is hit. There is private data involved and I don't want users seeing other users' data. I set up a response header policy to override Cache-Control with "private, max-age=86400", but I only get 200 HTTP responses (I am looking for a 304), and in x-cache I keep getting "miss from cloudfront" as well.
Some info about my setup:
Protocol: Match Viewer
SSL: TLSv1.1 (Custom cert that AWS handles)
Methods: GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE
Unrestricted viewer access
Legacy Cache: Headers(None), Query strings (ALL), Cookies (None)
Object cache: Use origin headers
Response Policy: Custom cache-control "private, max-age=86400"
The only other behaviors are using CachingDisabled on folders I don't want caching. I know this is likely just a bad config issue but I have no idea what's causing it.
I'm using AWS lambdas and cloudfront to serve a SPA.
Now that my lambdas are setting a cookie, I want to include that cookie in the requests I make to the backend (the cookie is HttpOnly and Secure).
Using Axios I set the withCredentials option to true, and all my requests are now being rejected because of CORS.
The web app is being served from the main domain, while the backend lambdas are on the usual weird Lambda UUID URL. The lambdas are returning the proper headers, as you can see in the screenshot: access-control-allow-origin is set to the domain the web app is being served from, and access-control-allow-credentials is true. The screenshot is from the app without the withCredentials option activated, so I am 100% sure it is being triggered from the web app.
Everything is being served over https with a valid certificate (I want to test this also on localhost, but that is a different story)
This is the error I'm getting in the console. One weird thing is that it claims Access-Control-Allow-Credentials is set to '', which is not true:
Access to XMLHttpRequest at 'https://p3doiszvgg.execute-api.eu-central-1.amazonaws.com/dev/sessions'
from origin 'https://pento.danielo.es' has been blocked by CORS policy:
Response to preflight request doesn't pass access control check:
The value of the 'Access-Control-Allow-Credentials' header in the response is '' which must be 'true' when the request's credentials mode is 'include'. The credentials mode of requests initiated by the XMLHttpRequest is controlled by the withCredentials attribute.
Is there anything missing?
EDIT:
These are the headers that I'm sending. The catch is that they were captured without the withCredentials flag, because if I add that flag the only headers I can see are the provisional headers.
:authority: p3doiszvgg.execute-api.eu-central-1.amazonaws.com
:method: POST
:path: /dev/sessions
:scheme: https
accept: application/json, text/plain, */*
accept-encoding: gzip, deflate, br
accept-language: en-GB,en;q=0.9,es-ES;q=0.8,es;q=0.7,en-US;q=0.6
authorization: Bearer the.bearer.token
cache-control: no-cache
content-length: 58
content-type: application/json;charset=UTF-8
origin: https://pento.danielo.es
pragma: no-cache
referer: https://pento.danielo.es/
sec-fetch-dest: empty
sec-fetch-mode: cors
sec-fetch-site: cross-site
user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36
Here is a provisional headers screenshot:
The cookie sent by the server looks something like this:
Set-Cookie: refresh_token=uuid-string-with-letters-numbers; HttpOnly; Secure;
Finally I found the problem and a temporary solution (I'm not very happy with it).
The problem was not my lambda response, which was correct and included the required headers; the problem was with the preflight request. The browser sends a preflight request for almost every CORS request you make, and while that request was succeeding, its response was missing some headers. This can be very confusing, because the request that fails is your actual request (that is what the browser flags as failed), but the problem is in the preflight response.
To be fair, the error on the console was already pointing this out:
Response to preflight request doesn't pass access control check
But it is a bit buried, easy to miss, and the documentation about it is sparse.
The way I fixed it is by adding some extra props to the CORS definition of my serverless template:
authEcho:
  handler: src/users/me.handler
  events:
    - http:
        path: me
        method: get
        cors:
          origin: https://frontend.domain.es
          allowCredentials: true # <-- this is the key part
It is not clear in the Serverless documentation, but those values will be merged into the final response, so you don't need to specify everything or all the headers. The only thing I don't like is that I have to hardcode the origin here, while in the actual lambda responses I can calculate it dynamically.
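For what it's worth, the dynamic calculation used in the actual Lambda responses can look something like this; the allowlist and function names below are hypothetical, not part of the original setup:

```python
# Hypothetical sketch: reflect the caller's Origin back only when allowlisted,
# so Access-Control-Allow-Credentials can stay 'true' without a wildcard origin.
ALLOWED = {"https://frontend.domain.es"}

def cors_headers(event):
    headers = event.get("headers") or {}
    # Header casing varies between API Gateway payload formats
    origin = headers.get("origin") or headers.get("Origin") or ""
    if origin not in ALLOWED:
        return {}
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Credentials": "true",
    }

def handler(event, context):
    return {"statusCode": 200, "headers": cors_headers(event), "body": "{}"}
```

The preflight itself, however, is answered by API Gateway's generated OPTIONS handler, which is why the cors block in the template still needs the origin spelled out.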
I was trying to make my React app, running on localhost, talk to AWS. I have enabled CORS and the OPTIONS method on the API.
Chrome gives this error now
Cross-Origin Read Blocking (CORB) blocked cross-origin response https://xxxxxx.execute-api.us-east-2.amazonaws.com/default/xxxxxx with MIME type application/json. See https://www.chromestatus.com/feature/5629709824032768 for more details.
I inspected the network tab; the OPTIONS call is going through, and its response includes these headers:
access-control-allow-headers: Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token
access-control-allow-methods: DELETE,GET,HEAD,OPTIONS,PATCH,POST,PUT
access-control-allow-origin: *
How can I fix this CORB issue and get my first lambda function done?
I had to figure it out myself. I needed to do these two things to get it working:
1. Enable CORS on the Amazon API gateway for your API
This will create an OPTIONS http method handler and you can allow posts from your website by setting the right value for access-control-allow-origin header.
2. Make sure your POST method handler is sending the right headers in the response
import json
from botocore.vendored import requests

API_URL = "https://aladdin.mammoth.io/api/v1/user-registrations"

def lambda_handler(event, context):
    if event['httpMethod'] == 'POST':
        data = json.loads(event['body'])
        # YOUR CODE HERE
        return {
            'statusCode': 200,
            'body': json.dumps({}),
            'headers': {
                'access-control-allow-headers': 'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token',
                'access-control-allow-methods': 'DELETE,GET,HEAD,OPTIONS,PATCH,POST,PUT',
                'access-control-allow-origin': '*'
            }
        }

    return {
        'statusCode': 200,
        'body': json.dumps({})
    }
CORB is a Chromium-specific protection and is not directly related to your CORS setup on the AWS side.
Does your server return the headers required by CORB? That is, X-Content-Type-Options: nosniff and a correct Content-Type?
You can learn more about CORB on the Chromium web page at https://www.chromium.org/Home/chromium-security/corb-for-developers
I fixed this for image files by updating the Content-Type metadata under Properties in S3 - image/jpeg for JPEG files and image/png for PNG files.
My application uploads image files via multer-s3, and it seems it applies Content-Type: 'application/x-www-form-urlencoded'. multer-s3 has a contentType option with a content-type auto-detect feature; this should prevent improper headers and fix the CORB issue.
It seems the latest Chrome 76 update includes listening to remote file URL headers, specifically Content-Type. CORB was not an issue in other browsers such as Firefox, Safari, and in-app browsers, e.g. Instagram.
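If you set the Content-Type yourself rather than relying on multer-s3's auto-detect, Python's standard mimetypes module does the same extension-based guess (the S3 upload call itself is omitted here; the function name is made up):

```python
import mimetypes

def guess_content_type(filename):
    # Fall back to a generic binary type when the extension is unknown
    ctype, _encoding = mimetypes.guess_type(filename)
    return ctype or "application/octet-stream"

print(guess_content_type("photo.jpeg"))  # image/jpeg
print(guess_content_type("logo.png"))    # image/png
```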
I'm running parse server behind AWS CloudFront and I'm still trying to figure out what the best configuration would be. Currently I've configured the CloudFront behavior to:
Allowed HTTP Methods: GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE
Cached HTTP Methods: GET, HEAD (Cached by default)
Forward Headers: Whitelist
Accept-Language
Content-Type
Host
Origin
Referer
Object Caching: Customize:
Minimum TTL: 0
Maximum TTL: 31536000
Default TTL: 28800
Forward Cookies: All
My GET requests (using the parse REST API) seem to be cached as expected with this configuration. All requests that are made using the parse JS SDK seem to be called via POST and produce a 504 error in the browser console:
No 'Access-Control-Allow-Origin' header is present on the requested resource.
For some reason those requests are still fulfilled by the Parse Server, because e.g. saving objects still stores them in my MongoDB even though there's this Access-Control-Allow-Origin error.
The fix for this is not in CloudFront; it is on the Parse Server side.
Add the code below in the file /src/middlewares.js and CloudFront will no longer throw that exception.
var allowCrossDomain = function(req, res, next) {
  res.header('Access-Control-Allow-Origin', '*');
  res.header('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE,OPTIONS');
  res.header('Access-Control-Allow-Headers', 'X-Parse-Master-Key, X-Parse-REST-API-Key, X-Parse-Javascript-Key, X-Parse-Application-Id, X-Parse-Client-Version, X-Parse-Session-Token, X-Requested-With, X-Parse-Revocable-Session, Content-Type');
  next();
};
I am trying to build an app where users upload content on their browsers to an S3 bucket through CloudFront. I have enabled CORS on the S3 bucket and ensured that the AllowedOrigin is set to *. I can successfully push content from a browser to the S3 bucket directly so I know that CORS on S3 is configured correctly. Now, I am trying to do the same with browser -> CloudFront -> S3. CloudFront always rejects the pre-flight OPTIONS method request with a 403 forbidden response.
I have the following options enabled on CloudFront:
Allowed HTTP Methods: GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE
Whitelist Headers: Access-Control-Request-Headers, Access-Control-Request-Method, Origin
OPTIONS requests are disabled in the "Cached HTTP Methods"
CloudFront apparently now supports CORS, but has anyone got it working for an HTTP OPTIONS request? I tried asking this on the AWS forums but got no responses.
Have you tried adding a CNAME alias for your CloudFront domain?
After setting up the CNAME alias, you can set the cookies on the base domain, and then you will be able to pass your cookie.
Let me add more detail in case people want to know what the next step would be, using the following example:
You are developing on my.fancy.site.mydomain.com
Your Cloudfront CNAME alias is content.mydomain.com
Make sure you set your CloudFront signed cookies on .mydomain.com from your app
From this point on, you are able to pass the cookie to CloudFront.
One quick way to test whether your cookie is set appropriately: put your asset URL directly in the browser. If the cookie is set correctly, you will be able to access the file directly.
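The same domain-scoping rule can be checked offline with Python's standard cookie machinery; the cookie value below is made up for illustration:

```python
import http.cookiejar
import urllib.request

jar = http.cookiejar.CookieJar()
# A stand-in for a CloudFront signed cookie, scoped to the base domain
# (note the leading dot, as recommended above)
jar.set_cookie(http.cookiejar.Cookie(
    version=0, name="CloudFront-Policy", value="made-up-value",
    port=None, port_specified=False,
    domain=".mydomain.com", domain_specified=True, domain_initial_dot=True,
    path="/", path_specified=True, secure=True, expires=None,
    discard=True, comment=None, comment_url=None, rest={},
))

# A request to the CloudFront CNAME alias picks the cookie up automatically
req = urllib.request.Request("https://content.mydomain.com/video.mp4")
jar.add_cookie_header(req)
print(req.get_header("Cookie"))  # CloudFront-Policy=made-up-value
```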
If you are using JavaScript to fetch the CDN assets, make sure your code passes the withCredentials option, or it won't work. For example, with jQuery you will need something like the following:
$.ajax({
  url: a_cross_domain_url,
  xhrFields: {
    withCredentials: true
  }
});
And if the request is successful, you should get a response from CloudFront with the "Access-Control-blah-blah" headers.
Hope this helps anyone who finds this answer while searching.
I found a very similar issue: the CloudFront distribution was not forwarding the header information to S3. You can test this easily via:
curl -i -H "Origin: http://YOUR-SITE-URL" http://S3-or-CLOUDFRONT-URL | grep Access
If you have the same problem, you can see my solution here:
AWS S3 + CloudFront gives CORS errors when serving images from browser cache