I'm using Google Cloud Load Balancing service, and want to enable CORS for all subdomains.
For example, I want to be able to run an XHR request from
https://sub.mywebsite.example to https://www.mywebsite.example
Typically I would do the following, but it does not work:
As mentioned by @derpirscher, you must either specify * as the Access-Control-Allow-Origin header value or the exact protocol://host:port.
In your use case the response to the CORS request is missing the required Access-Control-Allow-Origin header, which is used to determine whether or not the resource can be accessed by content operating within the current origin.
You can also configure a site to allow any site to access it by using the * wildcard. You should only use this for public APIs. Private APIs should never use *, and should instead have a specific domain or domains set. In addition, the wildcard only works for requests made with the crossorigin attribute set to anonymous, and it prevents sending credentials like cookies in requests.
Access-Control-Allow-Origin: *
Ensure that the request has an Origin header and that the header value matches at least one of the Origins values in the CORS configuration. Note that the scheme, host, and port of the values must match exactly. Some examples of acceptable matches are as follows:
http://origin.example.com matches http://origin.example.com:80 (because 80 is the default HTTP port), but does not match https://origin.example.com, http://origin.example.com:8080, http://origin.example.com:5151, or http://sub.origin.example.com.
https://example.com:443 matches https://example.com but not http://example.com or http://example.com:443.
http://localhost:8080 only matches exactly http://localhost:8080, not http://localhost:5555 or http://localhost.example.com:8080.
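Because Access-Control-Allow-Origin accepts only a single origin or the * wildcard, a common way to allow every subdomain is to have the backend echo back the request's Origin header when it matches your domain. A minimal sketch, assuming an Express backend behind the load balancer (the domain and middleware here are illustrative, not a built-in load balancer feature):

import express from "express";

const app = express();
// Matches https://mywebsite.example and any single-level subdomain of it.
const allowedOrigin = /^https:\/\/([a-z0-9-]+\.)?mywebsite\.example$/;

app.use((req, res, next) => {
  const origin = req.headers.origin;
  if (origin && allowedOrigin.test(origin)) {
    // Echo the exact origin rather than "*" so credentialed requests still work.
    res.setHeader("Access-Control-Allow-Origin", origin);
    // Tell caches that the response differs per Origin.
    res.setHeader("Vary", "Origin");
  }
  next();
});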
Note: My site is in production mode, not testing. It is pending verification because I added an icon; this issue existed before the verification was started.
Whenever my browser makes a request to Google for the one-tap widget or the pill, both requests return 400 Bad Request with an empty HTML page and the console is sent a message stating "The given origin is not allowed for the given client ID." I've gone onto the Google Cloud Console and checked my origins. I have only one listed, and it's the exact site I'm sending requests from my browser. My site also has its traffic proxied through Cloudflare if that makes a difference. In addition, I am using JavaScript callbacks (which work when used in PI#1).
Potential issue #1: The URLs are typed in wrong
When I insert localhost (I add both https and http entries because I test locally with an HTTPS web server using a Cloudflare origin certificate), the requests go through perfectly. However, the moment the requests come from my browser when it's not localhost, they fail. I've copied and pasted straight from the URL bar just to make sure there are no typos, but I get the same results.
Potential issue #2: The widget is making bad requests
I did open the URLs in other tabs (which yield the same results as PI#1) and insert bogus URLs like example.com and thisisnotaurl.com to ensure it's not just dropping every request. Those requests return 403 Forbidden instead of 400 Bad Request.
Potential issue #3: The issue is browser specific
I've checked this issue on both Firefox and Microsoft Edge, both on the stable branches and completely up to date. I've disabled my ad blocking (uBlock Origin and Firefox's built-in protection) to ensure it isn't interfering with requests, but the crucial requests still fail with 400 Bad Request. I have yet to test other browsers as I don't have them installed, but I assume they would yield the same results.
An example of the code can be found here: https://gist.github.com/Coder-Tavi/772ea25b16f3fa0b6b0e04739a1689dd.
The origins shown below are the exact website I am accessing. In addition, I've verified the client IDs are exactly the same as the ones I have added.
Referrer Policy is improperly configured
The HTTP header Referrer-Policy controls how much data is sent to servers regarding the origin of a request. In most cases, this is set to same-origin, which means the Referer header (carrying the origin) is only sent to servers in the same origin.
Suppose you have a web server at example.com with a Referrer-Policy of same-origin. When a page on example.com sends a request to another page on example.com, the Referer header will contain the origin, since it is the same origin. However, if example.com sends a request to google.com, the Referer header will not send any origin data, as google.com and example.com are not the same origin.
If we look at the requests, this is the directive we see.
As such, we need to update the directive to allow the browser to send the origin in the Referer header. This can be done by inserting the following into the HTML of the current page.
<meta name="referrer" content="origin">
This meta tag will allow the browser to send the origin only to other webservers, and as such, Google will see the origin.
Consider the example above again. This time, when example.com sends a request to google.com, the request will contain a Referer header with the origin, because the policy allows sharing the origin. However, with this policy only the origin is sent, not the query parameters and other parts of the URL: given the URL https://example.com/test/123, google.com will only see https://example.com. The MDN Web Docs list all the possible values and their effects.
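If you control the server, the same policy can also be delivered as an HTTP response header instead of a meta tag:

Referrer-Policy: origin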
I have a custom domain example.com that redirects to my API Gateway domain api-example.com, but it doesn't seem to pass the User-Agent field; all my User-Agent values are AmazonAPIGateway_5rfp2g9h9b.
If I call api-example.com directly, it works fine, but if I call example.com, it doesn't.
Any idea how I could pass the correct User-Agent HTTP header?
Thanks
It’s not clear what you mean by redirect, or how the domains you listed relate. Do you have two custom domains? If so, how did you set that up: CloudFront with a custom origin? And what type of integration request do you have? Is this a REST or HTTP API? That is probably why you are getting downvoted: there isn't much detail, and the domains don't quite make sense.
Either way, in your API make sure you have the user-agent field defined where it is applicable:
- In the Request part of your API, make sure your integration request is forwarding this header.
- Likewise, if you are using CloudFront, make sure it forwards the 'user-agent' header and that the header is whitelisted.
Note that this header comes from your web browser, or sometimes the SDK being used sets it too. So if you don't set this header for whatever reason, that could be the problem. It also isn't clear whether calling from one domain means a hosted website while the other call is made from Postman, etc.
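To rule out the client side, you can send an explicit User-Agent yourself and check what arrives at the integration; a quick check with curl (URL and agent string are placeholders):

curl -H "User-Agent: my-test-client/1.0" https://example.com/some-resource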
Short answer: Validate the contents of your header
For reference, see the AWS docs on user agents and redirects here, as quoted below.
Redirects and HTTP user-agents:
..Programs that use the Amazon S3 REST API should handle redirects either at the application layer or the HTTP layer. Many HTTP client libraries and user agents can be configured to correctly handle redirects automatically; however, many others have incorrect or incomplete redirect implementations.
Before you rely on a library to fulfill the redirect requirement, test the following cases:
Verify all HTTP request headers are correctly included in the redirected request (the second request after receiving a redirect) including HTTP standards such as Authorization and Date.
Verify non-GET redirects, such as PUT and DELETE, work correctly.
Verify large PUT requests follow redirects correctly.
Verify PUT requests follow redirects correctly if the 100-continue response takes a long time to arrive.
HTTP user-agents that strictly conform to RFC 2616 might require explicit confirmation before following a redirect when the HTTP request method is not GET or HEAD. It is generally safe to follow redirects generated by Amazon S3 automatically, as the system will issue redirects only to hosts within the amazonaws.com domain and the effect of the redirected request will be the same as that of the original request...
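If your client library turns out to have an incomplete redirect implementation, handling the redirect at the application layer is an option. A minimal sketch using Node 18+ fetch (the names and single-hop logic are illustrative, not the AWS SDK's behavior):

// Follow one redirect manually, re-sending the Authorization header
// explicitly, since clients don't always carry headers across hops.
async function getWithRedirect(url: string, auth: string): Promise<Response> {
  const first = await fetch(url, { redirect: "manual", headers: { Authorization: auth } });
  if (first.status >= 300 && first.status < 400) {
    const location = first.headers.get("location");
    if (location) {
      // Resolve a possibly relative Location against the original URL.
      return fetch(new URL(location, url), { headers: { Authorization: auth } });
    }
  }
  return first;
}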
Optional/additional help: I was trying to understand your description, and if you're going across domains, that's CORS. Please consider CORS, which you seem to be missing; please see the configuration here.
Also very important: enabling CORS support for a resource and its methods does not recursively enable it for child resources and their methods.
If you want to set up your custom header for user-agent:
Setup CORS in the console
From the console, under the resource, enable CORS.
Setup your Headers
As a last step, you have to redeploy to a stage for the settings to take effect!
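For reference, once CORS is enabled and the stage is redeployed, the preflight (OPTIONS) response should carry headers along these lines (values are illustrative):

Access-Control-Allow-Origin: https://example.com
Access-Control-Allow-Headers: Content-Type,Authorization,User-Agent
Access-Control-Allow-Methods: GET,POST,OPTIONS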
I followed the guide to set up an external global, App Engine-based load balancer. I linked it to Google's CDN by ticking the little box in the LB configuration settings.
Now, when I load my domain name, it says CANNOT GET /. The request returns a 404, along with some CSP error messages: The page’s settings blocked the loading of a resource at inline (“default-src”).
It was working well before adding the CDN. So, I'm assuming my app server configuration is fine.
In the Load Balancer details, there is a little chart under the Monitoring section, with how traffic flows.
It shows traffic coming from the 3 global regions, going to the frontend of the LB, then to / (unknown) and / (unmatched) as URL Rule, then to the backend service I defined, and finally to a backend instance labelled NO_BACKEND_SELECTED.
I'm guessing the issue comes from either the URL rule or the backend instance, but there is little in the docs to troubleshoot with.
I followed the docs to set up the LB. Settings are pretty simple using App Engine, so there is little room for wrongdoing, but I may have missed something still.
In the 'create serverless NEG' step, I did select App Engine, with default as the service name (although I'm not sure what default actually means).
Any idea what's missing?
EDIT:
So, in the load balancing menu, I go to the 'Backends' section at the top and select my backend. Here I have the list of 'General properties' of my backend, except under 'Backends' it says the following: 'Backends contain instance groups of VMs or network endpoint groups. This backend service has no backends yet', followed by an edit link.
From there, I can click the edit link, which redirects me to the 'Backend service edit' menu. I DO have a backend selected in there. I did create a serverless NEG using App Engine.
So, what's missing ? Is there anything wrong with Google's serverless backend ?
I want to help you with the issue you are facing.
If the responses from your external backend are not cached by Cloud CDN, ensure that:
- You have enabled Cloud CDN on the backend service containing the NEG that points to your external backend, by setting enableCDN to true (DONE, as per your description).
- Responses served by your external backend meet Cloud CDN caching requirements. For example, you are sending Cache-Control: public, max-age=3600 response headers from the origin.
The current implementation of Cloud CDN stores responses in cache if all of the following are true:
- Served by: a backend service, backend bucket, or an external backend with Cloud CDN enabled.
- In response to: a GET request.
- Status code: 200, 203, 204, 206, 300, 301, 302, 307, 308, 404, 405, 410, 421, 451, or 501.
- Freshness: the response has a Cache-Control header with a max-age or s-maxage directive, or an Expires header with a timestamp in the future. For cacheable responses without an age (for example, with no-cache), the public directive must be explicitly provided. With the CACHE_ALL_STATIC cache mode, if no freshness directives are present, a successful response with a static content type is still eligible for caching. With the FORCE_CACHE_ALL cache mode, any successful response is eligible for caching. If negative caching is enabled and the status code matches one for which negative caching specifies a TTL, the response is eligible for caching, even without explicit freshness directives.
- Content: contains a valid Content-Length, Content-Range, or Transfer-Encoding: chunked header. For example, a Content-Length header that correctly matches the size of the response.
- Size: less than or equal to the maximum size. For responses with sizes between 10 MB and 5 TB, see the additional cacheability constraints described in byte range requests.
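Putting those requirements together, a minimal example of an origin response that Cloud CDN would consider cacheable (values are illustrative):

HTTP/1.1 200 OK
Cache-Control: public, max-age=3600
Content-Type: text/css
Content-Length: 4096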
Please validate the URL Mapping too:
This is an example for reference; adapt it to your project.
Create a YAML file /tmp/http-lb.yaml, making sure to substitute PROJECT_ID with your project ID.
When a user requests path /*, the path gets rewritten in the backend to the actual location of the content, which is /love-to-fetch/*.
defaultService: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendBuckets/cats
hostRules:
- hosts:
  - '*'
  pathMatcher: path-matcher-1
name: http-lb
pathMatchers:
- defaultService: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendBuckets/cats
  name: path-matcher-1
  pathRules:
  - paths:
    - /*
    routeAction:
      urlRewrite:
        pathPrefixRewrite: /love-to-fetch/
    service: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendBuckets/dogs
tests:
- description: Test routing to backend bucket, dogs
  host: example.com
  path: /love-to-fetch/test
  service: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendBuckets/dogs
Validate the URL map.
gcloud compute url-maps validate --source /tmp/http-lb.yaml
If the tests pass and the command outputs a success message, save the changes to the URL map.
Update the URL map.
gcloud compute url-maps import http-lb \
    --source /tmp/http-lb.yaml \
    --global
For more details, see Using URL maps.
As the owner of the domain example.com, which hosts a lot of content, what security risks arise from providing a subdomain to a third-party company? We don't want to share any of our content, and the third-party company would have complete control over the application and the machine hosting the subdomain site.
I'm concerned mainly about:
Shared cookies
We have cookies on .example.com, so they will also be sent in requests to the subdomain. Is it possible for us to point the A record to a reverse proxy where we strip the cookies and forward the request to the third-party provider without them?
Content loading from main domain
Is it possible to set document.domain to example.com and make an XMLHttpRequest to example.com?
Cross site scripting
I guess that this would be no problem because of the same-origin policy. Is a subdomain treated as a separate domain?
Any other security issues?
We have cookies on .example.com, so they will also be sent in requests to the subdomain. Is it possible for us to point the A record to a reverse proxy where we strip the cookies and forward the request to the third-party provider without them?
Great idea; you could do this, yes. However, you will also need to set the HttpOnly flag, otherwise they would be able to retrieve the cookies with JavaScript.
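As a minimal sketch of that idea, assuming HAProxy as the reverse proxy, stripping cookies in both directions is two lines in the relevant frontend or backend section:

http-request del-header Cookie
http-response del-header Set-Cookie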
Is it possible to set document.domain to example.com and make an XMLHttpRequest to example.com?
No, subdomains for Ajax are treated as a different Origin. See this answer.
I guess that this would be no problem because of the same-origin policy. Is a subdomain treated as a separate domain?
JavaScript code on the subdomain could interact with yours, but only with the cooperation of your site: you would also need to set document.domain = 'example.com'; on your own pages. If you do not do this, you are secure against this threat.
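In concrete terms, both pages would have to opt in before any cross-subdomain scripting is possible (note that document.domain is deprecated in modern browsers):

// Run on pages at BOTH https://example.com and https://sub.example.com:
document.domain = "example.com";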
See here:
When using document.domain to allow a subdomain to access its parent securely, you need to set document.domain to the same value in both the parent domain and the subdomain. This is necessary even if doing so is simply setting the parent domain back to its original value. Failure to do this may result in permission errors.
Any other security issues?
You need to be aware of cookie poisoning. If evil.example.com sets a non-host-only cookie on .example.com that your domain believes it has set itself, then the evil cookie may be used on your site.
For example, if you display the contents of the cookie as HTML, then this may introduce XSS. Also, if you're using the double-submit cookies CSRF prevention method, an evil domain may be able to set its own cookie value to achieve CSRF. See this answer.
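For instance, a page on evil.example.com could set a domain-wide cookie that your application cannot distinguish from one it set itself; the cookie name here is a hypothetical double-submit token:

Set-Cookie: csrftoken=attacker-chosen-value; Domain=.example.com; Path=/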
I'm trying to make CloudFront work in my solution. I'm using Route 53 + CloudFront + ELB.
Consider the following:
1. Route 53 is pointing to CloudFront through a record set alias.
2. CloudFront is pointing to the ELB through an origin domain name.
3. CloudFront has an Alternate Domain Name set to my custom domain (mysite.com)
If I make a request using the CloudFront domain name (d1ngxxxx.cloudfront.net) or the custom domain (mysite.com), the initial request goes to CloudFront which responds with a HTTP 302. All the subsequent requests (for resources like images, css, js..) are made directly to the ELB domain name bypassing CloudFront.
What should I do to make all requests go through CloudFront?
Thanks in advance!
I can't come up with a circumstance where CloudFront would issue these redirects.
It seems likely that your server itself is issuing the 302 redirect, because it doesn't like the Host: header it's getting from CloudFront.
Host: CloudFront sets the value to the domain name of the origin that is associated with the requested object.
— http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorCustomOrigin.html
CloudFront then returns the redirect to the browser.
CloudFront can also cache such a redirect, so be mindful of that as you're troubleshooting. The response headers should indicate whether CloudFront went to the origin for the particular response:
X-Cache: Miss from cloudfront
...or whether CloudFront served the request from cache.
X-Cache: Hit from cloudfront
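You can watch both of those signals while troubleshooting; for example, a quick probe with curl against the distribution (domain taken from the question):

curl -sI https://d1ngxxxx.cloudfront.net/ | grep -iE "x-cache|location"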
Two possible approaches to resolve this:
If your legacy code is reacting to the Host: header in a negative way, you might be able to reconfigure the web server to modify that value before the code sees it, so the redirect wouldn't occur.
Alternatively, you could use something outboard: a reverse-proxying engine like Varnish or HAProxy (which I have touched on elsewhere). In HAProxy, for a simple example:
reqirep ^Host:\ .* Host:\ expected-domain.example.com if { hdr(host) -i unexpected-domain.example.com }
A rule in a form similar to this would replace the Host: unexpected-domain.example.com header with Host: expected-domain.example.com in all incoming requests where that header is present, which should keep your legacy code happy and avoid the redirects. Running HAProxy in front of your legacy system doesn't impose a significant load, since the code is very tight. All of my legacy web systems are now fronted with it, giving me the ability to manipulate and modify behavior much more easily than might otherwise be possible.
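If you're on HAProxy 2.x, where reqirep has been removed, a sketch of the equivalent rule (same placeholder domains) would be:

http-request set-header Host expected-domain.example.com if { hdr(host) -i unexpected-domain.example.com }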