HTTPS and HSTS headers issue (AWS)

In my scenario we currently have www3.qwerty.com routing through a few different paths. Could you please advise how we should correct this with a better approach, possibly just a redirect?
"The HTTP site redirects users to a new URL in a way that cannot be secured with HTTPS and HSTS headers. This leaves users open to man-in-the-middle attackers who can redirect them to a fraudulent/ spoofed version of the intended site.
“Site Does Not Enforce HTTPS” issue type for more information regarding man-in-the-middle scenarios."
From "
http://www3.qwerty.com/, 301, https://www.qwerty.com/
"
We don't need that domain, though, so it would be best to have it go directly to www.qwerty.com rather than through the reroutes; either a CNAME or a load balancer came to mind.
What is the best way to accomplish this?
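For context, if a load balancer were used, the redirect could live in a listener rule. A minimal sketch with boto3, assuming an ALB already terminates TLS for www3.qwerty.com (the listener ARN below is hypothetical):

```python
# pip install boto3
import boto3

elbv2 = boto3.client("elbv2")

# Replace the listener's default action with a permanent redirect to the
# canonical host, preserving path and query string.
elbv2.modify_listener(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/...",  # hypothetical
    DefaultActions=[{
        "Type": "redirect",
        "RedirectConfig": {
            "Protocol": "HTTPS",
            "Port": "443",
            "Host": "www.qwerty.com",
            "Path": "/#{path}",
            "Query": "#{query}",
            "StatusCode": "HTTP_301",
        },
    }],
)
```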

Related

Redirect to url from load balancer without CORS error

I was wondering if any of you know how to make a GCP load balancer redirect to a URL with "CORS enabled". What do I mean by that? Well, I have the following scenario:
One load balancer that has to redirect to other load balancers depending on the path of the URL (LB A)
"Simple" load balancer that has many backends attached (LB B, LB C, etc)
So my flow is as follows:
LB A (/pathB) -- redirect -> LB B
LB A (/pathC) -- redirect -> LB C
This works as expected when requested by a simple HTTP request (like cURL or Postman) but fails when requested from a website. Why? Because the preflight OPTIONS request is redirected, and that produces the CORS error "Redirect is not allowed for a preflight request"; and even if the OPTIONS request is skipped, a simple GET request will also get a redirect response without the CORS headers (which will fail).
Is this possible? If so, how can I achieve it? I tried to add a CORS policy on LB A, but an LB can't have a routeAction together with a urlRedirect.
Practically I just want to inject the CORS headers on the 301 Response to avoid the error.
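For reference, the failure is easy to reproduce outside the browser by sending the preflight by hand (the URLs below are stand-ins for my LBs):

```python
import requests

# Simulate the browser's preflight against LB A (hypothetical URL).
r = requests.options(
    "https://lb-a.example.com/pathB/resource",
    headers={
        "Origin": "https://my-frontend.example.com",
        "Access-Control-Request-Method": "GET",
    },
    allow_redirects=False,
)
print(r.status_code)  # 301: the preflight itself is redirected
print(r.headers.get("Access-Control-Allow-Origin"))  # None: no CORS headers on the redirect
```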
After a long time searching for a solution I finally got to a "working" conclusion.
If you want a Google load balancer to inject some headers (especially CORS headers) on a request that is redirected to another domain, you are out of luck (also mentioned here). Even with the new LB version it seems this is not possible at the moment (I am sure this feature will ship at some point). Maybe there is a way of doing it, but neither the docs nor the API seem to tell you how.
So you just "can't" do this? Well, no: the classic load balancer has something called an Internet NEG, which is like having "external backends" you can point to. So I created several NEGs to meet my needs and attached them to the LB as backends. I accomplished my example as follows:
If LB B has the domain lb-b.com and LB C has lb-c.com
Create 2 global NEGs (Internet NEGs) with the full domain names lb-b.com and lb-c.com respectively
Then create 2 backend services on LB A, each one associated with one of the NEGs.
Finally, select Advanced host and path rule and create one path rule for each LB; for example, for LB B:
Create a path rule of /pathB/*
Select the option Route traffic to a single backend
Use a path prefix of / (if you want to "remove" the pathB prefix on the forward)
Select the backend service you previously created corresponding to lb-b.com
Sharing this information, with documentation and guidance, as a workaround for handling and enabling CORS.
Another thing that I can think of for this scenario is using a Cloud Function for the redirects and CORS headers. You can use this link as guidance for this scenario. Let me also share another link where you can see different code samples.
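To illustrate the idea (this is a sketch of my own, not the linked sample), a Cloud Function using functions-framework could answer preflights directly and attach CORS headers to the 301s; the path-to-LB mapping is taken from the question:

```python
# pip install functions-framework
import functions_framework

TARGETS = {"/pathB": "https://lb-b.com", "/pathC": "https://lb-c.com"}

CORS_HEADERS = {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type, Authorization",
}

@functions_framework.http
def redirect_with_cors(request):
    # Answer preflights here so they are never redirected.
    if request.method == "OPTIONS":
        return ("", 204, CORS_HEADERS)
    for prefix, target in TARGETS.items():
        if request.path.startswith(prefix):
            location = target + request.path[len(prefix):]
            # The 301 now carries CORS headers, which the plain LB could not inject.
            return ("", 301, {**CORS_HEADERS, "Location": location})
    return ("Not found", 404, CORS_HEADERS)
```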
You can also use a reverse proxy in front of the load balancer to handle the CORS headers and the redirected requests; Apache or Nginx works for this. For this scenario we can follow this link as guidance.
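As a rough stand-in for what such a proxy does (a real deployment would use the Apache or Nginx configuration from the linked guide), here is a minimal Flask sketch; the upstream domain is hypothetical:

```python
# pip install flask requests
from flask import Flask, Response, request
import requests

app = Flask(__name__)
UPSTREAM = "https://lb-b.com"  # hypothetical backend behind the proxy

@app.route("/", defaults={"path": ""}, methods=["GET", "POST", "OPTIONS"])
@app.route("/<path:path>", methods=["GET", "POST", "OPTIONS"])
def proxy(path):
    if request.method == "OPTIONS":
        resp = Response(status=204)  # answer preflights locally
    else:
        upstream = requests.request(
            request.method,
            f"{UPSTREAM}/{path}",
            headers={k: v for k, v in request.headers if k.lower() != "host"},
            data=request.get_data(),
            allow_redirects=False,  # pass redirects through rather than following them
        )
        resp = Response(upstream.content, status=upstream.status_code)
        if "Location" in upstream.headers:
            resp.headers["Location"] = upstream.headers["Location"]
    # The whole point: every response, including 301s, carries CORS headers.
    resp.headers["Access-Control-Allow-Origin"] = "*"
    return resp
```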

AWS API Gateway Custom Domain not passing the user-agent

I have a custom domain example.com that redirects to my API Gateway domain api-example.com, but it doesn't seem to pass the user-agent field; all my user-agent values are AmazonAPIGateway_5rfp2g9h9b.
If I call api-example.com directly it works fine, but if I call example.com, it doesn't.
Any idea on how I could pass the correct user-agent HTTP Header?
Thanks
It’s not clear what you mean by redirect or by the domains you have listed. So you have two custom domains? If so, how did you set that up: CloudFront with a custom origin? What type of integration request do you have? Is this a REST or HTTP API? This is probably why you are getting downvoted: there is little detail and the domains don't quite make sense.
Either way, in your API make sure you have the user-agent field defined where it is applicable:
In the Request part of your API, make sure your integration request forwards this header
Likewise, if you are using CloudFront, make sure it forwards the user-agent header and that the header is whitelisted
Note this header comes from your web browser, and the SDK being used sometimes sets it too. So if you don't set this header for whatever reason, that could be a problem. It is also unclear whether calling from one domain means a hosted website while the other means a request from Postman, etc.
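One quick way to see which value actually reaches the integration is to call both hostnames with an explicit User-Agent and compare. The /debug route below is hypothetical; it assumes you expose an endpoint that echoes the headers API Gateway forwarded:

```python
import requests

UA = "my-test-client/1.0"

# Both hostnames are from the question; /debug is a hypothetical echo endpoint.
for base in ("https://api-example.com", "https://example.com"):
    r = requests.get(f"{base}/debug", headers={"User-Agent": UA}, timeout=10)
    print(base, "->", r.json().get("user-agent"))
# If the first prints my-test-client/1.0 and the second prints
# AmazonAPIGateway_..., the header is being replaced on the example.com path.
```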
Short answer: Validate the contents of your header
Ref the AWS user-agent redirect documentation, as quoted below.
Redirects and HTTP user-agents:
..Programs that use the Amazon S3 REST API should handle redirects either at the application layer or the HTTP layer. Many HTTP client libraries and user agents can be configured to correctly handle redirects automatically; however, many others have incorrect or incomplete redirect implementations.
Before you rely on a library to fulfill the redirect requirement, test the following cases:
Verify all HTTP request headers are correctly included in the redirected request (the second request after receiving a redirect) including HTTP standards such as Authorization and Date.
Verify non-GET redirects, such as PUT and DELETE, work correctly.
Verify large PUT requests follow redirects correctly.
Verify PUT requests follow redirects correctly if the 100-continue response takes a long time to arrive.
HTTP user-agents that strictly conform to RFC 2616 might require explicit confirmation before following a redirect when the HTTP request method is not GET or HEAD. It is generally safe to follow redirects generated by Amazon S3 automatically, as the system will issue redirects only to hosts within the amazonaws.com domain and the effect of the redirected request will be the same as that of the original request...
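A sketch of how such a test might look with Python's requests (bucket, key, and credentials are placeholders); note that requests itself drops the Authorization header when a redirect crosses hosts, which is exactly the class of behavior the docs tell you to verify:

```python
import requests

# Placeholder request that we expect to be redirected.
r = requests.put(
    "https://example-bucket.s3.amazonaws.com/example-key",
    data=b"payload",
    headers={"Authorization": "AWS placeholder", "Date": "placeholder"},
)

# Inspect every hop, including the final response.
for hop in r.history + [r]:
    print(hop.status_code, hop.request.method, hop.request.url)
    print("  Authorization forwarded:", "Authorization" in hop.request.headers)
```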
Optional/additional help: I was trying to understand your description; if you're going across domains, that's CORS.
Please consider CORS, which you seem to be missing; please see the configuration here.
Also very important: enabling CORS support for a resource and its methods does not recursively enable it for child resources and their methods.
If you want to set up your custom header for user-agent:
Set up CORS in the console
From the console, under the resources, enable CORS.
Set up your headers
As a last step you have to redeploy to a stage for the settings to take effect!
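For reference, the console steps above correspond roughly to these boto3 calls (the API and resource IDs are hypothetical):

```python
import boto3

apigw = boto3.client("apigateway")
API_ID, RESOURCE_ID = "a1b2c3d4e5", "abc123"  # hypothetical IDs

# 1. Declare the CORS header on the method response...
apigw.put_method_response(
    restApiId=API_ID, resourceId=RESOURCE_ID, httpMethod="OPTIONS",
    statusCode="200",
    responseParameters={"method.response.header.Access-Control-Allow-Origin": True},
)
# 2. ...map an actual value to it on the integration response...
apigw.put_integration_response(
    restApiId=API_ID, resourceId=RESOURCE_ID, httpMethod="OPTIONS",
    statusCode="200",
    responseParameters={"method.response.header.Access-Control-Allow-Origin": "'*'"},
)
# 3. ...and redeploy, or none of the above takes effect.
apigw.create_deployment(restApiId=API_ID, stageName="prod")
```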

Naked domain and http to https redirects

Hope you're all doing well!
I have a question I'm hoping to get some help with. I have a static site served through S3 with CloudFront distributions in front.
My main site is served on www.xyz.xyz, and the CloudFront distribution connected to it has a behavior that redirects HTTP to HTTPS.
Then I also want people to be able to access http://xyz.xyz, so I have created another bucket for the naked domain, with a redirect policy to www.xyz.xyz using HTTP as the protocol. In the CloudFront distribution connected to this, the origin is the direct S3 website endpoint, not the bucket.
In the end this ensures all visitors end up at https://www.xyz.xyz. However, when running Google Lighthouse for an SEO check, if I enter http://xyz.xyz it goes through 2 redirects, one to HTTPS and one to www, and according to Lighthouse this has some negative effects, both in time to serve and in SEO.
Am I doing something wrong? I hope you can help me. I really thought it was simpler, also with all the buckets and such :-)
I noticed that in AWS Amplify you need to set up redirects/rewrites, but I guess in S3 + CloudFront terms that's what I'm already doing.
Best,
To maintain compatibility with HSTS, you must perform your redirection in two steps: the first redirect should upgrade the request to HTTPS, and the second can canonicalize the domain (add or remove www). So this behavior is desirable.
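You can watch the two hops yourself; a quick sketch with Python's requests, using the domains from the question:

```python
import requests

# Follow the chain from the naked HTTP URL and print each hop.
r = requests.get("http://xyz.xyz/", allow_redirects=True)
for hop in r.history + [r]:
    print(hop.status_code, hop.url)
# Expected, per the two-step scheme above:
#   301 http://xyz.xyz/       (upgrade to HTTPS first)
#   301 https://xyz.xyz/      (then canonicalize to www)
#   200 https://www.xyz.xyz/
```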

Third party code on subdomain

As the owner of domain example.com with a lot of content, what security risks arise from providing a subdomain to a third-party company? We don't want to share any of the content, and the third-party company would have complete control over the application and the machine hosting the subdomain site.
I'm concerned mainly about:
Shared cookies
We have cookies on .example.com, so they will also be sent in requests to the subdomain. Is it possible for us to point an A record to a reverse proxy where we strip the cookies and send the request to the third-party provider without them?
Content loading from main domain
Is it possible to set document.domain to example.com and do an XMLHttpRequest to example.com?
Cross site scripting
I guess that it would be no problem because of the same-origin policy. Is a subdomain treated as a separate domain?
Any other security issues?
We have cookies on .example.com, so they will also be sent in requests to the subdomain. Is it possible for us to point an A record to a reverse proxy where we strip the cookies and send the request to the third-party provider without them?
Great idea; you could do this, yes. However, you will also need to set the HttpOnly flag, otherwise the third party would be able to retrieve the cookies with JavaScript.
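A minimal sketch of such a cookie-stripping proxy (Flask + requests; the third-party origin is hypothetical):

```python
# pip install flask requests
from flask import Flask, Response, request
import requests

app = Flask(__name__)
THIRD_PARTY = "https://app.thirdparty.example"  # hypothetical origin

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def proxy(path):
    # Drop Cookie (and Host) before forwarding, so the .example.com
    # cookies never reach the third party.
    headers = {k: v for k, v in request.headers
               if k.lower() not in ("cookie", "host")}
    upstream = requests.get(f"{THIRD_PARTY}/{path}", params=request.args,
                            headers=headers, allow_redirects=False)
    return Response(upstream.content, status=upstream.status_code)
```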
Is it possible to set document.domain to example.com and do an XMLHttpRequest to example.com?
No, subdomains for Ajax are treated as a different Origin. See this answer.
I guess that it would be no problem because of the same-origin policy. Is a subdomain treated as a separate domain?
JavaScript code on the subdomains could interact with each other, but only with the cooperation of your site: you would also need to set document.domain = 'example.com'; on your side. If you do not do this, you are secure against this threat.
See here:
When using document.domain to allow a subdomain to access its parent securely, you need to set document.domain to the same value in both the parent domain and the subdomain. This is necessary even if doing so is simply setting the parent domain back to its original value. Failure to do this may result in permission errors.
Any other security issues?
You need to be aware of cookie poisoning. If evil.example.com sets a non-host-only cookie at .example.com that your domain believes it has set itself, then the evil cookie may be used for your site.
For example, if you display the contents of the cookie as HTML, this may introduce XSS. Also, if you're using the double-submit cookie CSRF prevention method, an evil domain may be able to set its own cookie value to achieve CSRF. See this answer.
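One mitigation for that last case, sketched here as my own suggestion rather than something from the linked answer: HMAC-sign the double-submit cookie with a server-side secret, so a cookie planted by a sibling subdomain fails validation:

```python
import hashlib, hmac, secrets

SECRET = b"server-side-secret"  # hypothetical; load from config in practice

def issue_csrf_token() -> str:
    value = secrets.token_hex(16)
    sig = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}.{sig}"

def verify_csrf_token(token: str) -> bool:
    try:
        value, sig = token.rsplit(".", 1)
    except ValueError:
        return False
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    # evil.example.com cannot forge the signature without SECRET.
    return hmac.compare_digest(sig, expected)
```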

https vs signed url with Cloudfront

I know this is an apples-and-oranges question, but I'd like to understand the pros and cons of using HTTPS and signed URLs with AWS CloudFront. Might people please comment on and add to this list?
HTTPS
PROS
Security: HTTPS is more secure than HTTP. Though I'm not sure what this means, because if you can't trust that the URL is actually from Amazon, who can you trust?
Preserve your application's status quo: your site is already fully HTTPS for another reason, for example because you handle credit cards. Using HTTPS for CloudFront avoids alerting the user that you are serving insecure content, i.e., the dreaded "yellow" indicator symbol. Could this also be a con if your site is fully HTTP (honest question)?
Degree of difficulty: 0/10. Just change http to https in your URL; it works either way out of the box. On the other hand, if you want to use your own CNAME with HTTPS, this seems significantly more confusing, 7/10, though I haven't tried it due to con #1 below...
CONS
Cost: $600/month(!) to use HTTPS with your own CNAME, e.g., images.mysite.com instead of blah123.cloudfront.com. On the other hand, my understanding is that using CNAMEs with HTTP is free?
SIGNED URLS
PROS
REAL security: signed URLs would seem to be the most commonly needed method to control who has access to your site's content. You can control things like the user's IP address and the duration of their access.
Cost: none
CONS
Degree of difficulty: 9/10. Creating signed URLs is relatively confusing. There's lots of terminology to learn, and possibly some libraries not part of the AWS SDK that you'll need to track down.
HTTPS helps secure data in transit, which is helpful if you are already using SSL for access to your application. As for the CNAME issue, most people are likely not going to realize that your images and other static content are being delivered from cloudfront.net instead of yourdomain.com.
Signing URLs only helps control who can access a given file and how long they can access it for. You may use this for delivering digital purchases or other private files to logged-in users. You also lose some of the caching benefit of CloudFront.
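For anyone weighing that 9/10: with botocore's CloudFrontSigner the signing itself is fairly compact. A sketch (the key-pair ID, key path, and URL are placeholders):

```python
# pip install boto3 cryptography
import datetime
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    # Sign with the private key that matches your CloudFront key pair.
    with open("private_key.pem", "rb") as f:  # placeholder path
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)  # placeholder key-pair ID
url = signer.generate_presigned_url(
    "https://blah123.cloudfront.net/private/file.jpg",
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(url)
```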