I have set up nopCommerce 4.10 (.NET Core) on an AWS EC2 instance.
I have also set up the CloudFront CDN for it, using a load balancer.
The main purpose of moving to the cloud and a CDN was to improve page speed for the client (the client asked for this).
Page speed has not improved since, and the report shows that the cache policy in the image headers is not effective.
To fix this I need to set Cache-Control in the response header.
I checked that the original image has this value, but the load balancer and the CDN don't return it for images.
Please let me know how to set the Cache-Control header for the CloudFront CDN.
The Cache-Control header should come from the CloudFront origin
(the application behind CloudFront).
The Cache-Control header will then be used by:
CloudFront, to cache objects in edge locations
the user's browser, to cache objects directly in the browser
In the case of an image, proper cache headers can be set where the image is stored: the S3 bucket, the Apache config, etc.
CloudFront does not strip cache headers coming from the origin, but your load balancer could. Open the image via the origin URL to make sure the headers are there.
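A quick way to see where the header is being lost is to request the same image at each hop and compare. The hostnames and image path below are placeholders, not values from your setup:

```shell
# Fetch only the response headers, first from the origin (behind the
# load balancer), then through CloudFront, and compare Cache-Control.
# Substitute your own hostnames and object path.
curl -sI https://origin.example.com/images/logo.png | grep -i 'cache-control'
curl -sI https://d1234abcd.cloudfront.net/images/logo.png | grep -i 'cache-control'
```

If the first command shows the header and the second doesn't, the load balancer or CloudFront is dropping it; if neither shows it, the application isn't setting it.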
Thanks for your response.
I think there was an issue with the load balancer configuration that was causing this.
After reconfiguring the load balancer, it started to work.
You can now create a "Response headers policy".
Then, specify a custom header, "Cache-Control", with the desired value.
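A response headers policy can also be created from the CLI. This is a sketch: the policy name and max-age value are examples, not recommendations, so check the current CLI reference for the exact config shape:

```shell
# Create a response headers policy that adds Cache-Control to every
# response CloudFront returns for the behaviors it is attached to.
# "Override": true replaces any Cache-Control the origin sent.
aws cloudfront create-response-headers-policy \
  --response-headers-policy-config '{
    "Name": "add-cache-control",
    "CustomHeadersConfig": {
      "Quantity": 1,
      "Items": [
        {
          "Header": "Cache-Control",
          "Value": "public, max-age=86400",
          "Override": true
        }
      ]
    }
  }'
```

After creating it, attach the policy to the relevant cache behavior in the distribution.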
Related
I'm new to AWS and setting up a CloudFront distribution. From what I understand, CloudFront is designed to be used as a CDN.
I have a single-page app that needs to communicate with an API on the same domain, under the /api/graphql path, pointing to a GraphQL server that is not hosted in AWS.
My question is: is there a way to bypass the CloudFront cache for /api* and proxy to that server?
So far I have tried creating a new custom origin in the same distribution and setting up a cache behavior for the /api* path pointing to the custom origin, but it seems the viewer request headers are not sent to the origin server, and things don't work properly.
You can add/attach a cache policy for /api*.
See here: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/working-with-policies.html
This way, you can disable caching in CloudFront and forward all headers to your origin.
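Concretely, on the /api* cache behavior you would attach the managed "CachingDisabled" cache policy together with the managed "AllViewer" origin request policy, so nothing is cached and all viewer headers reach the origin. Rather than hard-coding the managed policy IDs, you can look them up by name; the `--query` expressions below are assumptions about the CLI output shape, so verify them against your CLI version:

```shell
# Find the IDs of the two AWS-managed policies needed for an
# uncached, pass-everything-through API behavior.
aws cloudfront list-cache-policies --type managed \
  --query "CachePolicyList.Items[?CachePolicy.CachePolicyConfig.Name=='Managed-CachingDisabled'].CachePolicy.Id" \
  --output text

aws cloudfront list-origin-request-policies --type managed \
  --query "OriginRequestPolicyList.Items[?OriginRequestPolicy.OriginRequestPolicyConfig.Name=='Managed-AllViewer'].OriginRequestPolicy.Id" \
  --output text
```

The two IDs then go into the behavior's `CachePolicyId` and `OriginRequestPolicyId` fields.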
My Situation
I have a web API hosted on an EC2 instance. I am trying to configure a CloudFront distribution in front of that EC2 instance.
However, I have not been able to get CloudFront to forward requests to the EC2 instance. I get hit with an error response like this:
Access to XMLHttpRequest at 'https://api.example.com' from origin 'https://example.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
However, if I change my DNS to point https://api.example.com to EC2 instance's IP address, it works.
What I have done so far
Configured the correct SSL certificate (for a different problem earlier)
Configured my CF distribution's behaviors to whitelist the "Origin" header
Configured my CF distribution's behaviors to forward "All" headers (which disables caching)
Invalidated the CloudFront cache
What I am trying to do
I came across this AWS doc titled "Configuring CloudFront to Respect CORS Settings".
Link
However, it only says "Custom origins – Forward the Origin header along with any other headers required by your origin."
But... how do I do that? How do I forward the Origin header along with any other required headers? The docs don't specify, or link to another doc that does.
I have spent 4 hours or so now and it's extremely frustrating because Cloudfront takes ~30 minutes to deploy.
I have managed to fix this issue. It turned out I had overlooked another error returned by CloudFront: 502 Bad Gateway, even though Chrome showed the above-mentioned "Access to XMLHttpRequest..." error. This was caused by improper DNS and SSL certificate configuration on my part, due to my inexperience.
I will try to answer my own question, seeing that after hours of searching there wasn't a straight answer regarding CloudFront, EC2 and HTTPS on Stack Overflow, and there are many unanswered questions.
The goal my group was trying to achieve was enabling HTTPS connectivity for the entire set-up: Users' browsers, Cloudfront distribution and my EC2 instance.
What I did to fix this:
Generated a free SSL certificate (e.g. Let's Encrypt) for the EC2 instance, using a sub-domain (i.e. ec2.example.com or a wildcard *.example.com). *Note: ACM does not allow public SSL certificates to be exported for use on EC2 instances, so use another free SSL service. Do not use self-signed certs.
Imported this certificate into ACM, to be used for CloudFront later too.
Created a new DNS A record to map the sub-domain to the EC2 instance. (e.g. ec2.example.com to ec2-xx-xxx-xx.ap1-location.amazonaws)
Created a new CloudFront distribution and set the origin to the sub-domain, ec2.example.com. Also, under "Cache Based on Selected Request Headers", set it to "Whitelist" and forwarded the "Origin" header. For the SSL cert in CloudFront, reuse the certificate from step 1 (imported into ACM in step 2).
Created a new DNS A record to map an "api" sub-domain to CloudFront. (e.g. api.example.com to abcdxyz.cloudfront.net)
I am now able to use a sub-domain (api.example.com) to communicate with CloudFront, which in turn communicates with my EC2 instance and performs caching, using HTTPS all along.
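Once the chain is in place, you can check the CORS preflight end-to-end with curl. The domains below are the example ones from this question; substitute your own:

```shell
# Send a CORS preflight through CloudFront, the way a browser would,
# and confirm that Access-Control-* headers come back from the origin.
curl -sI -X OPTIONS https://api.example.com/some-endpoint \
  -H 'Origin: https://example.com' \
  -H 'Access-Control-Request-Method: POST' \
  | grep -i 'access-control'
```

If nothing matches, either CloudFront isn't forwarding the Origin header or the API itself isn't emitting CORS headers.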
Reference links: link1,
link2
There is probably a better way to set this up and if so, please do correct me so I can improve too! Hopefully this answer will help someone else new like me in the future too.
First of all, sorry for my ignorance, but there is a concept in the AWS ELB world that is not very clear to me.
I have a frontend site deployed on CloudFront and an API running on an EC2 instance.
What I want to avoid is having two domains serving the same data.
For example, I want to access my site using https://example.com/post and see the site itself (the HTML from CloudFront).
But if you access https://example.com/post passing the HTTP header Accept: application/json, you would see the JSON content from the API server itself (the EC2 instance).
Is that possible using an ELB? Or do I have to do some trick on the EC2 instance, like having nginx set up as a proxy that serves the CloudFront content if no such header is present?
Thanks in advance.
I'm not sure this can be done using the Accept header. But if you separate the static and dynamic content under different root paths, then it's a pretty standard deployment.
So for example, if all dynamic content is prefixed with /api (or, alternatively, all static content is prefixed with /static/), then what you'll need is:
create an origin in CloudFront pointing to the ELB/EC2
create a static origin in CloudFront pointing to an S3 bucket
create a behavior in CloudFront for the /api/ path (make sure it caches nothing and forwards all headers and cookies); it should point to the ELB/EC2 origin
create a static behavior for the root path (default) pointing to the S3 origin; this behavior can cache static content where applicable
See this guide for more details on this approach:
https://aws.amazon.com/blogs/networking-and-content-delivery/dynamic-whole-site-delivery-with-amazon-cloudfront/
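The two behaviors described above might look roughly like the following DistributionConfig fragment. Origin IDs are placeholders, and the policy IDs are the AWS-managed "CachingDisabled", "AllViewer" and "CachingOptimized" policies as documented at the time of writing, so verify them before use:

```shell
# Sketch of the relevant DistributionConfig pieces; you would merge
# this into the full config used with `aws cloudfront update-distribution`.
cat > behaviors-sketch.json <<'EOF'
{
  "CacheBehaviors": {
    "Quantity": 1,
    "Items": [
      {
        "PathPattern": "/api/*",
        "TargetOriginId": "elb-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        "CachePolicyId": "4135ea2d-6df8-44a3-9df3-4b5a84be39ad",
        "OriginRequestPolicyId": "216adef6-5c7f-47e4-b989-5492eafa07d3"
      }
    ]
  },
  "DefaultCacheBehavior": {
    "TargetOriginId": "s3-static-origin",
    "ViewerProtocolPolicy": "redirect-to-https",
    "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6"
  }
}
EOF
```

The /api/* behavior disables caching and forwards all viewer headers and cookies; the default behavior caches the static S3 content.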
I have an Amazon CloudFront setup that points to an S3 bucket as a CDN. I also have an alternate domain name (not on Route53) that points to this CloudFront.
I kept having trouble getting scripts to pull through the CDN when using the alternate domain name - but if I use the native one from the CloudFront control panel, it works.
Is there something special I need to do other than just set the domain name CNAME to point to the amazon CloudFront address for CORS to work?
An important part of correctly caching web requests is to ensure that a response served from the cache is "correct," in the sense of whether it matches the response the origin would generate for the same request.
This isn't as simple as it sounds, since responses can vary based on the content of certain request headers.
CloudFront adopts a conservative and safe approach, by stripping most request headers as it forwards requests to the origin server -- if the server can't see the header, it can't use the header to vary its response.
In the case of CORS, it's critical for the origin server to see the Origin:, Access-Control-Request-Headers:, and Access-Control-Request-Method: headers so it can respond accordingly.
But forwarding unnecessary headers to the origin server causes inefficient caching, since the cached response will only be served against identical future requests, identical including the forwarded headers.
So the three CORS request headers must be "whitelisted" in the CloudFront cache behavior, so that they will be forwarded to the origin server (in this case, S3).
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/header-caching.html#header-caching-web-cors
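You can verify that the whitelisting took effect by sending the Origin header yourself and checking what comes back. The distribution domain and object path below are placeholders:

```shell
# Request an object through CloudFront with an Origin header, as a
# browser would on a cross-origin request, and confirm that the CORS
# response headers survive the trip.
curl -sI https://d1234abcd.cloudfront.net/assets/app.js \
  -H 'Origin: https://www.example.com' \
  | grep -i 'access-control-allow-origin'
```

An empty result usually means either the header isn't whitelisted in the cache behavior or the S3 bucket has no CORS configuration for that origin.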
I have a domain formulagrid.com.
I am using AWS S3 to host it as a static website. My problem was that I wanted to redirect the www subdomain to the bare domain like so:
https://www.formulagrid.com -> https://formulagrid.com
http://www.formulagrid.com -> https://formulagrid.com
Amazon provides URL redirecting from S3 bucket to S3 bucket if both are setup for static website hosting.
So what I had to do was set up two buckets:
formulagrid.com - actual website
www.formulagrid.com - exists solely to redirect to the actual website
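The redirect bucket can be configured with a single call to the S3 website API. Bucket and host names below are taken from the question; this is a sketch of the standard `put-bucket-website` call, not a tested command:

```shell
# Turn the www bucket into a pure redirect to the bare domain,
# forcing HTTPS on the redirect target.
aws s3api put-bucket-website \
  --bucket www.formulagrid.com \
  --website-configuration '{
    "RedirectAllRequestsTo": {
      "HostName": "formulagrid.com",
      "Protocol": "https"
    }
  }'
```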
This works perfectly fine if you're operating only over HTTP, but S3 has absolutely no support for HTTPS.
The way that one can use HTTPS to connect to an S3 static website is by setting up a CloudFront distribution in front of an S3 bucket. CloudFront, however, while it does provide HTTPS, mainly exists to function as a CDN.
Initially, I had a single CloudFront distribution setup in front of the S3 bucket holding the actual site. Everything seemed operational: the site was distributed over the CDN, it had HTTPS, and HTTP redirected to HTTPS.
There was one exception.
https://www.formulagrid.com was a completely broken page
After trying to find the source of the error for a while, I realized it's because it wasn't going through the CDN, and trying to access S3 over HTTPS doesn't work.
Finally, what I ended up having to do was provision another distribution to sit in front of the www S3 bucket so it was accessible over HTTPS. This is where my concerns come in because, like I mentioned earlier, CloudFront's main purpose is to be a CDN.
It doesn't make any sense to me to have a CDN sit in front of a URL that just redirects to another. It also raises the question of whether I would be double-charged for every request that hits the www subdomain, since it would hit the other CloudFront distribution after being redirected.
This is frustrating because I'm trying to do a "serverless" architecture using Lambda, and having to provision an EC2 instance just to do url rewriting isn't something I want to do unless it's my last resort.
The solution would be trivial if Amazon offered any form of URL rewriting or if CloudFront itself did redirecting, but neither of these exist as far as I know (let me know if they do).
I'm new to AWS so I'm hoping someone with more experience can point me in the right direction.
You're thinking too narrowly -- there's nothing wrong with this setup.
The solution would be trivial if Amazon offered any form of URL rewriting
They do -- the empty bucket.
S3 has absolutely no support for HTTPS.
Not for web site hosted buckets, no... but CloudFront does.
CloudFront is not just a CDN. It's also an SSL offloader, Host: header rewriter, path prepender, geolocator, georestrictor, secure content gateway, http to https redirector, error page customizer, root page substituter, web application firewall, origin header injector, dynamic content gzipper, path-based multi-origin http request router, viewer platform identifier, DDoS mitigator, zone apex alias target... so don't get too hung up on "CDN" or on the fact that you're stacking one service in front of another -- CloudFront was designed, in large part, to complement S3. They each specialize in certain facets of storage and delivery.
So, you did it right... most of it, anyway. Create a bucket, configure it for web site hosting, set it to redirect all requests to the other site (the non-www), and put a CloudFront distribution in front of it -- using the web site endpoint URL for the bucket in CloudFront, not the one from the drop-down list -- configured with high TTLs so that CloudFront sends a minimal number of requests to S3. Then add your (free!) SSL certificate from Amazon Certificate Manager. HTTPS alternate domain routing: solved. No servers, no troubleshooting, and cheap. The only charges are for usage -- there is no recurring background charge as there would be with servers.
Extra credit: configure the redirecting CloudFront distribution for the cheapest price class. Redirects from more expensive locations will either be routed to a cheaper edge location or -- at CloudFront's option -- may be served out of a higher-cost location but billed at the lower rate.
Note that most of the time, CloudFront should serve the redirects from S3 from its cache... and when you configure a bucket to redirect all requests to another hostname, the redirect is a 301 permanent redirect -- which browsers are supposed to cache themselves.
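The whole chain can be sanity-checked with one request against the www host (the domain is the one from the question):

```shell
# Confirm that the redirecting distribution returns a 301 pointing at
# the bare domain, and see what cache headers accompany it.
curl -sI https://www.formulagrid.com/ \
  | grep -iE '^(HTTP|location|cache-control)'
```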