Whenever we purge the cache in Cloudflare, we somehow lose the personalization on our Sitecore SXA pages. This happens regularly, and especially on subpages (the parent page usually shows everything correctly). It also happens most of the time in the footer page design, which has personalization activated for certain components inside the footer.
Does anyone know what the issue could be? Like I said, the problem appears whenever we purge the cache in Cloudflare.
Also, this always happens in the en-US language culture, nowhere else.
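For reference, the kind of purge involved here is a zone-level purge through the Cloudflare API. A minimal sketch in Python, purely to make the scenario concrete (ZONE_ID and API_TOKEN are placeholders; this is not a claim about how the purge is actually triggered in this setup):

import requests

ZONE_ID = "your-zone-id"      # placeholder
API_TOKEN = "your-api-token"  # placeholder, needs cache purge permission

# Zone-wide purge: clears everything Cloudflare has cached for the zone.
resp = requests.post(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/purge_cache",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"purge_everything": True},
)
resp.raise_for_status()
print(resp.json())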
Without more info it is not so easy to understand where to fix this issue.
As a first step, try disabling caching for the static assets in your application; be aware this will increase origin load, so take care if you are under real heavy traffic.
This way resources are not cached, forcing clients to retrieve all data (CSS, JS and so on) from the origin.
Once you find the bunch of files which cause your issue, you can restore the default expire headers.
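If it helps while debugging, Cloudflare's Development Mode achieves this temporarily without touching origin expire headers: it makes the edge bypass its cache for roughly three hours. A minimal sketch, assuming an API token with zone settings permission (ZONE_ID and API_TOKEN are placeholders):

import requests

ZONE_ID = "your-zone-id"      # placeholder
API_TOKEN = "your-api-token"  # placeholder

# Turn on Development Mode so Cloudflare temporarily serves everything from
# the origin instead of its edge cache (it switches itself off automatically).
resp = requests.patch(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/settings/development_mode",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"value": "on"},
)
resp.raise_for_status()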
You can try the following if you are not on the free plan (on the free plan the minimum edge TTL is 2 hours). Create a Cloudflare cache rule that does this (a sketch of creating it through the API follows below):
set edge cache TTL to 30s
set browser cache TTL to respect existing headers
set origin cache control to ON
set performance to OFF
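A rough sketch of creating such a rule through the Rulesets API in Python, assuming the http_request_cache_settings phase and placeholder zone/token values (field names should be double-checked against the current Cloudflare docs, and the performance toggle is not covered here):

import requests

ZONE_ID = "your-zone-id"      # placeholder
API_TOKEN = "your-api-token"  # placeholder

payload = {
    "rules": [
        {
            "expression": 'http.host eq "www.example.com"',  # adjust to your site
            "action": "set_cache_settings",
            "action_parameters": {
                "cache": True,
                "edge_ttl": {"mode": "override_origin", "default": 30},  # 30s on the edge
                "browser_ttl": {"mode": "respect_origin"},               # respect existing headers
                "origin_cache_control": True,                            # origin cache control ON
            },
        }
    ]
}

# Note: PUT replaces the whole ruleset for this phase, so include any existing
# cache rules you want to keep.
resp = requests.put(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}"
    "/rulesets/phases/http_request_cache_settings/entrypoint",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=payload,
)
resp.raise_for_status()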
Currently our organization is using the Akamai Fast Purge v3 API to invalidate cache records by cache tag. The problem I'm running into is that some of our lower environments are configured with a TTL of 0 seconds, apparently to facilitate business user testing.
As a result, I'm finding it strangely difficult to manually test the new purge system we have in place because Akamai isn't actually caching anything. We're working with the business to set this to closer match production environments, but in the meantime I'm wondering if there are any debug headers that can be used to figure out if and when an invalidation / deletion occurred.
I have of course googled and found this somewhat useful but, IMO, strangely half-baked article from Akamai themselves that discusses debug cache headers and their meaning, but it seems very incomplete.
As far as I can tell, X-Cache is my best option from that article, and as mentioned, because of the 0-second TTL I will always get a MISS.
Are there any additional debug headers that could be useful in determining if my purge logic is effective, despite the 0 second TTL?
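For reference, a purge-by-cache-tag call against the Fast Purge v3 API looks roughly like the sketch below (Python with the akamai-edgegrid auth library; the host, credentials and tag are placeholders):

import requests
from akamai.edgegrid import EdgeGridAuth

BASE_URL = "https://akab-xxxxxxxx.purge.akamaiapis.net"  # placeholder host from your API client

session = requests.Session()
session.auth = EdgeGridAuth(
    client_token="your-client-token",      # placeholder credentials
    client_secret="your-client-secret",
    access_token="your-access-token",
)

# Invalidate everything tagged "product-123" on the production network.
resp = session.post(
    f"{BASE_URL}/ccu/v3/invalidate/tag/production",
    json={"objects": ["product-123"]},
)
resp.raise_for_status()
print(resp.json())  # returns a purge ID and an estimated completion time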
The better resource for Akamai's Pragma headers is the documentation site.
That said, there is no response header that will tell you when a cache purge was issued to the platform. Instead, you would have to look at the Control Center Event Viewer, where you can see and filter all events, including just the Fast Purge calls.
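For the cache-state side (as opposed to purge history), the Pragma debug headers at least show what the edge did with each request, provided they are enabled for your property. A minimal sketch (the URL is a placeholder):

import requests

URL = "https://www.example.com/some/cached/path"  # placeholder

# Ask the Akamai edge to echo its cache diagnostics in the response headers.
debug_pragma = (
    "akamai-x-cache-on, akamai-x-cache-remote-on, "
    "akamai-x-check-cacheable, akamai-x-get-cache-key, akamai-x-get-request-id"
)

resp = requests.get(URL, headers={"Pragma": debug_pragma})
for header in ("X-Cache", "X-Cache-Remote", "X-Check-Cacheable",
               "X-Cache-Key", "X-Akamai-Request-ID"):
    print(header, "=", resp.headers.get(header))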
I would like to have AWS CloudFront's cache policy handle the caching of an October CMS website instead of October CMS.
Is there a setting in config/cache.php or somewhere for the CMS to bypass the cache?
Thank you.
Depends on which cache you're talking about.
If you're talking about route caching, you're looking for cms.urlCacheTtl (https://github.com/octobercms/october/blob/develop/config/cms.php#L172).
If you're talking about the parsed page cache, you're looking for cms.parsedPageCacheTtl (https://github.com/octobercms/october/blob/develop/config/cms.php#L185).
If you're talking about the generated asset cache, you just have to set cms.enableAssetCache to false (https://github.com/octobercms/october/blob/develop/config/cms.php#L185).
October doesn't have a cache of fully rendered responses built in by default, so if you have any plugins enabled that implement that just disable them.
Additionally, if you truly wanted to remove every single cache that could be used throughout the entire system you can set the default cache driver to array, but be warned that this is only meant for local development and will cause issues in production (most visibly the Image Resizing functionality built into the core will stop working for resizing new images).
We have deployed our website on a GCP VM and enabled Cloud CDN in front of the VM. When we browse the website, in most cases the CDN makes requests to the origin VM.
I am using the Stackdriver query below to check the cache hits.
resource.type="http_load_balancer"
resource.labels.forwarding_rule_name="rule_name"
httpRequest.serverIp="gcpvmip"
httpRequest.requestUrl="request_url"
httpRequest.cacheFillBytes > 0
Based on your latest comment, it sounds like you're expecting all requests to your site to be served from Cloud CDN's caches without contacting your origin server. However, it's normal to see cache misses when using a CDN. Each CDN operates numerous caches, not one big global cache. The fact the content for one URL has been inserted into one cache does not mean it will be present in all caches everywhere. Further, unpopular cache entries are routinely evicted from cache to make room for more popular content.
Here are some relevant excerpts from the Cloud CDN docs:
Cloud CDN uses caches in numerous locations around the world. Caching is reactive in that an object is stored in a particular cache if a request goes through that cache and if the response is cacheable. An object stored in one cache does not automatically replicate into other caches; cache-to-cache fill happens only in response to a client-initiated request.
https://cloud.google.com/cdn/docs/overview
Note that the expiration time is an upper bound on how long a cache entry remains valid. There is no guarantee that a cache entry will remain in the cache until it expires.
https://cloud.google.com/cdn/docs/caching
Note, though, that Cloud CDN operates numerous caches around the world, and old cache entries are routinely evicted to make room for new content. As a result, multiple cache fills per resource are expected as part of normal operation.
https://cloud.google.com/cdn/docs/support#low-hit-rate
If you're seeing low cache hit rates for popular content, that last link has suggestions that should help.
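One more thing worth noting: the filter in the question (httpRequest.cacheFillBytes > 0) selects cache fills, i.e. responses that had to be fetched at least partly from the origin. To measure the hit rate, httpRequest.cacheHit is the field to look at. A rough sketch with the google-cloud-logging client (the forwarding rule name is a placeholder, and you would normally add a timestamp clause to bound the window):

from google.cloud import logging

client = logging.Client()

base = (
    'resource.type="http_load_balancer" '
    'resource.labels.forwarding_rule_name="rule_name" '
)

# Count entries served from cache vs. entries that filled the cache from the origin.
hits = sum(1 for _ in client.list_entries(filter_=base + "httpRequest.cacheHit=true"))
fills = sum(1 for _ in client.list_entries(filter_=base + "httpRequest.cacheFillBytes>0"))

print(f"cache hits: {hits}, cache fills: {fills}")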
I know exactly what the problem is... GCP CDN does not have Origin Shield feature. Even worse, with GCP almost every request comes from a different one of its massive number of CDN PoPs around the world. Without Origin Shield, your app server is the origin server and it has to fill the cache of every CDN edge point.
In my experience you should use GCP CDN only for DoS protection and for caching HTML requests and improving their TTFB (especially to offload the SSL handshake). Use another CDN for caching other assets, with a better cache-hit ratio.
Some CDN providers have Origin Shield, which helps with the cache-hit ratio. E.g. create cdn.yourdomain.com with a CDN provider that has an Origin Shield feature and serve all other static content from there.
I know it may sound crazy to put a CDN in front of your CDN, but trust me, it works amazingly well, and you can even save money if you go with a CDN that charges less for bandwidth. Also, GCP CDN only caches content up to 10 MB.
I have a script that I install on a page and it will load some more JS and CSS from an S3 bucket.
I have versions, so when I do a release on Github for say 1.1.9 it will get deployed to /my-bucket/1.1.9/ on S3.
Question, if I want to have something like a symbolic link /my-bucket/v1 -> /my-bucket/1.1.9, how can I achieve this with AWS or CloudFlare?
The idea is that I want to release a new version by deploying it to my bucket or whatever CDN, and then when I am ready I want to switch v1 to the latest 1.x.y version released. I want all websites to point to /v1 and get the latest when there is a new release.
Is there a CDN or AWS service or configuration that will allow me to create a sort of Linux-like symbolic link like that?
A simple solution with CloudFront requires a slight change in your path design:
Bucket:
/1.1.9/v1/foo
Browser:
/v1/foo
CloudFront Origin Path (on the Origin tab)
/1.1.9
Whatever you configure as the Origin Path is added to the beginning of whatever the browser requested before sending the request to the Origin server.
Note that changing this means you also need to do a cache invalidation, because responses are cached based on what was requested, not what was fetched.
There is a potential race condition here, between the time you change the config and the time you invalidate -- there is no correlation in the order of operations between configuration changes and invalidation requests, so a config change submitted before an invalidation may finish propagating after it.¹ You will therefore probably need to invalidate, update the config, invalidate again, verify that the distribution has progressed to a stable state, then invalidate once more. You don't need to invalidate objects individually, just /* or /v1*. It would be best if only the resource directly requested is subject to the rewrite, and not its dependencies. Remember, also, that browser caching is a big cost-saver that you can't leverage as fully if you use the same request URI to represent a different object over time.
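A sketch of that sequence with boto3 (the distribution ID and the new origin path are placeholders; the waiter and the repeated invalidations follow the reasoning above rather than any documented requirement):

import time
import boto3

cf = boto3.client("cloudfront")
DIST_ID = "E1234567890ABC"   # placeholder distribution ID
NEW_ORIGIN_PATH = "/1.2.0"   # placeholder: the release being promoted to /v1

def invalidate(paths):
    # CallerReference must be unique per invalidation request.
    cf.create_invalidation(
        DistributionId=DIST_ID,
        InvalidationBatch={
            "Paths": {"Quantity": len(paths), "Items": paths},
            "CallerReference": f"promote-{time.time()}",
        },
    )

# 1. Invalidate before the change, to age out anything cached under the old path.
invalidate(["/v1*"])

# 2. Update the Origin Path on the (single) origin and push the config change.
config = cf.get_distribution_config(Id=DIST_ID)
dist_config, etag = config["DistributionConfig"], config["ETag"]
dist_config["Origins"]["Items"][0]["OriginPath"] = NEW_ORIGIN_PATH
cf.update_distribution(Id=DIST_ID, DistributionConfig=dist_config, IfMatch=etag)

# 3. Invalidate again, wait for the distribution to reach Deployed, then once more.
invalidate(["/v1*"])
cf.get_waiter("distribution_deployed").wait(Id=DIST_ID)
invalidate(["/v1*"])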
More complicated path rewriting in CloudFront requires a Lambda@Edge Origin Request trigger (or you could use a Viewer Request trigger, but those run more often and thus cost more and add to overall latency).
¹ Invalidation requests -- though this is not documented and is strictly anecdotal -- appear to involve a bit of time travel. Invalidations are timestamped, and it appears that they invalidate anything cached before their timestamp, rather than before the time they propagate to the edge locations. Architecturally, it would make sense if CloudFront is designed such that invalidations don't actively purge content, but only serve as directives for the cache to consider any cached object as stale if it pre-dates the timestamp on the invalidation request, allowing the actual purge to take place in the background. Invalidations seem to complete too rapidly for any other explanation. This means creating an invalidation request after the distribution returns to the stable Deployed state would assure that everything old is really purged, and that another invalidation request when the change is initially submitted would catch most of the stragglers that might be served from cache before the change is propagated. Changes and invalidations do appear to propagate to the edges via independent pipelines, based on observed completion timing.
I'm setting up output caching for Sitecore following the guide provided on SDN.
I've ticked the Cacheable, Clear on Index Update and By Data options.
However, I noticed that on every hard refresh the full image gets requested, i.e. 4 MB.
Is this expected behaviour?
The output cache stores the generated HTML instead of executing the rendering process for your component again.
It has nothing to do with sending images and caching them in the browser cache.
Read the How the Sitecore ASP.NET CMS Caches Output JW blog post for more details and see the links in the comments.
I am going to assume that by saying "hard refresh" you mean bypassing your browser cache.
What you are seeing is not specific to Sitecore, or any server-side technology. It is your browser that uses caching to keep local copies of images and other "static" resources that it has loaded in the past. This cache is used to speed up page loads and reduce network traffic.
When you perform a hard refresh, the browser will ignore its cache and load all resources from the server. This is why there's a request to your image after a hard refresh.
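You can see the difference with plain HTTP requests. A small sketch (the image URL is a placeholder): a normal reload revalidates and can come back as a 304 Not Modified with an empty body, while a hard refresh sends no validators (plus Cache-Control: no-cache), so the full image is downloaded again:

import requests

URL = "https://www.example.com/-/media/large-image.jpg"  # placeholder

# First request: full download; note the validators the server returns.
first = requests.get(URL)
etag = first.headers.get("ETag")
last_modified = first.headers.get("Last-Modified")
print("first:", first.status_code, len(first.content), "bytes")

# Normal reload: revalidate with If-None-Match / If-Modified-Since.
validators = {"If-None-Match": etag, "If-Modified-Since": last_modified}
conditional = requests.get(URL, headers={k: v for k, v in validators.items() if v})
print("reload:", conditional.status_code, len(conditional.content), "bytes")

# Hard refresh: no validators, cache bypassed, full body comes down again.
hard = requests.get(URL, headers={"Cache-Control": "no-cache"})
print("hard refresh:", hard.status_code, len(hard.content), "bytes")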
This is not normal behaviour.
Sitecore stores its media cache on the file system, unlike all other caches, which are stored in RAM. Media items are stored in the database, so the media cache is needed to reduce database calls and serve media files to the end user faster. It helps to understand the Sitecore media cache mechanism.
Please check the following link for details: http://sitecoreblog.patelyogesh.in/2014/04/how-sitecore-media-cache-is-works.html