Basically, the hash on the cache-busting file is not updating.
class S3PipelineStorage(PipelineMixin, CachedFilesMixin, S3BotoStorage):
    pass
PIPELINE_JS = {
    'main.js': {
        'output_filename': 'js/main.min.js',
        'source_filenames': [
            'js/external/underscore.js',
            'js/external/backbone-1.0.0.js',
            'js/external/bootstrap-2.2.0.min.js',
        ]
    }
}
When I first ran the collectstatic command, it properly created a cache-busting file named "main.min.d25bdd71759d.js".
Now when I run the command, however, it fails to overwrite that cached file (and update the hash) during the post-processing phase.
It keeps updating "main.min.js", so that main.min.js is current with my filesystem. A new cached file, however, is not created; it keeps the same old hash even though the underlying main.min.js file has changed.
When I manually delete the cached file on AWS, I get the following message from running collectstatic with verbosity set to 3:
Post-processed 'js/main.min.js' as 'js/main.min.d25bdd71759d.js'
settings.DEBUG is set to False
Why won't the hash update?
Try using the manifest storage instead:
class S3PipelineManifestStorage(PipelineMixin, ManifestFilesMixin, S3BotoStorage):
    pass
According to the Django docs here https://docs.djangoproject.com/en/1.11/ref/contrib/staticfiles/#cachedstaticfilesstorage, it's not recommended to use CachedStaticFilesStorage.
Your static file names are probably getting cached, so use the manifest storage instead.
CachedStaticFilesStorage isn’t recommended – in almost all cases ManifestStaticFilesStorage is a better choice. There are several performance penalties when using CachedStaticFilesStorage since a cache miss requires hashing files at runtime. Remote file storage require several round-trips to hash a file on a cache miss, as several file accesses are required to ensure that the file hash is correct in the case of nested file paths.
Note this is also documented in django-pipeline's docs: http://django-pipeline.readthedocs.io/en/latest/storages.html#using-with-other-storages
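For completeness, here is a minimal sketch of the imports and settings wiring that class would need (the import paths assume django-pipeline and django-storages' S3Boto backend; the module path in STATICFILES_STORAGE is only a placeholder):
# custom_storages.py (hypothetical module name)
from django.contrib.staticfiles.storage import ManifestFilesMixin
from pipeline.storage import PipelineMixin
from storages.backends.s3boto import S3BotoStorage

class S3PipelineManifestStorage(PipelineMixin, ManifestFilesMixin, S3BotoStorage):
    pass

# settings.py -- point collectstatic at the custom storage class
STATICFILES_STORAGE = 'myproject.custom_storages.S3PipelineManifestStorage'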
I added some custom error messages to APIM according to the documentation at https://apim.docs.wso2.com/en/4.0.0/troubleshooting/error-handling/ - I created a custom file in
<API-M_HOME>/repository/deployment/server/synapse-configs/default/sequences and added references to that file in some of the default files in that directory (so that it is called to transform error messages).
Everything seemed to be working just fine until WSO2 was restarted - after that, the changes made to the default files were still present, but the custom file had been removed, so the custom error message handling no longer worked.
I resolved this by adding the immutable attribute (chattr +i) to the file, but I wonder whether there is another, more elegant way to prevent the file from being deleted on every restart?
There are 'template' files placed in <API-M_HOME>/repository/resources/apim-synapse-config. Maybe those files are overriding the files in the ../synapse-configs/default/ location.
The second thing that came to my mind is a specific High Availability scenario: where artifacts are shared as files in the file system as the content synchronization mechanism, that mechanism can override local changes.
At startup, the gateway removes these files. You can add the following configuration to deployment.toml and keep the file in the sequences directory.
Sample Config:
[apim.sync_runtime_artifacts.gateway.skip_list]
apis = ["api1.xml","api2.xml"]
endpoints = ["endpoint1.xml"]
sequences = ["post_with_nobody.xml"]
local-entries = ["file.xml"]
For your case:
[apim.sync_runtime_artifacts.gateway.skip_list]
sequences = ["name_of_the_file.xml"]
Refer - https://apim.docs.wso2.com/en/latest/install-and-setup/setup/distributed-deployment/deploying-wso2-api-m-in-a-distributed-setup/#configure-the-gateway-nodes
I'm trying to cache a set of strings per session by storing each one in its own variable and using django.contrib.sessions.
I have the following code:
import copy

def get_result(request, operation):
    # Reuse the result cached in the session for this operation, if present.
    previous_result = request.session.get(operation.name)
    if previous_result:
        result = copy.deepcopy(previous_result)
    else:
        # Otherwise fetch the result and cache a copy in the session.
        result = get_json_response(operation)
        request.session[operation.name] = copy.deepcopy(result)
    return result
get_result() is triggered via ajax requests, is used for many different operations which may be called at the same time, and may be called multiple times per operation in one session.
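For illustration, a call site might look roughly like this (the view and the get_operation lookup are hypothetical, shown only to make the per-request invocation concrete):
from django.http import JsonResponse

def operation_result(request, operation_name):
    # Hypothetical wiring: each ajax call resolves an operation and returns its result.
    operation = get_operation(operation_name)  # hypothetical lookup, not part of the original code
    return JsonResponse(get_result(request, operation))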
This code works perfectly fine in my local environment. However, on the production server, where gevent and Chaussette are installed, it fails.
Most of the time, request.session.get(operation.name) returns None even when it is not the first time get_result has been called for that operation. In some cases it returns a value and in others it doesn't; there seems to be no pattern to when it works and when it doesn't.
I suspect the inconsistency is because different threads are referencing the session variable in different states. What would be the proper way to handle session variables in this case?
I did in fact have the same problem and also tried to save the session properly with the tweaks you posted.
In the end, what solved my problem was changing the default cache in settings.py to
'BACKEND': 'django.core.cache.backends.dummy.DummyCache',
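In context, that line sits inside the CACHES setting in settings.py, roughly like this (a minimal sketch; 'default' is Django's standard cache alias):
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.dummy.DummyCache',
    }
}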
Using FileBasedCache instead helps as well, but it crashes in the local environment (development). Dummy works for local as well as production.
I am building a simple CEF3-based browser. I want the cache to be removed/deleted after the user closes/ends all sessions. First I tried to store the cache on the hard drive by using CefSettings.cache_path, but the folder is empty.
Here is my code:
CefSettings settings;
const char* path = "E:\\test\\Cefclient\\cache";
//store cache on hdd
CefString(&settings.cache_path).FromASCII(path);
The cache folder is empty, and this path, C:\Users\user\AppData\Local\CEF\User Data, which was generated before I changed the path, is also empty. What could be the problem? And what method do I use to clear/delete this cache?
You can delete the cache path after you have called CefShutdown().
You can also confirm your cache path via "chrome://version".
A template stored in a database as described here, then edited and persisted, cannot be rendered for review without clearing the cache, even in dev mode. Once the cache is cleared, the template remains fixed until the cache is cleared again.
Presumably there is a method somewhere that allows such an edited template to be available immediately without having to clear the cache?
There are two options:
1) Disable the Twig cache:
twig:
    cache: false
2) Remove the cached file when you update the view in the database:
$fileCache = $this->container->get('twig')->getCacheFilename('YourBundle:Default:index.html.twig');
if (is_file($fileCache)) {
    unlink($fileCache);
}
Let me know if that works for you.
I'm pretty new to AppFabric, and what I'm trying to understand is how to stipulate that I want data to go into the distributed cache as well as the local cache.
I read the post here which does this based on config. I am not using any XML config but rather creating my objects and configuration programmatically. I am playing around with the following code:
// Declare a list of cache host endpoints.
List<DataCacheServerEndpoint> servers = new List<DataCacheServerEndpoint>();
servers.Add(new DataCacheServerEndpoint("SERVER1", 10023));
servers.Add(new DataCacheServerEndpoint("SERVER2", 10023));
servers.Add(new DataCacheServerEndpoint("SERVER3", 10023));
DataCacheLocalCacheProperties localCacheConfig;
TimeSpan localTimeout = new TimeSpan(0, 5, 0);
localCacheConfig = new DataCacheLocalCacheProperties(10000, localTimeout, DataCacheLocalCacheInvalidationPolicy.TimeoutBased);
// Setup the DataCacheFactory configuration.
DataCacheFactoryConfiguration factoryConfig = new DataCacheFactoryConfiguration();
factoryConfig.Servers = servers;
factoryConfig.SecurityProperties = new DataCacheSecurity(DataCacheSecurityMode.None, DataCacheProtectionLevel.None);
factoryConfig.LocalCacheProperties = localCacheConfig;
DataCacheFactory factory = DataCacheFactoryExtensions.Create(factoryConfig);
DataCache dataCache = factory.GetCache("MyCache");
dataCache.Put("myKey", "MyValue");
Am I right to assume that, because I have added the local cache config to the factoryConfig object, my cached item will automatically be added to the local cache as well as the distributed cache?
And therefore, if I want items cached only in the distributed cache, do I just need to omit the local cache config from the factoryConfig object?
Or do I need two separate factory config objects - one for each cache?
You can see here that, yes, the object will be stored in the local cache, if the local cache is enabled:
When local cache is enabled, the cache client stores a reference to the object locally.
The instructions for "enabling the local cache" are exactly as you've done -- basically just using the DataCacheLocalCacheProperties (although the local cache can also be enabled using app.config settings instead).
So it's exactly as you say -- to use the distributed cache only, without the local, then use a DataCache object taken from a DataCacheFactory that does not use DataCacheLocalCacheProperties.
Note also that items in the local cache can be evicted depending on the policies configured:
The lifetime of an object in the local cache is dependent on several factors, such as the maximum number of objects in the local cache and the invalidation policy.