I have a small Django site which controls an astronomy dome and house automation. On start-up the project loads 3 JSON files: relays, conditions and homeautomation. To avoid constant reading and writing to the Pi4's SSD I load the JSON files into Redis (on start-up in apps, see below). I already have Redis running in a Docker container as the project uses Celery.
My problem is that within a few minutes of loading the JSON into Redis, the data is cleared out of the cache.
I load the JSON file in the form of a dictionary (dict) in apps:
cache.set("REDIS_ashtreeautomation_dict", dict, timeout=None)
and set
CACHES = {
"default": {
"BACKEND": "django_redis.cache.RedisCache",
"LOCATION": "redis://redis:6379",
"OPTIONS": {
"CLIENT_CLASS": "django_redis.client.DefaultClient",
"SERIALIZER": "django_redis.serializers.json.JSONSerializer",
"TIMEOUT": None
}
}
}
I don't need the data to persist if the containers go down and I don't need DB functions. Caching these files is ideal but I need them to 'stay alive' for the lifetime of the server.
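For context, a minimal sketch of what that start-up load in apps.py could look like (the app label, data directory and per-file key names are assumptions; only REDIS_ashtreeautomation_dict appears above):
# apps.py -- hypothetical app label and file locations
import json
from pathlib import Path

from django.apps import AppConfig
from django.core.cache import cache

DATA_DIR = Path("/home/pi/automation/data")  # assumption

class AutomationConfig(AppConfig):
    name = "automation"  # assumption

    def ready(self):
        # Load each JSON file once at start-up and keep it for the
        # lifetime of the server (timeout=None means "never expire").
        for stem in ("relays", "conditions", "homeautomation"):
            with open(DATA_DIR / f"{stem}.json") as fh:
                cache.set(f"REDIS_{stem}_dict", json.load(fh), timeout=None)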
Thank you.
Thank you Kevin.
Moving TIMEOUT out of OPTIONS and up to the cache level solved the issue (Django reads TIMEOUT from the top level of each CACHES entry, not from OPTIONS, so the backend's default timeout was still being applied):
CACHES = {
"default": {
"BACKEND": "django_redis.cache.RedisCache",
"LOCATION": "redis://redis:6379",
"TIMEOUT": None,
"OPTIONS": {
"CLIENT_CLASS": "django_redis.client.DefaultClient",
"SERIALIZER": "django_redis.serializers.json.JSONSerializer",
}
}
}
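A quick way to verify the fix is django-redis's ttl() helper, which (per its docs) returns None for a key that exists but has no expiry:
# run in a Django shell; key name taken from above
from django.core.cache import cache
print(cache.ttl("REDIS_ashtreeautomation_dict"))  # None -> key has no expiry and will not time out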
I am going to include some code to guard against the longer-term Redis eviction policies (i.e. reload the JSON data if the key disappears). I don't want to delve into the Redis Docker container.
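A minimal sketch of that guard (the file path and helper name are assumptions; the cache key is the one used above):
import json
from django.core.cache import cache

AUTOMATION_JSON = "/home/pi/automation/homeautomation.json"  # assumption

def get_automation_dict():
    data = cache.get("REDIS_ashtreeautomation_dict")
    if data is None:
        # Key was evicted (or Redis restarted): reload from disk and re-cache.
        with open(AUTOMATION_JSON) as fh:
            data = json.load(fh)
        cache.set("REDIS_ashtreeautomation_dict", data, timeout=None)
    return data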
Thanks
Ian
I'm trying to use the Multiple Database Tables and BigQuery Multi Table Data Fusion plugins to import multiple tables in one pipeline, but when I try to execute I get the following error:
java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: BigQuery Multi Table has no outputs. Please check that the sink calls addOutput at some point.
I'm using Data Fusion version 6.1.4, Multiple Database Tables version 1.2.0 and BigQuery Multi Table version 0.14.8.
Any suggestion on what may be the problem?
Edit: the following is the configuration of the Multiple Database Tables source:
{
"name": "Multiple Database Tables",
"plugin": {
"name": "MultiTableDatabase",
"type": "batchsource",
"label": "Multiple Database Tables",
"artifact": {
"name": "multi-table-plugins",
"version": "1.2.0",
"scope": "USER"
},
"properties": {
"splitsPerTable": "1",
"referenceName": "multiTable",
"connectionString": "${secure(connection)}",
"jdbcPluginName": "netezza",
"user": "${secure(username)}",
"password": "${secure(password)}",
"whiteList": "categoria_l,cliente_l,regione_l"
}
},
"outputSchema": [
{
"name": "etlSchemaBody",
"schema": ""
}
]
},
After further testing, the problem is that the source response is empty because Data Fusion is not reading views from the source database, only tables.
It seems like the Multiple Database Tables source produced no records ("Out 0"). I'd check there first. You can do a quick check using the Preview mode. Plugin doc here.
Related answer here.
I am running a web application with an Angular front-end and a Django back-end. The thing is: these two frameworks are not running on the same server. How can I configure Angular to work remotely with the APIs? (I have tested the APIs, and they are just fine.)
Check how to set up a proxy for your project in the Angular docs under Proxying to a backend server.
Basically you need to create a proxy.conf.json file and have settings like:
{
"/api": {
"target": "http://localhost:3000",
"secure": false
}
}
Then you can define your backend hostname, port and available APIs and other settings.
OK, after hours of debugging I finally found it.
FIRST Create a file named proxy.conf.json in the /src folder and fill it with this JSON:
{
"/api": {
"target": "http://test.com/",
"secure": false,
"changeOrigin": true,
"logLevel": "info"
}
}
This line is ESSENTIAL:
"changeOrigin": true,
THEN Edit the angular.json file. In the projects section, find architect and append this line to the options section: "proxyConfig": "src/proxy.conf.json". So it should look like this:
.
.
.
"options": {
"browserTarget": "some-name:build",
"proxyConfig": "src/proxy.conf.json"
},
.
.
.
NOTE 1: A trailing comma is not allowed in JSON.
NOTE 2: logLevel gives you more information.
NOTE 3: Thanks to Haifeng for his guide.
Is it possible to upload a stopwords.txt file onto AWS Elasticsearch and specify its path in a stop token filter?
If you're using AWS Elasticsearch, the only option to do this is through the Elasticsearch REST APIs.
To import large data sets, you can use the bulk API.
Edit: You can now upload "packages" to AWS Elasticsearch service, which lets you add custom lists of stopwords etc. See https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/custom-packages.html
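If you go the packages route, a hedged sketch of the control-plane calls with boto3 (bucket, key, package and domain names are placeholders):
import boto3

es = boto3.client("es")  # Amazon Elasticsearch Service API

# Register the stopwords file that was uploaded to S3 as a custom package.
pkg = es.create_package(
    PackageName="my-stopwords",          # placeholder
    PackageType="TXT-DICTIONARY",
    PackageSource={
        "S3BucketName": "my-bucket",     # placeholder
        "S3Key": "stopwords.txt",        # placeholder
    },
)

# Attach the package to the domain; it can then be referenced from
# stopwords_path using the path format described in the AWS docs linked above.
es.associate_package(
    PackageID=pkg["PackageDetails"]["PackageID"],
    DomainName="my-domain",              # placeholder
)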
No, it isn't possible to upload a stopwords.txt file to the hosted AWS Elasticsearch service.
What you will have to do is specify the stopwords in a custom analyzer. More details on how to do that can be found in the official documentation.
The official documentation then says to "close and reopen" the index, but again, AWS Elasticsearch doesn't allow that, so you will then have to reindex.
Example:
1. Create an index with your stopwords listed inline within a custom analyzer, e.g.
PUT /my_new_index
{
"settings": {
"analysis": {
"analyzer": {
"english_analyzer": {
"type": "english",
"stopwords": "['a', 'the', 'they', 'and']"
}
}
}
}
}
2. Reindex
POST _reindex
{
"source": {
"index": "my_index"
},
"dest": {
"index": "my_new_index"
}
}
Yes, it is possible by setting stopwords_path when defining your stop token filter.
stopwords_path => A path (either relative to config location, or
absolute) to a stopwords file configuration. Each stop word should be
in its own "line" (separated by a line break). The file must be UTF-8
encoded.
Here is how I did it.
Copied the stopwords.txt file into the config folder of my Elasticsearch home path.
Created a custom token filter with the path set in stopwords_path:
PUT /testindex
{
"settings": {
"analysis": {
"filter": {
"teststopper": {
"type": "stop",
"stopwords_path": "stopwords.txt"
}
}
}
}
}
Verified if the filter was working as expected with _analyze API.
GET testindex/_analyze
{
"tokenizer" : "standard",
"token_filters" : ["teststopper"],
"text" : "this is a text to test the stop filter",
"explain" : true,
"attributes" : ["keyword"]
}
The tokens 'a', 'an', 'the', 'to', 'is' were filtered out since I had added them in config/stopwords.txt file.
For more info:
https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-stop-tokenfilter.html
https://www.elastic.co/guide/en/elasticsearch/reference/2.2/_explain_analyze.html
In a view I have this cache, which is supposed to save some costly queries:
from django.core.cache import cache
LIST_CACHE_TIMEOUT = 120
....
topics = cache.get('forum_topics_%s' % forum_id)
if not topics:
    topics = Topic.objects.select_related('creator') \
                          .filter(forum=forum_id).order_by("-created")
    print 'forum topics not in cache', forum_id  # Always printed out
    cache.set('forum_topics_%s' % forum_id, topics, LIST_CACHE_TIMEOUT)
I don't have a problem using this method to cache other queryset results and cannot think of the reason for this strange behavior, so I would appreciate your hints.
I figured out what caused this: a memcached value cannot be larger than 1 MB.
So I switched to Redis, and the problem was gone:
CACHES = {
"default": {
"BACKEND": "django_redis.cache.RedisCache",
"LOCATION": "redis://127.0.0.1:6379/1",
"OPTIONS": {
"CLIENT_CLASS": "django_redis.client.DefaultClient",
}
}
}
IMPORTANT: make sure that redis version is 2.6 or higher.
redis-server --version
In older versions of Redis, the key timeout parameter is apparently not recognized and an error is thrown. This tripped me up for a while because the default Redis on Debian 7 was 2.4.
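For reference, a quick way to confirm you are hitting memcached's 1 MB item limit is to check how large the pickled queryset actually is before caching it (a rough sketch; the cache backend's serialization may differ slightly):
import pickle

payload = pickle.dumps(topics)  # roughly what the memcached backend stores
print 'cached payload size:', len(payload), 'bytes'  # memcached default limit: 1048576 bytes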
What I want to do is fail over Redis with Django, but I cannot find a way to do it.
What I've set up is as follows:
I'm using Redis as a session backend.
I've set up two Redis servers in a master-slave relationship so that when the master fails, the slave automatically becomes master (using Sentinel).
I set up settings.py like this:
CACHES = {
'default': {
'BACKEND': 'redis_cache.RedisCache',
'LOCATION':[
"127.0.0.1",
"IPofSlave"
],
'OPTIONS': {
'PASSWORD': "xxxxxxxx",
'DB': 0,
}
}
}
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
SESSION_CACHE_ALIAS = "default"
I want Django to use only the master normally, and switch automatically to slave when it can't connect to the master.
How could I do this by editing settings.py, or should I take another approach?
I would probably go with something like https://github.com/KabbageInc/django-redis-sentinel/blob/master/README.md which adds Sentinel support to the Django Redis plugin. There may be others more suitable; this was the top of the list in a Google search for Django Sentinel.
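For what it's worth, these plugins sit on top of redis-py's Sentinel support, which is what actually discovers the current master; a minimal sketch of that mechanism (the Sentinel host/port and the "mymaster" service name are assumptions from a default Sentinel setup):
from redis.sentinel import Sentinel

# Ask Sentinel which node is currently master for the monitored service,
# then talk to that node; after a failover the same call returns the new master.
sentinel = Sentinel([("127.0.0.1", 26379)], socket_timeout=0.5)
master = sentinel.master_for("mymaster", socket_timeout=0.5, password="xxxxxxxx", db=0)
master.set("healthcheck", "ok")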