A strange (to the uninitiated, anyway) issue with models that use a custom CharField as the primary key/id:
id = models.CharField(max_length=10, primary_key=True)
Feeling like I knew what I was doing, I created the following (JSON) fixtures file:
[
  {
    "model": "products.product",
    "id": "am11",
    "fields": {
      "title": "Test Product A"
    }
  },
  {
    "model": "products.product",
    "id": "am22",
    "fields": {
      "title": "Test Product B"
    }
  }
]
Then I proceeded to load it:
✗ python manage.py loaddata fixtures/products.json
Installed 2 object(s) from 1 fixture(s)
Well, it kinda lied. A check on the admin page or in a shell shows that there's only one Product in the database: the last one in the fixture's list. Curiously enough, attempts to delete this Product via the admin page silently fail; only via the shell can it actually be deleted. Further investigation (in the shell) revealed an interesting problem: the (single) Product created has its pk/id set to an empty string, which at least explains why the admin fails to delete it. If I manually create another Product, either on the admin page or in the shell, it appears without any issues, with both id and pk set to the string given. But loaddata with the fixture keeps failing this way. I originally discovered this problem when a basic test failed: given the same fixture, it failed the assertion on the number of products in the queryset, claiming there's just one.
Now, I was able to 'fix' the problem by renaming 'id' to 'pk' in the fixture file. I say 'fix' because I don't understand what bit me here. Any clue will be appreciated. Thanks!
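For reference, this is the working version of the same fixture after the rename — Django's deserializer looks for the primary key under a top-level "pk" key and ignores a top-level "id" key, whatever the field is actually called on the model:

[
  {
    "model": "products.product",
    "pk": "am11",
    "fields": {
      "title": "Test Product A"
    }
  },
  {
    "model": "products.product",
    "pk": "am22",
    "fields": {
      "title": "Test Product B"
    }
  }
]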
Related
We have been working with Django and Django REST Framework for quite some time now and are facing a lot of challenges managing our cache in Redis. Our models look like:
(1) School (Details of School)
(2) Teacher (FK School, with all details of teacher)
(3) Student (FK Teacher, with all details of the student)
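For concreteness, here is a minimal sketch of those models (field names and lengths are assumptions; only the relations matter here):

from django.db import models


class School(models.Model):
    name = models.CharField(max_length=255)


class Teacher(models.Model):
    # each teacher belongs to one school
    school = models.ForeignKey(School, related_name='teachers', on_delete=models.CASCADE)
    name = models.CharField(max_length=255)


class Student(models.Model):
    # each student belongs to one teacher
    teacher = models.ForeignKey(Teacher, related_name='students', on_delete=models.CASCADE)
    name = models.CharField(max_length=255)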
Our users perform CRUD operations on a School; for example, /get_all_info should return a JSON object like:
{
  "name": "something",
  "teachers": [
    {
      "name": "teacher1 name",
      "students": [
        {
          "name": "student1 name"
        },
        ... all students of that teacher
      ]
    },
    ... all teachers of that school
  ]
}
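A response of that shape would typically be built with nested DRF serializers — a sketch, assuming the models above (serializer and field names are illustrative):

from rest_framework import serializers

# School, Teacher, Student as sketched above


class StudentSerializer(serializers.ModelSerializer):
    class Meta:
        model = Student
        fields = ('name',)


class TeacherSerializer(serializers.ModelSerializer):
    # 'students' is the reverse FK from Student, via related_name
    students = StudentSerializer(many=True, read_only=True)

    class Meta:
        model = Teacher
        fields = ('name', 'students')


class SchoolSerializer(serializers.ModelSerializer):
    # 'teachers' is the reverse FK from Teacher, via related_name
    teachers = TeacherSerializer(many=True, read_only=True)

    class Meta:
        model = School
        fields = ('name', 'teachers')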
Also, the whole system is very dynamic, each component keeps on changing. Around 90% of requests are of the type stated above.
We were thinking of adding post-save signals to delete the full cache for the affected school each time something changes; e.g., when a student is updated, in post-save we would first find his/her school and then delete the cache for that school (see the sketch below). Is there some more elegant/better approach? Is there any Python library that can handle all this?
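A minimal sketch of that invalidation idea, assuming the models above (the cache key scheme is an assumption; it must match whatever key the /get_all_info view caches under):

from django.core.cache import cache
from django.db.models.signals import post_save
from django.dispatch import receiver


def school_cache_key(school_id):
    # must match the key the view uses when caching the response
    return 'school:%s:full_info' % school_id


@receiver(post_save, sender=Student)
def invalidate_on_student_save(sender, instance, **kwargs):
    # walk up to the school and drop its cached tree
    cache.delete(school_cache_key(instance.teacher.school_id))


@receiver(post_save, sender=Teacher)
def invalidate_on_teacher_save(sender, instance, **kwargs):
    cache.delete(school_cache_key(instance.school_id))

Note that deletes need the same treatment via post_delete, which is one reason this approach gets fiddly quickly.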
I have a Warehouse model which is being indexed as follows:
from haystack.indexes import CharField, Indexable, NgramField, SearchIndex

from myapp.models import WareHouse  # wherever the WareHouse model lives


class WarehouseIndex(SearchIndex, Indexable):
    """
    SearchIndex class that stores indexes for the WareHouse model.
    """
    text = CharField(document=True, use_template=True)
    search_auto = NgramField()
    ....

    def get_model(self):
        return WareHouse
In my shell I am running the following SearchQuerySet query:
>>> sqs = SearchQuerySet().models(WareHouse)
>>> sqs.filter(customers=3).filter(search_auto='pondicherry')
This returns results that do not contain the exact term pondicherry; it also gives me results that match partial terms like ich, che, ndi, etc.
I have even tried using __exact and Exact, but they all return the same results.
EDIT: Index mapping, Index Setting
How can I avoid this and get results only for the exact term pondicherry?
It seems to be related to this open issue
This is because your search_auto ngram field has the same index and search analyzer, and hence your search term pondicherry also gets ngrammed at search time. The only way to fix this is to set a different search_analyzer for your search_auto field; standard would be a good fit.
You can change your search_auto field mapping with this:
curl -XPUT localhost:9200/haystack/_mapping/modelresult -d '{
"properties": {
"search_auto": {
"type": "string",
"analyzer": "ngram_analyzer",
"search_analyzer": "standard"
}
}
}'
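One caveat: depending on the Elasticsearch version, changing analyzers on an existing field may be rejected, in which case the index has to be rebuilt after fixing the mapping; with Haystack that is typically:

python manage.py rebuild_index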
As @Val stated in the answer above, the error occurs because the search_analyzer and the index analyzer are the same.
Since Haystack is very inflexible in setting up basic Elasticsearch configuration, I installed elasticstack, changed the backend in my settings.py to its elasticsearch_backend as suggested, and additionally added the following two settings:
# elasticstack settings
ELASTICSEARCH_DEFAULT_ANALYZER = 'snowball'
ELASTICSEARCH_DEFAULT_NGRAM_SEARCH_ANALYZER = 'standard'
This seemed to solve my problem.
I had two models called CombinedProduct and CombinedProductPrice, which I renamed to Set and SetPrice respectively. I did this by changing their names in models.py and replacing all occurrences. This also included renaming a ForeignKey field in another model from combined_product to set (pointing to a CombinedProduct).
When running makemigrations, Django properly detected the renaming and asked whether I had renamed all three of those things, and I answered 'yes' to each. However, when running migrate, after applying some of the operations, I get asked:
The following content types are stale and need to be deleted:
product | combinedproduct
product | combinedproductprice
Any objects related to these content types by a foreign key will also
be deleted. Are you sure you want to delete these content types?
If you're unsure, answer 'no'.
I backed up my data and entered 'yes', which deleted all instances of Set (previously CombinedProduct) and SetPrice (previously CombinedProductPrice). If I roll back and answer 'no', then this question comes up every time I migrate.
This is weird, since I don't use the Django ContentType framework anywhere myself. When inspecting which models point to ContentType, however, I see that auth.Permission points to it, and I do use permissions for those models. So maybe the deletion cascades from old permissions pointing to the old model names, which in turn deletes my instances? If that is the case, how can I prevent this situation?
This is the migration that was generated:
operations = [
migrations.RenameModel(
old_name='CombinedProduct',
new_name='Set',
),
migrations.RenameModel(
old_name='CombinedProductPrice',
new_name='SetPrice',
),
migrations.AlterModelOptions(
name='setprice',
options={'ordering': ('set', 'vendor', 'price'), 'verbose_name': 'Set price', 'verbose_name_plural': 'Set prices'},
),
migrations.RenameField(
model_name='setprice',
old_name='combined_product',
new_name='set',
),
]
If you want to rename your table, please take a look at RenameModel. Django does not always detect the renamed model, so you may need to add this operation manually.
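If the stale content type prompt is the real problem, one hedged workaround is to keep the old ContentType rows and point them at the new model names with a small data migration, so that neither they nor the permissions hanging off them get deleted — a sketch, assuming the app label is product (the dependency name is hypothetical):

from django.db import migrations


def rename_contenttypes(apps, schema_editor):
    # use the historical model, not a direct import
    ContentType = apps.get_model('contenttypes', 'ContentType')
    for old, new in (('combinedproduct', 'set'),
                     ('combinedproductprice', 'setprice')):
        ContentType.objects.filter(app_label='product', model=old).update(model=new)


class Migration(migrations.Migration):

    dependencies = [
        ('product', '0042_rename_combinedproduct_set'),  # hypothetical: the rename migration shown above
    ]

    operations = [
        migrations.RunPython(rename_contenttypes, migrations.RunPython.noop),
    ]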
How can I create relationships via slc or by directly editing models in a text editor?
I could not find a way to do so using StrongLoop Arc Composer. Does this feature exist?
You cannot create relations using Arc (unfortunately!). It would sure be nice to have.
To create relations, you can use this command in the CLI from the project's root:
slc loopback:relation
This will prompt you with the available models. You can then select the type of relationship you want between the selected models, e.g. one-to-many or many-to-many.
Then you can open the modified .json file in the common folder to view the relations created.
Alternatively, you can also edit the .json file directly. See the example below, which sets up the relation between User and its access tokens:
{
  "name": "User",
  ...
  "relations": {                // relations
    "accessTokens": {           // relation name
      "type": "hasMany",        // type of relation
      "model": "AccessToken",   // model to which the relation is made
      "foreignKey": "userId"    // foreign key
    }
  }
}
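Once the relation is in place, LoopBack's generated REST API also exposes the related records under the owning model, roughly like this (the path is illustrative; the exact prefix depends on your REST root, and AccessToken must be exposed for this to be public):

GET /api/Users/1/accessTokens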
After much hardship, I have managed to convert my Django project, which previously ran on SQLite, to run on MongoDB.
This is great, apart from the fact that my old version had a massive initial_data.json file, which now fails to work with the new DB when running Django's syncdb command.
EDIT:
This is an example of the initial_data.json file:
[{"pk":1,
"model": "vcb.dishtype",
"fields": {
"photo": "images/dishes/breakfast.jpg",
"name": "Breakfast"
}
},
{"pk":2,
"model": "vcb.dishtype",
"fields": {
"photo": "images/dishes/bread_and_pastry.jpg",
"name": "Bread and pastry"
}
}]
and after running syncdb I get:
DeserializationError: Problem installing fixture
'C:\Users..\initial_data.json' : u'pk'
It seems to be a problem with the MongoDB ObjectId and how I defined the initial_data file.
I tried removing all the pk fields from the file, but I still get the same error.
EDIT
I tried putting in just two fixtures. If I don't set the pk, I get the same error as above. If I do set it, I get:
"DatabaseErroe: Problem installing fixture 'C:..\initial_data.json':
could not load vcb.dishtype(pk=1): AutoField (default primary key)
values must be strings representing an ObjectId on MongoDB (got u'1'
instead)".
which is similar to a problem I had with the Django Site framework that was solved with the help of this thread: Django MongoDB Engine error when running tellsiteid.
This raises my suspicion that there's a different way to set up fixtures in this infrastructure. Maybe syncdb isn't the way to go, and there should be some sort of dump instead?
I've searched Google, and nothing seems to tackle this problem. I'm obviously asking the wrong questions.
What should I do to create fixtures in my altered project?
thanks a lot,
Nitzan
From your error message, I assume you are using Django MongoDB Engine?
Your pk values must be valid ObjectIds, so try using values like:
'000000000000000000000001'
'000000000000000000000002'
etc
You can generate ObjectIds, or check that your values are valid, like so:
>>> from bson.objectid import ObjectId
>>> ObjectId()
ObjectId('52af59bac38f8420734d064d')
>>> ObjectId('000000000000000000000001')
ObjectId('000000000000000000000001')
>>> ObjectId('bad')
Traceback (most recent call last):
  ...
bson.errors.InvalidId: 'bad' is not a valid ObjectId
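Applied to the fixture above, that would look something like this (the pk values are placeholders; any 24-character hex string that is a valid ObjectId works):

[
  {
    "pk": "000000000000000000000001",
    "model": "vcb.dishtype",
    "fields": {
      "photo": "images/dishes/breakfast.jpg",
      "name": "Breakfast"
    }
  },
  {
    "pk": "000000000000000000000002",
    "model": "vcb.dishtype",
    "fields": {
      "photo": "images/dishes/bread_and_pastry.jpg",
      "name": "Bread and pastry"
    }
  }
]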