I am trying to write a program in C++ to read, manipulate, and update my database. I am having a problem inserting my data into mongo. For my workflow, I get some type of request to update a document, I query the document and update the data, and then I try to do an update on the document.
I have a function that converts my class object to a BSONObj through a BSONObjBuilder. I seem to be having a problem with large arrays of sub-objects. For example, I have a field in my document called geo that looks like this:
geo: [{"postal": 10012},{"postal":10013},...,{"postal":90210}]
and is stored in C++ as:
std::vector<mongo::BSONObj> geo;
this field might have thousands of postal codes in it. When doing:
db.get()->update("db.collection", BSON("id" << id_), BSON("$set" << updateObj));
where updateObj is the obj I got from my BSONObjBuilder, nothing is updated in mongo. If I remove the geo field, everything is inserted.
I tried to just do
db.get()->update("db.collection", BSON("id" << id_), BSON("$set" << BSON("geo" << geo)));
thinking that maybe it was necessary to do separate queries due to the size of the object, but this also resulted in no update.
I was wondering if somehow I was hitting some sort of BSON size limit in C++.
The only reason I believe it is a size limit is because while trying to debug this problem, I tried to call updateObj.toString() in order to print out the object I was trying to insert and it threw an exception: Element extends past end of object. I assume that this means I hit some type of max size of an object/element.
Any insight into this problem will be greatly appreciated.
Thank you
I seem to have figured it out. I retrieved the geo field in one function, stored it in a vector, and used it in another. I did not use .Obj().copy() when storing the object in the vector; I just stored the .Obj() from the query results, and I guess when I went to insert, the now-invalid pointers blew up the BSONObj and caused the error.
I found that objects could be duplicated in a queryset. However, when I iterate over each object and do nothing, the queryset changes and seems to be right.
Here are the commands I have typed into the shell
At first I got a queryset ordered by the field 'receiveTime'. It seemed that ds[1996] was equal to ds[1997]. Then I tried the loop:
for d in ds:
    pass
After that, ds[1996] isn't equal to ds[1997] any more. But what have I done?
Maybe it is a feature of lazy queryset evaluation?
Update 1: I have reproduced it just now. I didn't do any inserting or deleting.
These are the commands I just typed into the shell.
Update 2: I have looked at the raw SQL queries issued when I call ds[0] and ds[1], which I have shown in the second picture. The SQL queries are correct, but the answer seems to be wrong. I think the reason may be that the sort field receiveTime is the same for the two objects, which leads to an unstable ordering of the objects?
Here are the raw SQL queries
Replace order_by("receive_time") with order_by("receive_time", "id"). PostgreSQL uses qsort, which is not a stable sort; given only receive_time, if two rows have the same value, their relative order is not guaranteed.
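A minimal sketch of that change, assuming a hypothetical Django model named Dispatch with a receive_time field (the question's field was called receiveTime; the names here are placeholders, not taken from the original code):

from django.db import models

class Dispatch(models.Model):
    # Hypothetical model living in an app's models.py
    receive_time = models.DateTimeField()

# Ambiguous ordering: rows that share the same receive_time can come back in
# either order, so ds[1996] and ds[1997] may swap between evaluations.
ds = Dispatch.objects.order_by("receive_time")

# Deterministic ordering: the primary key breaks ties, so repeated queries agree.
ds = Dispatch.objects.order_by("receive_time", "id")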
Don't post code or logs in images. Ever.
Trying to figure out the right way to parse key-value pairs produced by a Cypher query:
@app.route('/about')
def about():
    data = graph.run("MATCH (n) RETURN n.level")
    for record in data:
        return render_template("output.html", output=record)
Please disregard the fact that I'm not combining the returned records into a list prior to populating the template. I do get one record as output, and am ok with that for now.
What I'm struggling with is - how do I handle the resulting k/v pair
(u'n.level': u'high')
I mean, if I'm just interested in the value 'high', is there a clean way to get hold of it?
Sorry if this sounds too basic. I do understand, there must be some parsing tools, but at this point, I just don't know where to look.
Sorry, the solution is simple. The query returns a py2neo.database.Record object, which can be indexed just like a list; the only caveat is that the list has only one element (not two, as it might appear).
So, if the variable record above equals (u'n.level': u'high'), then record[0] will be equal to 'high'.
And the u prefixes can just be ignored altogether, as explained elsewhere on SO.
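For what it's worth, a minimal sketch of that access pattern, assuming py2neo's Graph class and a local Neo4j instance (the connection URI and credentials are placeholders):

from py2neo import Graph

# Connection details are placeholders, not taken from the question.
graph = Graph("bolt://localhost:7687", auth=("neo4j", "password"))

for record in graph.run("MATCH (n) RETURN n.level"):
    # A Record behaves like a tuple, so positional indexing works:
    level = record[0]
    # Keyed access by the returned column name should also work:
    same_level = record["n.level"]
    print(level, same_level)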
I would like to do something like:
App.Model.find({unique_attribute_a: 'foo'}).objectAt(0).get('attribute_b')
basically first finding a model by its unique attribute that is NOT its ID, then getting another attribute of that model. (objectAt(0) is used because find by attribute returns a RecordArray.)
The problem is App.Model.find({unique_attribute_a: 'foo'}).objectAt(0) is always undefined. I don't know why.
Please see the problem in the jsbin.
It looks like you want to use a filter rather than a find (or in this case a findQuery). Example here: http://jsbin.com/iwiruw/438
App.Model.find({ unique_attribute_a: 'foo' }) converts the query to an ajax query string:
/model?unique_attribute_a=foo
Ember Data expects your server to return a filtered response. Ember Data then loads this response into an ImmutableArray and makes no assumptions about what you were trying to find; it just knows the server returned something that matched your query, and it groups that result into a non-changeable array (you can still modify the records, just not the array).
App.Model.filter, on the other hand, just filters the local store based on your filter function. It does have one "magical" side effect where it will do App.Model.find behind the scenes if there are no models in the store, although I am not sure if this is intended.
Typically I avoid filters, as they can have performance issues with large data sets in Ember Data. A filter must materialize every record, which can be slow if you have thousands of records.
Someone on IRC gave me this answer, and I modified it to make it work completely. Basically, I should have used filter.
App.Office.filter(function(e) { return e.get('unique_attribute_a') == 'foo'; }).objectAt(0)
Then I can get the attribute like:
App.Office.filter(function(e) { return e.get('unique_attribute_a') == 'foo'; }).objectAt(0).get('attribute_b')
See the code in jsbin.
Does anyone know WHY filter works but find doesn't? They both return RecordArrays.
I am trying to retrieve a single row from a table. This row contains fields that hold foreign keys into another table, which in turn is related to yet another table. I am trying to get just one row returned, yet the problem is that it returns not only the row but ALL the objects that are related to that table as well. As I have to deal with a fairly large amount of data, the returned object is very cumbersome because it contains all the related data as well. In some cases my script simply times out because there is far too much data to grab.
My question is; is there a way to retrieve just a single record without the associated fluff with it? I am basically accessing the table via the entityManager from the repository, then trying to get my record by using the ->find($id) method.
I am sure this is something stupidly simple but I can't seem to figure this out. Thanks in advance for any help, it is much appreciated.
Doctrine 2 uses "lazy loading", which means that the associated objects are not actually retrieved from the database until you try to access them.
So the find($id) is just fine.
I am using James Bennett's code (link text) to create a dynamic form. Everything is working OK, but I have now come to the point where I need to save the data and have become a bit stuck. I know I can access the data returned by the form and simply save it to the database as a string, but what I'd really like to do is save what type of data it is, e.g. date, integer, varchar, along with the value, so that when it comes to viewing the data I can do some processing on it depending on its type, e.g. get dates greater than last week.
So my question is: how do I find out what database type a form element corresponds to, based on what type of form element it is? E.g. a django.forms.IntegerField has a database field type of int, django.forms.DateField would be a date field, and django.forms.ChoiceField would be a varchar field.
Why do you want to know what kind of database field you are using? Are you storing information from the form through raw SQL? You should have some model that stores the information from the form, and it will do all the work for you.
Maybe you could show some form code? Right now it's hard to determine what exactly you are trying to do.
I can not understand the exact problem, so forgive me if I get things wrong.
If you are using models, then you don't need to know about database-level data types; they are defined by Django according to your model fields.
However, since you are talking about dynamic forms (I've read the article), you are probably not working with models, at least not directly. In that case, it should not matter as well, because you are using form validation so, for example, you can be absolutely sure that an integer comes out of a forms.IntegerField field, unicode comes out of forms.CharField and so on.
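As a quick illustration of that guarantee, here is a minimal standalone sketch (the field names are made up):

import django
from django.conf import settings

# Minimal settings so the form can run outside a full project.
settings.configure()
django.setup()

from django import forms

class ExampleForm(forms.Form):
    count = forms.IntegerField()
    when = forms.DateField()

form = ExampleForm(data={"count": "42", "when": "2009-11-28"})
assert form.is_valid()
print(type(form.cleaned_data["count"]))  # <class 'int'>
print(type(form.cleaned_data["when"]))   # <class 'datetime.date'>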
If you are writing your database-interaction routines by hand (raw SQL), then you have to map Python types to DB types yourself; for example, <type 'int'> goes to a column of type integer (or something), <type 'datetime.datetime'> goes to a datetime type of column (or not, this example is arbitrary), and so on. When you are using models, Django does this type of mapping for you in a database-engine-independent way.
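A rough sketch of such a hand-written mapping; the column types here are assumptions and will vary by database engine (with models, Django's fields make this choice for you):

import datetime

# Hypothetical Python-type-to-column-type map for raw SQL code.
PYTHON_TO_DB_TYPE = {
    int: "integer",
    float: "double precision",
    str: "varchar(255)",
    bool: "boolean",
    datetime.date: "date",
    datetime.datetime: "timestamp",
}

def column_type_for(value):
    # Fall back to a generic text column for anything unmapped.
    return PYTHON_TO_DB_TYPE.get(type(value), "text")

print(column_type_for(42))                       # integer
print(column_type_for(datetime.datetime.now()))  # timestamp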
Either way, you yourself are defining the data types on the Python side, and you or Django must also define the data types on the DB side. The choice of those types is, at times, not an automatic 1:1 decision but rather a design decision, based on what the data is used for in your application.
Sorry, if this makes little sense, but, I must admit, that I don't quite understand the problem behind your question.
If you're using James' code, then you don't get a Model out of the form per se, rather a list of form field elements. That means that you can't save the data as a Model instance.
I think you have two choices: bundle the whole form into a JSON object and save that into a LONGTEXT column in your database, or save each form element into a row of the database on its own, saving it into a BLOB entry. In the latter case, you'll need to 'pickle' the object before saving it. If you pickle the object and save it into the database, then when you retrieve it and unpickle it, you'll have all the Python class information associated with the object.
Trying to make this clearer: if you have the bytes 2009-11-28 21:34:36.516176, is this a str or a datetime object? You can't tell if it's stored in the database as a VARCHAR or LONGTEXT. That is the core of your question; you do get object information if you save it as a pickled object, though.
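A tiny sketch of that point, using the example timestamp above: pickling keeps the Python type, while a plain text column only gives you back a string:

import pickle
from datetime import datetime

# The same value stored two ways.
as_text = "2009-11-28 21:34:36.516176"                                # type information is lost
as_pickle = pickle.dumps(datetime(2009, 11, 28, 21, 34, 36, 516176))  # type travels with the bytes

restored = pickle.loads(as_pickle)
print(type(restored))  # <class 'datetime.datetime'>
print(type(as_text))   # <class 'str'> -- what a VARCHAR/LONGTEXT column hands back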
By extension, you could save your whole Form object into the database, either as a JSON object, or pickle the object and save that.
I'm struggling with something very similar at the moment, as I'm trying to put together a dynamic form system, and I'm thinking of going the 'individual form field element, pickled and saved into the database' route. So I'll be watching how this question works out! :)