Get all entities for namespace -- KindError while trying to fetch all entities of a namespace from Datastore (App Engine, Python 2.7)

I am trying to fetch all entities in a namespace so I can delete them in a later step.
I'm using App Engine with Datastore and the ndb library on Python 2.7.
I have a simple query to get all entities:
def get_entities(namespace_id):
    return [entity for entity in ndb.Query(namespace=namespace_id).fetch()]
I also modified it to skip the dunder (__) Datastore statistics kinds that the legacy bundled services expose:
def get_entities(namespace_id):
    return [entity for entity in ndb.Query(namespace=namespace_id).fetch() if not entity.key.id_or_name.startswith('__')]
Running locally against the Datastore emulator this works just fine,
but I get this error when deployed in the cloud:
KindError: No model class found for kind '__Stat_Ns_Kind_IsRootEntity__'. Did you forget to import it?
I found this post Internal Kinds Returned When Retrieving All Entities Belonging to a Particular Namespace but not a clear answer.
If you have another way to get all the entities for a specific namespace, it will be welcome!

Per the documentation you referenced, it's the kind name that begins and ends with two underscores:
Each statistic is accessible as an entity whose kind name begins and ends with two underscores
However, your code is checking for entity keys whose id_or_name starts with underscores. You should be checking the kind instead.
Modify your code to
return [key for key in ndb.Query(namespace=namespace_id).fetch(keys_only=True) if not key.kind().startswith('__')]
Note: I switched your query to a keys-only fetch, since all you want to do is delete the records.
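The kind-name check itself can be exercised without the emulator. A minimal stand-alone sketch (plain Python, with kinds represented as bare strings; the naming rule for statistics entities is the one quoted from the documentation above):

```python
def is_internal_kind(kind):
    # Datastore statistics entities use kind names that begin and end
    # with two underscores, e.g. '__Stat_Ns_Kind_IsRootEntity__'.
    return kind.startswith('__') and kind.endswith('__')

kinds = ['__Stat_Ns_Kind_IsRootEntity__', 'Invoice', '__Stat_Total__', 'User']
user_kinds = [k for k in kinds if not is_internal_kind(k)]
print(user_kinds)  # ['Invoice', 'User']
```

In the real query you would apply the same predicate to key.kind() for each key returned by the keys-only fetch, then hand the surviving keys to ndb.delete_multi().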

Related

Create sqlalchemy engine from django.db connection

With a Django app and a PostgreSQL db, I use pandas read_sql quite a bit for rendering complex queries into a dataframe for manipulation and ultimate rendering to JSON. Historically I have used django.db connections to map from a multi-db environment and passed that connection directly into the read_sql function. As pandas evolves and becomes less tolerant of non-SQLAlchemy connections as an argument, I am looking for a simple method to take my existing connection and use it with create_engine.
I've seen some related past comments that suggest
engine = create_engine('postgresql+psycopg2://', creator=con)
but for me that generates an error: TypeError: 'ConnectionProxy' object is not callable
I've played with various attributes of the ConnectionProxy with no success. I'd like to avoid writing a separate connection manager or learning too much about SQLAlchemy. Django uses psycopg2 to create its connections, and I'm a long-time user of the same. Add-ons like aldjemy are too intrusive, since I neither need nor want models mapped. Some have suggested ignoring the warning message since the db connection still works, but that seems like a long-term risk.
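For what it's worth, the TypeError suggests `creator` was handed a connection object rather than a callable: SQLAlchemy's `creator` argument must be a zero-argument function that returns a DBAPI connection. A hedged illustration using SQLite instead of Postgres, so it runs stand-alone (with Django you would wrap the raw psycopg2 connection in a lambda the same way):

```python
import sqlite3
from sqlalchemy import create_engine, text

# `creator` must be a zero-argument callable that RETURNS a DBAPI
# connection; passing the connection itself raises "... is not callable".
engine = create_engine('sqlite://', creator=lambda: sqlite3.connect(':memory:'))

with engine.connect() as conn:
    value = conn.execute(text('select 1')).scalar()
print(value)  # 1
```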

Can't access to django models with an external script

I have created a Django project with a series of tables (models) using PostgreSQL. I want one of the tables to be accessible from outside the Django project as well: from Django I simply want to see its content, but from an external script I want to insert the data.
The problem is that when I try to access any of the tables created in Django from an external script, from the Windows terminal, or even from the client program that PostgreSQL offers, it tells me that the table does not exist. What am I missing or doing wrong?
Below I show a screenshot with the tables I have and how it gives me an error.
As you can see, I am able to list all the tables, but then it doesn't let me select any of them. I have tried everything in lowercase, and also removing the Platform_App_ prefix, with no luck. How can I access them?
Here is a similar question that was asked, but I can't get its solution to work.
Thank you.
I will expand the answer to make it clearer why this helped.
Short answer: in PostgreSQL, all unquoted identifiers are folded to lower case.
Slightly longer answer: the main problem is that your table name is mixed-case, and PostgreSQL requires double quotes to make an identifier case-sensitive.
So, if your table were named platform_app_fleet, then this would work:
select * from platform_app_fleet;
because the table name is all lower case. But when the table name mixes lower and upper case, like Platform_App_fleet, you need to quote it:
select * from "Platform_App_fleet";

Why is ActiveRecord creating different ruby object when querying the same record?

I'm trying to test a named scope in my Rails model with RSpec using FactoryBot. I'm creating several records, where only one is returned by the scope.
RSpec.describe GemNamespace::GemModel, type: :model do
  before(:all) do
    FactoryBot.create(:gem_model, :trait1) # id 1
    FactoryBot.create(:gem_model, :trait2) # id 2
    FactoryBot.create(:gem_model, :trait3) # id 3
  end

  let(:included_record) { GemNamespace::GemModel.find 1 }

  describe 'my_named_scope' do
    it 'returns only records matching the conditions' do
      scope_results = GemNamespace::GemModel.my_named_scope
      expect(scope_results).to contain_exactly(included_record)
    end
  end
end
The test is failing because even though included_record is the only record in the scope_results, some debugging shows that the included_record is actually a different Ruby object than the one in the results for some reason. Thus, the contain_exactly fails.
I've done scope testing like this on tons of models and it's always worked. The only difference with this one is that the model is defined inside a gem, and I'm extending its functionality by adding my named scope to it in my Rails app.
What am I missing? Why is it behaving like this only for this model?
If it matters:
Ruby 2.5.0
Rails 5.1.5
rspec 3.7.0
rspec-rails 3.7.2
factory_bot(_rails) 4.8.2
UPDATE: I'll put this here instead of editing the above. I am actually testing a database view as opposed to a table. The views do not have a unique id column, so I'm not actually doing a GemNamespace::GemModel.find 1 above, but instead a where(column: <condition value>).
I solved this with a workaround. It seems that the database view (and corresponding model) not having an id column breaks record comparison: ActiveRecord's == considers two records equal only when they share a class and a non-nil id, so records loaded from a view with no id fall back to object-identity comparison, and separately loaded copies never match. So I simply compared all the values of the two objects "manually":
# As a workaround, we're just gonna convert them both to Ruby hashes using
# the #as_json method, and compare those instead.
expect(scope_results.as_json).to contain_exactly(included_record.as_json)
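The underlying behavior can be reproduced without Rails at all: with no id to compare, equality falls back to object identity, while as_json hashes compare by value. A plain-Ruby stand-in (class and method names here are illustrative, not from the original code):

```ruby
# Two distinct objects holding identical data.
class Row
  def initialize(name)
    @name = name
  end

  def as_json
    { 'name' => @name }
  end
end

a = Row.new('x')
b = Row.new('x')

puts a == b                  # false -- default equality is object identity
puts a.as_json == b.as_json  # true  -- hashes compare by value
```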

"Type" used as keyword raising an exception in RSpec but not in production or development environments

I'm working on a large web app that uses "type" as a column in many of the database tables. I understand that "type" is a column name reserved by Rails (ActiveRecord uses it for single-table inheritance) and should not be used as an ordinary column. However, why is it that I can still run the web app on my local server just fine, and there aren't any apparent problems in the production environment? Will using "type" as a column potentially cause trouble in the future?
This behavior is even more confusing because it does cause my RSpec feature tests to fail when creating a video (one of the resources) and then redirecting to the show view. (Note that the video has attributes with associations to several of the tables which have "type" as a column.)
This is the error message that is raised:
"The single-table inheritance mechanism failed to locate the subclass:
'reference'. This error is raised because the column
'type' is reserved for storing the class in case of
inheritance. Please rename this column if you didn't intend it to
be used for storing the inheritance class or overwrite
Tag.inheritance_column to use another column for that information."
(Pulled from the HTML generated and displayed by print page.body)
Why would this exception be raised in my test specs but not in the development or production environments? (I'm in charge of putting together the test specs, so if you have advice on ways to get around this error, that would be helpful too!)
Notes on my configuration:
I'm using Ruby 2.1.2 and Rails 4.1.1
Using Capybara, factory_girl, and capybara-webkit as the web driver
As it turns out, there was an explicit type column in the schema, but it was pulled from the subclass of the resource. The reason RSpec had a problem is that I was trying to fill the type column without naming a subclass. The solution was to use the subclass notation when inputting data into type: in my case, the string in the type column needed to be "Tags::Reference" rather than "reference".
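What STI does with that stored string can be mimicked in plain Ruby: Rails essentially constantizes the value of the type column to locate the subclass, so 'reference' fails while 'Tags::Reference' resolves. A rough stand-in (the module and class names follow the error message above; locate_subclass is an invented helper, not Rails API):

```ruby
module Tags
  class Reference; end
end

# Rails' STI roughly does this with the value of the `type` column:
# turn the string into a constant to find the subclass.
def locate_subclass(type_value)
  Object.const_get(type_value)
rescue NameError
  nil
end

puts locate_subclass('Tags::Reference')    # Tags::Reference
puts locate_subclass('reference').inspect  # nil -- no such constant
```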

Django import error from foreign key in another application model

I followed this post and sorted out how to set the ForeignKey of one model to a model in another application. However, when I try it a second time I get an error, and I'm not sure why.
I have a Central app with models for a 'project' and an 'annotation', and a Reports app with a 'report' model. An 'annotation' has a FK to a 'report' in the Reports app, and that works fine with this code:
# models.py for Central app
from GIanno.pt_reports.models import Report

class annotation(models.Model):
    ...
    report = models.ForeignKey(Report)
But in the Reports app, when I try to give the 'report' a FK linking it to a 'project' from the Central app using the same format as above, I get the error "cannot import name 'project'" on the import line.
Any ideas on why it works one way and not the other? Does order somehow matter? Thanks.
My guess is that you have created a circular import condition. This occurs when one Python module imports from another module which in turn imports from the module trying to import it, preventing the import from ever resolving.
In general there are three strategies for dealing with circular imports, two of which will work in this case:
Move around your classes and imports so that the imports only go one direction.
Use lazy evaluation. In Django's case this can be accomplished for a ForeignKey by passing a string specifying the app label and model using dot notation: report = models.ForeignKey('central.Report')
Move the import statement out of the global module scope and into the scope of a function within the module. That way the import isn't evaluated immediately and the module can be successfully imported as a whole while still allowing the import within the module to happen when it's called. (Note: this won't work for ForeignKey relationships)
The lazy FK resolution (#2) is probably your best bet here. In general, though, the best strategy is to simplify your model/module arrangement to avoid circular imports whenever possible.
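Django's string form works because the name is resolved only when the relationship is actually needed, after all apps are loaded. A toy stand-in for that mechanism (the registry and class names are invented for illustration, not Django internals):

```python
# A string target is looked up in a registry at use time, so defining
# the field never triggers an import of the other module.
MODEL_REGISTRY = {}

class LazyForeignKey(object):
    def __init__(self, target):
        self.target = target  # a class, or an 'app.Model' string

    def resolve(self):
        if isinstance(self.target, str):
            return MODEL_REGISTRY[self.target]
        return self.target

class Report(object):
    project = LazyForeignKey('central.project')

# 'central.project' can be registered long after Report was defined.
class project(object):
    pass

MODEL_REGISTRY['central.project'] = project

print(Report.project.resolve() is project)  # True
```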
Try:

class annotation(models.Model):
    ...
    report = models.ForeignKey('centralapp.Report')

Replace 'centralapp' with the name of your central app; no import is needed.
Lazy Relationships
Another scenario where the Lazy Relationships might be useful is with import order. It's not a circular reference (where it can't tell who's first) but a case where one piece of code is loaded before the other can be.
For example, let's say I have a Doc model and a Log model. The Log model has a FK to the Doc so I can record changes to the document. This works fine until, say, I try to generate a Log record in the save method of my Doc model (to write a save-event log entry). The Doc object holds no reference to Log in this case, but it is a similar issue.
In this case you get an import-order problem, where one module tries to reference something that has not been loaded into Python yet. It's similar to a circular reference, but with a different cause.
This can be solved in other ways, but it is another example of where you will run into this problem.