Data not inserted into database with sqlAlchemy [closed] - flask

I am new to Flask and web development.
When I try to insert data into an EMPTY database, either the data simply is not inserted, or an error saying "table check has no column named name" pops up (the name column is currently commented out, and I removed it from the arguments when instantiating the "check" object).
app = Flask(__name__)
app.app_context().push()
app.config.from_object(Config)
# don't let people modify the cookies
app.config['SECRET_KEY'] = 'you-will-never-guess'
# database and migration are instances in Python
with app.app_context():
    db = SQLAlchemy(app)
    db.create_all()
    db.session.commit()
db.init_app(app)
......
@app.route('/images/', methods=['GET', 'POST'])
def image():  # fetch the building by id
    # !!! use request to link the arguments passed in from the href to Python
    id = request.args.get('id', default='', type=str)
    form = CheckForm()
    if int(id) > 0 and int(id) <= image_len:
        if form.validate_on_submit():
            check = Check(yes=form.yes.data, maybe_yes=form.maybe_yes.data, no=form.no.data)
            db.session.add(check)
            db.session.commit()
            flash('inserted!')
            return redirect(url_for('index'))
        return render_template('images.html', images=images, id=id, form=form)
    else:
        flash("You reach the top", "warning")
        return render_template('allImage.html', images=images)
class Check(db.Model):
    dbid = db.Column(db.Integer, primary_key=True)
    # !!!!! name = db.Column(db.Integer, unique=True)
    yes = db.Column(db.Boolean)
    maybe_yes = db.Column(db.Boolean)
    no = db.Column(db.Boolean)

    def __repr__(self):
        return f"Check('{self.name}', '{self.yes}', '{self.maybe_yes}', '{self.no}')"

if __name__ == "__main__":
    u = Check(yes=False, maybe_yes=True, no=None)
    db.session.add(u)
    db.session.commit()
That is my code. Please help me find the bug and shed some light! Thanks a lot!

It seems like you defined a column "name" in your Check model but commented it out. So when you work with an instance of the Check model, you are referencing a "name" attribute that is not defined, hence the error "table check has no column named name".
To fix this, remove the reference to "name" from the __repr__ method of the Check model:
def __repr__(self):
    return f"Check('{self.yes}', '{self.maybe_yes}', '{self.no}')"
Also, you can initialize your database by using the following code:
if __name__ == '__main__':
    with app.app_context():
        db.create_all()
        u = Check(yes=False, maybe_yes=True, no=None)
        db.session.add(u)
        db.session.commit()
This creates the database tables only if they don't already exist, then adds a new Check instance and commits the changes to the database.
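For readers curious about the error message itself: it comes straight from SQLite. db.create_all() does not alter tables that already exist, so if the table was created while the name column was commented out, any later INSERT that mentions name fails. A minimal stdlib sketch, independent of Flask, that reproduces the error (table and column names simply mirror the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Create the table without a "name" column, as db.create_all() would have
# done while that column was commented out in the model.
conn.execute('CREATE TABLE "check" (dbid INTEGER PRIMARY KEY, yes BOOLEAN)')

try:
    # Inserting into the missing column reproduces the error.
    conn.execute('INSERT INTO "check" (name, yes) VALUES (\'x\', 1)')
except sqlite3.OperationalError as e:
    print(e)  # e.g. "table check has no column named name"
```

The usual fix after changing a model during development is either to delete the SQLite file and recreate it, or to use a migration tool so the existing table is altered.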

Related

Why doesn't use_bulk work in django-import-export?

I use django-import-export 2.8.0 with Oracle 12c.
Line-by-line import via import_data() works without problems, but when I turn on the use_bulk=True option, it stops importing and does not throw any errors.
Why doesn't it work?
resources.py
class ClientsResources(resources.ModelResource):
    class Meta:
        model = Clients
        fields = ('id', 'name', 'surname', 'age', 'is_active')
        batch_size = 1000
        use_bulk = True
        raise_errors = True
views.py
def import_data(request):
    if request.method == 'POST':
        file_format = request.POST['file-format']
        new_employees = request.FILES['importData']
        clients_resource = ClientsResources()
        dataset = Dataset()
        imported_data = dataset.load(new_employees.read().decode('utf-8'), format=file_format)
        result = clients_resource.import_data(imported_data, dry_run=True, raise_errors=True)
        if not result.has_errors():
            clients_resource.import_data(imported_data, dry_run=False)
    return HttpResponseRedirect(request.META.get('HTTP_REFERER'))
data.csv
id,name,surname,age,is_active
18,XSXQAMA,BEHKZFI,89,Y
19,DYKNLVE,ZVYDVCX,20,Y
20,GPYXUQE,BCSRUSA,73,Y
21,EFHOGJJ,MXTWVST,93,Y
22,OGRCEEQ,KJZVQEG,52,Y
--UPD--
I used django-debug-toolbar and saw some very strange behavior in the import queries.
It doesn't work from the Admin Panel either: I see all the importing rows, it even reports "Import finished, with 5 new and 0 updated clients.", and yet I see these strange queries.
Then I imported through my own form and got the same situation:
use_bulk via django-import-export (more queries),
and, for comparison, my hand-written create_bulk().
--UPD2--
I tried to trace the import logic, and look what I found:
import_export/resources.py
def bulk_create(self, using_transactions, dry_run, raise_errors, batch_size=None):
    """
    Creates objects by calling ``bulk_create``.
    """
    print(self.create_instances)
    try:
        if len(self.create_instances) > 0:
            if not using_transactions and dry_run:
                pass
            else:
                self._meta.model.objects.bulk_create(self.create_instances, batch_size=batch_size)
    except Exception as e:
        logger.exception(e)
        if raise_errors:
            raise e
    finally:
        self.create_instances.clear()
This print() showed an empty list.
This issue appears to be due to a bug in the 2.x versions of django-import-export; it is fixed in v3.
The bug is present when running in bulk mode (use_bulk=True):
the logic in save_instance() finds that 'new' instances have pk values set, and then incorrectly treats them as updates rather than creates.
I cannot determine how this would happen. It is possibly related to using Oracle (though I cannot see how).
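The misclassification described above can be sketched in a few lines of plain Python. This is a simplified, hypothetical model of the create-vs-update decision, not the library's actual code: when rows arrive with their ids already filled in (as in the CSV, which supplies ids 18-22), pk-based dispatch wrongly routes brand-new rows down the update path.

```python
# Hypothetical sketch of pk-based create-vs-update dispatch.
class Instance:
    def __init__(self, pk=None):
        self.pk = pk

def classify(instance):
    # pk set -> treated as an update; pk None -> treated as a create
    return "update" if instance.pk is not None else "create"

print(classify(Instance()))       # create
print(classify(Instance(pk=18)))  # update, even though row 18 is new to the DB
```

Updates against rows that don't exist match nothing and raise no error, which would explain the silent no-op the question describes.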

Django default data throws an error during migration

I'm using Django 1.7
class MyModel(models.Model):
    my_random_field_1 = models.ForeignKey(
        MyOtherModel, null=True, blank=True, related_name="random_1", default=get_random_1
    )
    my_random_field_2 = models.ForeignKey(
        MyOtherModel, null=True, blank=True, related_name="random_2", default=get_random_2
    )
And the 'random' functions:
def get_random_1():
    ob = MyOtherModel.objects.filter(...some filtering...)
    try:
        x = ob[0]
        return x
    except:
        return None

def get_random_2():
    ob = MyOtherModel.objects.filter(...some other filtering...)
    try:
        x = ob[1]
        return x
    except:
        return None
And when I try to migrate I get this error:
TypeError: int() argument must be a string, a bytes-like object or a number, not 'MyOtherModel'
Sentry is attempting to send 2 pending error messages
Waiting up to 10 seconds
But after that, when I open the admin panel and go to MyOtherModel, I do have these random fields, and they are properly initialized by ob[0] and ob[1].
To make this code work, you should return the instance's primary key as the default, not the instance itself:
def get_random_1():
    ob = MyOtherModel.objects.filter(...some filtering...)
    try:
        x = ob[0]
        return x.pk
    except:
        return None

def get_random_2():
    ob = MyOtherModel.objects.filter(...some other filtering...)
    try:
        x = ob[1]
        return x.pk
    except:
        return None
But mind you that this value will stay "baked" into your migrations file, and all instances that are in your db at the time of migration (old data, for instance) will get that one single value, so maybe this is not what you wanted.
Newer versions of Django don't even allow baking an object instance into a migration file:
ValueError: Cannot serialize: <Model: instance name>
There are some values Django cannot serialize into migration files.

Python interpreter not returning user info correctly in Flask tutorial [closed]

I am doing Miguel Grinberg's Flask tutorial and I am on step 4, Database.
http://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-iv-database
In this step, in the playtime section, I open the Python interpreter from the virtual env and type in the commands for adding a user:
>>> from app import db, models
>>> u = models.User(nickname='john', email='john2@gmail.com')
>>> db.session.add(u)
>>> db.session.commit()
Now to retrieve the user info I did the following:
>>> users = models.User.query.all()
>>> users
and instead of returning [<User u'john'>] I am getting:
[<app.models.User object at 0xb74bd1ac>]
which looks like the memory location of the object rather than the actual name. What am I doing wrong? Any suggestions?
My code for models.py:
from app import db

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    nickname = db.Column(db.String(64), index=True, unique=True)
    email = db.Column(db.String(120), index=True, unique=True)
    posts = db.relationship('Post', backref='author', lazy='dynamic')

    def _repr_(self):
        return '<User %r>' % (self.nickname)

class Post(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    body = db.Column(db.String(140))
    timestamp = db.Column(db.DateTime)
    user_id = db.Column(db.Integer, db.ForeignKey('user.id'))

    def _repr_(self):
        return '<Post %r>' % (self.body)
code for db_create.py:
#!flask/bin/python
from migrate.versioning import api
from config import SQLALCHEMY_DATABASE_URI
from config import SQLALCHEMY_MIGRATE_REPO
from app import db
import os.path

db.create_all()
if not os.path.exists(SQLALCHEMY_MIGRATE_REPO):
    api.create(SQLALCHEMY_MIGRATE_REPO, 'database repository')
    api.version_control(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO)
else:
    api.version_control(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO,
                        api.version(SQLALCHEMY_MIGRATE_REPO))
code for db_migrate.py:
#!flask/bin/python
import imp
from migrate.versioning import api
from app import db
from config import SQLALCHEMY_DATABASE_URI
from config import SQLALCHEMY_MIGRATE_REPO

v = api.db_version(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO)
migration = SQLALCHEMY_MIGRATE_REPO + ('/versions/%03d_migration.py' % (v+1))
tmp_module = imp.new_module('old_model')
old_model = api.create_model(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO)
exec(old_model, tmp_module.__dict__)
script = api.make_update_script_for_model(SQLALCHEMY_DATABASE_URI,
                                          SQLALCHEMY_MIGRATE_REPO,
                                          tmp_module.meta, db.metadata)
open(migration, "wt").write(script)
api.upgrade(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO)
v = api.db_version(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO)
print('New migration saved as ' + migration)
print('Current database version: ' + str(v))
Also, to clarify: I have followed all the steps so far across all four parts of the tutorial, working on Ubuntu 16.04 under VMware. Any help will be appreciated. Thanks in advance!
First of all, your database is working correctly, and you are getting the correct user back when you query it. What's not working is how the user object prints itself to the console.
This is controlled by the __repr__ method defined in the User class. You have a typo there: you wrote _repr_ instead of __repr__ (one underscore instead of two on each side of repr).
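A quick stdlib illustration of why the single-underscore spelling changes nothing (the class names here are just for demonstration): Python only ever looks up the double-underscore __repr__ hook, so a method named _repr_ is an ordinary method that never gets called, and you fall back to the default object representation.

```python
class Single:
    def _repr_(self):           # not a dunder; Python never calls this
        return "<User 'john'>"

class Double:
    def __repr__(self):         # the real hook used by repr() and the REPL
        return "<User 'john'>"

print(repr(Single()))  # default form: <...Single object at 0x...>
print(repr(Double()))  # <User 'john'>
```

Renaming the methods in both models to __repr__ makes the interpreter print [<User u'john'>] as the tutorial shows.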

get() in Google Datastore doesn't work as intended

I'm building a basic blog following the Web Development course by Steve Huffman on Udacity. This is my code:
import os
import webapp2
import jinja2
from google.appengine.ext import db

template_dir = os.path.join(os.path.dirname(__file__), 'templates')
jinja_env = jinja2.Environment(loader=jinja2.FileSystemLoader(template_dir), autoescape=True)

def datetimeformat(value, format='%H:%M / %d-%m-%Y'):
    return value.strftime(format)

jinja_env.filters['datetimeformat'] = datetimeformat

def render_str(template, **params):
    t = jinja_env.get_template(template)
    return t.render(params)

class Entries(db.Model):
    title = db.StringProperty(required=True)
    body = db.TextProperty(required=True)
    created = db.DateTimeProperty(auto_now_add=True)

class MainPage(webapp2.RequestHandler):
    def get(self):
        entries = db.GqlQuery('select * from Entries order by created desc limit 10')
        self.response.write(render_str('mainpage.html', entries=entries))

class NewPost(webapp2.RequestHandler):
    def get(self):
        self.response.write(render_str('newpost.html', error=""))

    def post(self):
        title = self.request.get('title')
        body = self.request.get('body')
        if title and body:
            e = Entries(title=title, body=body)
            length = db.GqlQuery('select * from Entries order by created desc').count()
            e.put()
            self.redirect('/newpost/' + str(length+1))
        else:
            self.response.write(render_str('newpost.html', error="Please type in a title and some content"))

class Permalink(webapp2.RequestHandler):
    def get(self, id):
        e = db.GqlQuery('select * from Entries order by created desc').get()
        self.response.write(render_str('permalink.html', id=id, entry=e))

app = webapp2.WSGIApplication([('/', MainPage),
                               ('/newpost', NewPost),
                               ('/newpost/(\d+)', Permalink)
                               ], debug=True)
In the class Permalink, I'm using the get() method on the query that returns all records in descending order of creation, so it should return the most recently added record. But when I add a new record, permalink.html (just a page that shows the title, body and creation date of the new entry) shows the SECOND most recently added one. For example, I already had three records; when I added a fourth, instead of showing the details of the fourth record, permalink.html showed me the third. Am I doing something wrong?
I don't think my question is a duplicate of this - Read delay in App Engine Datastore after put(). That question is about read delay of put(), while I'm using get(). The accepted answer also states that get() doesn't cause any delay.
This is because of eventual consistency used by default for GQL queries.
You need to read:
https://cloud.google.com/appengine/docs/python/datastore/data-consistency
https://cloud.google.com/appengine/docs/python/datastore/structuring_for_strong_consistency
https://cloud.google.com/datastore/docs/articles/balancing-strong-and-eventual-consistency-with-google-cloud-datastore/
and search & read on SO and other sources about strong & eventual consistency in Google Cloud Datastore.
You can specify read_policy=STRONG_CONSISTENCY for your query, but it has associated costs that you should be aware of and take into account.

SQLAlchemy query not retrieving committed data in alternate session

I have a problem where I insert a database item through a SQLAlchemy / Tastypie REST interface, but the item is missing when I subsequently get the list of items. It shows up only after I get the list of items a second time.
I am using SQLAlchemy with Tastypie/Django running on Apache via mod_wsgi. I use a singleton Database Manager class to hold my engine and declarative_base, and, with Tastypie, a separate class to get the session and make sure I roll back if there is a problem with the commit. As in the update below, the problem occurs when I don't close my session after inserting. Why is this necessary?
My original code was like this:
Session = scoped_session(sessionmaker(autoflush=True))

# Singleton Database Manager class for managing the session
class DatabaseManager():
    engine = None
    base = None

    def ready(self):
        host = 'mysql+mysqldb://etc...'
        if self.engine and self.base:
            return True
        else:
            try:
                self.engine = create_engine(host, pool_recycle=3600)
                self.base = declarative_base(bind=self.engine)
                return True
            except:
                return False

    def getSession(self):
        if self.ready():
            session = Session()
            session.configure(bind=self.engine)
            return session
        else:
            return None

DM = DatabaseManager()

# A session class I use with Tastypie to ensure the session is destroyed at the
# end of the transaction, because Tastypie creates singleton Resources used for
# all threads
class MySession:
    def __init__(self):
        self.s = DM.getSession()

    def safeCommit(self):
        try:
            self.s.commit()
        except:
            self.s.rollback()
            raise

    def __del__(self):
        try:
            self.s.commit()
        except:
            self.s.rollback()
            raise
# ... Then ... when I get requests through Apache/mod_wsgi/Django/Tastypie

# First Request
obj_create():
    db = MySession()
    print db.s.query(DBClass).count()  # returns 4
    newItem = DBClass()
    db.s.add(newItem)
    db.s.safeCommit()
    print db.s.query(DBClass).count()  # returns 5

# Second Request after First Request returns
obj_get_list():
    db = MySession()
    print db.s.query(DBClass).count()  # returns 4 ... should be 5

# Third Request is okay
obj_get_list():
    db = MySession()
    print db.s.query(DBClass).count()  # returns 5
UPDATE
After further digging, it appears the problem is that my session needed to be closed after creating. Perhaps this is because Tastypie's obj_create() adds the SQLAlchemy object to its bundle, and I don't know what happens after it leaves the function's scope:
obj_create():
    db = MySession()
    newItem = DBClass()
    db.s.add(newItem)
    db.s.safeCommit()
    copiedObj = copyObj(newItem)  # copy SQLAlchemy record into non-sa object (see below)
    db.s.close()
    return copiedObj
If someone cares to explain this in an answer, I can close the question. Also, for those who are curious, I copy my object out of SQLAlchemy like this:
class Struct:
    def __init__(self, **entries):
        self.__dict__.update(entries)

class MyTastypieResource(Resource):
    ...
    def copyObject(self, object):
        base = {}
        # self._meta is part of my tastypie resource
        for p in class_mapper(self._meta.object_class).iterate_properties:
            if p.key not in base and p.key not in self._meta.excludes:
                base[p.key] = getattr(object, p.key)
        return Struct(**base)
The problem was resolved by closing my session. The update above didn't solve the problem fully; I ended up adding a middleware class to close the session at the end of each request. This ensured everything was written to the database. The middleware looks a bit like this:
class SQLAlchemySessionMiddleWare(object):
    def process_response(self, request, response):
        try:
            session = MyDatabaseManger.getSession()
            session.commit()
            session.close()
        except Exception, err:
            pass
        return response

    def process_exception(self, request, exception):
        try:
            session = MyDatabaseManger.getSession()
            session.rollback()
            session.close()
        except Exception, err:
            pass
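The stale read above can be reproduced with nothing but the stdlib. A connection that sits inside an open transaction keeps the snapshot it started with and does not see rows committed by another connection until its own transaction ends, which is exactly why closing (or committing) the session per request fixes things. A sketch using SQLite in WAL mode as a stand-in for MySQL's REPEATABLE READ (all names here are illustrative):

```python
import os
import sqlite3
import tempfile

# A file-backed database so two independent connections can share it.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
writer.execute("PRAGMA journal_mode=WAL")  # allow concurrent reader and writer
writer.execute("CREATE TABLE item (id INTEGER PRIMARY KEY)")
writer.commit()

# Autocommit mode so we control the reader's transaction explicitly.
reader = sqlite3.connect(path, isolation_level=None)
reader.execute("BEGIN")  # a long-lived transaction, like an unclosed session
count_stale = reader.execute("SELECT COUNT(*) FROM item").fetchone()[0]  # snapshot taken here

writer.execute("INSERT INTO item VALUES (1)")
writer.commit()  # committed, but the reader's snapshot predates it

count_inside = reader.execute("SELECT COUNT(*) FROM item").fetchone()[0]  # still the old count
reader.execute("COMMIT")  # end the old transaction, like session.close()
count_fresh = reader.execute("SELECT COUNT(*) FROM item").fetchone()[0]

print(count_stale, count_inside, count_fresh)  # 0 0 1
```

The middleware above plays the role of the final COMMIT/close: by ending each request's transaction, the next request starts a fresh one and sees everything committed so far.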