using sqlalchemy cannot commit to database - flask

I'm making a video page that tracks play count and comment count. Every time I reload the page I display the play count with {{ movie.play_num }}, and it always shows the incremented value, but the change never reaches the database, so I don't know what's wrong. It seems like session.commit() doesn't work.
Besides, if I put some words in the comment field and submit, a "waiting for response..." message shows up and the server seems to keep waiting until it times out.
Here's my code:
def play(id=None):
    movie = Movie.query.join(Tag).filter(
        Tag.id == Movie.tag_id,
        Movie.id == int(id)
    ).first_or_404()
    form = CommentForm()
    if "user" in session and form.validate_on_submit():
        data = form.data
        comment = Comment(
            content=data["content"],
            movie_id=movie.id,
            user_id=session["user_id"]
        )
        db.session.add(comment)
        db.session.commit()  # problem here
        movie.comment_num = movie.comment_num + 1
        flash("Success", "ok")
        return redirect(url_for("home.play", id=movie.id))
    movie.play_num = movie.play_num + 1
    try:
        db.session.commit()  # and problem here
    except:
        db.session.rollback()
    return render_template("home/play.html", movie=movie, form=form)
You can see that play_num changed from 0 to 1 after the first reload, but on the second reload the page can't be opened and the console doesn't show any data. An error occurred:
This Session's transaction has been rolled back due to a previous exception during flush. To begin a new transaction with this Session, first issue Session.rollback(). Original exception was: (raised as a result of Query-invoked autoflush; consider using a session.no_autoflush block if this flush is occurring prematurely) (pymysql.err.InternalError)
How can I fix this?

Try wrapping the commit in a try/except and printing the exception message:
from sqlalchemy.exc import SQLAlchemyError
try:
    db.session.commit()
except SQLAlchemyError as e:
    print(str(e))
    db.session.rollback()
Check what the error message says first.
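For reference, here is a minimal sketch of one way the play view could guard both commits, assuming the same Movie/Comment models and db object from the question. The comment_num increment is moved before the commit so it lands in the same transaction, and any SQLAlchemyError triggers a rollback so the session stays usable on the next request:

from sqlalchemy.exc import SQLAlchemyError

def play(id=None):
    movie = Movie.query.join(Tag).filter(
        Tag.id == Movie.tag_id,
        Movie.id == int(id)
    ).first_or_404()
    form = CommentForm()
    if "user" in session and form.validate_on_submit():
        comment = Comment(
            content=form.data["content"],
            movie_id=movie.id,
            user_id=session["user_id"]
        )
        movie.comment_num = movie.comment_num + 1  # committed together with the new comment
        db.session.add(comment)
        try:
            db.session.commit()
            flash("Success", "ok")
        except SQLAlchemyError as e:
            db.session.rollback()  # clear the failed transaction
            print(str(e))
        return redirect(url_for("home.play", id=movie.id))
    movie.play_num = movie.play_num + 1
    try:
        db.session.commit()
    except SQLAlchemyError as e:
        db.session.rollback()
        print(str(e))
    return render_template("home/play.html", movie=movie, form=form)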

Related

How to stop scrapy from paginating the pages with repetitive records?

I tried to crawl a paginated website with Scrapy, and it worked fine. But the website gets updated and new posts are added, so I need to run my code every day, and each time I run it, it crawls all the pages again. Fortunately, I'm using Django, and in my Django model I used
unique=True
so there are no duplicate records in my database, but I want to stop the pagination crawling as soon as a duplicate record is found. How should I do this?
Here is my spider code snippet:
class NewsSpider(scrapy.Spider):
    name = 'news'
    allowed_domains = ['....']
    start_urls = ['....']
    duplicate_record_flag = False

    def parse(self, response, **kwargs):
        next_page = response.xpath('//a[@class="next page-numbers"]/@href').get()
        news_links = response.xpath('//div[@class="content-column"]/div/article/div/div[1]/a/@href').getall()
        for link in news_links:
            if self.duplicate_record_flag:
                print("Closing Spider ...")
                raise CloseSpider('Duplicate records found')
            yield scrapy.Request(url=link, callback=self.parse_item)
        if next_page and not self.duplicate_record_flag:
            yield scrapy.Request(url=next_page, callback=self.parse)

    def parse_item(self, response):
        item = CryptocurrencyNewsItem()
        ...
        try:
            CryptocurrencyNews.objects.get(title=item['title'])
            self.duplicate_record_flag = True
            return
        except CryptocurrencyNews.DoesNotExist:
            item.save()
            return item
I used a class variable (duplicate_record_flag) so that I can access it in all methods and know when I am facing a duplicate record.
The problem is that the spider doesn't stop immediately when the first duplicate record is found. To clarify: in the for loop in the parse function, if there are 10 news_links and the first iteration hits a duplicate record, the flag doesn't change at that moment; if I print the flag inside the loop, it prints "False" for all 10 iterations, while it should change to "True" after the first one.
In other words, the crawler crawls all the links on each page in every parse call.
How can I prevent this?
If you want to stop the spider after meeting certain criteria, you can raise CloseSpider:
from scrapy.exceptions import CloseSpider

if some_logic_to_check_duplicates:
    raise CloseSpider('Duplicate records found')  # this message shows up in the logs
If you just want to skip the duplicate item, you can raise a DropItem exception from a pipeline. Example code from the Scrapy docs:
from itemadapter import ItemAdapter
from scrapy.exceptions import DropItem

class DuplicatesPipeline:
    def __init__(self):
        self.ids_seen = set()

    def process_item(self, item, spider):
        adapter = ItemAdapter(item)
        if adapter['id'] in self.ids_seen:
            raise DropItem(f"Duplicate item found: {item!r}")
        else:
            self.ids_seen.add(adapter['id'])
            return item
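For the pipeline to run, it also has to be enabled in the project settings; a minimal example, assuming the pipeline lives in a module called myproject.pipelines (the project name here is just a placeholder):

# settings.py
ITEM_PIPELINES = {
    "myproject.pipelines.DuplicatesPipeline": 300,
}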

How to do a manual commit using Django

I would like to execute a transaction that deletes some values, then counts rows in the db, and if the result is < 1, rolls back. I tried the following code:
@login_required
@csrf_exempt
@transaction.atomic
def update_user_groups(request):
    if request.POST:
        userId = request.POST['userId']
        groups = request.POST.getlist('groups[]')
        result = None
        with transaction.atomic():
            try:
                GroupsUsers.objects.filter(user_id=int(userId)).delete()
                for group in groups:
                    group_user = GroupsUsers()
                    group_user.user_id = userId
                    group_user.group_id = group
                    group_user.save()
                count = UsersInAdmin.objects.all().count()
                if count < 1:
                    transaction.commit()
                else:
                    transaction.rollback()
            except Exception as e:
                result = e
        return JsonResponse(result, safe=False)
Thanks,
It's not possible to manually commit or rollback the transaction inside an atomic block.
Instead, you can raise an exception inside the atomic block. If the block completes, the transaction will be committed. If you raise an exception, the transaction will be rolled back. Outside the atomic block, you can catch the exception and carry on with your view.
try:
    with transaction.atomic():
        do_stuff()
        if ok_to_commit():
            pass
        else:
            raise ValueError()
except ValueError:
    pass
I think you should give the Django documentation about db transactions another look.
The most relevant remark is the following:
Avoid catching exceptions inside atomic!
When exiting an atomic block, Django looks at whether it’s exited normally or with an exception to determine whether to commit or roll back. If you catch and handle exceptions inside an atomic block, you may hide from Django the fact that a problem has happened. This can result in unexpected behavior.
Adapting the example from the docs, it looks like you should nest your try and with blocks like this:
@transaction.atomic
def update_user_groups(request):
    # set your variables
    try:
        with transaction.atomic():
            # do your for loop
            if count < 1:
                # You should probably make a more descriptive, non-generic exception
                # see http://stackoverflow.com/questions/1319615/proper-way-to-declare-custom-exceptions-in-modern-python for more info
                raise Exception('Count is less than one')
    except:
        handle_exception()
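Putting it together, a minimal sketch of the view could look like the following. The RollbackRequested exception name and the success/failure payloads are illustrative, not from the original question:

from django.db import transaction
from django.http import JsonResponse


class RollbackRequested(Exception):
    """Hypothetical marker exception used only to trigger a rollback."""
    pass


def update_user_groups(request):
    userId = request.POST['userId']
    groups = request.POST.getlist('groups[]')
    result = "ok"
    try:
        with transaction.atomic():
            GroupsUsers.objects.filter(user_id=int(userId)).delete()
            for group in groups:
                GroupsUsers.objects.create(user_id=userId, group_id=group)
            if UsersInAdmin.objects.count() < 1:
                # leaving the atomic block via an exception makes Django roll back
                raise RollbackRequested()
    except RollbackRequested:
        result = "rolled back"
    return JsonResponse(result, safe=False)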

Multiple ajax request (Singleton Pattern)

I am using django-restless for an AJAX POST request, which takes almost 10 to 20 seconds.
Here is my code:
class testEndPoint(Endpoint):
    def post(self, request):
        testForm = TestEmailForm(request.data)
        if testForm.is_valid():
            sometable = EmailTable.objects.get(**condition)
            if sometable.is_email_sent == False:
                # Send email
                # Takes around 15 seconds
                sometable.is_email_sent = True
                sometable.save()
        else:
            result = testForm.errors
        return serialize(result)
I am calling it via $.ajax, but the problem is that if two requests hit this URL within milliseconds of each other, both requests pass the if sometable.is_email_sent == False: condition.
How can I prevent multiple submissions? Right now I have moved sometable.is_email_sent = True; sometable.save() before the email-sending part, but I need a more generic solution, as there are a dozen more places where this happens. I am on Django 1.5.
You should disable the originating input element before you start your ajax call (that will prevent the majority of these issues).
The remaining problems can be solved by using select_for_update
class testEndPoint(Endpoint):
    @transaction.commit_manually
    def post(self, request):
        testForm = TestEmailForm(request.data)
        if testForm.is_valid():
            condition['is_email_sent'] = False
            try:
                rows = EmailTable.objects.select_for_update().filter(**condition)
                for row in rows:
                    row.is_email_sent = True
                    row.save()
                # Send email
            except:
                transaction.rollback()
                raise
            else:
                transaction.commit()
        else:
            result = testForm.errors
        return serialize(result)
select_for_update will lock the rows until the end of the transaction (i.e. it needs to be inside a transaction). By adding is_email_sent=False to the condition, we can remove the if. I've moved the changing of is_email_sent above the "Send Email", but it is not strictly necessary -- in any case it will be undone by the transaction rolling back if there is an exception.
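On newer Django versions, where commit_manually no longer exists, the same idea can be expressed with transaction.atomic. A rough sketch, assuming the same form, table, and condition names from the question (the success payload is illustrative):

from django.db import transaction

class testEndPoint(Endpoint):
    def post(self, request):
        testForm = TestEmailForm(request.data)
        if testForm.is_valid():
            condition['is_email_sent'] = False
            with transaction.atomic():
                # rows stay locked until the atomic block commits or rolls back
                rows = EmailTable.objects.select_for_update().filter(**condition)
                for row in rows:
                    row.is_email_sent = True
                    row.save()
                    # send the email here; an exception rolls everything back
            result = {"status": "sent"}
        else:
            result = testForm.errors
        return serialize(result)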

How can I do a transaction rollback and commit at import time

The code given below is the program I use to import data into my Django model.
It reads a file and imports each line from the file into the Django model. There may be 1000 files, some of them will contain errors, and the program catches the exception if a file has any kind of error and logs it. What I need is to roll back the entire transaction for any file that throws an error. I want to do this with Postgres.
def impSecOrdr_File(self, path):
    lines = 1
    try:
        fromFile = open(path)
        for eachLine in fromFile:
            obj = SecOrdr_File()
            if lines != 1:
                fieldsInline = eachLine.split(",")
                obj.dates = dates
                obj.column1 = fieldsInline[0].strip()
                obj.column2 = fieldsInline[1].strip()
                obj.column3 = float(fieldsInline[2].strip())
                obj.column4 = float(fieldsInline[3].strip())
                obj.save()
            lines += 1
    except BaseException as e:
        transaction.rollback()
        logging.info('\tError in importing %s line %d : %s' % (path, lines, e.__str__()))
    else:
        try:
            logging.info("\tImported %s, %d lines" % (path, lines))
        except Exception as e:
            logging.info(e)
Is there any way to do that? I went through the Django transaction documentation and tried some of the approaches, but it is not working. Can anyone help me, please?
You can collect the objects into a list inside a try statement, and only save them if the whole file is processed without errors:
lis = []
try:
    # your code
    lis.append(obj)  # instead of obj.save()
except:
    # the error is caught and nothing gets saved
    pass
else:
    k = len(lis)
    for i in range(k):
        lis[i].save()
I would also suggest using PostgreSQL instead of SQLite.
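Another option, sketched here under the assumption of a reasonably recent Django where transaction.atomic is available, is to wrap each file's import in an atomic block so that an exception anywhere in the file rolls back every row saved for that file (the dates assignment from the original snippet is kept as a comment since it comes from outside the shown code):

from django.db import transaction

def impSecOrdr_File(self, path):
    lines = 1
    try:
        with transaction.atomic():  # all rows from this file commit or roll back together
            with open(path) as fromFile:
                for eachLine in fromFile:
                    if lines != 1:  # skip the header line
                        fieldsInline = eachLine.split(",")
                        obj = SecOrdr_File()
                        # obj.dates = dates  # set from surrounding context, as in the original
                        obj.column1 = fieldsInline[0].strip()
                        obj.column2 = fieldsInline[1].strip()
                        obj.column3 = float(fieldsInline[2].strip())
                        obj.column4 = float(fieldsInline[3].strip())
                        obj.save()
                    lines += 1
        logging.info("\tImported %s, %d lines" % (path, lines))
    except Exception as e:
        # leaving the atomic block via the exception already rolled back this file's rows
        logging.info('\tError in importing %s line %d : %s' % (path, lines, e))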

How to execute code in Django if an error occurs?

One of my views works with two database tables. It attempts to find an object in the first table, and if the object isn't found, I get a Server Error (500). I'm not sure what the code should look like, but I want to insert some code into the view that will run when that error occurs, so that it tries to find the object in the second table instead.
Current Code:
@csrf_exempt
@login_required
def addEvent(request):
    event_id = request.POST['event_id']
    user = request.POST['profile']
    event = Event.objects.get(event_id=event_id)
    if event.DoesNotExist:
        event = customEvent.objects.get(event_id=event_id)
    user = Profile.objects.get(id=user)
    user.eventList.add(event)
    return HttpResponse(status=200)
Most likely you are getting 500 error because you are not finding a record in the first table. To fix that, you just have to catch the DoesNotExist Exception (Mentioned here):
try:
    obj = FooModel.objects.get(...)
except FooModel.DoesNotExist:
    try:
        obj = OtherModel.objects.get(...)
    except OtherModel.DoesNotExist:
        raise Http404
or you can simplify this by using a shortcut:
try:
    obj = FooModel.objects.get(...)
except FooModel.DoesNotExist:
    obj = get_object_or_404(OtherModel, ...)
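Applied to the view from the question, a minimal sketch might look like this. The model and field names are taken from the question, the imports are the standard Django ones, and the profile_id variable is only renamed to avoid reusing user for two different things:

from django.http import HttpResponse, Http404
from django.views.decorators.csrf import csrf_exempt
from django.contrib.auth.decorators import login_required

@csrf_exempt
@login_required
def addEvent(request):
    event_id = request.POST['event_id']
    profile_id = request.POST['profile']
    try:
        event = Event.objects.get(event_id=event_id)
    except Event.DoesNotExist:
        try:
            event = customEvent.objects.get(event_id=event_id)
        except customEvent.DoesNotExist:
            raise Http404
    user = Profile.objects.get(id=profile_id)
    user.eventList.add(event)
    return HttpResponse(status=200)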