Django "caches" my query. I realized it after updating my Database. When i run the Function with the Query, it takes some old Data before the Update. When i restart the Server, it works fine again.
Do anyone know, how i can solve this issue?
I use Raw Queries with the connection-function. (connection.cursor(sql)). In the End of each Query, i close the cursor with cursor.close().
I debugged the Code and found out, that after Updating my Database it takes the old Data until i restart my Server.
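For illustration, here is a minimal sketch of the pattern described above, assuming a plain Django helper function and a hypothetical my_table (neither the real SQL nor the function appears in the original):

from django.db import connection

def fetch_rows():
    # Open a cursor, run the raw query, and close the cursor afterwards,
    # exactly as described above.
    cursor = connection.cursor()
    try:
        cursor.execute("SELECT * FROM my_table")  # hypothetical query
        return cursor.fetchall()
    finally:
        cursor.close()

Note that cursor.close() only releases the cursor; it does not end any transaction the underlying connection may still be holding open.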
I am receiving an error when running Postgres, pgAdmin, and Django:
the port is already in use. I know how to kill the server with the sudo command, but my databases are not shown in Postgres; it just shows "running" when I start the server. When I create a database on a new port, the database shows up but the data doesn't migrate. Also, when I run Django migrations I get errors that a table already exists and/or a table doesn't exist. This happens every time, as if there were a duplicate database that isn't connected to the one showing in pgAdmin. I tried dropping and deleting tables in pgAdmin, and I still get "table exists".
How do I fix this? How do I list all connected databases, delete the one that overrides my default port, and get Postgres to show the database servers in the app?
Thank you in advance. Long post, but I'm stressing and can't find a solution.
What I have tried: deleting tables in pgAdmin, changing the port number, and opening Postgres first before starting pgAdmin.
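For reference, a minimal sketch of one way to list the databases on a server and the clients connected to it, assuming psycopg2 and default credentials (host, user, and port are assumptions):

import psycopg2

# Connection parameters are assumptions; adjust them to your setup.
conn = psycopg2.connect(dbname="postgres", user="postgres",
                        host="localhost", port=5432)
cur = conn.cursor()

# List every (non-template) database on this server.
cur.execute("SELECT datname FROM pg_database WHERE datistemplate = false")
print(cur.fetchall())

# Show which clients are connected, and to which database.
cur.execute("SELECT datname, client_addr, client_port FROM pg_stat_activity")
print(cur.fetchall())

cur.close()
conn.close()

Running this once against each port that answers can reveal whether two separate Postgres servers are running side by side.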
I have a Django application, and it works fine. So far, the connection to Oracle seems to be OK. The problem is when I try to query data. I use the objects manager and it gives me the error
"ORA-00933: SQL command not properly ended"
So far I have been looking at the query, and I think that is the problem. In any case, I tried the same query directly on Oracle and it seems to be OK.
print(CONTRIBUYENTES.objects.using('VIALISDB').all().query)
SELECT "FGESILDOWN.SILD_DET_CONTRIBUYENTES"."DTCO_FOLIOSOLICITUD" FROM "FGESILDOWN.SILD_DET_CONTRIBUYENTES"
Does anyone know what the problem is?
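The printed query shows the schema-qualified name quoted as a single identifier, which Oracle typically fails to resolve. A minimal model sketch that would generate exactly that SQL, with a commonly suggested workaround in the comments (the field type and Meta options are assumptions):

from django.db import models

class CONTRIBUYENTES(models.Model):
    # Field name taken from the printed query; the type is an assumption.
    DTCO_FOLIOSOLICITUD = models.CharField(max_length=50, primary_key=True)

    class Meta:
        managed = False
        # This value is quoted as ONE identifier in the generated SQL,
        # producing "FGESILDOWN.SILD_DET_CONTRIBUYENTES" as shown above:
        db_table = 'FGESILDOWN.SILD_DET_CONTRIBUYENTES'
        # A commonly suggested workaround is to embed the quote characters
        # so the schema and table are quoted separately:
        # db_table = 'FGESILDOWN"."SILD_DET_CONTRIBUYENTES'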
My function runs a query that takes some time, depending on the requested data. While the query is running, I want to show a real query-loading cfprogressbar and change the status/title while the progress bar advances. I'm still searching on Google; so far no luck, as all the examples show a static time.
I was thinking that if I could get the real cfquery loading time, I would pass that value to cfprogressbar. Please advise.
Environment: ColdFusion 11, Windows 2012.
Unfortunately, you can't show an accurate progress bar. The ColdFusion engine does not run your query; the database server does, so the ColdFusion engine has no way of knowing how far along the database server is while the query runs. You can show a spinner instead if you want to let your user know that something is going on.
I have a simple Celery task that writes some progress data to the database. I need to read these progress updates in a Django view to pass them on to the user.
I used my own tables to write the progress and read it with AJAX polling from the client side. Now it isn't working and I don't know why.
My database backend is PostgreSQL. I tried changing the transaction isolation level in the read view with the following:
from django.db import connections
# 4 is psycopg2's ISOLATION_LEVEL_READ_UNCOMMITTED
connections.all()[0].connection.set_isolation_level(4)
I am not sure whether this changes the isolation level for a new connection to the database or for the one the current transaction is using, but either way it doesn't seem to work: no progress data can be read until the task has finished and the transaction has been committed.
Here is the second method I tried.
I also found update_state. I wrote all the progress updates using update_state, but they don't seem to actually be written to the database. I ran celerycam and configured Celery to send events with the -E argument.
I want to know the proper way to update progress data and retrieve it.
Thank you.
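For context, update_state stores its data in the configured result backend, not in your own tables; it only ends up in the database if a database result backend (such as django-celery's) is configured. A minimal sketch of the update_state pattern, where the app instance, broker URL, task, and meta keys are all assumptions:

from celery import Celery
from celery.result import AsyncResult

app = Celery('tasks', broker='amqp://localhost')  # broker URL is an assumption

@app.task(bind=True)
def long_job(self, items):
    total = len(items)
    for i, item in enumerate(items):
        # ... do the actual work here ...
        # Record custom state in the result backend.
        self.update_state(state='PROGRESS',
                          meta={'current': i + 1, 'total': total})

def get_progress(task_id):
    # In the polling view: read the state back from the result backend.
    result = AsyncResult(task_id)
    return result.state, result.info  # info is the meta dict while in PROGRESS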
After some Googling I found out that READ UNCOMMITTED is not implemented in PostgreSQL and most probably won't be in the future (PostgreSQL accepts the syntax but treats it as READ COMMITTED).
I also found an extension, part of a larger project, that allows you to read dirty data, but it forced me to use raw SQL to get the data I wanted.
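As an alternative that avoids dirty reads entirely, the task can commit each progress row in its own short transaction, so the polling view sees it under PostgreSQL's default READ COMMITTED level. A minimal sketch, assuming Django 1.6+ for transaction.atomic and a hypothetical Progress model:

from django.db import models, transaction

class Progress(models.Model):
    # Hypothetical progress table: one row per task.
    task_id = models.CharField(max_length=64, unique=True)
    current = models.IntegerField(default=0)
    total = models.IntegerField(default=0)

def report_progress(task_id, current, total):
    # Called from inside the Celery task. The transaction commits as soon as
    # the block exits, making the row immediately visible to other connections.
    with transaction.atomic():
        obj, _ = Progress.objects.get_or_create(task_id=task_id)
        obj.current = current
        obj.total = total
        obj.save()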
I want to track how much time my queries need to execute.
I referred to this post, but I only get the queries without the time.
Is it possible that after using my web application for a while, with SELECT, UPDATE, and INSERT queries (not from the console but from real web-application execution), I can get a summary like the output generated by the SHOW PROFILES; command?
I am working with WAMP, MySQL v5.5.24.
Many thanks.
Edit: I used triggers to track the UPDATE and INSERT statements following this method.
I still have the problem of how to track the SELECT queries.
Any ideas, please?
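For reference, SHOW PROFILES can at least be exercised per session; note that profiling is session-scoped, which is exactly why it cannot see the web application's own queries. A minimal sketch using MySQLdb, where the connection parameters and sample query are assumptions:

import MySQLdb

# Connection parameters are assumptions; adjust them for your WAMP setup.
conn = MySQLdb.connect(host="localhost", user="root", passwd="", db="test")
cur = conn.cursor()

# Enable profiling for this session only (available in MySQL 5.5).
cur.execute("SET profiling = 1")

# Run the statements you want to time.
cur.execute("SELECT COUNT(*) FROM information_schema.tables")

# Each profiled statement is listed with its duration in seconds.
cur.execute("SHOW PROFILES")
for query_id, duration, query in cur.fetchall():
    print(query_id, duration, query)

cur.close()
conn.close()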
This no longer works.
As of July 2013, you need:
general-log=1
general-log-file = "C:\wamp\logs\mysql_general.log"
Are you sure you are not getting execution times in your slow query log?
If you are just looking to optimize your queries (rather than checking the execution time of every single one), you can look at the MySQL server status in phpMyAdmin (assuming you kept it in your WAMP server), as covered here. The full tutorial is paid, but the preview will get you onto the server status page, where phpMyAdmin will point out problem areas for you.
Finally, I used the general log by setting up the WAMP server like this:
[mysqld]
port = 3306
long_query_time = 1
slow_query_log = 1
slow_query_log_file = "E:/wamp/logs/slowquery.log"
log = "E:/wamp/logs/genquery.log"
After that I used dbForge Studio (trial version), where I can use a query profiler and get the complete execution time.
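To confirm which log settings actually took effect after editing my.ini, the server variables can be checked at runtime. A minimal sketch (connection parameters are assumptions):

import MySQLdb

conn = MySQLdb.connect(host="localhost", user="root", passwd="")
cur = conn.cursor()

# general_log (ON/OFF) and general_log_file (the path being written to).
cur.execute("SHOW VARIABLES LIKE 'general_log%'")
print(cur.fetchall())

# The slow query log settings from the [mysqld] section above.
cur.execute("SHOW VARIABLES LIKE 'slow_query_log%'")
print(cur.fetchall())

cur.close()
conn.close()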