Here's how I can do it when MySQL is the backend:
cursor.execute('show tables')
rows = cursor.fetchall()
for row in rows:
    cursor.execute('drop table %s' % row[0])
But how can I do it when PostgreSQL is the backend?
cursor.execute("""SELECT table_name FROM information_schema.tables WHERE table_schema='public' AND table_type != 'VIEW' AND table_name NOT LIKE 'pg_ts_%%'""")
rows = cursor.fetchall()
for row in rows:
try:
cursor.execute('drop table %s cascade ' % row[0])
print "dropping %s" % row[0]
except:
print "couldn't drop %s" % row[0]
Courtesy of http://www.siafoo.net/snippet/85
You can use select * from pg_tables; to get a list of tables, although you probably want to exclude the catalog tables with WHERE schemaname <> 'pg_catalog'...
Based on another one of your recent questions, if you're trying to just drop all your django stuff, but don't have permission to drop the DB, can you just DROP the SCHEMA that Django has everything in?
Also on your drop, use CASCADE.
EDIT: Can you select * from information_schema.tables; ?
EDIT: Your column should be row[2] instead of row[0], and you need to specify which schema to look at with a WHERE table_schema = 'my_django_schema_here' clause.
EDIT: Or just SELECT table_name FROM pg_tables WHERE schemaname = 'my_django_schema_here'; and use row[0].
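For instance, a minimal Python sketch of that approach (assuming an open psycopg2 cursor and that the tables live in the public schema):
cursor.execute("SELECT tablename FROM pg_tables WHERE schemaname = 'public'")
for (tablename,) in cursor.fetchall():
    # identifiers can't be passed as query parameters, so interpolate the
    # quoted name directly; CASCADE also removes dependent objects
    cursor.execute('DROP TABLE "%s" CASCADE' % tablename)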
The documentation says that ./manage.py sqlclear prints the DROP TABLE SQL statements for the given app name(s).
I use this script to clear the tables. I keep it in a script called phoenixdb.sh because it burns the DB down and a new one rises from the ashes. I use it to avoid piling up migrations during the early dev portion of a project.
set -e
python manage.py dbshell <<EOF
DROP SCHEMA public CASCADE;
CREATE SCHEMA public;
EOF
python manage.py migrate
This wipes the tables from the db without deleting the db itself. Your Django user will need to own the schema, though, which you can set up with:
alter schema public owner to "django-db-user-name";
And you might want to change the owner of the db as well:
alter database "django-db-name" owner to "django-db-user-name";
\dt is the equivalent command in Postgres to list tables. Each row contains values for (schema, Name, Type, Owner), so you have to use the second value (row[1]).
Anyway, your solution will break (in MySQL and PostgreSQL) when foreign-key constraints are involved, and even without them you may run into trouble with sequences. So in my opinion the best way is to simply drop the whole database and recreate it (which is also the more efficient solution).
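If you do go that route, here is a rough Python sketch of the reset (assuming psycopg2, a superuser connection, and a database named django_db, which is a placeholder):
import psycopg2

conn = psycopg2.connect(dbname='postgres', user='postgres')
conn.autocommit = True  # DROP/CREATE DATABASE refuse to run inside a transaction
cur = conn.cursor()
cur.execute('DROP DATABASE IF EXISTS django_db')
cur.execute('CREATE DATABASE django_db')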
How does one bulk insert data with Postgres into QuestDB?
The following does not work:
CREATE TABLE IF NOT EXISTS employees (employee_id INT, last_name STRING, first_name STRING);
INSERT INTO employees
(employee_id, last_name, first_name)
VALUES
(10, 'Anderson', 'Sarah'),(11, 'Johnson', 'Dale');
For inserting data in bulk, there are a few options. You can use CREATE AS SELECT to bulk insert from an existing table, which is the closest to your example:
CREATE TABLE employees
AS (SELECT employee_id, last_name, first_name FROM existing_table)
Or you can use prepared statements; there are full working examples in a few languages in the QuestDB Postgres documentation. Here is a snippet from the Python example:
# insert 10 records
for x in range(10):
    cursor.execute("""
        INSERT INTO example_table
        VALUES (%s, %s, %s);
        """, (dt.datetime.utcnow(), "python example", x))

# commit records
connection.commit()
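For reference, here is a minimal sketch of the setup that snippet assumes (psycopg2 over QuestDB's Postgres wire protocol; port 8812 and the admin/quest/qdb credentials are QuestDB's defaults, so adjust them for your install):
import datetime as dt
import psycopg2

# QuestDB speaks the Postgres wire protocol on port 8812 by default
connection = psycopg2.connect(host='localhost', port=8812,
                              user='admin', password='quest', database='qdb')
cursor = connection.cursor()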
Or you can bulk import from CSV, e.g.:
curl -F data=@data.csv http://localhost:9000/imp
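The same upload can be done from Python; here is a rough sketch using the requests library (data.csv is assumed to be in the working directory):
import requests

# POST the CSV as a multipart form field named 'data', mirroring the curl call
with open('data.csv', 'rb') as f:
    response = requests.post('http://localhost:9000/imp', files={'data': f})
print(response.text)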
I am very new to SQL and intermediate at Python. Using sqlite3, how can I get a print() list of the primary and foreign keys (per table) in my database?
Using Python 2.7, SQLite3, PyCharm.
sqlite3.version = 2.6.0
sqlite3.sqlite_version = 3.8.11
Also note: when I set up the database, I enabled FKs as such:
conn = sqlite3.connect(db_file)
conn.execute('pragma foreign_keys=ON')
I tried the following:
conn = sqlite3.connect(db_path)
print(conn.execute("PRAGMA table_info"))
print(conn.execute("PRAGMA foreign_key_list"))
Which returned:
<sqlite3.Cursor object at 0x0000000002FCBDC0>
<sqlite3.Cursor object at 0x0000000002FCBDC0>
I also tried the following, which prints nothing (but I think this may be because it's a dummy database with tables and fields but no records):
conn = sqlite3.connect(db_path)
rows = conn.execute('PRAGMA table_info')
for r in rows:
    print r
rows2 = conn.execute('PRAGMA foreign_key_list')
for r2 in rows2:
    print r2
Unknown or malformed PRAGMA statements are ignored.
The problem with your PRAGMAs is that the table name is missing. You have to get a list of all tables, and then execute those PRAGMAs for each one:
rows = db.execute("SELECT name FROM sqlite_master WHERE type = 'table'")
tables = [row[0] for row in rows]
def sql_identifier(s):
return '"' + s.replace('"', '""') + '"'
for table in tables:
print("table: " + table)
rows = db.execute("PRAGMA table_info({})".format(sql_identifier(table)))
print(rows.fetchall())
rows = db.execute("PRAGMA foreign_key_list({})".format(sql_identifier(table)))
print(rows.fetchall())
SELECT name
FROM sqlite_master
WHERE type = 'table'
  AND name NOT LIKE 'sqlite_%';
This SQL lists every table in the database. For each table, run PRAGMA table_info(your_table_name); the pk column in the result tells you the primary key of the table.
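For example, here is a small sketch that pulls the primary-key columns out of that PRAGMA (the last value of each row, pk, is non-zero for primary-key columns; conn is an open sqlite3 connection and your_table_name is the placeholder from above):
for cid, name, col_type, notnull, dflt_value, pk in conn.execute(
        "PRAGMA table_info(your_table_name)"):
    if pk:
        print(name)  # this column is part of the primary key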
I am building an app in Symfony2, using Doctrine2 with MySQL. I would like to use fulltext search. I can't find much on how to implement this - right now I'm stuck on how to set the table engine to MyISAM.
It seems that it's not possible to set the table type using annotations. Also, if I did it manually by running an ALTER TABLE query, I'm not sure whether Doctrine2 would continue to work properly - does it depend on the InnoDB foreign keys?
Is there a better place to ask these questions?
INTRODUCTION
Doctrine2 uses InnoDB, which supports the foreign keys that Doctrine associations rely on. MyISAM does not support them yet, so you cannot use MyISAM to manage Doctrine entities.
On the other hand, MySQL v5.6, currently in development, will bring InnoDB FTS support and so enable full-text search on InnoDB tables.
SOLUTIONS
So there are two solutions:
Using MySQL v5.6 at your own risk and hacking Doctrine a bit to implement a MATCH AGAINST method: link in French... (I could translate if needed, but there are still bugs and I would not recommend this solution)
As described by quickshifti, creating a MyISAM table with a fulltext index just to perform the search on. Doctrine2 allows native SQL queries and lets you map the results to an entity (details here).
EXAMPLE FOR THE 2nd SOLUTION
Consider the following tables:
table 'user': InnoDB [id, name, email]
table 'search_user': MyISAM [user_id, name -> FULLTEXT]
Then you just have to write a search query with a JOIN and the mapping (in a repository):
<?php
public function searchUser($string) {
    // 1. Mapping
    $rsm = new ResultSetMapping();
    $rsm->addEntityResult('Acme\DefaultBundle\Entity\User', 'u');
    $rsm->addFieldResult('u', 'id', 'id');
    $rsm->addFieldResult('u', 'name', 'name');
    $rsm->addFieldResult('u', 'email', 'email');

    // 2. Native SQL, with a bound parameter instead of interpolating $string
    $sql = 'SELECT u.id, u.name, u.email
            FROM search_user AS s
            JOIN user AS u ON s.user_id = u.id
            WHERE MATCH(s.name) AGAINST(? IN BOOLEAN MODE) > 0';

    // 3. Run the query
    $query = $this->_em->createNativeQuery($sql, $rsm);
    $query->setParameter(1, $string);

    // 4. Get the results as entities!
    $results = $query->getResult();
    return $results;
}
But the FULLTEXT index needs to stay up to date. Instead of using a cron task, you can add triggers (INSERT, UPDATE and DELETE) like this:
CREATE TRIGGER trigger_insert_search_user
AFTER INSERT ON user
FOR EACH ROW
INSERT INTO search_user SET user_id=NEW.id, name=NEW.name;
CREATE TRIGGER trigger_update_search_user
AFTER UPDATE ON user
FOR EACH ROW
UPDATE search_user SET name=NEW.name WHERE user_id=OLD.id;
CREATE TRIGGER trigger_delete_search_user
AFTER DELETE ON user
FOR EACH ROW
DELETE FROM search_user WHERE user_id=OLD.id;
This way your search_user table always picks up the latest changes.
Of course, this is just an example; I wanted to keep it simple, and I know this particular query could have been done with a LIKE.
Doctrine ditched the fulltext Searchable feature from v1 in the move to Doctrine2. You will likely have to roll your own support for fulltext search in Doctrine2.
I'm considering using migrations to generate the tables themselves, running the search queries with the native SQL query option to get sets of ids that refer to tables managed by Doctrine, then using those sets of ids to hydrate records normally through Doctrine.
I'll probably cron something periodic to update the fulltext tables.
class Log(models.Model):
    project = models.ForeignKey(Project)
    msg = models.CharField(...)
    date = models.DateField(...)
I want to select the four most recent Log entries, where each Log entry must have a unique project foreign key. I've tried the solutions that turned up in a Google search but none of them work, and the Django documentation isn't very helpful for this kind of lookup.
I tried stuff like:
Log.objects.all().distinct('project')[:4]
Log.objects.values('project').distinct()[:4]
Log.objects.values_list('project').distinct('project')[:4]
But these either return nothing or Log entries from the same project...
Any help would be appreciated!
Queries don't work like that - either in Django's ORM or in the underlying SQL. If you want unique project ids, you can only query for those ids, so you'll need a second query to get the actual Log entries. Something like:
id_list = Log.objects.order_by('-date').values_list('project_id', flat=True).distinct()[:4]
entries = Log.objects.filter(project_id__in=id_list)
Actually, you can get the project_ids in SQL. Assuming that you want the unique project ids for the four projects with the latest log entries, the SQL would look like this:
SELECT project_id, max(logs.date) as max_date
FROM logs
GROUP BY project_id
ORDER BY max_date DESC LIMIT 4;
Now, you actually want all of the log information. In PostgreSQL 8.4 and later you can use windowing functions, but that doesn't work on other versions/databases, so I'll do it the more complex way:
SELECT logs.*
FROM logs JOIN (
SELECT project_id, max(logs.date) as max_date
FROM logs
GROUP BY project_id
ORDER BY max_date DESC LIMIT 4 ) as latest
ON logs.project_id = latest.project_id
AND logs.date = latest.max_date;
Now, if you have access to windowing functions, it's a bit neater (I think anyway), and certainly faster to execute:
SELECT * FROM (
    SELECT logs.field1, logs.field2, logs.field3, logs.date,
           rank() over ( partition by project_id
                         order by "date" DESC ) as dateorder
    FROM logs ) as logsort
WHERE dateorder = 1
ORDER BY logsort.date DESC LIMIT 4;
OK, maybe it's not easier to understand, but take my word for it, it runs worlds faster on a large database.
I'm not entirely sure how that translates to object syntax, though, or even if it does. Also, if you wanted to get other project data, you'd need to join against the projects table.
I know this is an old post, but in Django 2.0, I think you could just use:
Log.objects.values('project').distinct().order_by('project')[:4]
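Bear in mind that values('project') gives you dictionaries of project ids, not Log instances. Here is a rough sketch of turning those ids back into rows (note this fetches every log for the four projects, newest first, rather than one log per project):
project_ids = Log.objects.values_list('project', flat=True).distinct().order_by('project')[:4]
recent_logs = Log.objects.filter(project_id__in=project_ids).order_by('-date')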
You need two querysets. The good thing is it still results in a single trip to the database (though there is a subquery involved).
from django.db.models import Max

latest_ids_per_project = Log.objects.values_list(
    'project').annotate(latest=Max('date')).order_by(
    '-latest').values_list('project')
log_objects = Log.objects.filter(
    id__in=latest_ids_per_project[:4]).order_by('-date')
This looks a bit convoluted, but it actually results in a surprisingly compact query:
SELECT "log"."id",
"log"."project_id",
"log"."msg"
"log"."date"
FROM "log"
WHERE "log"."id" IN
(SELECT U0."id"
FROM "log" U0
GROUP BY U0."project_id"
ORDER BY MAX(U0."date") DESC
LIMIT 4)
ORDER BY "log"."date" DESC
I was following the documentation on FullTextSearch in PostgreSQL. I've created a tsvector column, added the information I needed, and created an index.
Now, to do the search I have to execute a query like this:
SELECT *, ts_rank_cd(textsearchable_index_col, query) AS rank
FROM client, plainto_tsquery('famille age') query
WHERE textsearchable_index_col @@ query
ORDER BY rank DESC LIMIT 10;
I want to be able to execute this with Django's ORM so I can get the objects. (A little question here: do I need to add the tsvector column to my model?)
My guess is that I should use extra() to change the where and tables parts of the queryset.
Maybe if I change the query to this, it would be easier:
SELECT * FROM client
WHERE plainto_tsquery('famille age') @@ textsearchable_index_col
ORDER BY ts_rank_cd(textsearchable_index_col, plainto_tsquery('famille age')) DESC LIMIT 10
so I'd have to do something like:
Client.objects.???.extra(where=[???])
Thanks for your help :)
One more thing: I'm using Django 1.1.
Caveat: I'm writing this on a wobbly train, with a head cold, but this should do the trick:
where_statement = """plainto_tsquery('%s') ## textsearchable_index_col
ORDER BY ts_rank_cd(textsearchable_index_col,
plainto_tsquery(%s))
DESC LIMIT 10"""
qs = Client.objects.extra(where=[where_statement],
params=['famille age', 'famille age'])
If you were on Django 1.2 you could just call:
Client.objects.raw("""
SELECT *, ts_rank_cd(textsearchable_index_col, query) AS rank
FROM client, plainto_tsquery('famille age') query
WHERE textsearchable_index_col @@ query
ORDER BY rank DESC LIMIT 10;""")
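If the search string comes from user input, raw() also accepts a params argument, so a safer variant of the same call might look like:
Client.objects.raw("""
    SELECT *, ts_rank_cd(textsearchable_index_col, query) AS rank
    FROM client, plainto_tsquery(%s) query
    WHERE textsearchable_index_col @@ query
    ORDER BY rank DESC LIMIT 10""", ['famille age'])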