I'm trying to import a postgres dump into a sqlite3 database.
Now pg_dump adds the database name to the statements, and this is not good for sqlite3:
CREATE TABLE dbname.table
Is it possible to tell sqlite3 to ignore the database name?
The next option is to write a regexp that modifies the SQL file, but I'm not a regexp magician. I've obtained something along the lines of:
printf "\nCREATE TABLE dbname.data;\nINSERT INTO dbname.data VALUES (\"the data in dbname.\")\n" | sed -e '/^CREATE TABLE/s/dbname.//g' -e '/^INSERT INTO/s/dbname.//g'
But this is incorrect because I want to substitute only the first occurrence...
Can you give me some suggestions?
You actually don't have to change your file of SQL statements:
$ sqlite3
sqlite> ATTACH 'your_database.db' AS dbname;
sqlite> .read dump_file.sql
ATTACH will open a database using the schema name dbname so that dbname.tablename will refer to it.
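If you are scripting the import rather than typing into the interactive shell, the same ATTACH trick works from Python's built-in sqlite3 module. This is a minimal sketch; the file names your_database.db and dump_file.sql are placeholders:

import sqlite3

# Open a throwaway in-memory database, then attach the real file under
# the schema name the dump expects ("dbname" here).
conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE 'your_database.db' AS dbname")

# executescript() runs the dump statement by statement; every
# dbname.table reference now resolves to the attached file.
with open("dump_file.sql") as f:
    conn.executescript(f.read())

conn.commit()
conn.close()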
Related
There are about 240 tables in a Hive database on AWS. I want to export all tables with column names and their data types to a csv. How can I do that?
Use:
hive -e 'set hive.cli.print.header=true; SELECT * FROM db1.Table1' | sed 's/[\t]/,/g' > /home/table1.csv
set hive.cli.print.header=true: This will add the column names to the CSV file.
SELECT * FROM db1.Table1: Here, you have to provide your query.
/home/table1.csv: Path where you want to save the file (here as table1.csv).
Hope this solves your problem!
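The question also asks for the column names and data types of all ~240 tables, not just the data. One way to get those is to script the Hive CLI: list the tables, then run DESCRIBE on each one and collect the output. This is only a sketch; the database name db1, the output path, and the tab-separated DESCRIBE output format are assumptions:

import csv
import subprocess

DB = "db1"  # assumed database name

# Ask the Hive CLI for the table list; output is one table name per line.
tables = subprocess.run(
    ["hive", "-e", f"USE {DB}; SHOW TABLES;"],
    capture_output=True, text=True, check=True,
).stdout.split()

with open("/home/all_tables_schema.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["table", "column", "type"])
    for table in tables:
        # DESCRIBE prints col_name, data_type and comment, tab-separated.
        desc = subprocess.run(
            ["hive", "-e", f"DESCRIBE {DB}.{table};"],
            capture_output=True, text=True, check=True,
        ).stdout
        for line in desc.splitlines():
            parts = line.split("\t")
            if len(parts) >= 2 and parts[0].strip():
                writer.writerow([table, parts[0].strip(), parts[1].strip()])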
Basically I want to automate a Hive query-based job that takes a list as input from a file; the query will generate reports by applying filters using this input. The input is a list that I want to pass as a WHERE ... IN condition in the Hive query. My query looks like this:
// temp.sql //
INSERT INTO TABLE Table1
SELECT * FROM Table2 where pixel_id in ($PIXEL);
I am trying to pass input in command line like this,
hive -f temp.sql -d PIXEL= '('608207','608206','608205','608204','608203','608201','608184','608198','608189')' > temp.log 2>&1 &
I am not sure whether this is the correct way or not.
Does anyone have an idea how to work around this? Please suggest a way.
If pixel_id is a number, you can use this simple script:
Create shell script script.sh with hive -e "select * from orders where order_id in (${1})"
Save it and change permissions by running chmod +x script.sh
Run shell script by passing the values as parameter for example ./script.sh "1, 2, 3, 4"
You can do it for string columns by escaping the double quotes, like this: "\"1\", \"2\""
Try passing it as a string with a ','-separated list:
run this on bash: hive -hiveconf myargs="1','2','3','4"
assuming your script looks like this: SELECT * from mytable where my_id in ('${hiveconf:myargs}');
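Since the original goal was to read the list from a file, here is a minimal sketch of gluing those pieces together; the input file name pixel_ids.txt and the variable name myargs are assumptions, and the quoting follows the hiveconf pattern above:

import subprocess

# One pixel_id per line in the input file (path is an assumption).
with open("pixel_ids.txt") as f:
    ids = [line.strip() for line in f if line.strip()]

# Build the inner part of the IN (...) list, e.g. 608207','608206','608205
# The outer quotes come from the query itself:
#   where pixel_id in ('${hiveconf:myargs}')
myargs = "','".join(ids)

subprocess.run(
    ["hive", "-hiveconf", f"myargs={myargs}", "-f", "temp.sql"],
    check=True,
)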
I have a file which essentially has tags in each of its lines.
Example:
<stock_name>Abc Inc.</stock_name>
<stock_value>123.456</stock_value>
........
I have a database table whose records hold the (new) stock value for the stocks.
The database table has say 2 columns: stock_name and stock_value
I need to scan this database table and, for each stock_name in the table, replace <stock_value>xxx</stock_value> with <stock_value>yyy</stock_value> for the appropriate <stock_name> in the file (xxx = the existing stock_value in the file; yyy = the new stock_value retrieved from the database for that stock). Could anyone help me with this, please?
PS: This is not a homework. I am in the middle of writing a perl script to modify the file.
Appreciate your help.
If your markup is guaranteed to be <tag>value</tag>, then this will replace the values using a hash keyed off of the tag name:
my %table = (
stock_name => 'xxx',
stock_value => 'yyy',
);
$file =~ s/<([^>\/]*)>[^<]*/<$1>$table{$1}/g;
I am trying to drop a foreign key in DB2 through the command line. I have succeeded in this many times and I am sure that I am using the correct syntax:
db2 "alter table TABLENAME drop constraint fk_keyname"
Output:
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL0204N "FK_KEYNAME" is an undefined name. SQLSTATE=42704
All my foreign keys are created with an uppercase name, except for the key I now want to drop. I don't know how it got created with a lowercase name, but it seems that DB2 will not drop keys that have lowercase names.
When I try to add this foreign key (while it still exists) I get the following message:
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL0601N The name of the object to be created is identical to the existing
name "fk_keyname" of type "FOREIGN KEY". SQLSTATE=42710
Does anyone know how to drop foreign keys that have a lowercase name?
The answer by mustaccio worked. I tried all kinds of quotes, but this way did the trick:
db2 'alter table TABLENAME drop constraint "fk_keyname"'
DB2 will convert object names to uppercase, unless they are quoted. Generally it's not a very good idea to create objects with lower- or mixed-case names. If your foreign key is actually "fk_keyname" (all lowercase), run db2 "alter table TABLENAME drop constraint \"fk_keyname\"" or db2 'alter table TABLENAME drop constraint "fk_keyname"'
This behaviour is not unique to DB2, by the way.
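If you are unsure how a constraint name is actually stored, you can look it up in the SYSCAT.TABCONST catalog view before quoting it in the DROP statement. A minimal sketch using the ibm_db Python driver; the connection details and table name are placeholders:

import ibm_db  # assumes the ibm_db driver is installed

# Connection string values are placeholders.
conn = ibm_db.connect(
    "DATABASE=mydb;HOSTNAME=localhost;PORT=50000;"
    "PROTOCOL=TCPIP;UID=db2user;PWD=secret;", "", "")

# List foreign-key constraint names exactly as stored in the catalog,
# so you can see whether a name was folded to uppercase or kept lowercase.
stmt = ibm_db.exec_immediate(
    conn,
    "SELECT constname FROM syscat.tabconst "
    "WHERE tabname = 'TABLENAME' AND type = 'F'")
row = ibm_db.fetch_tuple(stmt)
while row:
    print(row[0])
    row = ibm_db.fetch_tuple(stmt)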
Here's how I can do it when MySQL is the backend:
cursor.execute('show tables')
rows = cursor.fetchall()
for row in rows:
    cursor.execute('drop table %s; ' % row[0])
But how can I do it when postgresql is the backend?
cursor.execute("""SELECT table_name FROM information_schema.tables WHERE table_schema='public' AND table_type != 'VIEW' AND table_name NOT LIKE 'pg_ts_%%'""")
rows = cursor.fetchall()
for row in rows:
    try:
        cursor.execute('drop table %s cascade ' % row[0])
        print "dropping %s" % row[0]
    except:
        print "couldn't drop %s" % row[0]
Courtesy of http://www.siafoo.net/snippet/85
You can use select * from pg_tables; to get a list of tables, although you probably want to filter it with schemaname <> 'pg_catalog'...
Based on another one of your recent questions, if you're trying to just drop all your django stuff, but don't have permission to drop the DB, can you just DROP the SCHEMA that Django has everything in?
Also on your drop, use CASCADE.
EDIT: Can you select * from information_schema.tables; ?
EDIT: Your column should be row[2] instead of row[0], and you need to specify which schema to look at with a WHERE table_schema = 'my_django_schema_here' clause.
EDIT: Or just SELECT tablename FROM pg_tables WHERE schemaname = 'my_django_schema_here'; and use row[0]
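Putting those edits together, a minimal psycopg2 sketch; the connection string and schema name are placeholders, and it assumes the connecting user owns the tables:

import psycopg2

conn = psycopg2.connect("dbname=mydb user=myuser")  # placeholder connection string
conn.autocommit = True
cur = conn.cursor()

# pg_tables stores one row per table; tablename is the column we need.
cur.execute("SELECT tablename FROM pg_tables WHERE schemaname = %s", ("public",))
tables = [row[0] for row in cur.fetchall()]

for table in tables:
    # Quoting the identifier keeps mixed-case names working;
    # CASCADE also drops dependent objects such as foreign keys and views.
    cur.execute('DROP TABLE IF EXISTS "%s" CASCADE' % table)

cur.close()
conn.close()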
The documentation says that ./manage.py sqlclear prints the DROP TABLE SQL statements for the given app name(s).
I use this script to clear the tables. I put it in a script called phoenixdb.sh because it burns the DB down and a new one rises from the ashes. I use this to avoid accumulating lots of migrations in the early dev portion of the project.
set -e
python manage.py dbshell <<EOF
DROP SCHEMA public CASCADE;
CREATE SCHEMA public;
EOF
python manage.py migrate
This wipes the tables from the db without deleting the db. Your Django user will need to own the schema, though, which you can set up with:
alter schema public owner to django-db-user-name;
And you might want to change the owner of the db as well:
alter database django-db-name owner to django-db-user-name;
\dt is the equivalent command in psql to list tables. Each row contains values for (Schema, Name, Type, Owner), so you have to use the second (row[1]) value.
Anyway, your solution will break (in MySQL and PostgreSQL) when foreign-key constraints are involved, and if there aren't any, you might get into trouble with the sequences. So the best way, in my opinion, is to simply drop the whole database and call initdb again (which is also the more efficient solution).