permission denied for relation django_migrations - django

I am using Django CMS and I want to take a backup of my database. But when I run the following command to back up the database:
pg_dump -U postgres -h 127.0.0.1 -p 5432 db_name > db_name_backup.sql
After running the command I get the following error:
pg_dump: [archiver (db)] query failed: ERROR: permission denied for relation django_migrations
pg_dump: [archiver (db)] query was: LOCK TABLE public.django_migrations IN ACCESS SHARE MODE
Can anyone help me with this?

Sounds like it's down to permissions, so make sure the user you connect as has at least USAGE on the relevant schema and the SELECT privilege on the table.
Check out this answer for much greater detail.
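For example, a minimal sketch assuming the tables live in the default public schema; your_backup_role is a placeholder for whatever role pg_dump connects as, and these statements must be run as a superuser or the table owner:
-- placeholder role name; adjust to the user pg_dump connects as
GRANT USAGE ON SCHEMA public TO your_backup_role;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO your_backup_role;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO your_backup_role;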

Related

AlloyDB: Drop database

How can I drop an existing AlloyDB database programmatically?
I'd like to run our application's test suite on AlloyDB. I'm able to connect to the cluster using the proxy. However, when I try to drop the database to clean up the test environment, using code like:
echo "DROP DATABASE IF EXISTS application_test" | psql
I'm getting:
ERROR: syntax error at or near "DROP"
LINE 2: DROP DATABASE IF EXISTS application_test
I'm sure I can connect to the cluster correctly, because I run other queries before this one.
How could I remove an existing database from a script? I can't find a good way to do that in the docs.
To run psql from the CLI you'll want syntax like:
psql -d postgresql://<user>:<password>@<AlloyDB IP>:5432/postgres \
-c "DROP DATABASE IF EXISTS <dbname>"

mysqlimport: Error: 1227 Access denied with MySQL 8.0 and Amazon RDS

We are using MySQL 8.0.* and a .csv file to import data into Amazon RDS. We are executing this command from the app server's command line.
Error:
mysqlimport: Error: 1227 Access denied; you need (at least one of) the SUPER, SYSTEM_VARIABLES_ADMIN or SESSION_VARIABLES_ADMIN privilege(s) for this operation
Command:
mysqlimport --local --compress --columns='col1,col2,col3,col4' -h dbhost -u dbusername -pdbpassword dbname --fields-terminated-by='|' file_path/table_name.csv
We have already granted DBA permissions to the DB user.
As the error suggests, the user running the import command does not have the SESSION_VARIABLES_ADMIN privilege.
You can grant it like below:
GRANT SESSION_VARIABLES_ADMIN ON *.* TO 'user'@'%';
OR
GRANT SESSION_VARIABLES_ADMIN ON *.* TO 'user'@'specific-host';
It should resolve the issue.
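You can check that the grant took effect with (the account name here is a placeholder matching the grant above):
SHOW GRANTS FOR 'user'@'%';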
Comment out the TEMP_LOG_BIN and GTID_PURGED statements in the MySQL dump and save it. Then try to import the dump file into the target DB. It should work.
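If you generate the dump yourself with mysqldump, you can avoid those statements up front: --set-gtid-purged=OFF is a standard mysqldump option that omits both the GTID_PURGED and the session SQL_LOG_BIN settings from the output. A sketch reusing the placeholder host and credentials from the question:
mysqldump --set-gtid-purged=OFF -h dbhost -u dbusername -p dbname > dump.sql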

postgresql, ERROR: permission denied for relation django_migrations

I'm having trouble trying to write a script to automatically back up the database of my Django app.
This is how I create the DB for my app:
sudo -u postgres psql
CREATE DATABASE dapp;
CREATE USER backupu WITH PASSWORD 'pass123';
ALTER ROLE backupu SET client_encoding TO 'utf8';
ALTER ROLE backupu SET default_transaction_isolation TO 'read committed';
ALTER ROLE backupu SET timezone TO 'UTC';
GRANT ALL PRIVILEGES ON DATABASE dapp TO backupu;
\q
And this is my backup script, backup.sh with chmod 777:
export PGUSER='backupu'
export PGPASSWORD='pass123'
TODAY=`date +%Y-%m-%d`
TIME_NOW=`date +%H:%M`
ARCH_RESP=$TODAY-$TIME_NOW
pg_dump dapp > /home/backupu/backup/backup_$ARCH_RESP.sql
find /home/backupu/backup/ -name '*.sql' -mtime +2 -exec rm -f {} \;
unset PGUSER
unset PGPASSWORD
But when I run it, I get this error:
pg_dump: [archiver (db)] query failed: ERROR: permission denied for relation django_migrations
pg_dump: [archiver (db)] query was: LOCK TABLE public.django_migrations IN ACCESS SHARE MODE
and:
connection to database "dapp" failed: FATAL: Peer authentication failed for user "backupu"
I tried adding this line to my pg_hba.conf
local all backupu md5
But the error persists; maybe some permissions are missing, or I need an extra step when creating the DB. I've already checked some other posts here on Stack Overflow, but without success. Can you help me?
I'm running a local server with Ubuntu 14.04, nginx, gunicorn and PostgreSQL 9.3.
What happens is that you are using a user that has no permissions on the django_migrations table.
Use the same database user that the web app uses.
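For example, in backup.sh you could export the app's credentials instead, and connect over TCP so the md5 rule applies rather than peer authentication. A sketch, where django_app_user and app_password are placeholders for whatever DATABASES['default'] contains in settings.py:
# placeholders: use the USER/PASSWORD values from settings.py
export PGUSER='django_app_user'
export PGPASSWORD='app_password'
# -h 127.0.0.1 forces a TCP connection, so peer authentication is not used
pg_dump -h 127.0.0.1 dapp > /home/backupu/backup/backup_$ARCH_RESP.sql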

Migrate postgres dump to RDS

I have a Django Postgres DB (v9.3.10) running on DigitalOcean and am trying to migrate it over to Amazon RDS (Postgres v9.4.5). The RDS instance is a db.m3.xlarge with 300GB. I've dumped the DigitalOcean DB with:
sudo -u postgres pg_dump -Fc -o -f /home/<user>/db.sql <dbname>
And now I'm trying to migrate it over with:
pg_restore -h <RDS endpoint> --clean -Fc -v -d <dbname> -U <RDS master user> /home/<user>/db.sql
The only error I see is:
pg_restore: [archiver (db)] Error from TOC entry 2516; 0 0 COMMENT EXTENSION plpgsql
pg_restore: [archiver (db)] could not execute query: ERROR: must be owner of extension plpgsql
Command was: COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';
Apart from that everything seems to be going fine and then it just grinds to a halt. The dumped file is ~550MB and there are a few tables with multiple indices, otherwise pretty standard.
The Read and Write IOPS on the AWS interface are near 0, as is the CPU, memory, and storage. I'm very new to AWS and know that the parameter groups might need tweaking to do this better. Can anyone advise on this or a better way to migrate a Django db over to RDS?
Edit:
Looking at the DB users, the DO DB looks like:
Role Name   Attr                                            Member Of
<user>      Superuser                                       {}
postgres    Superuser, Create role, Create DB, Replication  {}
And the RDS one looks like:
Role Name      Attr                    Member Of
<user>         Create role, Create DB  {rds_superuser}
rds_superuser  Cannot login            {}
rdsadmin       ...                     ...
So it doesn't look like it's a permissions issue to me as <user> has superuser permissions in each case.
Solution for anyone looking:
I finally got this working using:
cat <db.sql> | sed -e '/^COMMENT ON EXTENSION plpgsql IS/d' > edited.dump
psql -h <RDS endpoint> -U <user> -e <dbname> < edited.dump
It's not ideal for a reliable backup/restore mechanism, but given it's only a comment I guess I can do without it. My only other observation is that running psql/pg_restore against a remote host is slow. Hopefully the new database migration service will add something.
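An alternative that keeps the custom-format dump usable with pg_restore is to filter the dump's table of contents rather than the SQL text. A sketch using pg_restore's standard -l/-L options, with the same placeholder names as above (the grep pattern is assumed to match the TOC entry for the extension comment):
pg_restore -l /home/<user>/db.sql > db.toc
grep -v 'COMMENT - EXTENSION plpgsql' db.toc > db_filtered.toc
pg_restore -h <RDS endpoint> -U <user> -d <dbname> -L db_filtered.toc /home/<user>/db.sql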
Considering your dumped DB file is ~550MB, I think following the Amazon guide is the way to go. I hope it helps.
Importing Data into PostgreSQL on Amazon RDS
I don't think it halted; it was probably just recreating indexes, foreign keys, etc. Use pg_restore -v to see what's going on during the restore, and check the logs or redirect the output to a file to look for errors after the import, since the verbose output is long.
Also I'd recommend using directory format (pg_dump -v -Fd) as it allows for parallel restore (pg_restore -v -j4).
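With the same placeholder names as above, that pair of commands might look like this (a sketch; -j 4 runs four parallel jobs and requires the directory format):
pg_dump -v -Fd -j 4 -f /home/<user>/db.dir <dbname>
pg_restore -v -j 4 -h <RDS endpoint> -U <user> --clean -d <dbname> /home/<user>/db.dir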
You can ignore this ERROR: must be owner of extension plpgsql. It is only setting a comment on the extension, which is installed by default anyway. It is caused by a peculiarity of the RDS flavor of PostgreSQL, which does not let you restore a database while connected as the postgres user.

Django: permission denied when trying to access database after restore (migration)

I have a Django 1.4 app with a populated Postgres 9.1 database on my local development server. After a successful deployment, I wanted to move the data from the local database to the online one, so I used:
pg_dump -f dump.sql -Ox database
and then restored on the server with:
psql -1 -f dump.sql database
Now, trying to log in to the website admin online throws a "permission denied for relation django_session" exception. I've tried dumping the data with and without the -Ox switch and all its combinations, but without success. I am also dropping the database and recreating it from scratch on the server with the correct owner as set in settings.py.
If I run a normal syncdb without a restore, then everything works well.
Am I missing something here?
It turns out that you should grant explicit privileges on all objects in the database to the owner after the restore; the owner is not a superuser, and it's not enough to only set the owner at database creation time. The final solution for the migration goes like this:
on the client:
pg_dump -f dump.sql -Ox database
on the server:
su postgres
dropdb database
createdb database -O user
psql database -f dump.sql
and then to set the privileges:
psql database -c "GRANT ALL ON ALL TABLES IN SCHEMA public to user;"
psql database -c "GRANT ALL ON ALL SEQUENCES IN SCHEMA public to user;"
psql database -c "GRANT ALL ON ALL FUNCTIONS IN SCHEMA public to user;"
Note that we could have run the SQL commands in the psql console, but this form is easy to embed in scripts and such.
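If new tables will keep being created after the restore, you may also want default privileges so future objects are covered automatically. A sketch with the same placeholder names; note that ALTER DEFAULT PRIVILEGES applies to objects later created by the role running it, so add a FOR ROLE clause if a different role creates them:
psql database -c "ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO user;"
psql database -c "ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON SEQUENCES TO user;"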
Try doing this as the postgres user:
sudo su - postgres
pg_dump -f dump.sql -Ox database
Or just pass the -U flag:
pg_dump -f dump.sql -Ox database -U postgres
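If the dump runs unattended, a ~/.pgpass file (permissions 0600) avoids prompting for or hardcoding the password; a sketch with placeholder values, using libpq's standard host:port:database:user:password format:
# ~/.pgpass -- one connection per line; 'secret' is a placeholder password
localhost:5432:database:postgres:secret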
Here's how I fixed mine. I saved myself a ton of headache by simply changing the user to match the currently logged-in user of the destination server where the import happens.
In my case, the imported DB had a user of x (x was also the username on the machine it was running on), and the destination machine had a username of y, and a postgres user of y too.
Therefore, I simply changed the database user and password in my Django settings to match the destination machine's y user details.
Then did this:
$ sudo -u postgres psql
psql > GRANT ALL PRIVILEGES ON DATABASE mydb TO y;
Sipping some kool-aid now!