CREATE EXTENSION using cloud_sql_proxy fails due to insufficient privileges - django

My goal is to enable the ltree Postgres extension in an automated way (see GCP PostgreSQL extensions).
The query needed to create the extension is CREATE EXTENSION IF NOT EXISTS ltree;. I verified that this command works when I connect to the Cloud SQL instance manually, as shown below; after doing this, the migrations run fine for that instance.
gcloud sql connect MyBackendInstance --user=postgres
postgres=> \c my_database;
Password:
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
You are now connected to database "my_database" as user "postgres".
my_database=> CREATE EXTENSION IF NOT EXISTS ltree;
CREATE EXTENSION
Now I have the following Django migration from a library called django-ltree, which works perfectly on my local installation.
migrations.RunSQL(
    "CREATE EXTENSION IF NOT EXISTS ltree;",
    "DROP EXTENSION ltree;"
)
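For reference, that fragment sits inside a migration module roughly like this (a sketch; the app label and dependency here are placeholders, not the actual ones from my project):
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ("myapp", "0001_initial"),  # placeholder dependency
    ]

    operations = [
        migrations.RunSQL(
            "CREATE EXTENSION IF NOT EXISTS ltree;",
            "DROP EXTENSION ltree;",
        ),
    ]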
However, when I run that migration in my pipeline, which uses cloud_sql_proxy to connect to the database (without manually installing the ltree extension using the method above), I get the following error:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 82, in _execute
return self.cursor.execute(sql)
psycopg2.errors.InsufficientPrivilege: permission denied to create extension "ltree"
HINT: Must be superuser to create this extension.
The command in the pipeline is quite simple:
./cloud_sql_proxy -instances="$CLOUD_SQL_INSTANCES"=tcp:5432 -credential_file=gcloud-api-credentials.json &
python backend/manage.py migrate
The credentials to connect to the database are correctly defined in the Django settings; we've been able to use these credentials to run all kinds of migrations for ages.
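For reference, the relevant part of the settings looks roughly like the usual cloud_sql_proxy setup (a sketch with placeholder values, not the real credentials):
# settings.py (sketch) -- cloud_sql_proxy listens on 127.0.0.1:5432, so Django
# talks to it as if it were a local Postgres server.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "my_database",
        "USER": "postgres",        # placeholder
        "PASSWORD": "<password>",  # placeholder
        "HOST": "127.0.0.1",
        "PORT": "5432",
    }
}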
I have tried creating a new user as described in this question, but this did not solve the issue, so it seems unrelated.
UPDATE: I tried running cloud_sql_proxy locally with the same credential file and installing a different extension, and that worked flawlessly.

Related

Bq command line error in compute engine VM instance CENTOS7

I'm running a VM instance (Google Compute Engine) with CentOS 7, and every time I run the bq command I keep getting an error. I assumed bq is available by default on Compute Engine.
[username#instance-1 ~]$ bq
Error initializing bq client: service_account
Traceback (most recent call last):
File "/usr/lib64/google-cloud-sdk/platform/bq/third_party/pyglib/appcommands.py", line 805, in _CommandsStart
sys.modules['__main__'].main(GetCommandArgv())
File "/usr/lib64/google-cloud-sdk/platform/bq/bq.py", line 6078, in main
if FLAGS.debug_mode or FLAGS.headless:
File "/usr/lib64/google-cloud-sdk/platform/bq/third_party/absl/flags/_flagvalues.py", line 468, in __getattr__
raise AttributeError(name)
AttributeError: debug_mode
FATAL error in main: debug_mode
Run 'bq.py help' to get help
[username#instance-1 ~]$ bq --format=prettyjson dataset.tableid
FATAL Flags parsing error: Unknown command line flag 'use_gce_service_account'
Run 'bq.py help' to get help
[username#instance-1 ~]$
My Compute Engine service account has full access to all Cloud APIs; in addition, just to be sure, I also added BigQuery Admin in the IAM section.
I'm not really sure what is wrong.
Have a look at the documentation:
The bq authorization flags are deprecated. To configure authorization
for the bq command-line tool, see Authorizing Cloud SDK tools.
and in the same section you can find the flag that causes the error:
--use_gce_service_account
I've tried running the bq command on my Linux machine with the Google Cloud SDK and it works perfectly:
$ bq
Python script for interacting with BigQuery.
USAGE: bq.py [--global_flags] <command> [--command_flags] [args]
Any of the following commands:
cancel, cp, extract, get-iam-policy, head, help, init, insert, load, ls, mk, mkdef, partition, query, rm, set-iam-policy, shell,
show, update, version, wait
First, you should update your Cloud SDK to the latest version (or reinstall it) and check again. Also, please update your post with the version of your Cloud SDK:
$ gcloud info
Google Cloud SDK [277.0.0]
Python Version: [3.7.5rc1 (default, Dec 20 2019, 17:52:56) [GCC 8.3.0]]
Python Location: [/usr/bin/python3]
If you still get an error with the latest Cloud SDK, try setting default values for bq in $HOME/.bigqueryrc as described in the documentation.
In addition, I'd recommend checking/updating Python if nothing else helps.
EDIT: You can set up defaults in your ~/.bigqueryrc file like this:
project_id = --my-project-id--
Edit the file with your favorite text editor, such as nano or vim. Keep in mind that file paths in .bigqueryrc have to be full paths.
EDIT 2: Have a look at the Cloud SDK system requirements:
It requires Python 2.7.9 or higher.
and you have 2.7.5.
UPDATE: Deleting and reinstalling the Cloud SDK solved the problem.
You might have a misbehaving bash alias or a stale ~/.bigqueryrc.
Check the syntax and that the correct service account file is referenced inside ~/.bigqueryrc.

Python2.7-DB2v11.1.4.4 connectivity fails

I am setting up a development environment with the following:
CentOS 7
Python 2.7
IBM-DB2 EE Server v11.1.4.4
ibm-db package
My earlier installation and setup went smoothly, with no real issues with ODBC connectivity to the local DB2 trial database. With my new install, I keep getting the following message:
Exception:
[IBM][CLI Driver] SQL1531N The connection failed because the name specified with the DSN connection string keyword could not be found in either the db2dsdriver.cfg configuration file or the db2cli.ini configuration file. Data source name specified in the connection string: "DATABASE". SQLCODE=-1531
I did try updating the Python version to 3.7, but the result is the same. I have to reiterate that my earlier install with the same configuration went through without any issues; back then I never updated either the db2cli.ini file or the db2dsdriver.cfg file. I did try that here and it still fails. As far as I could gather, I saw a message that read something like "ibm-db does not sit well with all python versions properly".
>>> import difflib
>>> import subprocess
>>> import os
>>> import ibm_db
>>> from shutil import copyfile
>>> conn = ibm_db.connect("DATABASE","USERID","PASSWORD")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
Exception:
[IBM][CLI Driver] SQL1531N The connection failed because the name specified with the DSN connection string keyword could not be found in either the db2dsdriver.cfg configuration file or the db2cli.ini configuration file. Data source name specified in the connection string: "DATABASE". SQLCODE=-1531
I expect the connection to go through fine without any issues.
The short answer is that the easiest way is to use a complete DSN string to establish the connection (including the hostname, port, etc.), e.g.:
In [1]: import ibm_db
In [2]: conn = ibm_db.connect("DATABASE=SAMPLE;HOSTNAME=localhost;PORT=60111;UID=db2v111;PWD=passw0rd;","","")
The long answer is that we should be able to use the alias from the catalog, as explained in the ibm_db.connect API:
IBM_DBConnection ibm_db.connect(string database, string user, string password [, dict options [, constant replace_quoted_literal]])
database - For a cataloged connection to a database, this parameter represents the database alias in the DB2 client catalog. For an uncataloged connection to a database, database represents a complete connection string in the following format:
DRIVER={IBM DB2 ODBC DRIVER};DATABASE=database;HOSTNAME=hostname;PORT=port;PROTOCOL=TCPIP;UID=username;PWD=password;
where the parameters represent the following values:
hostname - The hostname or IP address of the database server.
port - The TCP/IP port on which the database is listening for requests.
username - The username with which you are connecting to the database.
password - The password with which you are connecting to the database.
user - The username with which you are connecting to the database. For uncataloged connections, you must pass an empty string.
password - The password with which you are connecting to the database. For uncataloged connections, you must pass an empty string.
The question, though, is which client catalog will be checked...
It all depends on whether IBM_DB_HOME was set when the package was installed, as explained in the README. If it was set, the Python driver will use the existing client instance and its database catalog (as well as its db2cli.ini and db2dsdriver.cfg). If not, a separate client driver is fetched during installation and deployed in Python's site-packages.
In order to check which is the case, you can run ldd against your ibm_db.so, e.g.:
ldd /usr/lib/python2.7/site-packages/ibm_db-2.0.7-py2.7-linux-x86_64.egg/ibm_db.so | grep libdb2
libdb2.so.1 => /usr/lib/python2.7/site-packages/ibm_db-2.0.7-py2.7-linux-x86_64.egg/clidriver/lib/libdb2.so.1 (0x00007fb6e137e000)
Based on the output I can say that in my environment ibm_db was linked against the driver in Python's site-packages, so it will use the db2cli.ini from /usr/lib/python2.7/site-packages/ibm_db-2.0.7-py2.7-linux-x86_64.egg/clidriver/cfg.
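The same check can be done from Python instead of ldd, by looking at where the ibm_db extension module was loaded from (a sketch; paths will differ per environment):
import os

import ibm_db

# ibm_db is a compiled extension, so __file__ points at the ibm_db .so file;
# a bundled clidriver (if one was installed) lives right next to it.
egg_dir = os.path.dirname(os.path.abspath(ibm_db.__file__))
cfg_dir = os.path.join(egg_dir, "clidriver", "cfg")

print("ibm_db loaded from:", ibm_db.__file__)
print("bundled clidriver cfg directory present:", os.path.isdir(cfg_dir))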
If I populate that db2cli.ini with a section:
[sample]
port=60111
hostname=localhost
database=sample
I am able to connect with just the DSN alias:
In [4]: conn = ibm_db.connect("SAMPLE","db2v111","passw0rd")
If you want the driver to use the existing client instance instead, set IBM_DB_HOME when installing the package.

How to transfer one mongodb database from one computer to another

I am using Django 2.2 with MongoDB as the backend database, and I have inserted all the data in my application. I am also using Robo 3T to view the collections of the MongoDB database. My database name is CIS_FYP_db. On my computer everything works perfectly, but I want to transfer the project to another computer. I am transferring the project, which also contains the data\db folder with many collection .wt files, but when I run the project on the other computer it shows the database as blank: no data is present, and MongoDB creates a new database with the same name CIS_FYP_db and no collections. Please help me solve this problem: how can I transfer my MongoDB database to another computer so I can use it in my application, which is already built for that database? Thanks in advance.
settings.py
DATABASES = {
    'default': {
        'ENGINE': 'djongo',
        'NAME': 'CIS_FYP_db',
    }
}
When you create a connection with MongoDB, the database is created automatically if it does not already exist.
You can use the mongodump command to dump all the database records and mongorestore to restore them on your new machine.
Assumption: you have set up MongoDB locally and want to migrate it to another computer.
1. Requirements:
mongodump
mongorestore
1.1. How to install?
To install the above requirements, you have to install the MongoDB Database Tools.
Download link: https://www.mongodb.com/try/download/database-tools
1.2. Common error
Sometimes the PATH is not set, so try this in a cmd prompt: set path="C:\Program Files\MongoDB\Server\5.0\bin"
Note: please adjust the path according to your installation folder.
2. Procedure:
Note: make sure you follow Step 1.
2.1. Approach
We are going to create a dump of MongoDB on the old PC (using mongodump), transfer that dump to the new PC, and import it using mongorestore.
2.2. Creating the dump on the old PC (the one whose database you want to replicate)
In cmd: mongodump --host localhost:27017 --out ~/Desktop/mongo-migration
The above command will create a dump at the given path: ~/Desktop/mongo-migration
Just copy that folder and transfer it to the new PC.
Note: if you have created an authenticated user, add these flags to the above command and provide the values: --username [yourUserName] --password [yourPassword] --authenticationDatabase admin
2.3. Importing the dump (created on the old PC)
Place the dump folder somewhere and execute the command below:
mongorestore C:/....../mongo-migration/ -u root --host 127.0.0.1:27017
done :)
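After the restore, a quick sanity check from Python can confirm the collections made it across (a sketch; assumes pymongo is installed and mongod is running on the default port):
from pymongo import MongoClient

# Connect to the local mongod that the dump was restored into.
client = MongoClient("mongodb://localhost:27017/")
db = client["CIS_FYP_db"]

# Should list the collections that were dumped on the old PC.
print(db.list_collection_names())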

AssertionError: INTERNAL: No default project is specified

New to Airflow. Trying to run the SQL and store the result in a BigQuery table.
Getting the following error. Not sure where to set up the default_project_id.
Please help me.
Error:
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 28, in <module>
args.func(args)
File "/usr/local/lib/python2.7/dist-packages/airflow/bin/cli.py", line 585, in test
ti.run(ignore_task_deps=True, ignore_ti_state=True, test_mode=True)
File "/usr/local/lib/python2.7/dist-packages/airflow/utils/db.py", line 53, in wrapper
result = func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/airflow/models.py", line 1374, in run
result = task_copy.execute(context=context)
File "/usr/local/lib/python2.7/dist-packages/airflow/contrib/operators/bigquery_operator.py", line 82, in execute
self.allow_large_results, self.udf_config, self.use_legacy_sql)
File "/usr/local/lib/python2.7/dist-packages/airflow/contrib/hooks/bigquery_hook.py", line 228, in run_query
default_project_id=self.project_id)
File "/usr/local/lib/python2.7/dist-packages/airflow/contrib/hooks/bigquery_hook.py", line 917, in _split_tablename
assert default_project_id is not None, "INTERNAL: No default project is specified"
AssertionError: INTERNAL: No default project is specified
Code:
sql_bigquery = BigQueryOperator(
    task_id='sql_bigquery',
    use_legacy_sql=False,
    write_disposition='WRITE_TRUNCATE',
    allow_large_results=True,
    bql='''
    #standardSQL
    SELECT ID, Name, Group, Mark, RATIO_TO_REPORT(Mark) OVER(PARTITION BY Group) AS percent FROM `tensile-site-168620.temp.marks`
    ''',
    destination_dataset_table='temp.percentage',
    dag=dag
)
EDIT: I finally fixed this problem by simply adding the bigquery_conn_id='bigquery' parameter in the BigQueryOperator task, after running the code below in a separate Python script.
Apparently you need to specify your project ID under Admin -> Connections in the Airflow UI. You must do this as a JSON object such as "project" : "".
Personally I can't get the webserver working on GCP so this is unfeasible. There is a programmatic solution here:
from airflow.models import Connection
from airflow.settings import Session

session = Session()
gcp_conn = Connection(
    conn_id='bigquery',
    conn_type='google_cloud_platform',
    extra='{"extra__google_cloud_platform__project":"<YOUR PROJECT HERE>"}')

if not session.query(Connection).filter(
        Connection.conn_id == gcp_conn.conn_id).first():
    session.add(gcp_conn)
    session.commit()
These suggestions are from a similar question here.
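With that connection in place, the operator from the question just needs to reference it via bigquery_conn_id, roughly like this (a sketch; the import path matches the traceback above and every other argument stays as in the question):
from airflow.contrib.operators.bigquery_operator import BigQueryOperator

sql_bigquery = BigQueryOperator(
    task_id='sql_bigquery',
    bigquery_conn_id='bigquery',  # matches the conn_id created above
    use_legacy_sql=False,
    write_disposition='WRITE_TRUNCATE',
    allow_large_results=True,
    bql='''
    #standardSQL
    SELECT ID, Name, Group, Mark, RATIO_TO_REPORT(Mark) OVER(PARTITION BY Group) AS percent FROM `tensile-site-168620.temp.marks`
    ''',
    destination_dataset_table='temp.percentage',
    dag=dag
)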
I get the same error when running Airflow locally. My solution is to add the following connection string as an environment variable:
AIRFLOW_CONN_BIGQUERY_DEFAULT="google-cloud-platform://?extra__google_cloud_platform__project=<YOUR PROJECT HERE>"
BigQueryOperator uses the "bigquery_default" connection. When it is not specified, local Airflow uses an internal version of the connection which is missing the project_id property. As you can see, the connection string above provides the project_id property.
On startup, Airflow loads environment variables that start with "AIRFLOW_" into memory. This mechanism can be used to override Airflow properties and to provide connections when running locally, as explained in the Airflow documentation here. Note this also works when running Airflow directly without starting the web server.
So I have set up environment variables for all my connections, for example AIRFLOW_CONN_MYSQL_DEFAULT. I have put them into a .env file that gets sourced by my IDE, but putting them into your .bash_profile would work fine too.
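If you prefer to do the same thing from Python rather than a shell profile, the variable can also be exported at the top of a local test script, before any Airflow hook resolves the connection (a sketch; the project id is a placeholder):
import os

# Must be set before Airflow code looks up the "bigquery_default" connection.
os.environ["AIRFLOW_CONN_BIGQUERY_DEFAULT"] = (
    "google-cloud-platform://?"
    "extra__google_cloud_platform__project=<YOUR PROJECT HERE>"
)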
When you look inside your Airflow instance on Cloud Composer, you see that the "bigquery_default" connection there has the project_id property set. That's why BigQueryOperator works when running through Cloud Composer.
(I am on airflow 1.10.2 and BigQuery 1.10.2)

Boto.conf not found

I am running a Flask app on an AWS EC2 server and have been using boto to access data stored in DynamoDB. After accidentally adding boto.conf to a git commit (and pushing and pulling on the server), I have found that my Python code can no longer locate the boto.conf file. I rolled back the changes with git, but the problem remains.
The Python module and the boto.conf file exist in the same directory, but when the module calls
boto.config.load_credential_file('boto.conf')
I get the Flask error IOError: [Errno 2] No such file or directory: 'boto.conf'.
As per the documentation:
I'm not really sure why you are using boto.config.load_credential_file. In general you can pick up the config from a file called either ~/.boto or /etc/boto.cfg.
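If you move the credentials into ~/.boto, boto reads that file automatically when it is imported, so the explicit load_credential_file() call can usually be dropped; a minimal sketch (assumes boto 2.x, and the region name is a placeholder):
import boto.dynamodb2

# With a [Credentials] section in ~/.boto (or /etc/boto.cfg), boto picks the
# keys up on import -- no boto.config.load_credential_file(...) call needed.
conn = boto.dynamodb2.connect_to_region("us-east-1")
print(conn.list_tables())  # quick check that the credentials were found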
You can also look at this question from SO, which also answers how to get the configuration for boto: Getting Credentials File in the boto.cfg for Python