I'm trying to use Django with multiple databases: one default PostgreSQL database and another MySQL database. I followed the documentation on how to configure both:
https://docs.djangoproject.com/en/4.0/topics/db/multi-db/
My settings.py looks like this:
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "default",
        "USER": "----------",
        "PASSWORD": "-----------",
        "HOST": "localhost",
        "PORT": "5432",
    },
    "data": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "data",
        "USER": "---------",
        "PASSWORD": "---------",
        "HOST": "localhost",
        "PORT": "3306",
    },
}
Things work fine when querying the default database, but it seems that Django uses PostgreSQL SQL syntax for queries on the MySQL database:
# Model "Test" lives in the mysql-db.
Test.objects.using('data').all()
# gives me the following exception
psycopg2.errors.SyntaxError: ERROR: SyntaxError at ».«
LINE 1: EXPLAIN SELECT `test`.`id`, `test`.`title`, ...
It even uses the PostgreSQL client psycopg2 to generate and execute the query.
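For what it's worth, a quick way to confirm which backend Django has bound to each alias is to check the connection's vendor attribute (a minimal sketch, run inside manage.py shell):

from django.db import connections

# Each alias reports the vendor of the engine it was configured with.
print(connections['default'].vendor)  # expected: 'postgresql'
print(connections['data'].vendor)     # expected: 'mysql'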
A raw query works fine though, which means Django does correctly use the MySQL database for that specific query:
Test.objects.using('data').raw('SELECT * FROM test')
# works
How can I configure Django to use the correct SQL syntax when using multiple databases with different engines?
Thanks!
I connect Apache Superset to Apache Drill via sqlalchemy-drill. Here is the connection (the Drill HTTP storage plugin configuration):
"type": "http",
"cacheResults": true,
"connections": {
"get": {
"url": "http://localhost:5500/",
"method": "GET",
"headers": null,
"authType": "none",
"userName": null,
"password": null,
"postBody": null,
"params": null,
"dataPath": null,
"requireTail": true,
"inputType": "json",
"xmlDataLevel": 1
}
},
"timeout": 5,
"proxyType": "direct",
"enabled": true
In Superset I get the data with:
SELECT * FROM api.get.`link`
But in the API I need to get the data via dynamic URLs like "link/1", "link/2", etc.
I'm trying to use a Jinja template to insert a URL parameter into the query:
SELECT * FROM api.get.{{ url_param('url') }}
and then, in the dashboard, use a URL like "http://localhost:8088/superset/dashboard/1/?url=link/1" to pass the parameter, but it doesn't work.
Is there any way to use dynamic URLs to access the API from Superset?
You need to set ENABLE_TEMPLATE_PROCESSING to True in superset_config.py, and then it all works.
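For reference, in recent Superset versions this is a feature flag, so the setting would look something like this (a sketch; the FEATURE_FLAGS placement is an assumption about your Superset version):

# superset_config.py
FEATURE_FLAGS = {
    "ENABLE_TEMPLATE_PROCESSING": True,
}

After restarting Superset, the Jinja expression from the question, SELECT * FROM api.get.{{ url_param('url') }}, should be rendered before the query is sent to Drill.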
I tried to use https://github.com/aws/graph-notebook to connect to a Blazegraph database. I have verified Blazegraph is running from the following output:
serviceURL: http://192.168.1.240:9999
Welcome to the Blazegraph(tm) Database.
Go to http://192.168.1.240:9999/bigdata/ to get started.
I did the following in the Jupyter notebook:
%%graph_notebook_config
{
  "host": "localhost",
  "port": 9999,
  "auth_mode": "DEFAULT",
  "iam_credentials_provider_type": "ENV",
  "load_from_s3_arn": "",
  "aws_region": "us-west-2",
  "ssl": false,
  "sparql": {
    "path": "blazegraph/namespace/foo/sparql"
  }
}
Then I run the following:
%status
which gives the error {'error': JSONDecodeError('Expecting value: line 1 column 1 (char 0)',)}
I tried to replace the host with 192.168.1.240 and am still having the same problem.
Looks like you found a bug!
This is now fixed in https://github.com/aws/graph-notebook/pull/137.
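Assuming the fix has shipped in a release, upgrading the package and restarting the Jupyter kernel should pick it up:

pip install --upgrade graph-notebook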
I would like to connect to a JDBC database, e.g. Postgres, via the Calcite driver, using the sqlline shell script wrapper included in the Calcite git repo. I'm facing the problem of how to specify the target JDBC Postgres driver. Initially I tried this:
CLASSPATH=/Users/davidkubecka/git/calcite/build/libs/postgresql-42.2.18.jar ./sqlline -u jdbc:calcite:model=model.json
The model.json is this:
{
  "version": "1.0",
  "defaultSchema": "tpch",
  "schemas": [
    {
      "name": "tpch",
      "type": "jdbc",
      "jdbcUrl": "jdbc:postgresql://localhost/*",
      "jdbcSchema": "tpch",
      "jdbcUser": "*",
      "jdbcPassword": "*"
    }
  ]
}
But:
First, I got asked for a username and password even though they are already specified in the model.
Second, after filling in the credentials, I still get the error
java.lang.RuntimeException: java.sql.SQLException: Cannot create JDBC driver of class '' for connect URL 'jdbc:postgresql://localhost/*'
So my question is whether this scenario (using a JDBC driver inside the Calcite driver via sqlline) is supported, and if so, how I can make the connection.
Try including your JDBC driver class within the schema definition, and make sure the driver jar is on your classpath. Furthermore, add your database name to the JDBC URL. Your model.json could look like:
{
  "version": "1.0",
  "defaultSchema": "tpch",
  "schemas": [
    {
      "name": "tpch",
      "type": "jdbc",
      "jdbcUrl": "jdbc:postgresql://localhost/my_database",
      "jdbcSchema": "tpch",
      "jdbcUser": "*",
      "jdbcPassword": "*",
      "jdbcDriver": "org.postgresql.Driver"
    }
  ]
}
I want to reuse LoopBack's User login and token logic for my app, and store the info in MySQL.
When I leave the User model's datasource as the default (in-memory db), it works fine, and the explorer is there.
Now I just want to change the datasource for User, so I edit model-config.json to use my db connector:
...
"User": {
  "dataSource": "db"
},
"AccessToken": {
  "dataSource": "db",
  "public": false
},
"ACL": {
  "dataSource": "db",
  "public": false
},
...
After I restart the server and play around a bit, it complains that some tables are not in the db:
{ Error: ER_NO_SUCH_TABLE: Table 'mydb.ACL' doesn't exist
Obviously there is no table structure in MySQL to store users, ACLs, and the other built-in model data.
How do I get this schema structure into my db?
Is there a script or command?
I found it myself; it's pretty easy:
https://docs.strongloop.com/display/public/LB/Creating+database+tables+for+built-in+models
I'm leaving the question up in case somebody else needs it.
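In short, the linked page boils down to a small Node script that runs automigrate() for the built-in models against your datasource. Roughly like this (the datasource name "db" is assumed to match datasources.json):

// server/create-lb-tables.js
var server = require('./server');
var ds = server.dataSources.db;
var lbTables = ['User', 'AccessToken', 'ACL', 'RoleMapping', 'Role'];

// automigrate() creates the tables for the listed models.
ds.automigrate(lbTables, function (er) {
  if (er) throw er;
  console.log('LoopBack tables [' + lbTables + '] created in', ds.adapter.name);
  ds.disconnect();
});

Run it once with node server/create-lb-tables.js; note that automigrate() is destructive and will drop any existing tables with those names.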
I am trying to deploy an app named abcd, with the artifact abcd.war, and I want to configure it to use an external datasource. Below is my abcd.war/META-INF/context.xml file:
<Context>
  <ResourceLink global="jdbc/abcdDataSource1" name="jdbc/abcdDataSource1" type="javax.sql.DataSource"/>
  <ResourceLink global="jdbc/abcdDataSource2" name="jdbc/abcdDataSource2" type="javax.sql.DataSource"/>
</Context>
I configured the custom JSON below during deployment:
{
  "datasources": {
    "fa": "jdbc/abcdDataSource1",
    "fa": "jdbc/abcdDataSource2"
  },
  "deploy": {
    "fa": {
      "database": {
        "username": "un",
        "password": "pass",
        "database": "ds1",
        "host": "reserved-alpha-db.abcd.us-east-1.rds.amazonaws.com",
        "adapter": "mysql"
      },
      "database": {
        "username": "un",
        "password": "pass",
        "database": "ds2",
        "host": "reserved-alpha-db.abcd.us-east-1.rds.amazonaws.com",
        "adapter": "mysql"
      }
    }
  }
}
I also added the recipe opsworks_java::context during the configure phase, but it doesn't seem to be working, and I always get the message below:
[2014-01-11T16:12:48+00:00] INFO: Processing template[context file for abcd] action create (opsworks_java::context line 16)
[2014-01-11T16:12:48+00:00] DEBUG: Skipping template[context file for abcd] due to only_if ruby block
Can anyone please help with what I am missing in the OpsWorks configuration?
You can only configure one datasource using the built-in database.yml. If you want to pass additional information to your environment, please see Passing Data to Applications.
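For comparison, a single-datasource version of the custom JSON from the question would look like the sketch below (names kept from the question). Note that the original JSON is also malformed as written: "fa" appears twice as a key in "datasources", and "database" appears twice under "deploy"."fa", so a JSON parser silently keeps only the last of each duplicate pair.

{
  "datasources": {
    "fa": "jdbc/abcdDataSource1"
  },
  "deploy": {
    "fa": {
      "database": {
        "username": "un",
        "password": "pass",
        "database": "ds1",
        "host": "reserved-alpha-db.abcd.us-east-1.rds.amazonaws.com",
        "adapter": "mysql"
      }
    }
  }
}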