I use django-pyodbc-azure 2.1.0.0 to connect to an Azure SQL database, which works fine.
If I understand the documentation of django-pyodbc-azure correctly, transactions should be supported.
However, the following code updates the row immediately. I would expect the row to be updated only after 20 seconds, when the atomic block exits.
from django.db import transaction
from myapp.models import MyModel
import time
with transaction.atomic():
    MyModel.objects.filter(id=1).update(my_field='Test')
    time.sleep(20)
Am I doing something wrong? Do I need to specify certain settings on the Azure SQL database?
When I set AUTOCOMMIT = False in my database settings, the following code does not update the row at all.
MyModel.objects.filter(id=1).update(my_field='Test')
time.sleep(20)
transaction.commit()
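For comparison, here is a minimal sketch of the same flow with the database alias passed explicitly. This is only an illustration and assumes the model is stored on the non-default 'azure_reporting' alias shown in the settings below; note that transaction.commit() with no arguments commits the 'default' connection:

from django.db import transaction
from myapp.models import MyModel
import time

# assumes MyModel is routed to the 'azure_reporting' alias and
# 'AUTOCOMMIT': False is set for that alias
MyModel.objects.using('azure_reporting').filter(id=1).update(my_field='Test')
time.sleep(20)
transaction.commit(using='azure_reporting')  # commit on the same connection the query used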
My current settings.py:
'azure_reporting': {
'ENGINE': 'sql_server.pyodbc',
'NAME': 'reporting_db',
'HOST': 'xxxxxx.database.windows.net',
'PORT': '',
'USER': 'xxxx@xxxxxx.database.windows.net',
'PASSWORD': 'xxxxxx',
'OPTIONS': {
'driver': 'ODBC Driver 17 for SQL Server'
}
}
Please make sure you have set AUTOCOMMIT = True in your database settings:
DATABASES = {
'default': {
'ENGINE': 'sql_server.pyodbc',
'HOST': 'yourserver.com',
'PORT': '1433',
'NAME': 'your_db',
'USER': 'your_user',
'PASSWORD': 'your_pw',
'AUTOCOMMIT': True,
'OPTIONS': {
'driver': 'ODBC Driver 13 for SQL Server',
},
},
}
Since no error happened, the only thing we can do is recheck the django-pyodbc-azure configuration.
As you said, when you set AUTOCOMMIT = False in your database settings, the code does not update the row at all,
so I think transactions themselves work as expected.
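If the model lives on the non-default 'azure_reporting' alias shown in the question's settings, it may also be worth pointing the transaction at that alias explicitly. A minimal sketch, assuming the same model and alias as above (transaction.atomic(using=...) is standard Django):

from django.db import transaction
from myapp.models import MyModel
import time

with transaction.atomic(using='azure_reporting'):
    MyModel.objects.using('azure_reporting').filter(id=1).update(my_field='Test')
    time.sleep(20)
# the change is committed (and becomes visible to other connections) only here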
Hope this helps.
Related
How to connect to SQL Server in Django rather than the default database? What do I have to write in the settings.py file?
You can actually install the mssql-django package:
pip install mssql-django
Then change your settings.py:
DATABASES = {
'default': {
'ENGINE': 'mssql',
'NAME': 'mydb',
'USER': 'user@myserver',
'PASSWORD': 'password',
'HOST': 'myserver.database.windows.net',
'PORT': '',
'OPTIONS': {
'driver': 'ODBC Driver 17 for SQL Server',
},
},
}
By the way, you can turn off pyodbc's connection pooling by adding this to your settings:
DATABASE_CONNECTION_POOLING = False
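After that, a quick way to confirm the connection works is to open a cursor from the Django shell (python manage.py shell); this sketch assumes the 'default' alias configured above:

from django.db import connections

with connections['default'].cursor() as cursor:
    cursor.execute('SELECT @@VERSION')  # T-SQL: returns the SQL Server version string
    print(cursor.fetchone()[0])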
I am using Django to connect to an on-premise database. Earlier, the database was hosted on Azure.
The connection settings I used in Django earlier, for the Azure SQL database, were as follows:
DATABASES = {
    'default': {
        'ENGINE': 'sql_server.pyodbc',
        'NAME': 'DatabaseName',  # not real name
        'USER': 'username',
        'PASSWORD': 'password',
        'HOST': 'sql-django-uat.database.windows.net',  # not real
        'PORT': '1433',
        'OPTIONS': {
            'driver': 'ODBC Driver 13 for SQL Server',
            'MARS_Connection': 'True',
        },
    },
}
After the database migration, this connection string doesn't work; I keep getting 'Login timeout expired'.
But substituting 'NAME' with 'DATABASE' works. Example given below:
DATABASES = {
    'default': {
        'ENGINE': 'sql_server.pyodbc',
        'DATABASE': 'DatabaseName',  # not real name
        'USER': 'username',
        'PASSWORD': 'password',
        'HOST': 'on-prem.local',
        'PORT': '1433',
        'OPTIONS': {
            'driver': 'ODBC Driver 13 for SQL Server',
            'MARS_Connection': 'True',
        },
    },
}
My webapp gets to the login page. But after I log in, I get the error: 'ImproperlyConfigured at /login/
settings.DATABASES is improperly configured. Please supply the NAME value.'
Can someone tell me how to solve this? I should mention that leaving the 'NAME' field blank also gives the same error. Thanks so much in advance.
DATABASES = {
    'default': {
        'ENGINE': 'sql_server.pyodbc',
        'NAME': 'DatabaseName',  # not real name
        'USER': '<replace with on-premise DB username>',
        'PASSWORD': '<replace with on-premise DB password>',
        'HOST': '<replace with on-premise DB host/URL>',
        'PORT': '1433',
        'OPTIONS': {
            'driver': 'ODBC Driver 13 for SQL Server',
            'MARS_Connection': 'True',
        },
    },
}
For an Azure SQL database, settings.py should be:
DATABASES = {
'default': {
'ENGINE': 'sql_server.pyodbc',
'NAME': 'mydb',
'USER': 'user@myserver',
'PASSWORD': 'password',
'HOST': 'myserver.database.windows.net',
'PORT': '',
'OPTIONS': {
'driver': 'ODBC Driver 13 for SQL Server',
},
},
}
# set this to False if you want to turn off pyodbc's connection pooling
DATABASE_CONNECTION_POOLING = False
USER: String. Database user name in "user" (on-premise) or "user@server" (Azure SQL Database) format. If not given then MS Integrated Security will be used.
Reference: django-pyodbc-azure 2.1.0.0
For an on-premise SQL Server (here a LocalDB instance), the database configuration is:
DATABASES = {
'default': {
'ENGINE': 'sql_server.pyodbc',
'HOST': r'(LocalDB)\ProjectLocalDB',
'PORT': '',
'NAME': 'my_db',
'USER': 'my_user',
'PASSWORD': 'my_password',
'OPTIONS': {
'driver': 'ODBC Driver 13 for SQL Server',
},
},
}
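Per the USER note quoted above, for a local instance you can also leave out USER and PASSWORD to fall back to MS Integrated Security; a sketch of that variant, using the same hypothetical LocalDB instance and database names as above:

DATABASES = {
    'default': {
        'ENGINE': 'sql_server.pyodbc',
        'HOST': r'(LocalDB)\ProjectLocalDB',
        'PORT': '',
        'NAME': 'my_db',
        # USER and PASSWORD omitted -> MS Integrated Security (see the USER note above)
        'OPTIONS': {
            'driver': 'ODBC Driver 13 for SQL Server',
        },
    },
}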
Here are two tutorials; you can work through them to check whether you have missed something:
How to use Django with SQL Server LocalDB: this post has been tested with Microsoft SQL Server 2017, Django 1.11.x and 2.x. The pyodbc and django-pyodbc-azure packages are used to connect Django to SQL Server. The version of django-pyodbc-azure must match your version of Django.
Django and MS SQL Server: the Python module/library used is django-pyodbc-azure, which supports Django 2.0 as well as lower versions such as Django 1.11.
Hope this helps.
I restored a PostgreSQL database on PythonAnywhere from an SQL-formatted pg_dump made on my MacBook.
My tables appear healthy in PostgreSQL, but they appear empty in Django.
I suspect I have two schemas or two databases, and the one Django is looking at is empty.
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'mydb2',
'USER': 'super',
'PASSWORD': 'bigsecretR4',
'HOST': 'magula6-1249.postgres.pythonanywhere-services.com',
'PORT': '11249',
'OPTIONS': {
'connect_timeout': 60,
}
}
}
postgres=# select count(*) from public.cw_city5;
count
-------
615
>>> from cw.models import *
>>> qs=City5.objects.all()
>>> len(qs)
0
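One way to check whether Django is really looking at the same database and schema as the psql session is to ask the connection itself. A diagnostic sketch, run inside python manage.py shell with the 'default' alias from the settings above:

from django.db import connection
from cw.models import City5

with connection.cursor() as cursor:
    cursor.execute('SELECT current_database(), current_schema(), current_user')
    print(cursor.fetchone())   # compare with the database/schema the psql prompt shows

print(City5._meta.db_table)    # the exact table name Django queries (e.g. cw_city5)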
I have a data migration which loads entries into my database as shown below:
def load_groups(apps, _):
    Group = apps.get_model('auth', 'Group')
    Group(
        name='Admin'
    ).save()
    Group(
        name='Product Manager'
    ).save()
    Group(
        name='Developer'
    ).save()

migrations.RunPython(
    code=load_groups,
)
This code works fine when running manage.py migrate.
However, if I run manage.py migrate --database=copy, the migrations apply to the database with the alias copy, but the load_groups section attempts to save to the database alias default.
I know I can specify which database the group creation should save to, but I can't seem to find a way to access the alias of the database currently being migrated.
Any ideas would be greatly appreciated.
settings.py:
DATABASES = {
'default': {
'ENGINE': 'django_prometheus.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
},
'copy': {
'ENGINE': 'django_prometheus.db.backends.mysql',
'NAME': 'dev',
'USER': '',
'PASSWORD': '',
'HOST': '',
'PORT': 3306,
'OPTIONS': {
'init_command': "SET sql_mode='STRICT_TRANS_TABLES'"
}
}
}
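For reference, the function passed to RunPython receives a schema_editor, and schema_editor.connection.alias is the alias of the database currently being migrated ('default', or 'copy' when run with --database=copy). A minimal sketch of load_groups rewritten that way; treat it as one possible approach rather than the only one:

def load_groups(apps, schema_editor):
    db_alias = schema_editor.connection.alias   # database this migration is applied to
    Group = apps.get_model('auth', 'Group')
    for name in ('Admin', 'Product Manager', 'Developer'):
        Group.objects.using(db_alias).create(name=name)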
When working with GeoDjango I have a problem: when I run makemigrations and then migrate for new models, the log says 'no migrations to apply' and the Oracle database still has no new tables.
My settings.py is:
DATABASES = {
"default": {
"ENGINE": "django.contrib.gis.db.backends.oracle",
"NAME":,
"USER":,
"PASSWORD":,
}
}
I need some help.
I can see you are learning; see the example below:
DATABASES = {
'default': {
'ENGINE': 'django.contrib.gis.db.backends.oracle',
'NAME': 'mydatabase',
'USER': 'mydatabaseuser',
'PASSWORD': 'mypassword',
'HOST': '127.0.0.1',
'PORT': '1521',
}
}
First you need to create a user and password in your Oracle database, and then set this information up in settings.py.
You can see more information here.
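Once the credentials are in place, a quick sanity check from python manage.py shell can confirm that the Oracle connection works and show which migrations Django thinks are applied. A sketch, assuming the 'default' alias configured above:

from django.core.management import call_command
from django.db import connection

with connection.cursor() as cursor:
    cursor.execute('SELECT 1 FROM dual')   # Oracle: succeeds only if the connection works
    print(cursor.fetchone())

call_command('showmigrations')             # lists apps and which migrations are applied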