I have been given a task for a web application I am currently developing. My code already lets me save to the existing tables, but I am unsure how to do the following: dynamically create a new table whenever the 'save' button is pressed in my web application. I am using SQLite for my database.
Example: I have a 'name' field. The user types Test into the name field. Upon saving, this name is stored in an existing table and registered under an id of 1. At the same time, I want to create a new table with its own fields, named example_(id), so in this case example_1.
I'm a beginner in Django and SQL, so if anyone can guide or help me in any way, thank you!
I get an error from the view below.
views.py:
@api_view(['GET'])
def selected_device(request, pk=None):
    if pk != None:
        devices = Device.objects.filter(pk=pk)
        devicedetail = DeviceDetail.objects.filter(DD2DKEY=pk)
        cursor = connection.cursor()
        tablename = "dev_interface_" + str(pk)
        cursor.execute(f"SELECT interface FROM {tablename} ")
        righttable = cursor.fetchall()
        devserializer = DeviceSerializers(devices, many=True)
        devdserializer = DeviceDetailSerializers(devicedetail, many=True)
        interfaces = []
        for i in righttable:
            interfaces.append(i[0])
        for i in interfaces:
            data = [{"interface": i}]
        interserializer = InterfaceSerializers(data, many=True)
        results = {
            "device": devserializer.data,
            "device_details": devdserializer.data,
            "interface": interserializer.data,
        }
        return Response(results)
In interfaces, I have the following: ['G0/1', 'TenGigabitEthernet1/1/3', 'TenGigabitEthernet1/1/5', 'TenGigabitEthernet1/1/20', 'TenGigabitEthernet1/1/21', 'TenGigabitEthernet1/1/22', 'TenGigabitEthernet1/1/23', 'TenGigabitEthernet1/1/24', 'TenGigabitEthernet1/1/25', 'TenGigabitEthernet1/1/26']
As I mentioned in the comments, you can use a database connection with raw SQL. Here is an example for you:
from django.db import connection

# Get a cursor for your database
cursor = connection.cursor()

# Execute your raw SQL
cursor.execute("CREATE TABLE NameTable(name varchar(255));")

# Create database records
cursor.execute("INSERT INTO NameTable VALUES('ExampleName')")

# Fetch records from the database
cursor.execute("SELECT * FROM NameTable")

# Get the data from the database; fetchall() can be used if you would like to get multiple rows
name = cursor.fetchone()

# Manipulate the data as needed

# Don't forget to close the database connection
cursor.close()
This is just a basic example of using a database connection in Django; customize it to your needs. See the official documentation for raw SQL and database connections. Also keep in mind that what you are trying to do may not be best practice or recommended.
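Tying this back to the original question, a minimal sketch of creating a per-record table on save could look like this (Name is a hypothetical model standing in for your existing table; the companion table name is built from an integer primary key, so interpolating it is safe):

from django.db import connection

def save_name(name):
    # Save the record to the existing table first.
    obj = Name.objects.create(name=name)
    # Then create the companion table, e.g. example_1 for pk 1.
    with connection.cursor() as cursor:
        cursor.execute(f"CREATE TABLE IF NOT EXISTS example_{obj.pk} (interface varchar(255));")
    return obj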
I have a table in my DB named Plan.
See the code in models.py:
class Plan(models.Model):
    id = models.AutoField(primary_key=True)
    Comments = models.CharField(max_length=255)

    def __str__(self):
        return self.Comments
I want to fetch data (comments) from the DB and delete it afterwards, so each row is fetched only once. The fetched data should be shown in the Django template.
I tried this, see views.py:
def Data(request):
    data = Plan.objects.filter(id=6)
    # latest_id = Model.objects.all().values_list('id', flat=True).order_by('-id').first()
    # Plan.objects.all()[:1].delete()
    context = {'data': data}
    dataD = Plan.objects.filter(id=6)
    dataD.delete()
    return render(request, 'data.html', context)
This code deletes the data from the DB, but nothing is shown in the template.
How can I do this?
The queryset in your view is lazy: it is not evaluated until the template renders, and by that point the rows have already been deleted, so the template has nothing to show.
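One way around this is to force the queryset to evaluate before deleting, so the template renders the copy already held in memory. A sketch against the same Plan model:

def Data(request):
    # list() evaluates the queryset immediately and copies the rows into memory.
    data = list(Plan.objects.filter(id=6))
    # Deleting afterwards no longer affects what the template will show.
    Plan.objects.filter(id=6).delete()
    return render(request, 'data.html', {'data': data})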
From the Django docs:
Pickling QuerySets
If you pickle a QuerySet, this will force all the results to be loaded into memory prior to pickling. Pickling is usually used as a precursor to caching and when the cached queryset is reloaded, you want the results to already be present and ready for use (reading from the database can take some time, defeating the purpose of caching). This means that when you unpickle a QuerySet, it contains the results at the moment it was pickled, rather than the results that are currently in the database.
If you only want to pickle the necessary information to recreate the QuerySet from the database at a later time, pickle the query attribute of the QuerySet. You can then recreate the original QuerySet (without any results loaded) using some code like this:
>>> import pickle
>>> query = pickle.loads(s) # Assuming 's' is the pickled string.
>>> qs = MyModel.objects.all()
>>> qs.query = query # Restore the original 'query'.
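A fuller round trip built on that snippet might look like this (MyModel and its name field are placeholders from the docs):

import pickle

qs = MyModel.objects.filter(name__startswith='a')
s = pickle.dumps(qs.query)  # pickle only the query, not the results

query = pickle.loads(s)
qs2 = MyModel.objects.all()
qs2.query = query  # qs2 now re-runs the original query against the current database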
In the simple sqlite3 code below, using a non-default SQLite database ("sqlite3_db", set up in DATABASES in settings.py), I am trying to build a dictionary cursor. I understand this is done using row_factory, which from my research requires a connection object. However, when using a non-default database, I can't figure out how this is done, as creating a cursor from connections doesn't seem to expose a connection object.
def index_sqlite(request):
    from django.db import connections
    import sqlite3

    cursor = connections["sqlite3_db"].cursor()
    # connection.row_factory = sqlite3.Row ==> how to access the connection object??
    sql = "SELECT title, rating FROM book_outlet_book ORDER BY title"
    cursor.execute(sql)
    book_list = [item for item in cursor.fetchall()]
    return render(request, "book_outlet/index.html", {
        "title": "Some title",
        "books": book_list,
    })
This code produces a list of tuples as expected, but I am after a "dictionary cursor" so I can do things like reference book_list[3].title (without using a model).
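The Django documentation suggests small helpers for exactly this. A sketch that would slot into the view above; dict rows give book["title"], while the namedtuple variant gives the book.title attribute access you're after:

from collections import namedtuple

def dictfetchall(cursor):
    # Return all rows from a cursor as dicts keyed by column name.
    columns = [col[0] for col in cursor.description]
    return [dict(zip(columns, row)) for row in cursor.fetchall()]

def namedtuplefetchall(cursor):
    # Return all rows as namedtuples, so columns become attributes.
    Row = namedtuple('Row', [col[0] for col in cursor.description])
    return [Row(*row) for row in cursor.fetchall()]

In the view, book_list = namedtuplefetchall(cursor) would then replace the list comprehension.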
I understand how to create a table and query it using SQLAlchemy. But what I am trying to do is different: I just want to query a table that already exists and which I did not create, which means I won't have a Python class defined for it in my code.
How do I query such a table?
You can access an existing table using the following code.
For example, if your table is users:
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session

# engine is assumed to be an existing create_engine(...) instance
Base = automap_base()
Base.prepare(engine, reflect=True)
Users = Base.classes.users

session = Session(engine)
res = session.query(Users).first()
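If you only need the table rather than a mapped class, plain reflection also works. A sketch (engine is again assumed to exist; autoload_with requires SQLAlchemy 1.4+):

from sqlalchemy import MetaData, Table, select

metadata = MetaData()
# Reflect the existing users table straight from the database schema.
users = Table('users', metadata, autoload_with=engine)

with engine.connect() as conn:
    first_row = conn.execute(select(users)).first()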
I have a Django application that I have just inherited, knowing very little (generally) about Django. The system uses TastyPie to provide RESTful access.
The feature I'm working on needs to be able to POST a new report to the system. The reports in the ORM model are associated with multiple "devices". In the ORM the devices have further relationships, such as to users, companies, other sub-devices and so forth, in a complex relational system.
When I try to POST the report, I frequently DoS myself out of the system. Watching the PostgreSQL logs, I can see that this performs literally thousands of SQL queries, retrieving all the objects in the relational model. However, ultimately, all it needs to do is add a new entry in the "report" table and maybe a handful of entries in the "report_device" table (as report to device is a many-to-many relationship).
In the (TastyPie) resource (called ReportResource), I don't reference the device with "full=True".
Why is the system performing so many database queries when it needs only to update two tables?
How do I stop it doing this and provide a more optimised update mechanism?
I'm an accomplished SQL developer myself, but I don't want to throw out the baby with the bathwater here by writing a custom update (and I wouldn't know how to insert the relevant code anyway). I assume there's a way to make django / tastypie do what I want in a sensible way.
I can provide more information, but I don't know what's pertinent. Please ask if you think you know something and I'll see if I can elucidate.
TastyPie tends to be liberal in its queries. I remember writing a good number of custom dehydrate() methods to get what I wanted: http://django-tastypie.readthedocs.org/en/latest/resources.html?highlight=hydrate#Resource.dehydrate
TastyPie doesn't like table references -- it's too easy to pull in too much information. Cast a suspicious eyeball on code like user = fields.ForeignKey(UserResource, 'user').
For your own app code, there's a way to ask the Django QuerySet machinery to translate a query into SQL. This, combined with your Postgres logs, should help determine whether the issues are with TastyPie or with your app queries.
Code:
#!/usr/bin/env python
'''
logquery.py -- expose database queries (SQL)
'''
import functools, os, sys

os.environ['DJANGO_SETTINGS_MODULE'] = 'project.settings.local'
sys.path.append('project/project')

from meetup.models import Meeting

def output(arg):
    print arg
    print

class LoggingObj(object):
    def __init__(self, other):
        self.other = other

    def log_call(self, ofunc, *args, **kwargs):
        res = ofunc(*args, **kwargs)
        print 'CALL:', ofunc.__name__, args, kwargs
        print '=>', res
        return res

    def __getattr__(self, key):
        ofunc = getattr(self.other, key)
        if not callable(ofunc):
            return ofunc
        return functools.partial(self.log_call, ofunc)

qs = Meeting.objects.all()
output(qs.query)

qs = Meeting.objects.all()
qs.query = LoggingObj(qs.query)
output(qs.query.sql_with_params())
output(list(qs))
Partial output, with SQL:
SELECT "meetup_meeting"."id", "meetup_meeting"."name",
"meetup_meeting"."meet_date" FROM "meetup_meeting"
CALL: sql_with_params () {}
=> (u'SELECT "meetup_meeting"."id", "meetup_meeting"."name", "meetup_meeting"."meet_date" FROM "meetup_meeting"', ()) (u'SELECT
"meetup_meeting"."id", "meetup_meeting"."name",
"meetup_meeting"."meet_date" FROM "meetup_meeting"', ())
This is what I wanted to do:
I have a table imported from another database. The majority of the columns in one of the tables have names that look something like AP1|00:23:69:33:C1:4F, and there are a lot of them. I don't think Python will accept these as field names.
I want to aggregate over them without having to list them all as fields in the model. As much as possible, I want the aggregation to be triggered from within the Django application; I don't want to resort to creating MySQL queries outside the application.
Thanks.
Unless you want to write raw SQL, you're going to have to define a model. Since your model fields don't HAVE to be named the same as the columns they represent, you can give your fields useful names.
class LegacyTable(models.Model):
    useful_name = models.IntegerField(db_column="AP1|00:23:69:33:C1:4F")

    class Meta:
        db_table = "LegacyDbTableThatHurtsMyHead"
        managed = False  # syncdb does nothing
You may as well do this regardless. As soon as you need another column from your legacy table, just add another_useful_name to your model, with db_column set to the column you're interested in.
This has two solid benefits: you no longer have to write raw SQL, and you do not have to define all the fields up front.
The alternative is to define all your fields in raw SQL anyway.
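With the fields renamed like this, the aggregation from the question can stay inside the ORM. A sketch assuming the LegacyTable model above:

from django.db.models import Avg, Sum

# Aggregate over the awkwardly named column via its friendly field name.
stats = LegacyTable.objects.aggregate(total=Sum('useful_name'), average=Avg('useful_name'))
# -> {'total': ..., 'average': ...}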
Edit:
Legacy Databases describes a method for inspecting existing databases, and generating a models.py file from existing schemas. This may help you by doing all the heavy lifting (nulls, lengths, types, fields). Then you can modify the definition to suit your needs.
python manage.py inspectdb > legacy.py
http://docs.djangoproject.com/en/dev/topics/db/sql/#executing-custom-sql-directly
Django allows you to perform raw SQL queries. Without more information about your tables, that's about all I can offer.
Custom query:
def my_custom_sql(baz):
    from django.db import connection, transaction
    cursor = connection.cursor()

    # Data modifying operation - commit required
    cursor.execute("UPDATE bar SET foo = 1 WHERE baz = %s", [baz])
    transaction.commit_unless_managed()

    # Data retrieval operation - no commit required
    cursor.execute("SELECT foo FROM bar WHERE baz = %s", [baz])
    row = cursor.fetchone()
    return row
Accessing other databases:
from django.db import connections, transaction

cursor = connections['my_db_alias'].cursor()
# Your code here...
transaction.commit_unless_managed(using='my_db_alias')
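On recent Django versions the cursor is usually used as a context manager, which closes it for you; commit_unless_managed was removed in later releases, where autocommit makes it unnecessary. A sketch of the same multi-database access in that style:

from django.db import connections

with connections['my_db_alias'].cursor() as cursor:
    cursor.execute("SELECT foo FROM bar")
    rows = cursor.fetchall()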