PostgreSQL gets updated by a SQL script, get a notification in the backend (Django)

I'm using django-rest-framework as the backend and PostgreSQL as the database. The database might be changed by raw SQL scripts, and I want to be notified in the backend when those changes happen so that I can notify different users about them.
I've looked at posts like https://gist.github.com/pkese/2790749 for receiving notifications in Python, and at SQL scripts such as:
CREATE TRIGGER rec_notify_trig AFTER INSERT OR UPDATE OR DELETE ON rec
FOR EACH ROW EXECUTE PROCEDURE rec_notify_func();
My question is that I don't know how to hook them together in django-rest-framework: where should I put the SQL script, and where should the Python setup go so that I can connect the two? Any advice will be appreciated.

I would create an endpoint on the django-rest-framework side to accept a notification.
Then, in your rec_notify_func(), you can call out and hit that endpoint, where you can perform any end-user notification necessary.
CREATE EXTENSION plpython3u;

-- A trigger function takes no declared arguments and must be declared RETURNS trigger;
-- the endpoint URI is instead passed as a trigger argument and read from TD["args"].
CREATE FUNCTION rec_notify_func() RETURNS trigger AS $$
from urllib.request import urlopen
urlopen(TD["args"][0])  # call the notification endpoint
return None  # leave the row unmodified
$$ LANGUAGE plpython3u;

-- e.g. ... FOR EACH ROW EXECUTE PROCEDURE rec_notify_func('http://localhost:8000/notify/');
NOTE:
You need to have plpython installed on the system in order to enable the extension.
On Ubuntu, something like this:
sudo apt-get install postgresql-plpython3-9.6
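On the Django side, the endpoint that the function calls can be a very small DRF view. A minimal sketch, assuming a hypothetical URL and a hypothetical notify_users_about_change() helper:

# views.py -- hypothetical notification receiver on the Django side
from rest_framework.decorators import api_view
from rest_framework.response import Response

@api_view(["GET", "POST"])
def db_change_notification(request):
    # The trigger only says "something changed"; re-query or inspect
    # request.data here, then fan the change out to your users.
    notify_users_about_change()  # hypothetical helper you would write
    return Response({"status": "received"})

# urls.py would map it, e.g. path("notify/", db_change_notification)

Since urlopen(uri) with no data issues a plain GET, the view above accepts GET as well as POST.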

Related

How to execute a Django query from a string and get the output

I want to run a Django query from a string and put the output into a variable.
In my DRF project, the client sends a Django query:
{'query': 'model.objects.all()'}
and I need to return the result of this query.
I tried using exec('model.objects.all()') but I can't assign the output to a variable. I also tried using subprocess.run([sys.executable, "-c", 'model.objects.all()'], capture_output=True, text=True), but subprocess doesn't find the model.
There's a huge amount of setup needed before a process using Django models can work correctly. That's why manage.py shell exists.
If you want to perform Django operations outside the context of a Django server, write a management command. You can then invoke it from the command line, from cron, from other Python scripts ... wherever.
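A minimal sketch of such a management command, assuming a hypothetical app yourapp and command name run_query, and resolving the model by name rather than exec()-ing client-supplied strings:

# yourapp/management/commands/run_query.py -- hypothetical example
from django.apps import apps
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Run a simple query against a named model and print the results"

    def add_arguments(self, parser):
        parser.add_argument("model_name")

    def handle(self, *args, **options):
        # Look the model up by name instead of executing arbitrary code.
        model = apps.get_model("yourapp", options["model_name"])
        for obj in model.objects.all():
            self.stdout.write(str(obj))

You would then invoke it with python manage.py run_query MyModel, from the command line or from cron.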

What will be the query to check completion of a workflow?

I have to check, in SQL, whether a workflow completed within its scheduled time or not, and also send an email with the workflow status, like 'completed within time' or 'not completed within time'. Please help me out.
You can do it using either Option 1 or Option 2.
Option 1 - you need access to the repository metadata database.
Create a post-session shell script. You can pass the workflow name and a benchmark value to the shell script.
Get the workflow run time from the repository metadata. SQL you can use:
SELECT WORKFLOW_NAME, (END_TIME - START_TIME) * 24 * 60 * 60 diff_seconds
FROM REP_WFLOW_RUN
WHERE WORKFLOW_NAME = 'myWorkflow'
You can then compare the above value with the benchmark value. The shell script can send a mail depending on the outcome.
You need to create another workflow to check this workflow.
Option 2 - if you do not have access to the metadata, follow the steps above except the metadata SQL.
Use pmcmd GetWorkflowDetails to check the status, start time, and end time of a workflow:
pmcmd GetWorkflowDetails -sv service -d domain -f folder myWorkflow
You can then grep the start and end time from the output and compare them with the benchmark values. The problem is the output format, etc., so you need a little bit of scripting here.
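The answer suggests a shell script; the same compare-and-mail step written in Python instead is sketched below, assuming the run time in seconds has already been obtained from either option, and using placeholder addresses and SMTP host:

# compare_runtime.py -- hypothetical helper for the compare-and-mail step
import smtplib
from email.message import EmailMessage

def report(workflow_name, diff_seconds, benchmark_seconds):
    if diff_seconds <= benchmark_seconds:
        status = "completed within time"
    else:
        status = "not completed within time"

    msg = EmailMessage()
    msg["Subject"] = f"Workflow {workflow_name}: {status}"
    msg["From"] = "etl-monitor@example.com"   # placeholder
    msg["To"] = "oncall@example.com"          # placeholder
    msg.set_content(f"{workflow_name} ran for {diff_seconds}s "
                    f"(benchmark {benchmark_seconds}s): {status}")

    with smtplib.SMTP("localhost") as smtp:   # placeholder SMTP host
        smtp.send_message(msg)

# report("myWorkflow", diff_seconds=5400, benchmark_seconds=3600)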

Is there a way to use a CLI to POST data to a Django 2 project?

I have a webapp that I created using Django2. At a high level, it will be used to process .tsv files of data and display them nicely on a screen.
I want to be able to have a command line interface where I can perform a POST request to the already running webapp, and essentially add data to a model, save it, and create a unique webpage to display that data. Something like:
uploadtodjangoapp <myfilename> --user='heidi' --other-options='....'
uploading myfilename to myapp!
done
see data here: www.mysite.com/info/myfilename
In this situation, the web app will already be running somewhere (either locally or on a VM).
Currently, I know you can create a form on the user interface to perform POST requests / collect user data. And I know you can also use python manage.py shell and do something like:
>> from myapp.models import mymodel
>> m = mymodel(data="some data here")
>> m.save()
.... but is this the only way?
Any help would be greatly appreciated!
You can easily achieve this using curl.
For example, in a terminal:
curl --data "field_1=data_1&field_2=data_2&field_3=data_3" <API FOR POST REQUEST>
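If you want the uploadtodjangoapp-style command from the question rather than raw curl, a small Python script using requests can wrap the same POST; the endpoint URL, field names, and response shape below are assumptions:

#!/usr/bin/env python3
# uploadtodjangoapp -- hypothetical CLI wrapper around the POST request
import argparse
import requests

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("filename")
    parser.add_argument("--user", required=True)
    args = parser.parse_args()

    print(f"uploading {args.filename} to myapp!")
    with open(args.filename, "rb") as tsv:
        response = requests.post(
            "https://www.mysite.com/api/upload/",   # assumed endpoint
            data={"user": args.user},
            files={"file": tsv},
        )
    response.raise_for_status()
    print("done")
    print("see data here:", response.json().get("url", ""))

if __name__ == "__main__":
    main()

curl can do the same multipart file upload with its -F/--form option instead of --data.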

Run Redshift Queries Periodically

I have started researching Redshift. It is defined as a "Database" service in AWS. From what I have learnt so far, we can create tables and ingest data from S3 or from external sources like Hive into a Redshift database (cluster). Also, we can use a JDBC connection to query these tables.
My questions are -
Is there a place within the Redshift cluster where we can store our queries and run them periodically (e.g. daily)?
Can we store our query in an S3 location and use that to create output to another S3 location?
Can we load a DB2 table unload file with a mixture of binary and string fields into Redshift directly, or do we need an intermediate process to turn the data into something like a CSV?
I have done some Googling about this. If you have link to resources, that will be very helpful. Thank you.
I used the cursor method from psycopg2 in Python. The sample code is given below. You have to set all the Redshift credentials in an env_vars file.
You can set your queries using cursor.execute. Here I mention one UPDATE query, so you can put your own query in its place (you can set multiple queries). After that, you have to add this Python file to crontab or any other scheduler to run your queries periodically.
import psycopg2

import env_vars

# Redshift credentials are kept in env_vars.
conn_string = "dbname=%s port=%s user=%s password=%s host=%s" % (
    env_vars.RedshiftVariables.REDSHIFT_DW, env_vars.RedshiftVariables.REDSHIFT_PORT,
    env_vars.RedshiftVariables.REDSHIFT_USERNAME, env_vars.RedshiftVariables.REDSHIFT_PASSWORD,
    env_vars.RedshiftVariables.REDSHIFT_HOST)

conn = psycopg2.connect(conn_string)
cursor = conn.cursor()
# Put your own query (or queries) here.
cursor.execute("""UPDATE database.demo_table SET Device_id = '123'
                  WHERE Device = 'IPHONE' OR Device = 'Apple';""")
conn.commit()
conn.close()
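For the scheduling part, a crontab entry along these lines (the path and schedule are placeholders) would run the script once a day at 06:00:

0 6 * * * /usr/bin/python3 /path/to/redshift_queries.py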

How to detect and respond to a database change (INSERT) from a django project?

I am setting up our project to integrate with a shipping platform called Endicia which has the ability to insert new rows into our database when a package is shipped.
How can I detect from python when a new row has been inserted?
My solution for now would be to query the DB every 30 seconds or so for new rows... is there another solution to send a signal from Postgres to Python?
You'd set up a custom command that is run by the manage.py file.
You'd put it in the yourapp/management/commands/ folder. Make sure to add an __init__.py file to both the management and commands folders or the command won't work. Then you create the code for the custom command.
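A minimal sketch of what that command could look like (the app name, command name, Shipment model, and processed flag are all assumptions for illustration):

# yourapp/management/commands/notify_new_shipments.py -- hypothetical example
from django.core.management.base import BaseCommand

from yourapp.models import Shipment  # hypothetical model that Endicia inserts into

class Command(BaseCommand):
    help = "React to newly inserted shipment rows"

    def handle(self, *args, **options):
        # Called by the database-side script whenever rows are inserted;
        # handle anything that has not been processed yet.
        for shipment in Shipment.objects.filter(processed=False):
            # ... notify users, update related records, etc. ...
            shipment.processed = True
            shipment.save(update_fields=["processed"])
        self.stdout.write("done")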
Then, see this related question about running a shell script when changes are made to a Postgres database. The answer there was to use PL/sh. You'll need to figure that part out on your own, but basically, however you do it, the end result is that the script should call something like /path/to/app/manage.py command_name.