I'm looking for an approach to handling periodic updates (e.g. once a day) of some tabular data (e.g. a text file or a database). The GUI should be able to access this data at any time. I'm OK with storing the data on the local host or on a server, but for testing I will start with a local PC. The naive approach is to update the data every time the user opens the GUI. This works just fine, but in the future I need the data to be updated automatically. What is the right approach for this?
If you are storing data in a .txt file you can use QFileSystemWatcher, which emits the fileChanged signal whenever that particular file is changed. Based on this signal you can update your GUI.
A similar approach should be possible for databases too.
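For the text-file case, a minimal sketch using PyQt5 could look like the following. The file name `data.txt` and the `reload_table` refresh logic are assumptions for illustration, not from the question:

```python
# Minimal sketch (PyQt5): watch a text file and refresh the GUI whenever it
# changes on disk. "data.txt" and reload_table() are placeholders for your
# own data file and table-refresh logic.
import sys

from PyQt5.QtCore import QFileSystemWatcher
from PyQt5.QtWidgets import QApplication, QMainWindow, QPlainTextEdit


class MainWindow(QMainWindow):
    def __init__(self, path):
        super().__init__()
        self.path = path
        self.view = QPlainTextEdit()
        self.view.setReadOnly(True)
        self.setCentralWidget(self.view)

        # fileChanged fires whenever the watched file is modified.
        self.watcher = QFileSystemWatcher([path])
        self.watcher.fileChanged.connect(self.reload_table)
        self.reload_table()

    def reload_table(self, *_):
        # Some programs save by replacing the file, which drops the watch;
        # re-adding the path keeps future changes visible.
        if self.path not in self.watcher.files():
            self.watcher.addPath(self.path)
        with open(self.path) as f:
            self.view.setPlainText(f.read())


if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = MainWindow("data.txt")
    window.show()
    sys.exit(app.exec_())
```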
Is there anything (a system procedure, function or other mechanism) in SQL Server that provides the functionality of Oracle's DBMS_ALERT package (and DBMS_PIPE respectively)?
I work in a plant and I'm using an extension product of SQL Server called InSQL Server by Wonderware, which is specialized in gathering data from plant controllers and Human-Machine Interface (SCADA) software.
This system can record events happening in the plant (like a high-temperature alarm, for example). It stores sensor values in extension tables of SQL Server, and other less dense information in normal SQL Server tables.
I want to be able to alert some applications running on operator PCs that an event has been recorded in the database.
An after insert trigger in the events table seems to be a good place to put something equivalent to DBMS_ALERT (if it exists), to wake up other applications that are waiting for the specific alert and have the operators type in some data.
In other words - I want to be able to notify other processes (that have connection to SQL Server) that something has happened in the database.
All Wonderware (InSQL, now called AVEVA) Historian data is stored in the history blocks EXCEPT for the actual tag storage configuration and dedicated event data. The time series data for analog, discrete and string tags is NOT in SQL tables at all - unless someone is doing custom configuration to create tables of their own.
Where are you wanting these notifications to come up? Even though the historical data is NOT stored in SQL tables, Wonderware has extensive documentation on how to use SQL queries to retrieve the data appropriately (and check for whatever condition you are looking for).
You can easily build a stored procedure and configure it for a maintenance plan.
But are you just trying to alarm (provide notification) on the scada itself?
Or are you truly utilizing historical data (looking for a data trend - average, etc.)?
Or trying to send the notification to non-scada interfaces?
Depending on your specific answer, the scada itself should probably be able to do it.
But there is software that already does this type of thing: Win-911, SeQent and Scadatec are a couple of examples in the OT space. There are also things like Hip Link or even DeskAlert, which can connect to any SQL database via its own API.
So where does the info need to go (email, text, phone, desktop app...) and what is the real source of the data?
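If a plain polling fallback is acceptable instead of a true push alert, a rough Python sketch along these lines could watch the events table and notify an operator-facing app. The pyodbc connection string, the PlantEvents table and the notify() function are assumptions for illustration only:

```python
# Rough polling sketch (pull, not push): periodically query the events table
# for rows newer than the last one seen and alert the operator-facing app.
# The connection string, the PlantEvents table and notify() are assumptions.
import time

import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-insql-host;DATABASE=Runtime;Trusted_Connection=yes"
)


def notify(row):
    # Placeholder: pop up a dialog, send a message, write to a queue, etc.
    print("New event:", row.EventId, row.TagName, row.Value)


def poll_events(poll_seconds=5):
    last_id = 0
    conn = pyodbc.connect(CONN_STR)
    while True:
        cursor = conn.cursor()
        cursor.execute(
            "SELECT EventId, TagName, Value FROM PlantEvents "
            "WHERE EventId > ? ORDER BY EventId",
            last_id,
        )
        for row in cursor.fetchall():
            notify(row)
            last_id = row.EventId
        time.sleep(poll_seconds)


if __name__ == "__main__":
    poll_events()
```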
I am working on a web application which involves inventory management. My application uses DB2 as the database.
1) In my application, there is a module which just inserts records. This can happen at any time, since the records are entered by customers.
2) And there is another stand-alone module which reads and updates the records entered. This module never inserts records; it only updates existing ones. This module is scheduled, so it will run once an hour.
My question is: can the second module read and update records without an issue if the first module is inserting a record at the same time? I am not referring to the record being entered at that moment, but to the other records in the table that need processing. (Bottom line: when the first module inserts data, can my second module read and update data in separate rows of the same table at the same time?)
I am very new to DB2 and have heard about locking in DB2. That is why I raised this question.
Adding the following information about my application: both modules are written in Java. The second module is a Spring Boot application. The operating system is Windows.
Thank you in advance.
I have an Excel file whose data is refreshed by a third-party application.
Problem to solve: my Django web application should continuously monitor that Excel file and detect when it changes. Whenever there is a change, a particular section of the web page should be refreshed.
Could somebody please give suggestions to achieve this functionality?
Basically, you need a scheduled process (e.g. a cron job) to check whether the previously seen version of your file differs from the current one. This means you could read the content of the file (with `import csv` or with pandas, `import pandas as pd`) and store it somewhere (e.g. in a temporary file); then, on the schedule you define for your cron job, the content is read again and compared against what you previously stored. If the contents differ, you could use Ajax to refresh that section of your website (or use a real-time library to refresh it automatically), and store the new content again.
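A rough sketch of that checking step, written as a Django management command meant to be run from cron (the watched file path, the snapshot location and the command name are assumptions):

```python
# Rough sketch of the periodic check as a Django management command run from
# cron. The watched file path, the snapshot location and the command name
# (check_excel_file) are assumptions; adapt them to your project.
import hashlib
from pathlib import Path

from django.core.management.base import BaseCommand

WATCHED_FILE = Path("/data/report.xlsx")      # file refreshed by the third-party app
SNAPSHOT_FILE = Path("/tmp/report.xlsx.md5")  # where the previous hash is stored


class Command(BaseCommand):
    help = "Detect changes to the watched Excel file."

    def handle(self, *args, **options):
        current = hashlib.md5(WATCHED_FILE.read_bytes()).hexdigest()
        previous = SNAPSHOT_FILE.read_text() if SNAPSHOT_FILE.exists() else ""

        if current != previous:
            SNAPSHOT_FILE.write_text(current)
            # Mark the change somewhere the front end can see it, e.g. a flag
            # in the database that an Ajax poll checks, or a push via Channels.
            self.stdout.write("File changed")
        else:
            self.stdout.write("No change")
```

Schedule it with something like `*/5 * * * * python manage.py check_excel_file`, and let the front end poll a small view over Ajax that reports whether the file has changed since the page was loaded.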
I have set up Fabric as per the instructions using Docker and everything works fine. I have a chaincode which stores a value in the world state, which I can read afterwards using a query method.
My scenario is something like this: I submit multiple separate requests within a short period of time to store different data in the world state. Within each request I need to read the data just submitted previously. However, I am unable to read the most recently submitted data.
My understanding is that this might be because that data is not stored in the blockchain yet and hence cannot be read. With this understanding, I introduced a sleep function to wait a few seconds, to give enough time for the previously submitted data to be included in the blockchain. However, this approach was not successful.
So I am wondering if there is any way to read the previous data just after storing the subsequent data.
Thanks,
Ripul
Waiting a few seconds in the chaincode would not be sufficient. Data that is 'written' in chaincode is not yet committed to the database; it is only a proposal to write something to the database at that point. Only committed data can be read back in chaincode. Therefore, after you make an update in chaincode and get the proposal response, you must submit the transaction to ordering. It may take a few seconds for the orderer to cut the block, distribute it to the peers, and have the peers commit the data. Only then can the data be read back in chaincode.
If you must read the data that you just wrote within the same chaincode function, then you will need to keep a map of the data that has been written and retrieve the value from the map rather than from the committed database.
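The pattern looks roughly like this (Python is used purely as illustration; Fabric chaincode is normally written in Go, Java or Node, and `put_state`/`get_state` below are stand-ins for the real shim calls):

```python
# Illustration only, not real chaincode: put_state/get_state stand in for the
# chaincode shim's PutState/GetState. The point is the read-your-writes map.
class WriteThroughState:
    def __init__(self, stub):
        self.stub = stub
        self.pending = {}  # keys written by this transaction, not yet committed

    def put(self, key, value):
        self.pending[key] = value        # remember what this transaction wrote
        self.stub.put_state(key, value)  # add it to the transaction's write set

    def get(self, key):
        # A read in the same transaction must come from the pending map,
        # because the committed world state will not contain the value until
        # the block has been ordered and committed by the peers.
        if key in self.pending:
            return self.pending[key]
        return self.stub.get_state(key)
```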
I have set up my database in Django, and it holds a huge amount of data. The task is to download all the data at once in CSV format. The problem I am facing is that when the data size is up to 2,000 table rows I am able to download it, but when the number of rows goes above 5k it throws a "Gateway Timeout" error. How can I handle this issue? There is no table indexing as of now.
Also, when there are 2K rows available, it takes around 18 seconds to download. How can this be optimized?
First, make sure the code that is generating the CSV is as optimized as possible.
Next, the gateway timeout is coming from your front-end proxy, so simply increase the timeout there.
However, this is a temporary reprieve - as your data set grows, this timeout will be exhausted and you'll keep getting these errors.
The permanent solution is to trigger a separate process to generate the CSV in the background, and then download it once it's finished. You can do this by using Celery or RQ, which are both ways to queue tasks for execution (and collect the results at a later time).
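For example, a rough Celery sketch (the `Order` model, its fields and the output path are placeholders, not from the question):

```python
# Rough sketch of the background-export idea with Celery. The Order model,
# its fields and the output path are placeholders for your own project.
import csv

from celery import shared_task

from myapp.models import Order  # hypothetical model


@shared_task
def export_orders_csv(output_path="/tmp/orders.csv"):
    # The CSV is written on a worker, outside the request/response cycle,
    # so the HTTP request never runs long enough to hit the gateway timeout.
    with open(output_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "customer", "total"])
        # iterator() streams rows from the database instead of loading
        # the whole table into memory at once.
        for order in Order.objects.iterator():
            writer.writerow([order.id, order.customer, order.total])
    return output_path
```

The view would then call `export_orders_csv.delay()` to kick the job off, and a second view (or a polling Ajax call) can serve the finished file once the task has completed.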
If you are currently using HttpResponse from django.http then you could try using StreamingHttpResponse instead.
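Roughly like this, following the streaming-CSV pattern from the Django docs (the `Order` model and its fields are placeholders):

```python
# Rough sketch of streaming the CSV instead of building the whole response in
# memory, following the pattern in the Django docs. The Order model and its
# fields are placeholders.
import csv

from django.http import StreamingHttpResponse

from myapp.models import Order  # hypothetical model


class Echo:
    """Pseudo-buffer: csv.writer only needs a write() method, so pass rows through."""

    def write(self, value):
        return value


def export_csv(request):
    writer = csv.writer(Echo())
    rows = (
        [order.id, order.customer, order.total]
        for order in Order.objects.iterator()
    )
    response = StreamingHttpResponse(
        (writer.writerow(row) for row in rows),
        content_type="text/csv",
    )
    response["Content-Disposition"] = 'attachment; filename="export.csv"'
    return response
```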
Failing that, you could try querying the database directly. For example, if you use the MySQL database backend, these answers might help you:
dump-a-mysql-database-to-a-plaintext-csv-backup-from-the-command-line
As for the speed of the transaction, you could experiment with other database backends. However, if you need to do this often enough for the speed to be a major issue then there may be something else in the larger process which should be optimized instead.