I am using DataNitro to fill JSON data extracted from a URL (via a function) into an Excel file. But I want to update the Excel file every 3 seconds without manually re-running the function.
I have a task to access an API every minute and get data. I call the API using the cpr library (which is based on libcurl):
cpr::Get(cpr::Url{ URL })
And store the data in a vector:
std::vector<std::string> myData;
How can I make these requests every minute (or whatever time interval I set with chrono)?
Note that I would also like to work with this data later (for example, to exclude duplicates).
In JavaScript I would write something like setInterval.
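One way to get setInterval-like behavior (a minimal sketch, assuming a blocking loop is acceptable; the endpoint URL is hypothetical) is to sleep with std::this_thread::sleep_for between requests:

#include <cpr/cpr.h>
#include <algorithm>
#include <chrono>
#include <string>
#include <thread>
#include <vector>

int main() {
    const std::string URL = "https://example.com/api"; // hypothetical endpoint
    std::vector<std::string> myData;
    while (true) {
        cpr::Response r = cpr::Get(cpr::Url{ URL });
        if (r.status_code == 200) {
            myData.push_back(r.text); // keep the raw body for later processing
        }
        // Drop duplicates so the accumulated data can be processed later.
        std::sort(myData.begin(), myData.end());
        myData.erase(std::unique(myData.begin(), myData.end()), myData.end());
        std::this_thread::sleep_for(std::chrono::minutes(1)); // the interval
    }
}

If the rest of the application must stay responsive, the same loop can run on a std::thread instead of blocking main.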
I have made a process with a PL/SQL Function Body to calculate some numbers. The process only runs after saving changes a second time, which means I have to go into the report one more time and save changes (which I didn't make) in order for the process to complete. Is there a way to execute the process automatically after the first save?
Make a new process with a PL/SQL Function Body: change the process type from "Interactive Grid - Automatic Row Processing (DML)" to "PL/SQL Code", and set the Editable Region to table_name.
I have a very large query that is called from 3 different pages.
Instead of writing the same query in all 3 .cfm files, I am trying to find an alternative way to save the query (along with its #variable(s)#) in a Query.cfm file.
Query.cfm example :
SELECT *
FROM A
WHERE TRADE_DATE BETWEEN to_date('#f_startDate#','dd/mm/yyyy') AND to_date('#f_endDate#','dd/mm/yyyy')
Variables: #f_startDate# and #f_endDate#
Then I read the file contents, store them in a variable, and replace the #variable(s)# with actual values to run the query from each of the pages.
Calling page (code so far, which is not working):
<cffile action = "read" file = "#ExpandPath( './Query.cfm')#" variable = "Query">
<cfset Query = #ReplaceList(Query,"#f_startDate#,#f_endDate#", "01/01/2000,01/01/2002")#>
<cfquery name="Q_DailyPrice" datasource="#f_datasource#">
#PreserveSingleQuotes(Query)#
</cfquery>
How do I set the variable values in the string?
Further details about each page:
Returns the query results as JSON to load charts.
Used to generate the query data as an XLS file.
Used further to generate a subset of the query data (via a query of queries) to create a table.
Database : Oracle
Your options include:
Put the query in a .cfm template and access it with cfinclude
Put the query into a user defined function in a .cfm file. Then cfinclude the file and call the function.
Put the query into a user defined function in a .cfc file. Then you can either run the function with cfinvoke, or create an object and then call the function.
There are probably other options as well. I suggest looking at the three above and determining which one best meets your needs.
Whatever method you use, ColdFusion has a ParseDateTime function that will convert your strings to date objects. Using these might be faster than Oracle's to_date function; you'll have to test and see. In any event, use cfqueryparam for a variety of reasons.
Also, be careful about using BETWEEN with Oracle. Its DATE fields include a time component, and if any of your records have one, you are safer with:
where trade_date >= YourStartDate
and trade_date < TheDayAfterYourEndDate
How can I read a CSV file, parse the values, and then output it to a particular database table?
That's the basic problem.
Here is a 'bigger picture' of what I'm trying to do:
I'm trying to either read from multiple CSV files every minute, and/or read from an ever-growing CSV file (which gains additional rows with each update) every minute.
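For the read-and-parse half, a minimal C++ sketch (assuming simple comma-separated values with no quoting or embedded commas; the file name data.csv is hypothetical):

#include <fstream>
#include <sstream>
#include <string>
#include <vector>

int main() {
    std::ifstream in("data.csv"); // hypothetical file name
    std::string line;
    while (std::getline(in, line)) {            // one CSV row per line
        std::vector<std::string> fields;
        std::istringstream row(line);
        std::string field;
        while (std::getline(row, field, ','))   // split the row on commas
            fields.push_back(field);
        // Each 'fields' vector would then be bound to a parameterized
        // INSERT, or fed to a bulk-copy API as discussed further below.
    }
}

For the every-minute part, a sleep loop like the one shown earlier applies; re-reading an ever-growing file can be handled by remembering the last file offset between passes.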
I have a Qt application that reads a special text file, parses it, and inserts about 100,000 rows into a temporary table in a Firebird database. Then it starts a stored procedure to process this temporary table and apply some changes to permanent tables. Inserting 100,000 rows into the in-memory temporary table takes about 8 seconds on Firebird.
Now I need to implement the same behavior using MS SQL Server 2008. With simple serial inserts it takes about 76 seconds for 100,000 rows. Unfortunately, that's too slow. I looked at the following options:
Temporary tables (# and ##). Stored on disk in the tempdb database, so there is no speed increase.
Bulk Insert. Very nice insertion speed, but there is a need to have a client- or server-side shared folder.
Table variables. MSDN says: "Do not use table variables to store large amounts of data (more than 100 rows)."
So, please tell me: what is the right way to increase insertion speed from a client application to MS SQL Server 2008?
Thank you.
You can use the bulk copy operations available through OLE DB or ODBC interfaces.
This MSDN article seems to hold your hand through the process for ODBC:
Allocate an environment handle and a connection handle.
Set the SQL_COPT_SS_BCP connection attribute to SQL_BCP_ON to enable bulk copy operations.
Connect to SQL Server.
Call bcp_init to set the following information:
- The name of the table or view to bulk copy from or to.
- Specify NULL for the name of the data file.
- The name of a data file to receive any bulk copy error messages (specify NULL if you do not want a message file).
- The direction of the copy: DB_IN from the application to the view or table, or DB_OUT to the application from the table or view.
Call bcp_bind for each column in the bulk copy to bind the column to a program variable.
Fill the program variables with data, and call bcp_sendrow to send a row of data.
After several rows have been sent, call bcp_batch to checkpoint the rows already sent. It is good practice to call bcp_batch at least once per 1000 rows.
After all rows have been sent, call bcp_done to complete the operation.
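Put together, a minimal sketch of those steps (assuming the SQL Server Native Client headers are available; the connection string, the table dbo.TempRows, and its two-column layout are hypothetical, and error checking is omitted):

#include <windows.h>
#include <sql.h>
#include <sqlext.h>
#include <sqlncli.h> // bcp_* functions and SQL_COPT_SS_BCP (odbcss.h on older drivers)
#include <cstdio>

int main() {
    SQLHENV henv;
    SQLHDBC hdbc;
    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &henv);
    SQLSetEnvAttr(henv, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, henv, &hdbc);

    // Enable bulk copy BEFORE connecting.
    SQLSetConnectAttr(hdbc, SQL_COPT_SS_BCP, (SQLPOINTER)SQL_BCP_ON, SQL_IS_INTEGER);
    SQLDriverConnect(hdbc, NULL,
        (SQLCHAR*)"DRIVER={SQL Server Native Client 10.0};SERVER=myserver;"
                  "DATABASE=mydb;Trusted_Connection=yes;",   // hypothetical
        SQL_NTS, NULL, 0, NULL, SQL_DRIVER_NOPROMPT);

    // DB_IN: copy from program variables into the (hypothetical) table dbo.TempRows.
    bcp_init(hdbc, "dbo.TempRows", NULL, NULL, DB_IN);

    int id;
    char name[64];
    // Bind each target column to a program variable.
    bcp_bind(hdbc, (BYTE*)&id, 0, sizeof(id), NULL, 0, SQLINT4, 1);
    bcp_bind(hdbc, (BYTE*)name, 0, SQL_VARLEN_DATA, (BYTE*)"", 1, SQLCHARACTER, 2);

    for (int i = 0; i < 100000; ++i) {
        id = i;
        std::snprintf(name, sizeof(name), "row %d", i);
        bcp_sendrow(hdbc);
        if (i % 1000 == 999)
            bcp_batch(hdbc);   // checkpoint every 1000 rows, per the advice above
    }
    bcp_done(hdbc);            // flush the final batch and finish

    SQLDisconnect(hdbc);
    SQLFreeHandle(SQL_HANDLE_DBC, hdbc);
    SQLFreeHandle(SQL_HANDLE_ENV, henv);
    return 0;
}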
If you need a cross-platform implementation of the bulk copy functions, take a look at FreeTDS.