How to execute a batch of SQL statements in the QuestDB web console? - questdb

I have a bunch of INSERT statements copy-pasted into the web console.
I selected them and pressed the Run button.
But only the first one was actually applied.
How can I run all of them? Are there any shortcuts?

There is no way at the moment; I'd say it's a missing feature. You have to select and execute the statements one by one.

Related

Setup code for loopback under boot folder

I'm looking at the access control example: https://github.com/strongloop/loopback-example-access-control
It says we need to put the sample-models.js file under the server/boot folder. That means the creation process will run again every time I start the application, and of course I'm getting errors on the second run.
Should I add my own mechanism to disable it once it has run, or is there built-in functionality in LoopBack for this?
Boot scripts are for setting up the application, and they run once per application start.
So if a boot script does any initialization that is persisted, such as seeding the database, you need to check first whether that initialization has already been done.
For example, to initialize roles in the db, check whether the desired roles are already there and create them only if they are not.
There is no other functionality in LoopBack for this.
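A minimal sketch of such a role-seeding boot script (LoopBack 2.x/3.x style; the file name and role name below are just examples):

    // server/boot/create-roles.js
    module.exports = function (app) {
      var Role = app.models.Role;

      // findOrCreate only inserts the role when nothing matches the filter,
      // so this boot script is safe to run on every application start.
      Role.findOrCreate(
        { where: { name: 'admin' } }, // filter used to look for an existing role
        { name: 'admin' },            // data used only if nothing was found
        function (err, role, created) {
          if (err) throw err;
          console.log(created ? 'Created role "admin"' : 'Role "admin" already exists');
        }
      );
    };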

How to run SAS using batch if I do not have it locally

Is there a way to run SAS in batch if I don't have sas.exe on my machine?
My computer has SAS EG, but the code is run on our company's servers.
Thanks
If you are asking whether it is possible to run SAS in batch on your local machine without having SAS installed locally, the answer is no.
If you are using EG to connect to a SAS server and you want to execute a batch job on the SAS server, that is possible (just not with EG). For example, if you have terminal access to the SAS server via PuTTY or whatever, you can do a batch submit.
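On a UNIX SAS server, a batch submit from the shell looks something like this (the paths are just placeholders):

    nohup sas /home/myuser/myjob.sas -log /home/myuser/myjob.log -print /home/myuser/myjob.lst &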
Enterprise Guide is quite capable of scheduling jobs, whether or not you have a local SAS installation.
Wendy McHenry covers this well in Four Ways to Schedule SAS Tasks. Way 1 ('batch') is probably the one you are familiar with, but Ways 2 through 4 are all possible in server environments.
Way 2 is what I use, and it is specifically covered in Chris Hemedinger's post Doing More with SAS Enterprise Guide Automation. Since (I think) EG 4.3, Enterprise Guide has had a "Schedule ..." option in the File menu, as well as a right-click "Schedule ..." option on a process flow. These create VBScript files that can be scheduled with the normal Windows scheduler, and they let you schedule a process flow or a project to run unattended, even if it needs to connect to a server.
You need to make sure you can connect to that server using the credentials you'll schedule the job to run under, of course, and that any network connections are created even when you're not logged in interactively; other than that, it's quite simple to schedule the job. Once it has run, it saves the project with the updated log and results tabs.
If your company uses the full suite of server products, I would definitely recommend seeing if you can get Way 3 (using SAS Management Console) to work - that is likely easier than doing it through EG. That's how SAS would expect you to schedule jobs in that kind of environment, and it gives your SAS administrator better visibility into when the server will be more or less busy.

Sitecore: copy files at index update finish

In our Sitecore project we have set a manual policy for indexing. We need to copy the indexes to other servers after the rebuild. I know how to copy the files from one server to another, so my question is:
How can I set up a tool to run when the index rebuild is finished? Is this even possible?
We don't want to run the tool manually after each rebuild.
You can call code when certain events have started or finished.
In your case, one option is to hook a handler into the indexing:end event to start an xcopy command, for instance - or call your tool programmatically from there (see the sketch below).
Is there a reason why you can't keep the indexes up-to-date on the servers themselves?
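A rough sketch of what that hook could look like (the class, assembly, and paths below are made-up placeholders, and the exact event wiring depends on your Sitecore version):

    <!-- App_Config include patch registering a handler for indexing:end -->
    <configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
      <sitecore>
        <events>
          <event name="indexing:end">
            <handler type="MyProject.Indexing.CopyIndexes, MyProject" method="OnIndexingEnd" />
          </event>
        </events>
      </sitecore>
    </configuration>

and the handler class:

    using System;
    using System.Diagnostics;

    namespace MyProject.Indexing
    {
        public class CopyIndexes
        {
            // Called by Sitecore once the index rebuild has finished.
            public void OnIndexingEnd(object sender, EventArgs args)
            {
                // Shell out to xcopy; you could call your own copy tool here instead.
                Process.Start("xcopy",
                    @"""C:\inetpub\wwwroot\Data\indexes"" ""\\otherserver\indexshare"" /E /Y");
            }
        }
    }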

How to monitor an FTP upload directory in coldfusion without using event gateways?

Having spent a couple of hours coding an event gateway solution, I discover that they are not supported by CF standard edition. Buggerit! So back to the drawing board.
I can see how I can check the folder's dateLastModified attribute using cfdirectory, so I could run a scheduled task to see when a new file has been uploaded. But what's the best way of storing/comparing the file list so as to get a list of just the ones added since the last check?
General hints/links appreciated
Assuming that, for whatever reason, you can't use a gateway, the simplest solution that springs to mind is to move files you've processed to a separate directory. Then your scheduled task only has to deal with whatever is left in the FTP directory itself.
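Something along these lines (the directory paths are just placeholders):

    <!--- List whatever is currently sitting in the FTP directory --->
    <cfdirectory action="list" directory="D:\ftp\uploads" name="uploads">

    <cfloop query="uploads">
        <cfif uploads.type EQ "file">
            <!--- ... process the new file here ... --->
            <!--- then move it out of the way so it is not picked up again --->
            <cffile action="move"
                    source="D:\ftp\uploads\#uploads.name#"
                    destination="D:\ftp\processed\#uploads.name#">
        </cfif>
    </cfloop>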
"they are not supported by CF standard edition"
Are you still using CF7? Event gateways have been supported in CF Standard Edition since CF8.
As @Henry pointed out, you can use an Event Gateway.
If you decide not to use that approach, I'd suggest a ColdFusion scheduled task. The most foolproof algorithm for that task is storing the results of the last <cfdirectory/> call, either in a persistent scope - application or server - or by writing it out to a database or a file (e.g. WDDX). The reason to hold on to all of this information, rather than just a timestamp, is to handle situations where newly added or changed files do not take on the correct timestamp for whatever reason (a system clock being off comes to mind).
If you use a database to capture the data, you could use a MINUS or EXCEPT query (in Oracle or SQL Server, respectively) to determine what's new. Otherwise you'll need to perform some nested looping in ColdFusion over the old and new queries to generate the list of new files.
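A minimal sketch of that scheduled-task page using the application scope (the directory path and variable names are made up, and only file names are kept rather than the full cfdirectory results):

    <!--- What is in the FTP directory right now --->
    <cfdirectory action="list" directory="D:\ftp\uploads" name="currentFiles">

    <cflock scope="application" type="exclusive" timeout="10">
        <cfif NOT structKeyExists(application, "lastFileList")>
            <!--- First run: remember everything, report nothing as new --->
            <cfset application.lastFileList = valueList(currentFiles.name)>
        </cfif>

        <!--- Files present now but not in the stored list are the new ones --->
        <cfset newFiles = "">
        <cfloop query="currentFiles">
            <cfif NOT listFindNoCase(application.lastFileList, currentFiles.name)>
                <cfset newFiles = listAppend(newFiles, currentFiles.name)>
            </cfif>
        </cfloop>

        <!--- Replace the snapshot for the next run --->
        <cfset application.lastFileList = valueList(currentFiles.name)>
    </cflock>

    <!--- newFiles now holds the files added since the last check --->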

Coldfusion Report Builder - How can you set different datasources externally between prod/staging/dev?

Coldfusion Report Builder is great.
One small issue. We use ANT+CFANT to deploy.
We create the report against, say, a datasource called MyApp_Dev on a dev box.
Our other server is the production server. It also contains a staging build, to ensure everything is going smoothly before we publish to live. (Thanks to Al Everett for bringing this clarification to my attention.)
Everything works great when the report is created.
We deploy the report to our staging server, which has a datasource of MyApp_Staging. That server may or may not also have the live app running under MyApp_Live. Ant pushes the update to staging just fine.
Run the report, crashes and burns. Why?
It seems the report is looking for the MyApp_Dev datasource, even though the application is using the MyApp_Staging datasource.
In digging around I found a few approaches. I would like to do this one final, ideal way from the beginning, instead of having to go back and redo dozens of reports when I have a new Aha! moment.
1) Obvious: pass the datasource into the cfreport tag. This doesn't work for Report Builder reports as of v8, or v9 as tested on Linux.
2) Most realistic option (but painful) so far: pass the query in as an object to the Report Builder report. Let's think about this:
Create the report in Report Builder to my heart's content, using RDS etc. on my local box.
When I'm done, copy the query into a snippet of code, or into a database column, to be dynamically injected at runtime with the correct datasource.
Modify my "run report" event to fetch the query from the database column, insert it into another dynamic cfquery and potentially... evaluate (!?!) it? The upside is that I can set the cfquery datasource to whatever I need for each environment (see the sketch after this list).
Whenever I modify the report's columns in CF Report Builder, I also have to update the query in the database. Is there a snippet of code that can extract it for me? Hmm.
3) Less than ideal. Suck it up and let all the reports in staging run off the live server. Maybe copy the live data into staging (sans structural changes) to let it seem similar.
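For reference, a rough sketch of what option 2 could look like (the table, column, and datasource names here are made up):

    <!--- Fetch the SQL that was saved when the report was built --->
    <cfquery name="reportSql" datasource="#application.dsn#">
        SELECT query_text FROM report_queries WHERE report_name = 'MyReport'
    </cfquery>

    <!--- Run that SQL against whichever datasource this environment uses --->
    <cfquery name="reportData" datasource="#application.dsn#">
        #preserveSingleQuotes(reportSql.query_text)#
    </cfquery>

    <!--- Hand the query object to the Report Builder template --->
    <cfreport template="reports/MyReport.cfr" query="reportData" format="PDF" />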
Are there any eloquent ways to accomplish the above?
Thanks in Advance!
If you have different dev/staging/production boxes, why not just use the same datasource name on each? That'll save you from having the code figure out where it is.
Because security concerns at my current assignment preclude me from using RDS, I use option 2 as a matter of course. I also like it as it makes it easier to debug.