TOAD: Cannot query because it freezes on a current query

TOAD is frozen with a "Cancel pending" status. Can anyone please help me with this? I am not able to execute any query right now. :(

This usually happens when you cancel a query that takes a long time to execute. If it was updating something, there's a lot of undo to apply, so yes - it takes a while until TOAD cancels the operation.
There are two ways out of it (as far as I know):
wait until it is done
kill the TOAD process (on MS Windows, use Task Manager to do that) - don't just close the application window, end the process itself
If you saved the query, no problem. Otherwise, TOAD might offer to restore the last session's editor contents. If not, see whether F8 (SQL Recall history) returns something. If not, you'll have to write the query from scratch, I'm afraid.
The question is what the database will do: will it roll back or not? I can't tell; from my experience, it just depends. Sometimes the affected table remains locked, so you might need a DBA's assistance to kill the database session as well.

Fixing "A lock is not available" on Work (SAS)

I'm working with a complex set of SAS algorithms, created by a group outside of my company, to prepare a report required each year. Unfortunately, I am running into a file lock problem:
ERROR: A lock is not available for WORK._TEMP_OP_OTHER.DATA
I did have a similar issue last year, but it then appeared to be a somewhat random problem that cropped up (rarely) during execution. I reviewed the logs to see if the problem occurred, and if so cleaned up the output files and ran the algorithm again.
This year's report is consistently producing the error in the same place every time I run the algorithm. I have tried a couple of things to give the system more time in the hopes that the lock will become available: inserting a SLEEP command and also setting FILELOCKWAIT=n in libname statements. Neither has worked as I'd hoped.
FILELOCKWAIT seems like the most promising option, but when observing the execution of the algorithm and reviewing the logs it's clear that the process is failing immediately at that section, consistent with the default FILELOCKWAIT value of 0 seconds.
I am far from an expert in SAS, but I am wondering if I need to set FILELOCKWAIT for WORK, as that is where the lock issue is coming up. Is there a way to do this, and might it help my problem? If not, are there other options I could look into?
(Note: I am aware of the TRYLOCK macro, but want to introduce as few changes as possible to the algorithms I'm running. As mentioned above, they are complex and I am concerned about introducing unintended problems which may be difficult to notice, diagnose, and fix).

How to profile an OpenEdge database?

Is there a Progress profiling tool that allows me to see the queries executing against an OpenEdge database?
We're doing a migration from an OpenEdge database into a SQL database. In order to map the data correctly we'd like to run certain application reports on the OpenEdge database and see what database queries are being executed to retrieve the data.
Is this possible with some kind of Progress profiling tool (à la SQL Server Profiler)? Preferably free...
Progress is record-oriented, not set-oriented like SQL, so your reports aren't a single query or a set of queries; more likely they are a lot of record lookups combined with what you'd consider query-like operations.
Depending on the version you're running, there is a way to send a signal to the client to see what it is currently doing; however, doing so will almost certainly not give you enough information to discern what's going on "under the hood."
Long story short, you have two options. The first is to get a DataServer product so you can attach the Progress client to a SQL database - this will enable you to use a SQL database without losing the Progress functionality. The second is to get a copy of the program's source code to find out how the reports are structured.
Tim is quite right -- without the source code, looking at the queries is unlikely to provide you with much insight.
Nonetheless, there are some tools and capabilities that will provide information about queries. Probably the most useful for your purpose would be to specify something similar to:
-logentrytypes QryInfo -logginglevel 3 -clientlog "mylog.log"
at session startup.
You can use session triggers to identify almost anything done by any program, without modifying or having access to the source of those programs. Setting this up may be more work than it's worth for your purpose. We have a testing system built around this idea. One big flaw: triggers cannot be fired for CAN-FIND.

SQLite query progress bar

I am using SQLite from C++ and I want to implement a progress bar that will inform the user about the progress of a search.
Using sqlite3_progress_handler I can set a callback to be called every N virtual machine instructions. This is fine for an indeterminate progress bar that notifies the user the app is still working.
What I need is progress from 0 -> 100%. Can this be done?
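For reference, this is roughly what I have now - a pulse-style handler (the opcode interval and the UI call are placeholders):

#include <sqlite3.h>

// Pulse-style progress: fires roughly every 1000 VM opcodes just to
// show the app is still alive. Interval and UI call are placeholders.
static int pulse_cb(void*)
{
    // progressBar.pulse();  // hypothetical UI call
    return 0;                // non-zero would abort the running query
}

void install_pulse(sqlite3* db)
{
    sqlite3_progress_handler(db, 1000, pulse_cb, nullptr);
}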
I realize that this is a bit late, and that this question already has an accepted answer, but I think a little more information would be useful.
As the OP noted in the question, sqlite3_progress_handler can be configured to call the callback function every N VM instructions. This is not a monitor of elapsed time or of statement-level progress, but of the VM instructions that SQLite compiles the query into (which will do in a pinch).
By prefacing the query with 'EXPLAIN' and stepping through the results to get a row count, you will know how many VM instructions the query compiles to. There's your 100% figure.
BE SURE to read the caveats on the SQLite.org website about the EXPLAIN command, especially the part about not relying on its output format. For this situation, though, we're not concerned with the information in the results, only with the number of instructions.
As of SQLite version 3.24.0 (2018-06-04), the output format for the EXPLAIN QUERY PLAN command diverged significantly from the EXPLAIN command. EXPLAIN QUERY PLAN is no longer suitable for this use case; you have to specify the EXPLAIN command itself.
To be clear, the EXPLAIN command is not published as supporting the sqlite3_progress_handler() API. The fact that it can be used in this case is purely coincidental. You should always test that this works on whichever version of SQLite you are using.
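Putting it together, a rough sketch in C++ of what that can look like (untested; the opcode interval and the UI call are placeholders, and because loops re-execute opcodes the computed percentage should be clamped):

#include <sqlite3.h>
#include <string>

// Count the opcodes in the compiled query: each row returned by
// stepping "EXPLAIN <query>" is one VM instruction.
static int count_opcodes(sqlite3* db, const std::string& sql)
{
    sqlite3_stmt* stmt = nullptr;
    const std::string explain_sql = "EXPLAIN " + sql;
    if (sqlite3_prepare_v2(db, explain_sql.c_str(), -1, &stmt, nullptr) != SQLITE_OK)
        return -1;
    int n = 0;
    while (sqlite3_step(stmt) == SQLITE_ROW)
        ++n;
    sqlite3_finalize(stmt);
    return n;
}

struct ProgressState { int ticks = 0; int total_opcodes = 1; };

static const int kOpsPerCallback = 100;  // callback every 100 opcodes

static int progress_cb(void* arg)
{
    ProgressState* st = static_cast<ProgressState*>(arg);
    ++st->ticks;
    int pct = 100 * st->ticks * kOpsPerCallback / st->total_opcodes;
    if (pct > 100) pct = 100;   // loops re-execute opcodes, so clamp
    // updateProgressBar(pct);  // hypothetical UI call
    (void)pct;
    return 0;                   // non-zero cancels the query
}

// Usage: estimate the total, register the handler, then run the query.
//   ProgressState st;
//   st.total_opcodes = count_opcodes(db, sql);
//   sqlite3_progress_handler(db, kOpsPerCallback, progress_cb, &st);
//   ...prepare and step the real query as usual...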
It is not possible for the database to predict how much time (or how many VM instructions) a query will need.

How to deploy a .NET application that will expire after a certain time or number of uses

We would like to be able to create intermediate releases of our software that would time-bomb or expire after a fixed time or number of uses, in a way that could not easily be manipulated. We are using Visual C++ with mixed native and managed assemblies.
I imagine we may need to rely on a registry entry, but this seems insecure.
Can anyone offer some advice on how to do this?
I was working on a "trial-ware" solution a while back. It used a combination of registry keys, information stored at a fixed position in a flat file surrounded by junk data, and an option to reach out to a web service that would verify the installation back with the software creators.
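For illustration only, a minimal sketch of the flat-file part; the offset, XOR key, and layout here are invented, and a real scheme would be more elaborate:

#include <cstdint>
#include <cstdio>
#include <ctime>

// The expiry timestamp is XOR-obfuscated and stored at a fixed offset
// inside a file otherwise filled with random junk. Offset and key are
// invented for this sketch; the file is assumed to already exist.
static const long     kOffset = 0x1A7;
static const uint64_t kKey    = 0x5DEECE66DULL;

bool write_expiry(const char* path, std::time_t expiry)
{
    std::FILE* f = std::fopen(path, "r+b");
    if (!f) return false;
    uint64_t v = static_cast<uint64_t>(expiry) ^ kKey;
    std::fseek(f, kOffset, SEEK_SET);
    bool ok = std::fwrite(&v, sizeof v, 1, f) == 1;
    std::fclose(f);
    return ok;
}

bool read_expiry(const char* path, std::time_t* out)
{
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    uint64_t v = 0;
    std::fseek(f, kOffset, SEEK_SET);
    bool ok = std::fread(&v, sizeof v, 1, f) == 1;
    std::fclose(f);
    if (ok) *out = static_cast<std::time_t>(v ^ kKey);
    return ok;
}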
However, as FrustratedWithFormsDesigner stated, there is no 100% fool-proof way to do this. There is always a way that a hacker can get around whatever precautions you put in place.
If you are using a database for the application, it might be better to store an install date (datetime) and a use count (int), and then have code that checks those fields when the program is starting/loading/initializing. If they are past a certain count or time (which could also be stored in the DB), exit the program.
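A minimal sketch of that check, assuming the install time and use count have already been read back from the database (the field names and limits are made up):

#include <ctime>

// Returns true when the build should stop working. installTime and
// useCount are assumed to come from the application's database;
// maxUses and maxDays are whatever policy values you store with them.
bool has_expired(std::time_t installTime, int useCount,
                 int maxUses, double maxDays)
{
    double days = std::difftime(std::time(nullptr), installTime) / 86400.0;
    return useCount >= maxUses || days >= maxDays;
}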
This is very hard, if not impossible, to do in a foolproof way. In any event, there's nothing to stop somebody from removing and reinstalling the software (you do support that, right?).
If you cannot limit the functionality of these intermediate releases (a much better incentive for people to move to official bits), it might be more trouble than it's worth to implement such a scheme.
Set a variable to a specific date in the program. Then, every time the program runs, read the system date and check whether it is equal to or later than the stored date; if it is, start the expiry process and display a message or alert panel to the user.
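A minimal sketch of that check (the cutoff date is a placeholder):

#include <ctime>

// True once the system clock reaches the hard-coded cutoff date.
// Note this trusts the system date, which a user can set back.
bool past_expiry_date()
{
    std::tm cutoff = {};
    cutoff.tm_year = 2025 - 1900;  // tm_year counts from 1900
    cutoff.tm_mon  = 0;            // January (tm_mon is 0-based)
    cutoff.tm_mday = 1;
    return std::time(nullptr) >= std::mktime(&cutoff);
}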
Have the binary download a tiny bit of code on startup from one of your servers.
Keep track of the activation counter on the server; when the counter reaches the limit, return a piece of code that displays the 'sorry!' message.
You could deploy it as a ClickOnce application with a certificate that expires on a certain date. If I recall correctly, the app will fail to start after that date.
A couple of caveats:
The only option for the user may be to uninstall the app, which is a jerk move.
You will end up maintaining a ton of different deployments.
It will be a shock to the user as it will just happen without warning.

Windows Pre-Caching SQLite problem

SQLite is a great little database, but I am having an issue with it on Windows. It can take up to 50 seconds to perform a query on a 100MB database the first time the application is launched. Subsequent loads take 10% of that time.
After some discussions on the SQLite mailing list, I am told:
"The bug is in Windows. It aggressively pre-caches big database files -- reads in big chunks of the files -- to make it look as if programs like Outlook are better than they really are. Unfortunately although this speeds up some programs it makes others act jerky because they have no control over how much is read when they ask for just a few bytes of file."
This problem is compounded because there is no way to get progress information from SQLite while all this is happening, so my users think something is broken. (I could display a dummy progress report, but that feels really cheesy for a sharp tool.)
I believe there is a way to turn the pre-caching off globally, but is there some way around this programmatically?
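The best idea I've had so far is to warm the cache myself before opening the database, so that I can at least report real progress while the slow first read happens (the chunk size and UI call are placeholders):

#include <fstream>
#include <vector>

// Read the database file once, in chunks, before handing it to SQLite,
// so the OS cache is warm and we can report real progress meanwhile.
void warm_cache(const char* db_path)
{
    std::ifstream in(db_path, std::ios::binary);
    if (!in) return;
    in.seekg(0, std::ios::end);
    const std::streamoff total = in.tellg();
    in.seekg(0, std::ios::beg);
    if (total <= 0) return;

    std::vector<char> buf(1 << 20);  // 1 MB per read
    std::streamoff done = 0;
    while (in.read(buf.data(), static_cast<std::streamsize>(buf.size()))
           || in.gcount() > 0) {
        done += in.gcount();
        int pct = static_cast<int>(100 * done / total);
        // updateProgressBar(pct);  // hypothetical UI call
        (void)pct;
    }
}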
I don't know how to fix the caching problem, but 50 seconds sounds extreme. If the query itself takes 10% of that, that leaves 45 seconds to load a 100 MB file - barely over 2 MB/s. Even if Windows does read in the entire file in one go, that shouldn't take more than a couple of seconds at normal hard-drive speeds.
Is the file very fragmented or something?
It sounds to me like there's more than just precaching at play here.
I am having the same problem with my first query, too. The problem returns after the database has not been queried for a long time; it seems to be a memory-caching problem. My software runs 24/7, and the user performs the SELECT query only every once in a while. I am also performing the query on a database of the same size.