SQL Server Profiler: How to inspect/understand the duration for the Audit Logout event in more detail? - profiling

I am profiling some MS SQL queries with the SQL Server Profiler for my C# Application that I develop with Visual Studio and IIS Express:
The duration that is given for the event "Audit Logout" (16876 ms) is the total time between login and logout. The duration for the query is only 60 ms.
Login/Logout events are related to connection setup and tear-down.
(From What is "Audit Logout" in SQL Server Profiler?)
I would like to understand the time difference of 16816 ms (= 16876 ms - 60ms) in more detail.
a) Is it possible to log more events (like a "debug mode")?
b) Is it right to assume that the time difference is only due to tearing down because the end time of the "Audit Login" event is the same as the start time of the query execution?
c) Is there some other tool for analyzing (setup and) tear down times?
d) Does the time difference depend on my query? In other words: would optimizing the query also help to reduce that time difference?
What I have observed so far for #DevTime is that it makes a difference whether I start my application for the first time (IIS Express is started by Visual Studio, the database is created via Entity Framework, and example data is written to the database) or for the second time, when the database already exists.
For a login after the first start, the time difference is about 15 s larger than for a login after the second start. The query marked in the example above is executed after user login. Therefore I would expect that the initialization of the database has already finished at that time and would have no effect on the time difference. Nevertheless, it seems to have an influence.
Some related articles:
What is "Audit Logout" in SQL Server Profiler?
Fixing slow initial load for IIS
SQL Server audit logout creates huge number of reads
https://learn.microsoft.com/de-de/sql/relational-databases/event-classes/audit-logout-event-class
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/84ecfe9e-ff0e-4fc5-962b-cffdcbc619ee/audit-logout-event-seems-slow-on-occasion?forum=sqldatabaseengine

When you start SQL Server Profiler, a Trace Properties window is shown.
The events to capture can be selected on the second tab, Events Selection.
Activate the option Show all events.
To log more events, enable, for example, the option "Showplan XML For Query Compile" in the "Performance" section.
Also see How to determine what is compiling in SQL Server
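As an alternative to the Profiler GUI, the same kind of information can be captured with an Extended Events session. This is only a minimal sketch, assuming SQL Server 2012 or later; the session name and file path are illustrative, and the compile showplan event is expensive, so only keep such a session running for short diagnostic periods:
-- Illustrative Extended Events session: captures login/logout plus plan-compilation events,
-- roughly matching "Audit Login"/"Audit Logout" and "Showplan XML For Query Compile" in Profiler.
CREATE EVENT SESSION [trace_login_logout_compile] ON SERVER
ADD EVENT sqlserver.login(ACTION(sqlserver.client_app_name, sqlserver.session_id)),
ADD EVENT sqlserver.logout(ACTION(sqlserver.client_app_name, sqlserver.session_id)),
ADD EVENT sqlserver.query_post_compilation_showplan    -- high overhead, enable only briefly
ADD TARGET package0.event_file(SET filename = N'C:\Temp\trace_login_logout_compile.xel');
GO
ALTER EVENT SESSION [trace_login_logout_compile] ON SERVER STATE = START;
-- ... reproduce the scenario, then:
ALTER EVENT SESSION [trace_login_logout_compile] ON SERVER STATE = STOP;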

I don't have a solution with your recommendations; the database still has the problem. All the queries have good processing times; only the Audit Logout event constantly shows a long duration. The network is also good. I think this logout time affects the efficiency.

Related

"Error: error - Service Unavailable" while running any job via oracle Apex Page

I am executing an external job using DBMS_SCHEDULER from an APEX page by clicking a button, in the manner below (Dynamic action => Execute PL/SQL):
dbms_scheduler.run_job(job_name => 'APEXDATA.myJobName', use_current_session=> TRUE);
It executes the external job correctly (taking 1-2 minutes). My issue is that while it is executing, I cannot access any other page or log in with a new session; the error below is shown for every task I perform.
503 Service Unavailable
The connection pool named: |apex|| is not correctly configured, due to the following error(s):
Exception occurred while getting connection: oracle.ucp.UniversalConnectionPoolException:
All connections in the Universal Connection Pool are in use
Is this a general or known issue? If so, how can it be resolved? Other users also have to perform tasks or may log in at the same time.
Thank You.
I think you're mixing two things that are hard to combine:
Dynamic actions are designed to run code from the page without a page submit, so the user can continue to work on the page after triggering something (e.g. running PL/SQL code).
Running a process in the database that occupies the database session until it is completed (use_current_session => TRUE). Your dbms_scheduler.run_job process will run in the current session, and as long as that job is running, no other operations can be run in that database session (the connection is in use, as shown in the error message).
Solutions:
Use use_current_session => FALSE so the job runs in the background.
In the dynamic action, set "Wait for result" to true, so the user is forced to wait until the job completes.
Execute the job on page submit which will also force the user to wait for the job to be completed.
Since your job takes 1-2 mins to complete, options 2 and 3 are probably not feasible because the user experience is not optimal. If you execute the job in the background, then you probably need to write some additional code to prevent the user from clicking a couple of times and submitting the job multiple times. You could do that by checking if the job is running before you submit it and not submit it if it is currently running.
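A minimal sketch of the first option combined with such a check, assuming the job is owned by APEXDATA and is visible to the APEX parsing schema through all_scheduler_running_jobs; the job name comes from the question, everything else is illustrative:
-- Illustrative PL/SQL for the dynamic action (Execute PL/SQL Code):
-- submit the job in the background and skip submission if it is already running.
DECLARE
  l_running PLS_INTEGER;
BEGIN
  SELECT COUNT(*)
    INTO l_running
    FROM all_scheduler_running_jobs
   WHERE owner    = 'APEXDATA'
     AND job_name = 'MYJOBNAME';          -- the scheduler stores job names in upper case by default
  IF l_running = 0 THEN
    dbms_scheduler.run_job(
      job_name            => 'APEXDATA.myJobName',
      use_current_session => FALSE);      -- run in the background, do not block the session
  END IF;
END;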

"Zombie Requests" CFQUERY tags get stuck and are unkillable

Coldfusion 2016
Microsoft Server 2012
Oracle 12
ODBC connection
I turned on profiling and monitoring, and now I can see that there are requests that are stuck and cannot be terminated by the CF monitor; some are over 200k seconds.
I know I can increase the number of simultaneous requests but I want to solve the underlying problem. As I read the stack traces of these “zombie requests” they are getting stuck on and some are in but some are not. I ran the query in my oracle client and they resolve instantly.
Is there a way to terminate these requests or prevent this from happening at all?
EDIT: The server monitor does not treat these requests as slow or hung, and the alerts are not triggering for any of them. Honestly, they should have been going off constantly considering how many of these there are.
Also, the execution time is a mere .003 seconds so what happened? Why doesn't ColdFusion know this?
An example of a "zombie"
The active query that is stuck
We have a similar situation with a different database engine - redbrick, which runs on a unix server. We solved it as follows.
We set up a cron job on the database server to run every 5 minutes. This job uses a combination of unix and awk commands.
This job runs a query against the system table, looking for queries that have been running for more than 120 seconds under the database account used by ColdFusion. Matching records are written to a file. Something like this:
print "alter system cancel user command userName process " $1 ";"
$1 comes from the query and is the process Id we want to stop.
Then we run the file, which executes all those alter system commands.
With a different database engine, and possible different OS for the database server, the details would be different, but the approach should work.
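For the Oracle 12 database in the question, a roughly equivalent query could generate the kill commands from v$session. This is only a sketch (the account name and the 120-second threshold are assumptions), and killing sessions that ColdFusion still considers active should be tested carefully:
-- Illustrative Oracle adaptation: list sessions of the ColdFusion account that have been
-- active in their current call for more than 120 seconds, and generate kill statements for them.
SELECT 'ALTER SYSTEM KILL SESSION ''' || sid || ',' || serial# || ''' IMMEDIATE;' AS kill_stmt
  FROM v$session
 WHERE username     = 'CF_APP_USER'   -- the account ColdFusion connects with (assumption)
   AND status       = 'ACTIVE'
   AND last_call_et > 120;            -- seconds since the current call started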
Edit Starts Here
To prevent recurrence, look at the pages that call the ones with the long running queries. If impatient users are able to repeatedly click something because nothing is happening, do something about that. You can use javascript to make the link/button go away. Alternatively, you can go to an intermediate page with a display for the user and something that carries them through to the real page.

Gracefully terminate a request based service on server

In our web application, a lot of computation happens on the back end for each HTTP request. The output can take anywhere from 10 seconds to 1 hour. In the meantime, while it is being computed, "Waiting..." is shown on the website for the respective user.
But it can happen that a user abandons or cancels the request partway through. So what can be done on the back end so that the computation can be stopped in between to save resources? What different tactics can be applied here?
And preferably (instead of killing the thread directly), a graceful termination policy would work wonders.
I'm not sure if this fits your scenario but here is how I have tackled this issue in the past. We were generating pdf reports for a web-app. Most reports could be generated in under 5 seconds but some would take up to an hour.
When the User clicks on generate button we redirect them to a "Generating..." dialog screen which has a sort of progress bar and a Cancel button. This also launches the generate process on the server in a separate thread (we have a worker pool). The browser then polls the server regularly via ajax to check on the progress (either update the progress bar or redirect to the display page when finished).
The synchronization at the server between the generating process and the ajax process was done via a process synchronization object. The sync-obj was a very simple class instance which could be retrieved quickly from any thread at any time via some unique string.
Both processes could update this shared sync-obj. As the report was generated, the repgen thread would update the sync-obj, and the ajax thread would relay the progress to the browser. If the User clicked the Cancel button, the ajax thread would set the "cancel" flag in the sync-obj, and the repgen thread would pick that up and break out of the generate loop.
Clearly the responsiveness of the whole process depends a lot on how frequently the repgen thread checks the sync-obj and that often comes down to how the individual report was coded.
Finally, to answer your question, if the User gets bored and goes "back" and clicks the generate button again we do not cancel the first report and start a second but rather realise that it is the same report (and the same sync-obj id) and so just let the report continue. However if that does not suit your scenario then starting a generate process could cancel the first in the same manner that the User could via the Cancel button.

Operation "timing out" during new item creation in Sitecore Editor

I've created a command in the Sitecore Editor that automatically builds out up to 25 items at a time. The problem that I'm experiencing is that the operation just "hangs" and does not complete. I don't think it's an error because I've added error handling and logging.
I'm getting the following error message "The operation could not be completed. Your session may have been lost due to a time-out or a server failure. Try again."
How can I increase the "time-out" duration (if this is a setting somewhere) - or is there another solution to this problem?
Long-running operations will eventually time out depending on your IIS settings, usually after 20 minutes. Instead, you should run your command as a scheduled task, since scheduled tasks run in the background without waiting on the IIS request.
However, it seems strange that inserting 25 items is such a long operation that the browser times out. You might have another issue in your code.

Django/Postgres performance worsening after repeatedly processing the same query

I am running Django on Apache. I have several client computers which call urllib2.urlopen() and send over some data, which my server processes and immediately sends back a reply. However, when testing this I found a very tricky issue. I have one client repeatedly send the same data to be processed. The first time, it takes around ~20 seconds; the second time, about 40 seconds; the third time I get a 504 (gateway timeout) error. If I try to send more data, further 504 errors randomly pop up. I am pretty sure this is an issue with Postgres, as the function that processes the information makes many database calls; however, I do not know why the performance of Postgres would decline so much. I have tried several database optimization tricks, including this one (http://stackoverflow.com/questions/1125504/django-persistent-database-connection), to no avail.
Thanks in advance.
Edit: The requests are not coming concurrently. They are coming in back to back and each query involves a lot of SELECTs and JOINs, and there are a few INSERTs and UPDATEs as well. The apache error logs show that it is just a simple timeout, where the function to process the client posted data takes over 90 seconds.
If it's really Postgres, then you should turn on the logging of slow statements in the Postgres configuration to find out which statement exactly is taking so much time.
This can be done by setting the configuration parameter log_min_duration_statement.
Details are in the manual:
http://www.postgresql.org/docs/current/static/runtime-config-logging.html#GUC-LOG-MIN-DURATION-STATEMENT
You say the function makes "many database calls" so I'd start with a very low number, or even 0 to log the duration of all statements, then you might be able to identify the slow ones.
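A minimal sketch of turning that on, assuming a reasonably recent PostgreSQL where ALTER SYSTEM is available (on older versions, set the parameter in postgresql.conf instead) and a superuser connection:
-- Log the duration of every statement (0 ms threshold); raise the value later,
-- e.g. to 500, to log only statements slower than half a second.
ALTER SYSTEM SET log_min_duration_statement = 0;
SELECT pg_reload_conf();   -- apply the change without restarting the server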
It could also be a locking issue. Maybe the first call does not end its transaction properly and subsequent calls run into a timeout while waiting for a resource.
You can verify this by checking the system view pg_locks after the first call.
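For example, a quick check for lock requests that have not been granted while the second request is waiting (only a sketch; the join to pg_stat_activity for the waiting query text assumes the pid and query column names of PostgreSQL 9.2 or later):
-- Show ungranted lock requests together with the query that is waiting on them.
SELECT l.locktype,
       l.relation::regclass AS relation,
       l.mode,
       a.pid,
       a.query AS waiting_query
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.pid = l.pid
 WHERE NOT l.granted;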
Have you checked the Apache error logs? Have you set Django DEBUG = True or ADMINS = ('email@addr.com',) so you can get a detailed error report about the actual cause of the issue? If so, how about pasting some information here.
Why are you certain that it's postgres? Have you done diagnostics to come to that conclusion? If so, please let us know.
Are you running apache with mod_wsgi? How many processes and threads have you allocated to your django application?
Also, 20 seconds to process the first transaction is a huge amount of time. Perhaps you could show us the view code that is causing the time out. We may be able to help there.
I sincerely doubt that it's going to be postgres alone that is causing the issue. It probably has something to do with application code, or server configuration.