I have a fairly large Windows application (about 10 years old, written in C++) which works with SQL2000 Express (MSDE). It uses the database pretty extensively, but doesn't have performance issues. Due to SQL2000 MSDE compatibility issues with Windows 7/8, I want to migrate the application to SQL2014 Express.
All database access code is written in T-SQL, and as such the application migrates to SQL2014 without any code changes and all features work as expected. Except that it is so badly slow it makes no sense to use the application under SQL2014. All select/update/insert queries take about 5-20 times longer to execute.
These are the connection strings I tried:
Provider=SQLOLEDB;Data Source=localhost\app;User ID=app_user;Password=password;
Provider=SQLOLEDB;Data Source=localhost\app;Trusted_Connection=yes;
I don't convert the SQL2000 database to 2014, as the application creates a new database from scratch from scripts on its first run. Nothing fails, the default DB size is 12MB, and the schema is pretty well optimised.
I also tried the same under SQL2008R2 Express - it's as slow as SQL2014 Express. Tried different PCs under Windows 7/8/8.1 - all the same.
The main detail I noticed is that when I run the application under SQL2014, the most CPU-consuming process in Windows Task Manager is "Local Security Authority Process" (lsass.exe). This process doesn't appear in Task Manager at all when I run it under SQL2000 MSDE, and the application runs much faster. I guess LSA may be doing very heavy processing of my "open connection" requests, but I don't know what to do about it.
The application is written in a way that it doesn't keep connections open, but creates them on demand and then releases them. I also tried running the SQL2014 service under different accounts; it made no difference.
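For illustration, the open-on-demand pattern looks roughly like this (a sketch, not the actual application code; it uses ADO via #import and assumes COM is already initialised on the thread). The "OLE DB Services" keyword is only there to make the provider's session pooling explicit, since pooling is what keeps the already-authenticated physical connection alive between Close() and the next Open(), so the per-open security handshake that seems to keep LSA busy isn't repeated for every query:

    // Illustrative sketch only (not the actual application code): ADO via #import,
    // open-on-demand with the connection string from the question. Assumes
    // CoInitialize(NULL) has already been called on this thread.
    #import "C:\\Program Files\\Common Files\\System\\ado\\msado15.dll" \
        no_namespace rename("EOF", "adoEOF")

    void RunOneQuery()
    {
        _ConnectionPtr conn(__uuidof(Connection));
        // "OLE DB Services=-1" explicitly asks the provider for resource/session
        // pooling, so the physical, already-authenticated connection is kept alive
        // between Close() and the next Open() with the same connection string.
        conn->Open(L"Provider=SQLOLEDB;Data Source=localhost\\app;"
                   L"User ID=app_user;Password=password;OLE DB Services=-1;",
                   L"", L"", adConnectUnspecified);
        conn->Execute(L"SELECT 1", nullptr, adCmdText);
        conn->Close();   // releases the session back to the pool, not the wire connection
    }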
Typically, lsass.exe (LSA) is used by IPSEC Services (PolicyAgent), Protected Storage (ProtectedStorage), and the Security Accounts Manager (SamSs).
Try disabling IPSEC Services (PolicyAgent).
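If you want to make that change from code rather than through services.msc, a minimal Win32 sketch (run it elevated; error handling kept short) could look like this; PolicyAgent is the service name behind "IPSEC Services":

    // Minimal Win32 sketch (must run elevated): set the PolicyAgent (IPSEC Services)
    // service to "disabled" so it no longer starts with Windows.
    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        // Connect to the Service Control Manager.
        SC_HANDLE scm = OpenSCManagerW(nullptr, nullptr, SC_MANAGER_CONNECT);
        if (!scm) { printf("OpenSCManager failed: %lu\n", GetLastError()); return 1; }

        // "PolicyAgent" is the service name behind "IPSEC Services".
        SC_HANDLE svc = OpenServiceW(scm, L"PolicyAgent", SERVICE_CHANGE_CONFIG);
        if (!svc) { printf("OpenService failed: %lu\n", GetLastError()); CloseServiceHandle(scm); return 1; }

        // Change only the start type to "disabled"; leave everything else untouched.
        BOOL ok = ChangeServiceConfigW(svc, SERVICE_NO_CHANGE, SERVICE_DISABLED, SERVICE_NO_CHANGE,
                                       nullptr, nullptr, nullptr, nullptr, nullptr, nullptr, nullptr);
        if (ok) printf("PolicyAgent set to disabled; it will no longer start with Windows.\n");
        else    printf("ChangeServiceConfig failed: %lu\n", GetLastError());

        CloseServiceHandle(svc);
        CloseServiceHandle(scm);
        return ok ? 0 : 1;
    }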
Related
Recently we upgraded to ColdFusion 11 Enterprise and noticed that the full-fledged sandbox security tends to have a way bigger overhead than the Standard edition (CF10).
What can one do to make an existing CF app perform well with sandbox security?
Here are my findings so far:
Install VisualVM, and enable remote JMX access for it by adding -Dcom.sun.management.jmxremote.port=8701 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false to CF Admin's JVM arguments. Learn how to use it, and pay special attention to the CPU Snapshot & Hotspot tab. http://boncode.blogspot.ca/2010/04/cf-java-using-free-visualvm-tool-to.html. FYI, the CF Server Monitor in the Enterprise edition is utterly useless here because its memory/performance profiling overhead is far too big to be viable on a live production server, and it doesn't perform well enough under load to give you any useful data about what could be going wrong.
Disable IPv6, and add [serverip] [serverip] to the OS's hosts file to speed up the reverse DNS lookup that the Security Manager triggers when creating a new physical DB connection. See: On Linux, Java issues reverse DNS lookups when a socket is opened. Why, and how can I stop it? (FYI, Windows is affected too.)
Remove as many <cfmodule> and <cfinclude> calls as possible, as they end up making many java.io.File.canRead() and java.io.File.exists() calls, which stress the disk IO under load. Even an SSD suffers under load. I have tried Trusted Cache and it does not help. Instead, try using cached CFCs in the application scope, and make sure the code is thread safe and local-var'ed.
Eliminate the use of <cfinterface>, inheritance with extends, and getMetaData() as much as possible, as they eventually call java.io.File.lastModified(), which stresses the disk IO under load. Bug?
Eliminate the use of access="package", as it results in many java.security.AccessController.checkPermission calls.
The fewer objects per request the better, as each object instantiation carries a higher cost from the extra java.security.AccessController.checkPermission call.
I have a web site that exposes a web service to all my desktop clients.
Randomly, these clients will invoke the web service, which in turn adds a message (a JPEG in byte-array format) to the MSMQ.
I have a service application that reads from this queue and performs an enhancement on this jpeg and saves it to the hard drive.
The number of clients uploading at anyone time is unpredictable.
I chose this method because I do not want to put any strain on IIS. The enhancement my service application performs doesn't take much effort, but it exists nevertheless.
However, after realizing that my service application had stopped for some time and required restarting, I noticed the RAM leap up as it cleared the backlog. While I have corrected this and the service is now coded to restart automatically on failure, I surmised that a backlog could also build up at busy times, which again gives higher RAM usage.
Now, should I just do all the processing within my web service and then save to the hard drive, or am I correct in using MSMQ?
I am using C# and ASP.NET.
Is it possible that a database (connected to ColdFusion 9 via a datasource connection) being unavailable could cause ColdFusion to become unresponsive? (The database is used for a singular one-off lightly-trafficked app.)
Recently, maintenance on a connected Oracle database (Oracle JDBC) has caused that database to be unavailable two different times. Coincidentally, at both these times, ColdFusion pages on our site became unavailable or terribly slow to load (static HTML pages seemed to load fine, for the most part). Restarting the ColdFusion application server service would fix the problem, but only for minutes. The first time, during a period when the application server was responsive, we unchecked the "Maintain connections" checkbox. I'm not sure this had any effect; shortly afterwards the Oracle database came back online, and we didn't seem to have the problem any more.
The second time that database was offline, we experienced a very similar issue with our website - ColdFusion pages becoming reaaaally slow or unavailable altogether. During one of the times when I could access the CF administrator, I updated the datasource and checked "Disable connections". Then I stopped and restarted both the CF ODBC agent and ODBC server services. After that, the problem seemed to stop, but I don't know enough to know if this is causation or coincidence.
Anyone have insights on this?
Server setup: Windows Server 2003 SP2, ColdFusion 9, IIS 6
There are a number of ways to slow a database to a crawl, if not stop it completely. If, for example, hackers are attacking your database through port 1433 with attempted logins several times a second, that can slow it down, and if they get in they can of course do whatever they want. When this happened to me I found a record of the attacks in the event logs; the solution is better network security that intercepts such attacks and never lets them actually talk to the database. Or, if your site is vulnerable to SQL injection attacks, hackers could be messing with your database that way too, but network security wouldn't necessarily help in that case.

It doesn't require hackers to degrade the performance of your database, however. You could be having a problem with the allocated disk space for transaction logs or indexes filling up, or, heaven forbid, an imminent hardware failure showing early symptoms. You're backing up your database often, I hope, off the server.

To answer your question: yes, ColdFusion can and will become unresponsive when pages are called that call the database, and it will usually display error messages when the database finally times out and never sends the requested data to ColdFusion. You can protect against that to some extent with CFTRY tags around your queries that display clean and polite error messages instead of ColdFusion's ugly ones if the database fails to return data; at least your site continues to look professional that way.

One project I worked on used a shared SQL Server database that often got overloaded and slowed down terribly, and there was nothing I could do to improve that situation. What I did to keep the site functioning was to maintain a DB backup in the form of an MS Access database (yeah, it was inappropriate, but it worked when SQL Server wouldn't), and any time SQL Server failed, the application was set up to automatically use code that called the Access database instead.
These are some ideas for you to think about if you are continuing to have problems. I see nobody's even tried to answer your question in the last six months, and that's kinda been my experience with the quality of assistance this site has offered me too. I hope my thoughts can be of some use to you.
I inherited a ColdFusion MX7 application that has a long-running process, which a user kicked off by accident.
By my calculations, at the current rate, the job will run for 3 days.
There doesn't seem to be a way through the administrator interface to stop a job that is running.
The table that is being filled can easily be repopulated, so I would think stopping the ColdFusion service won't affect anything except the table, which isn't a problem.
Am I right? Is that safe? Is there another way?
A one-time restart of the service should be fine. For the future, you may want to add a required URL param or some other safety mechanism to keep this long process from accidentally going off.
Check to see if the task already has an explicit timeout set (see "Explicit Timeout"); otherwise the default page timeout from "Server Settings" is used.
In ColdFusion 8 and above, you can kill a running page with the Server Monitor, in the section labeled "Aborting unresponsive or troublesome requests/threads" (see "Using the Server Monitor").
It may also be possible to stop the processing by killing the SQL Server task:
Is there a task manager of sorts for SQL Server 2008 and on?
We have an automatically started service which in some cases spends a lot of time loading necessary data, let's say 10 minutes. During this time it works as expected (processing some huge data files required to start). I report the progress via the C++ SetServiceStatus function, and it is working fine.
This service does not depend on anything, and only one other service depends on it, which is again our own service. That second service is started after those 10 minutes; it needs the first "server" service to be fully running before it can accept requests.
I thought that Windows would start all the other automatic services (in less than 10 minutes, as usual) and then carry on working normally, but the system is completely blocked during startup (I can't log in to the computer or ping it) until this one specific service is started (i.e. reports SERVICE_RUNNING via SetServiceStatus). When our service completely starts, the other missing system services (required for network, remote desktop, whatever; it's quite random) are also started. Is this normal behaviour? Why are non-dependent services (such as remote desktop, network connections, etc.) waiting for this process? Am I missing something?
I tried adding some dependencies to postpone the startup of my service, but I ended up with many dependencies and the behaviour was still somewhat random (as the order of services is random). Sometimes I was able to log in, but, for example, the Start button only started working after those 10 minutes when my service was started. I am not sure which is "the last service" to depend on and which services to include in my depend-list, and on some computers these services can be disabled, which can bring new problems... so I don't like this solution very much.
Another option was the Delayed start option for our service. This should start the service once all other automatic services are running. Well, this works fine: Windows boots, the computer is running and responding, and our service is started, but the performance is very bad, many times slower than usual; it seems that delayed-start services have a much lower priority or something like that.
My only current solution is to report to the system that my service is running (via SetServiceStatus), but to continue loading (this works, I tested it). But then we have a problem with our dependent service, as it needs to be started when the first one is really ready. That can be solved, but I still wonder how this is possible, and whether there is something I could use to keep the current behaviour of an automatically started service that reports "started" only when it is really fully started and prepared to work. Thanks for any ideas.
Set SERVICE_RUNNING as soon as possible, and then continue processing in the background. Make your other service resilient to the first service being in a running state but not yet ready to serve requests.
The longer the service stays in the starting state, the more problems we get on different Windows versions.
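That is essentially the workaround described in the question; a minimal sketch of the pattern is below (the service name, the event name and LoadDataFiles are placeholders, and error handling is trimmed). The service reports SERVICE_RUNNING immediately, pushes the long load onto a worker thread, and exposes a "really ready" signal that the dependent service can wait on.

    // A minimal sketch of the pattern (names such as MyServerService, the event name
    // and LoadDataFiles are placeholders; error handling trimmed). The service tells
    // the SCM it is RUNNING right away and finishes the long load on a worker thread.
    #include <windows.h>

    static SERVICE_STATUS_HANDLE g_statusHandle;
    static SERVICE_STATUS        g_status = { SERVICE_WIN32_OWN_PROCESS };

    static void ReportState(DWORD state)
    {
        g_status.dwCurrentState     = state;
        g_status.dwControlsAccepted = (state == SERVICE_RUNNING) ? SERVICE_ACCEPT_STOP : 0;
        SetServiceStatus(g_statusHandle, &g_status);
    }

    static void LoadDataFiles()                 // placeholder for the real ~10 minute load
    {
        Sleep(10 * 60 * 1000);
    }

    static DWORD WINAPI LoadWorker(LPVOID readyEvent)
    {
        LoadDataFiles();
        SetEvent((HANDLE)readyEvent);           // "really ready" flag for the dependent service
        return 0;
    }

    static DWORD WINAPI ControlHandler(DWORD control, DWORD, LPVOID, LPVOID)
    {
        if (control == SERVICE_CONTROL_STOP)    // a real service would stop the worker cleanly first
            ReportState(SERVICE_STOPPED);
        return NO_ERROR;
    }

    static void WINAPI ServiceMain(DWORD, LPWSTR*)
    {
        g_statusHandle = RegisterServiceCtrlHandlerExW(L"MyServerService", ControlHandler, nullptr);

        ReportState(SERVICE_START_PENDING);
        ReportState(SERVICE_RUNNING);           // boot is no longer held up by this service

        // Named event the dependent service can wait on for true readiness.
        HANDLE ready = CreateEventW(nullptr, TRUE, FALSE, L"Global\\MyServerServiceReady");
        CreateThread(nullptr, 0, LoadWorker, ready, 0, nullptr);
    }

    int wmain()
    {
        SERVICE_TABLE_ENTRYW table[] = {
            { const_cast<LPWSTR>(L"MyServerService"), ServiceMain },
            { nullptr, nullptr }
        };
        StartServiceCtrlDispatcherW(table);     // hands the process over to the SCM
        return 0;
    }

The second service can then OpenEventW the same name and WaitForSingleObject on it before it starts accepting requests, instead of relying only on the SCM dependency.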