Several PCs running Windows 7 and .NET 4.5.2 use an application based upon the Invantive Web Service to access data located behind the Web Service server. However, for the past few months performance has been poor.
Even switching between tabs in the user interface is slow, taking multiple seconds instead of being instantaneous.
The problem was caused by Webroot SecureAnywhere, an anti-virus product. The Invantive Web Service uses HTTPS to communicate between the application and the web service servers.
Webroot SecureAnywhere performs some form of deep packet inspection on HTTPS connections. The application in this case typically exchanges information on average every few seconds using an HTTPS POST with a small payload (several hundred bytes).
Webroot SecureAnywhere inspects each HTTPS POST, and it takes approximately 750-1700 ms per HTTPS POST to analyze it and pass it through (measured on an i3 processor running Windows 7).
When Webroot SecureAnywhere was disabled, response times per HTTPS POST dropped from 1500 ms on average to 30 ms.
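For reference, the per-request latency can be reproduced with a small C# probe along these lines (the endpoint URL and payload size below are placeholders, not the actual service):

    // Sketch: time a series of small HTTPS POSTs to measure per-request
    // latency; the URL and payload size are placeholders.
    using System;
    using System.Diagnostics;
    using System.Net.Http;
    using System.Text;

    class PostLatencyProbe
    {
        static void Main()
        {
            using (var client = new HttpClient())
            {
                for (int i = 0; i < 10; i++)
                {
                    // Small payload of a few hundred bytes, as in the scenario above.
                    var payload = new StringContent(new string('x', 300), Encoding.UTF8, "text/plain");
                    var watch = Stopwatch.StartNew();
                    client.PostAsync("https://example.com/service", payload).Result.Dispose();
                    watch.Stop();
                    Console.WriteLine("HTTPS POST round trip: " + watch.ElapsedMilliseconds + " ms");
                }
            }
        }
    }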
The long-term solution is either to abandon Webroot SecureAnywhere or for Webroot to improve the algorithm it uses for deep packet inspection of HTTPS POSTs.
The short-term solution is to add entries for all approved remote HTTPS sites to the Webroot SecureAnywhere whitelist.
We have been maintaining an internal project that has both web and mobile application platforms. The backend is developed in Django 1.9 (Python 3.4) and deployed on AWS.
The server stack consists of Nginx, Gunicorn, Django and PostgreSQL. We use a Redis-based cache server to serve resource-intensive queries. Our AWS resources include:
t1.medium EC2 (2 core, 4 GB RAM)
PostgreSQL RDS with one additional read-replica.
Right now Gunicorn is set to create 5 workers (following the 2*n+1 rule). Load-wise, there are roughly 20-30 mobile users making requests every minute and 5-10 users checking the web panel every hour. So I would say, not very much load.
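For reference, a minimal sketch of that Gunicorn configuration (the bind address and timeout here are placeholders, not our actual values):

    # gunicorn.conf.py -- minimal sketch of the setup described above;
    # bind address and timeout are placeholders.
    import multiprocessing

    bind = "127.0.0.1:8000"
    workers = multiprocessing.cpu_count() * 2 + 1  # the 2*n+1 rule; 5 on a 2-core box
    timeout = 30  # seconds a worker may stay silent before Gunicorn kills it (the default)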
Now, this setup works fine on about 80% of days. But when something goes wrong (for example, we detect a bug in the live system and have to take the server down for maintenance for a few hours; in the meantime the mobile apps build up a queue of pending requests, so when we bring the backend back up a lot of users hit the system at the same time), the server stops behaving normally and starts responding with 504 Gateway Timeout errors.
Surprisingly, every time this happened we found the server resources (CPU, memory) to be 70-80% free and the connection pools in the databases mostly idle.
Any idea where the problem is? How should we debug it? If you have already faced a similar issue, please share the fix.
Thank you,
I am using Hunchentoot for a web app that is intended to be a high-traffic, DB-driven app; it also depends on the WebSocket protocol and HTTP AJAX requests.
When I benchmark my app with Apache Bench as
ab -c 50 -n 1000
a "connection reset" prompt is shown. Up to a concurrency of 40 the test completes, but beyond that it does not. How can one increase the max-thread-count of Hunchentoot?
What are realistic numbers for concurrency and requests per unit of time that I should plan for in a high-traffic web app, for example Reddit or Twitter?
You can pass in a number to the taskmaster with the :max-thread-count keyword.
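For example, a minimal sketch (the port and thread count here are arbitrary choices):

    ;; Sketch: start an acceptor whose taskmaster allows up to 200 worker
    ;; threads instead of the default; port and count are arbitrary.
    (hunchentoot:start
     (make-instance 'hunchentoot:easy-acceptor
                    :port 8080
                    :taskmaster (make-instance
                                 'hunchentoot:one-thread-per-connection-taskmaster
                                 :max-thread-count 200)))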
I have a web site that exposes a web service to all my desktop clients.
At random times, these clients invoke the web service, which in turn adds a message containing a JPEG in byte-array format to MSMQ.
I have a service application that reads from this queue, performs an enhancement on the JPEG and saves it to the hard drive.
The number of clients uploading at any one time is unpredictable.
I chose this method because I do not want to put any strain on IIS. The enhancement my service application performs is not much 'erg', but it exists nevertheless.
However, after realizing that my service application had stopped for some time and required restarting, I noticed RAM usage leap up as it cleared the backlog. While I have corrected this and the service is now coded to restart automatically on failure, I surmised that a backlog could also build up at busy times, which again would give higher RAM usage.
Now, should I accept doing the processing entirely within my web service and then saving to the hard drive, or am I correct in using MSMQ?
I am using C# and ASP.NET.
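For reference, the handoff I described looks roughly like this (a sketch; the queue path is a placeholder):

    // Sketch of the queue handoff described above; the queue path is a
    // placeholder. Requires a reference to System.Messaging.
    using System.IO;
    using System.Messaging;

    static class JpegQueue
    {
        const string Path = @".\private$\jpegQueue";

        // Web service side: enqueue the uploaded JPEG bytes and return at once.
        public static void Enqueue(byte[] jpegBytes)
        {
            using (var queue = new MessageQueue(Path))
            using (var message = new Message())
            {
                message.BodyStream = new MemoryStream(jpegBytes);
                queue.Send(message);
            }
        }

        // Service application side: block until the next message arrives.
        public static byte[] Dequeue()
        {
            using (var queue = new MessageQueue(Path))
            using (var message = queue.Receive())
            {
                var buffer = new MemoryStream();
                message.BodyStream.CopyTo(buffer);
                return buffer.ToArray();  // the original JPEG payload
            }
        }
    }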
My server application uses embedded Jetty as an HTTP endpoint. It hosts several web applications with a bunch of JSPs/servlets as well as several web services.
This application will eventually be deployed in the cloud, but before that I'd like to make sure that the app measures the inflow and outflow (in bytes) passing through Jetty. I could probably write a global filter and count bytes somehow..
But is there a more intelligent way of doing this?
You should check out NetworkTrafficListener in Jetty 7 & 8, and how to use it in this test class.
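Roughly, assuming the Jetty 7/8 API (verify the names against your version), wiring it up looks like this:

    // Sketch assuming the Jetty 7/8 API (NetworkTrafficSelectChannelConnector
    // and org.eclipse.jetty.io.Buffer); verify against your Jetty version.
    import java.net.Socket;
    import java.util.concurrent.atomic.AtomicLong;
    import org.eclipse.jetty.io.Buffer;
    import org.eclipse.jetty.io.NetworkTrafficListener;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.nio.NetworkTrafficSelectChannelConnector;

    public class TrafficCountingServer {
        public static void main(String[] args) throws Exception {
            final AtomicLong bytesIn = new AtomicLong();
            final AtomicLong bytesOut = new AtomicLong();

            NetworkTrafficSelectChannelConnector connector =
                    new NetworkTrafficSelectChannelConnector();
            connector.setPort(8080);
            connector.addNetworkTrafficListener(new NetworkTrafficListener.Empty() {
                @Override public void incoming(Socket socket, Buffer bytes) {
                    bytesIn.addAndGet(bytes.length());   // bytes read from clients
                }
                @Override public void outgoing(Socket socket, Buffer bytes) {
                    bytesOut.addAndGet(bytes.length());  // bytes written to clients
                }
            });

            Server server = new Server();
            server.addConnector(connector);
            // ... add the usual webapp/servlet handlers here, then server.start();
        }
    }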
Say you are dealing with a Windows network whose internet access must pass through a firewall that you have no control over. Said firewall apparently blocks the known time protocols (NTP, daytime, etc.), and you know from experience that those who control it will not allow any exceptions.
Is it possible to sync this "Windows" (could be Linux) computer via a web service call that grabs the time from a server out on the internet?
Is there another reliable method for updating the time on the server, like pulling it from a website and passing it to the NTP client?
HTTP Time Protocol:
http://www.vervest.org/fiki/bin/view/HTP/WebHome
It takes the date from the HTTP server itself, not from a website served by it.
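As a sketch of the same idea in Python (the URL is just an example), reading the server's own Date response header looks like this:

    # Sketch: read the clock from an HTTP server's own Date response header,
    # as HTP does; the URL here is just an example.
    import urllib.request
    from email.utils import parsedate_to_datetime

    def http_time(url="https://www.example.com/"):
        # A HEAD request suffices; conforming HTTP servers send a Date header.
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=10) as response:
            return parsedate_to_datetime(response.headers["Date"])

    print(http_time())  # UTC time as reported by the remote server

Note that the Date header only has one-second resolution, so this is far coarser than NTP, but it is usually good enough to keep a clock from drifting badly.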