Jetty 8 vs Jetty 9 for production environment

We are currently using Jetty 8 in a production environment serving low-latency traffic. We were wondering what the advantages of moving to Jetty 9 would be, given that we have a very strict low-latency requirement.
Thanks!

Jetty 9 has a refactored IO layer compared to Jetty 8, which should give you improvements in that area. See this blog post for information on how and why Jetty is best of breed in this area.
https://webtide.com/jetty-in-techempower-benchmarks/
Specifically, scroll down to the latency tests, where Jetty easily came out on top.
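As a rough illustration of what the refactored layer means in practice: the separate Jetty 8 connector classes (SelectChannelConnector, BlockingChannelConnector, and their SSL variants) were consolidated into a single NIO-based ServerConnector in Jetty 9. A minimal embedded-mode sketch, where the port and handler are placeholder assumptions:

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.server.handler.DefaultHandler;

public class Jetty9ServerSketch {
    public static void main(String[] args) throws Exception {
        Server server = new Server();

        // A single NIO-based ServerConnector replaces the Jetty 8
        // SelectChannelConnector / BlockingChannelConnector classes.
        ServerConnector connector = new ServerConnector(server);
        connector.setPort(8080); // placeholder port

        server.addConnector(connector);
        server.setHandler(new DefaultHandler()); // placeholder handler
        server.start();
        server.join();
    }
}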
[edit] I should note that this is perhaps not an appropriate question for this forum, since it is outside the intent of Stack Overflow and more in line with Server Fault... just fair warning.

Related

Migration from Jetty 7.6 to 9.4.7

I was using Jetty 7.6.5 in one of my projects and now we want to upgrade to Jetty 9.4.7. I found that multiple classes have been removed or changed in the 9.4.7 version.
Example:
httpClient.setConnectorType(HttpClient.CONNECTOR_SELECT_CHANNEL);
ExecutorThreadPool pool = new ExecutorThreadPool(execSvc);
httpClient.setThreadPool(pool);
httpClient.setTimeout(1000);
This code does not work on Jetty 9. Please help me figure out how to fix it.
Let's answer your questions first:
Connector types do not exist in the same way they did in Jetty 7.x. The default is NIO-based.
Executors / thread pools must be configured on the HttpClient before it is started. (Don't set small thread pools!)
There are many flavors of timeout you can set on the HttpClient itself: do you want an idle timeout? A connect timeout? A DNS lookup timeout? etc. Check the javadoc for details.
There are also per-request timeouts available. Again, check the javadoc for details on which kind you want.
Going from the Jetty 7.6.x series to the 9.4.x series is a huge jump. You are skipping nearly 8 years of heavy development and change in your usage. (7.x is 2009 vintage, while 9.4.x is new as of 2017.)
The techniques and mechanisms you were using in the past are likely no longer present.
This is because HttpClient has been updated significantly for HTTP/1.1, HTTP/2, ALPN, Unix Socket, FCGI, WebSocket, etc.
The most significant change is to treat the new HttpClient like a web browser: start it once, leave it running, and perform as many requests as you like against it. The worst thing you can do is start it, perform a request or two, then stop it. That kind of usage is not supported and can lead to strange issues with memory/threading/etc. Start it once when you first need it, and don't stop/shutdown that HttpClient instance until your application stops.
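To make those points concrete, here is a minimal Jetty 9.4 HttpClient sketch; the pool size, timeout values, and URL are placeholder assumptions, not values from the original code:

import java.util.concurrent.TimeUnit;
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.ContentResponse;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class JettyClientSketch {
    public static void main(String[] args) throws Exception {
        // Jetty 9.4 HttpClient is NIO-based by default; there is no connector type to pick.
        HttpClient httpClient = new HttpClient();

        // Configure the executor before start() (replaces setThreadPool()).
        QueuedThreadPool threadPool = new QueuedThreadPool(200); // example size
        threadPool.setName("http-client");
        httpClient.setExecutor(threadPool);

        // Client-wide timeouts, in milliseconds.
        httpClient.setConnectTimeout(5000);  // TCP connect timeout
        httpClient.setIdleTimeout(30000);    // idle connection timeout

        // Start once and reuse; stop only at application shutdown.
        httpClient.start();

        // Per-request total timeout, roughly the role of the old setTimeout(1000).
        ContentResponse response = httpClient.newRequest("http://localhost:8080/") // example URL
            .timeout(1, TimeUnit.SECONDS)
            .send();
        System.out.println(response.getStatus());

        httpClient.stop();
    }
}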

Why are my ColdFusion SOAP webservices 10 times slower in production vs development?

UPDATE
It appears this issue is caused by a bug related specifically to using Axis2 with ColdFusion. We have been able to replicate the issue in our production environment on two different servers by switching between Axis1 and Axis2. My original tests to compare the two were apparently thwarted by an override in an Application.cfc which forced Axis2.
We ran into a memory leak today which forced us to speed up the resolution of this issue. It resembled the leak discussed here, though we aren't sure if it is the exact same problem (https://www.hass.de/content/coldfusion-10-webservice-leaking-memory-trusted-cache-leaks-memory).
Our primary webservices are in Axis1, and we only switched to Axis2 for this new set of webservices because we needed document-literal style for SalesForce, and with Axis1 an invalid WSDL was being created (it did not properly describe all object types in arrays). So now we have it as Axis1 using a manually manipulated WSDL. I'm not entirely sure it will work out with SalesForce, but as far as a general fix goes, this works.
I am investigating an issue with our ColdFusion-based SOAP webservices in our production environment. It appears that the time between the return statement in the webservice method code and actually receiving a response can be significant, and appears to directly correspond to the size of the response and/or the number of objects.
In development a particular request that returns 1000 records takes about 6 seconds to return. However, in production that same hit takes 50+ seconds to return. I added some timing code and found that the actual function code takes less than 1 second to run at the start of the request, meaning that generating the response is taking ColdFusion about 50 seconds in production. Hitting the webservice with a simple HTTP request does not have the same slowness, so it seems to be SOAP/Axis specific. The resulting XML is about 1 MB, which I have compared and found no differences. I also copied out settings from cfadmin in both environments to compare and could find no performance-related setting differences.
Both environments are at the same CF 10 update level. The server monitor shows no significant memory usage. I also ran the request from within the server to make sure there weren't slow connection issues or HTTPS slowing things down, but the results are the same.
Any suggestions or solution would be appreciated.
Additional notes...
CPU sits at about 17% for most of the duration of the request, which is a lot of work to be doing. Something is happening very inefficiently.
I tried switching the instance to Axis1 and back again, followed by an instance restart and additional tests, with no change in results.
One possibility is that you have them throttled - check "Request Tuning" in your CF Administrator. By default the setting for "number of simultaneous web service requests" is 10. Are you looping and hitting the server? Is there more traffic in production?
In the server monitor, enable profiling and monitoring, then click on "Statistics". On the far right there is a little chart icon. Click on it and you will see a chart and a counter legend in the top right. Then run your code. Does "web services running" reach a threshold and cross into "web services queued"? If so, you need to increase that threshold.
One more clue - in the server monitor, do NOT run the "memory profiling" for more than a few seconds - say 30. If you do, you will have performance problems for sure.

ColdFusion server crashing on hourly basis

I am facing a serious ColdFusion server crashing issue. I have many live sites on that server, so this is serious and urgent.
Following are the system specs:
Windows Server 2003 R2, Enterprise X64 Edition, Service Pack 2
ColdFusion (8,0,1,195765) Enterprise Edition
Following are the hardware specs:
Intel(R) Xeon(R) CPU E7320 @ 2.13 GHz, 2.13 GHz
31.9 GB of RAM
It is crashing on an hourly basis. Can somebody help me find out the exact issue? I tried to find it through the ColdFusion log files but did not find anything there. Every time it crashes, I have to restart the ColdFusion services to get it back.
Edit1
When I looked at the runtime log file "ColdFusion-out165.log", I found the following errors:
error ROOT CAUSE:
java.lang.OutOfMemoryError: Java heap space
javax.servlet.ServletException: ROOT CAUSE:
java.lang.OutOfMemoryError: Java heap space
04/18 16:19:44 error ROOT CAUSE:
java.lang.OutOfMemoryError: GC overhead limit exceeded
javax.servlet.ServletException: ROOT CAUSE:
java.lang.OutOfMemoryError: GC overhead limit exceeded
Here are my current JVM settings:
Minimum JVM Heap Size (MB): 512
Maximum JVM Heap Size (MB): 1024
JVM Arguments
-server -Dsun.io.useCanonCaches=false -XX:MaxPermSize=512m -XX:+UseParallelGC -Dcoldfusion.rootDir={application.home}/../ -Dcoldfusion.libPath={application.home}/../lib
Note: when I tried to increase the Maximum JVM Heap Size to 1536 and restart the ColdFusion services, it did not allow me to start them and gave the following error.
"Windows could not start the ColdFusion MX Application Server on Local Computer. For more information, review the System Event Log. If this is a non-Microsoft service, contact the service vendor, and refer to service-specific error code 2."
Shouldn't I be able to set my maximum heap size to 1.8 GB, since I am using a 64-bit operating system?
How much memory you can give to your JVM is predicated on the bitness of your JVM, not your OS. Are you running a 64-bit CF install? That was an uncommon thing to do back in the CF8 days, so it is worth asking.
Basically the error is stating you're using too much RAM for how much you have available (which you know). I'd be having a look at how much stuff you're putting into session and application scope, and culling back stuff that's not necessary.
Objects in session scope are particularly bad: they have a far bigger footprint than one might think, and cause more trouble than they're worth.
I'd also look at how many inactive but not timed-out sessions you have, with a view to being far more aggressive with your session time-outs.
Have a look at your queries and get rid of any SELECT * you have; cut them back to just the columns you need. Push data processing back into the DB rather than doing it in CF.
Farm scheduled tasks off onto a different CF instance.
Are you doing anything with large files? Either reading and processing them, or serving them via <cfcontent>? That can chew memory very quickly.
Are all your function-local variables in CFCs properly VARed? Especially ones in CFCs which end up in shared scopes.
Do you accidentally have debugging switched on?
Are you making heavy use of custom tags or files called in with <cfmodule>? I have heard apocryphal stories of custom tags causing memory leaks.
Get hold of Mike Brunt or Charlie Arehart to have a look at your server config / app (they will obviously charge consultancy fees).
I will update this as I think of more things to look out for.
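On the heap question specifically: the values entered in the CF Administrator end up as -Xms/-Xmx flags on the java.args line of jvm.config, which can also be edited directly. A rough sketch, assuming the install really is a 64-bit JVM; the path and the 2048m figure are illustrative assumptions, not recommendations:

# {cf_root}/runtime/bin/jvm.config  (path varies by install type)
java.args=-server -Xms1024m -Xmx2048m -XX:MaxPermSize=512m -XX:+UseParallelGC -Dsun.io.useCanonCaches=false -Dcoldfusion.rootDir={application.home}/../ -Dcoldfusion.libPath={application.home}/../lib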
Turn on ColdFusion monitor in the administrator. Use it to observe behavior. Find long running processes and errors.
Also, make sure that memory monitoring is turned off in the ColdFusion Server Monitor. That will bring down a production server easily.
@Adil,
I had the same kind of issue, but it wasn't crashing the server; instead, CPU usage was going up to 100%. Not sure if it is relevant to your issue, but it is at least worth a look.
See the question at the URL below:
Strange JRUN issue. JRUN eating up 50% of memory for every two hours
My blog entry about this:
http://www.thecfguy.com/post.cfm/strange-coldfusion-issue-jrun-eating-up-to-50-of-cpu
For me it was a high-traffic site storing client variables in the registry, which was what was going wrong.
Hope this helps.

BlazeDS: increase concurrent user count by using Servlet 3.0 and an NIO server

I am developing a turn-based multiplayer game with Flex and BlazeDS.
The problem is that I have read that BlazeDS can handle only hundreds of concurrent users, but that this can be increased by using an NIO server like Jetty 7 and Servlet 3.0.
Does Tomcat 7 support NIO? I wonder if I can increase the concurrent user count to a few thousand by using Tomcat 7 and BlazeDS.
Any clue or help will be appreciated.
Thank you.
Do not worry yet about performance. If your game will be successful you will be able to afford the better technical solution. If not, it will not matter if you can handle 1000 or 1000000 requests.
However, related to your question - you may be able to increase the number of concurrent users by doing server-related tuning (like the stack size, or increasing the size of the thread pool).
There are a couple of solutions implementing Servlet 3.0 (NIO), but you will have to write your own BlazeDS NIO endpoint - so it does not work out of the box.
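On the Tomcat part of the question: Tomcat 7 does include an NIO HTTP connector, selected per connector in conf/server.xml. A rough sketch, where the port, thread count, and timeout are placeholder values:

<!-- conf/server.xml: switch the HTTP connector to the NIO protocol -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="200"
           connectionTimeout="20000" />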
Edit:
Using the NIO Jetty connector can be a good idea... but the first thing that should be done is building and testing a valid performance scenario. For example, if you plan to support 10000 connected users and to push 1 msg/sec, you need to write a stress test for that. After that, you can experiment with various connectors/configurations.
There is a tool created by Adobe which can help you with performance testing - it's located here (take a look at the attachments of Adobe LiveCycle Data Services 3 ES2 Performance Brief.pdf). It contains instructions on how to configure/run the stress tool. If you cannot manage to run it, let me know.
Just to give you an example: on my machine (i7 Q820, 8 GB RAM), using the stress tool I was able to handle 10000 connected users.

Apache or lighttpd [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
For development I use a local LAMP stack; for production I'm using MediaTemple's Django Container (which I'm loving, BTW). MT's container uses lighttpd. Honestly, I've never had any other experience with it; I've always used Apache. I've been doing some reading:
Onlamp
TextDrive
Linux.com
Here are my questions:
What strengths does one have over the other?
Would it benefit me to use lighttpd on my dev setup?
What's up with using both? The Linux.com article talks about using lighttpd with Apache.
The benefit of both: Apache is more powerful and extensible (useless if you don't need that power, but anyway...) and lighttpd is faster at static content. The idea is to split your site into static content (css, js, images, etc.) served by lighttpd and dynamic code that flows through Apache; see the config sketch after this answer.
I'm not saying you can't do a lot with lighttpd on its own. You can and people do.
If you're using lighttpd exclusively on your production server, I would seriously consider mirroring that on your development and staging servers so you know exactly what to expect before you deploy.
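Here is the kind of minimal lighttpd configuration the static/dynamic split described above implies; everything in it (paths, the backend port, keying the proxy on .php) is an illustrative assumption:

# lighttpd.conf sketch: lighttpd serves static files itself and
# forwards dynamic requests to an Apache instance listening on port 8080
server.modules += ( "mod_proxy" )
server.document-root = "/var/www/static"   # css, js, images
proxy.server = ( ".php" => ( ( "host" => "127.0.0.1", "port" => 8080 ) ) )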
For purely static web pages (.gif, .css, etc.) with n HTTP requests from distinct IP addresses:
1. Apache: runs n processes (with mod_perl, mod_php in memory)
2. lighttpd: runs 1 process and 1 thread (you can assign m threads before launching it)
For purely dynamic web pages (.php, .pl) with n HTTP requests from distinct IP addresses:
1. Apache: runs n processes (with mod_perl, mod_php in memory)
2. lighttpd: runs 1 lighttpd process thanks to async I/O, plus m FastCGI processes for each script language.
Lighttpd consumes much less memory. YouTube used to be a big user of lighttpd until it was acquired by Google. Go to its homepage for more info.
P.S. At my previous company, we used both with a load balancer distributing the HTTP traffic according to URL suffixes. Why not fully lighttpd? For legacy reasons.
The way you interface between the web server and Django might have even a greater impact on performance than the choice of web server software. For instance, mod_python is known to be heavy on RAM.
This question and its answers discuss other web server options as well.
I wouldn't be concerned on compatibility issues with client software (see MarkR's comment). I've had no such problems when serving Django using lighttpd and FastCGI. I'd like to see a diverse ecosystem of both server and client software. Having a good standard is better than a de facto product from a single vendor.
The answer depends on your project's goals. If it's going to be a large-scale site where uptime is critical and load is high, go with lighttpd; it scales amazingly. The only downside is that you have to be more hands-on initially. Most hosts won't support it, and it really pays to know what you're doing with lighttpd.
If it's a site for your mother that'll get a few thousand visitors a month, Apache will work better. She'll be able to move to a new host a lot more easily, and support is easier to find.
Use a standard web server. Apache is used by 50% of web sites (Netcraft); therefore, if you use Apache, people's web browsers, spiders, proxies, etc. are pretty much guaranteed to work with your site (or its web server, anyway).
Lighttpd is used by 1.5% of web sites (Netcraft), so it's far less likely that people will test their apps with it.
Any performance difference is likely not to matter in production; an Apache server can probably serve static requests at a much higher bandwidth than you have, on the slowest hardware you're likely to deploy in production.