Jetty 9 ignoring my config XML

I'm using Jetty-Runner in a project, and recently upgraded to Jetty 9. After migrating the XML with the new configuration, I noticed the following line in startup logs:
WARN:oejx.XmlConfiguration:main: Ignored arg: ...
The ignored arg that gets printed is pretty much my whole XML, which is:
<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN"
  "http://www.eclipse.org/jetty/configure.dtd">
<Configure id="Server" class="org.eclipse.jetty.server.Server">
  <Arg name="threadpool">
    <New class="org.eclipse.jetty.util.thread.QueuedThreadPool">
      <Arg name="maxThreads">200</Arg>
      <Arg name="minThreads">50</Arg>
      <Arg name="idleTimeout">1000</Arg>
      <Arg name="queue">
        <New class="java.util.concurrent.ArrayBlockingQueue">
          <Arg type="int">6000</Arg>
        </New>
      </Arg>
      <Set name="detailedDump">false</Set>
    </New>
  </Arg>
</Configure>
I've checked each arg name/type/setter against the current Jetty Javadocs, but I still can't see what's wrong with this setup that would cause it to be ignored.
Could you please help?

Update: Feb 2018
Ignored arg: means you have an <Arg> that isn't being used for the constructor of the object referenced in <Configure> (or <New>). The most likely reason is that you are attempting to change something that has already been set earlier.
If you are here because you want to change the ThreadPool, read on.
If you only want to configure the QueuedThreadPool, you have options.
The Recommended Choice: Using the jetty-home or jetty-distribution to start Jetty?
Configure the threading property values in your ${jetty.base} directory.
You can find them either in your ${jetty.base}/start.ini or ${jetty.base}/start.d/*.ini files.
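For example, in a recent Jetty 9.4 ${jetty.base} the relevant entries typically look something like the sketch below (exact property names can differ between 9.x versions, so check what your own ini files actually contain):

# Thread pool sizing picked up by the server's default QueuedThreadPool
jetty.threadPool.minThreads=10
jetty.threadPool.maxThreads=200
jetty.threadPool.idleTimeout=60000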
Not using jetty-home or jetty-distribution, but still using the XML? Why? (really, we want to know!)
In that case, the XML <Get> approach is preferable: it sets the values on the existing thread pool instead of replacing it.
Example (from Jetty 9.2.20, same as Jetty 9.4.8):
<Configure id="Server" class="org.eclipse.jetty.server.Server">
  ...
  <Get name="ThreadPool">
    <Set name="minThreads" type="int">10</Set>
    <Set name="maxThreads" type="int">200</Set>
    <Set name="idleTimeout" type="int">60000</Set>
    <Set name="detailedDump">false</Set>
  </Get>
</Configure>
If you want to change from QueuedThreadPool to something else, that is considered an extreme, expert-level option; you have to be aware of the impact it will have on your server.
When changing the minimum/maximum thread counts, be aware of the following:
The number of CPU cores you have. (this impacts minimum required threads)
The number of network interfaces you have. (this impacts minimum required threads)
The number of simultaneous connections you will have. (this impacts minimum required threads)
The number of simultaneous requests you will have. (this impacts minimum required threads)
The use of HTTP/2 will increase your threading requirements significantly.
The use of old-school blocking Servlet APIs will increase your threading requirements. (Consider using the newer AsyncContext and Async I/O behaviors instead; that will significantly drop your thread requirements.)
Some minimum threading examples:
The following examples are for illustration purposes and do not represent a "recommended" set of values from the Jetty project.
The Jetty recommended values are the default values already present in Jetty.
On an Intel i7 with 1 network interface, serving an average web page with resources (images, CSS, JavaScript, etc.), you'll need about 22 minimum threads (8 for CPU cores, 1 for the network interface, 1 for the acceptor, 1 for the selector, and about 10 more to serve the web page and its resources to a typical, modern-day Chrome browser).
On a Raspberry Pi with 1 network interface, serving REST requests to a single REST client in sequence (never in parallel), you'll need 8 minimum threads.
More notes:
Also, do not assume that 1 thread == 1 request/response exchange. That is not true in Jetty: a single request/response exchange can be handled by [1...n] threads over its lifetime. Only blocking reads/writes using the old-school Servlet APIs will hold a thread (again, use the newer Servlet API Async I/O operations).
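As a rough illustration of that async style, here is a minimal sketch using the standard Servlet 3.1 async read API (this is not code from the Jetty project; the servlet name and URL pattern are made up):

import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.ReadListener;
import javax.servlet.ServletInputStream;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/upload", asyncSupported = true)
public class AsyncUploadServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        final AsyncContext ctx = req.startAsync();     // the exchange no longer pins a thread
        final ServletInputStream in = req.getInputStream();
        in.setReadListener(new ReadListener() {
            @Override
            public void onDataAvailable() throws IOException {
                byte[] buf = new byte[4096];
                // only read while data is ready, so the call never blocks
                while (in.isReady() && in.read(buf) != -1) {
                    // process the chunk
                }
            }
            @Override
            public void onAllDataRead() throws IOException {
                resp.setStatus(HttpServletResponse.SC_OK);
                ctx.complete();                        // hand the exchange back to the container
            }
            @Override
            public void onError(Throwable t) {
                ctx.complete();
            }
        });
    }
}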
If you are adjusting the thread configuration in an attempt to limit traffic/requests to something running on Jetty, consider using QoSFilter (for controlling endpoint/resource-specific behavior) or DoSFilter (for controlling overall load limits) instead.
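For instance, a minimal QoSFilter registration in web.xml might look like the sketch below (the URL pattern and maxRequests value are illustrative, not recommendations; QoSFilter lives in the jetty-servlets jar, which must be on the webapp's classpath):

<filter>
  <filter-name>QoSFilter</filter-name>
  <filter-class>org.eclipse.jetty.servlets.QoSFilter</filter-class>
  <async-supported>true</async-supported>
  <init-param>
    <!-- maximum number of requests served concurrently; the rest queue up -->
    <param-name>maxRequests</param-name>
    <param-value>50</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>QoSFilter</filter-name>
  <url-pattern>/expensive/*</url-pattern>
</filter-mapping>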
OLD ANSWER (From Dec 2013)
1) Fix your DTD
Yours:
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN"
"http://www.eclipse.org/jetty/configure.dtd">
The correct one for Jetty 9+
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN"
"http://www.eclipse.org/jetty/configure_9_0.dtd">
2) Use type attribute better
The type="int" is unsupported in that older version of Jetty, as "int" is not a valid object type. Use java.lang.Integer instead (or upgrade to a newer version of Jetty that supports "int")
Try this out instead.
<Arg name="threadpool">
  <New id="threadpool" class="org.eclipse.jetty.util.thread.QueuedThreadPool">
    <Arg type="java.lang.Integer" name="maxThreads">200</Arg>
    <Arg type="java.lang.Integer" name="minThreads">50</Arg>
    <Arg type="java.lang.Integer" name="idleTimeout">1000</Arg>
    <Arg name="queue">
      <New class="java.util.concurrent.ArrayBlockingQueue">
        <Arg type="java.lang.Integer">6000</Arg>
      </New>
    </Arg>
    <Set name="detailedDump">false</Set>
  </New>
</Arg>
The above was verified with jetty-9.1.0.v20131115 distribution.

Related

JEE7/JAX-RS How to programmatically create a JDBC connection pool

I'm currently developing a REST service to replace an existing solution. I'm using plain Payara/JEE7/JAX-RS. I am not using Spring and I do not intend to.
The problem I'm facing is that we want to reuse as much of the original configuration as possible (deployment on multiple nodes in a cluster with puppet controlling the configuration files).
Usually in Glassfish/Payara, you'd have a domain.xml file that has some content like this:
<jdbc-connection-pool driver-classname="" pool-resize-quantity="10" datasource-classname="org.postgresql.ds.PGSimpleDataSource" max-pool-size="20" res-type="javax.sql.DataSource" steady-pool-size="10" description="" name="pgsqlPool">
  <property name="User" value="some_user"/>
  <property name="DatabaseName" value="myDatabase"/>
  <property name="LogLevel" value="0"/>
  <property name="Password" value="some_password"/>
  <!-- bla -->
</jdbc-connection-pool>
<jdbc-resource pool-name="pgsqlPool" description="" jndi-name="jdbc/pgsql"/>
Additionally you'd have a persistence.xml file in your archive like this:
<persistence-unit name="myDatabase">
  <provider>org.hibernate.ejb.HibernatePersistence</provider>
  <jta-data-source>jdbc/pgsql</jta-data-source>
  <properties>
    <property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/>
    <!-- bla -->
  </properties>
</persistence-unit>
I need to replace both of these configuration files with a programmatic solution, so I can read the existing legacy configuration files and (if needed) create the connection pools and persistence units at server startup.
Do you have any idea how to accomplish that?
Actually you do not need to edit each domain.xml by hand. Just create a glassfish-resources.xml file like this:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE resources PUBLIC "-//GlassFish.org//DTD GlassFish Application Server 3.1 Resource Definitions//EN" "http://glassfish.org/dtds/glassfish-resources_1_5.dtd">
<resources>
  <jdbc-connection-pool driver-classname="" pool-resize-quantity="10" datasource-classname="org.postgresql.ds.PGSimpleDataSource" max-pool-size="20" res-type="javax.sql.DataSource" steady-pool-size="10" description="" name="pgsqlPool">
    <property name="User" value="some_user"/>
    <property name="DatabaseName" value="myDatabase"/>
    <property name="LogLevel" value="0"/>
    <property name="Password" value="some_password"/>
    <!-- bla -->
  </jdbc-connection-pool>
  <jdbc-resource pool-name="pgsqlPool" description="" jndi-name="jdbc/pgsql"/>
</resources>
Then either use
$PAYARA_HOME/bin/asadmin add-resources glassfish-resources.xml
on each node once, or put it under WEB-INF/ of your war (note that in this case the jndi-name SHOULD be java:app/jdbc/pgsql, because you do not have access to the global scope in this context).
Note that your persistence.xml should be under META-INF/ of any jar in your classpath.
If you do not like this, you may use
@PersistenceUnit(unitName = "myDatabase")
EntityManagerFactory emf;
to create an EntityManager on the fly via
createEntityManager(java.util.Map properties).
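A minimal sketch of that approach (imports omitted as in the snippets above; it assumes the standard JPA 2.x javax.persistence.jdbc.* property keys, and in practice the values would come from your legacy configuration files):

@PersistenceUnit(unitName = "myDatabase")
EntityManagerFactory emf;

// Later, build an EntityManager with connection settings resolved at runtime
Map<String, Object> props = new HashMap<>();
props.put("javax.persistence.jdbc.user", "some_user");
props.put("javax.persistence.jdbc.password", "some_password");
EntityManager em = emf.createEntityManager(props);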
By the way, using Payara you can share configuration with JCache across your cluster.
Since the goal is to have a dockerized server that runs a single application, I can very well use an embedded server.
Using an embedded server, the solution to my problem looks roughly like this:
For the server project, create a Maven dependency:
<dependencies>
  <dependency>
    <groupId>fish.payara.extras</groupId>
    <artifactId>payara-embedded-all</artifactId>
    <version>4.1.1.163.0.1</version>
  </dependency>
</dependencies>
Start your server like this:
final BootstrapProperties bootstrapProperties = new BootstrapProperties();
final GlassFishRuntime runtime = GlassFishRuntime.bootstrap(bootstrapProperties);
final GlassFishProperties glassfishProperties = new GlassFishProperties();
final GlassFish glassfish = runtime.newGlassFish(glassfishProperties);
glassfish.start();
Add your connection pools to the started instance:
final CommandRunner commandRunner = glassfish.getCommandRunner();
final CommandResult createPoolCommandResult = commandRunner.run("create-jdbc-connection-pool",
        "--datasourceclassname=org.postgresql.ds.PGConnectionPoolDataSource",
        "--restype=javax.sql.ConnectionPoolDataSource",
        "--property=DatabaseName=mydb"
                + ":ServerName=127.0.0.1"
                + ":PortNumber=5432"
                + ":User=myUser"
                + ":Password=myPassword", // other properties go here as well
        "Mydb"); // the pool name
Add a corresponding jdbc resource:
final CommandResult createResourceCommandResult = commandRunner.run("create-jdbc-resource", "--connectionpoolid=Mydb", "jdbc__Mydb");
(In the real world you would get the data from some external configuration file)
Now deploy your application:
glassfish.getDeployer().deploy(new File(pathToWarFile));
(Usually you would read your applications from some deployment directory)
In the application itself you can just refer to the configured pools like this:
@PersistenceContext(unitName = "mydb")
EntityManager mydbEm;
Done.
A glassfish-resources.xml would have been possible too, but with a catch: my configuration file is external, shared by several applications (so the file format is not mine) and created by external tools on deployment. I would need to transform the file with XSLT into a glassfish-resources.xml and run a script that makes the "asadmin" calls.
Running an embedded server is an all-java solution that I can easily build on a CI server and my application's test suite could spin up the same embedded server build to run some integration tests.

Correct way to enable graceful shutdown in Jetty 9.3

I have been trying to use the StatisticsHandler to enable graceful shutdown in Jetty 9.3, but to my dismay it does not seem to be that straightforward. First, let me explain my environment. I am using multiple Jetty modules (gzip, server, servlets, jsp, jstl, etc.) and every module is instantiated by its corresponding .ini file in the start.d directory. I have set stopTimeout to 15000 and stopAtShutdown to true (in the same way as in the default jetty.xml).
To enable the StatisticsHandler I tried the following methods:
I added a stats.ini file in the start.d directory with the following contents:
--module=stats
Even after adding the module and restarting Jetty, when I stop Jetty there is no graceful shutdown; it just stops immediately.
Then I tried adding the StatisticsHandler in jetty.xml as mentioned in the documentation. It was added as follows:
<Get id="oldhandler" name="handler" />
<Set name="handler">
  <New id="StatsHandler" class="org.eclipse.jetty.server.handler.StatisticsHandler">
    <Set name="handler"><Ref refid="oldhandler" /></Set>
  </New>
</Set>
But even then, restarting Jetty didn't solve my problem. Can someone tell me what I am doing wrong, or what needs to be done to enable graceful shutdown in Jetty?
Thanks in advance.
Graceful shutdown has nothing to do with StatisticsHandler (or its module); it's controlled by the org.eclipse.jetty.server.ShutdownMonitor object and its interaction with the org.eclipse.jetty.server.Server instance.
Its behavior is determined by the ${jetty.base} configuration, and the shutdown is initiated by the jetty-distribution's start.jar, which talks directly to the active ShutdownMonitor that was configured when the Server instance was started.
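For reference, the stopTimeout and stopAtShutdown values mentioned in the question are properties of that Server instance; in jetty.xml the equivalent configuration would look roughly like this (a sketch reusing the question's values, not recommended settings):

<Configure id="Server" class="org.eclipse.jetty.server.Server">
  <!-- register a JVM shutdown hook so the Server is stopped cleanly -->
  <Set name="stopAtShutdown">true</Set>
  <!-- give in-flight work up to 15 seconds to finish when stopping -->
  <Set name="stopTimeout">15000</Set>
</Configure>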

Any known issues with ActiveMQ running on SGI

I'm having lots of issues trying to use ActiveMQ, and was wondering if there are any known issues running on SGI hardware - specifically a UV2K? Are there any known issues running on SUSE Linux?
A lot of threads get started when starting the ActiveMQ service, and I get an error message stating "Insufficient threads configured for selectChannelConnector". I tried limiting the JVM thread stack size, with no joy.
ActiveMQ 5.10 snapshot
I haven't heard of UV2k, but it sounds like something with a lot of processors/cores, right?
Jetty, which powers the web GUI for ActiveMQ, uses roughly one connection acceptor per four cores.
The default thread pool size in Jetty is capped at 256 threads, so if you have more than 1024 cores the thread pool won't be enough for Jetty. A quick Google search shows the UV2K has "up to 4096 cores" (whatever that means; if that is the number Jetty considers, it means 1024 acceptors).
You can alter the Jetty thread pool by placing this element into the "server" bean in conf/jetty.xml. I leave the correct max size up to you to figure out.
<property name="threadPool">
  <bean id="ThreadPool" class="org.eclipse.jetty.util.thread.QueuedThreadPool">
    <property name="minThreads" value="10"/>
    <property name="maxThreads" value="XXX"/>
  </bean>
</property>
Another thing you can try is to manually set the number of acceptors to a lower value (you won't need many for an administrative UI). Look at your Connector bean (same file) and add a property such as <property name="acceptors" value="2"/>.
For obvious reasons, I have not tested the above config on the machine you mention, so consider it a "good guess" rather than a confirmed fact.

How to make Jetty load webdefault.xml when it runs in OSGi?

I'm running a Jetty 8.1.12 server within an OSGi container thanks to jetty-osgi-boot, as explained in the Jetty 8 and Jetty 9 documentation.
I want to configure the default webapp descriptor (etc/webdefault.xml). When I define jetty.home, Jetty picks up etc/jetty.xml but it does not load etc/webdefault.xml.
I do not want to rely on a configuration bundle (through the jetty.home.bundle system property) because I want the config easily modifiable.
I do not want to rely on the Jetty-defaultWebXmlFilePath MANIFEST header for the same reason, plus it would tie my webapp to Jetty.
The jetty-osgi-boot bundle contains a jetty-deployer.xml configuration file with this commented-out chunk:
<!-- Providers of OSGi Apps -->
<Call name="addAppProvider">
  <Arg>
    <New class="org.eclipse.jetty.osgi.boot.OSGiAppProvider">
      <Set name="defaultsDescriptor"><Property name="jetty.home" default="."/>/etc/webdefault.xml</Set>
      ...
which does not work because the OSGiAppProvider class does not exist anymore.
Is there any other way to configure the webdefault.xml file location?
Short answer: I could not get Jetty 8.1.12 to load webdefault.xml under OSGi.
After many hours of googling, source-reading and debugging, I came to these conclusions:
The Jetty-defaultWebXmlFilePath MANIFEST header did not work as expected. It could not resolve a bundle entry path, only an absolute file system path, and an absolute FS path was not a realistic option.
Much of the configuration is hardcoded in ServerInstanceWrapper and the likes of BundleWebAppProvider, so the defaults descriptor location cannot be configured. It ends up as the default, which is, IIRC, org/eclipse/jetty/webapp/webdefault.xml.
I resorted to patching jetty-osgi so that it can read some configuration and apply it to BundleWebAppProvider. FWIW this hack is available on GitHub.

Jetty gets stuck during high-load tests

I have a web service composed of 2 Jetty instances (running the same content) load-balanced by HAProxy. During a test with a medium requests-per-second rate (less than 100) and a big body per request (21 KB), Jetty gets stuck -- it doesn't respond to any request.
The only way to bring Jetty back up is to restart it.
I didn't find any information in the log files (2011_05_20.stderrout.log, 2011_05_20.log) -- it seems to stop logging.
Are there any other useful log files that I should enable in the Jetty configs?
Has anyone ever experienced this weird behaviour?
Could I retrieve some info about thread status from Jetty? (I'm not sure whether requests are being rejected because all threads are busy.)
Thanks in advance!
How many threads have you specified in jetty.xml? I think by default (at least for embedded Jetty) the maximum number of threads is set to around 50. You can change this either programmatically or via jetty.xml. Rather than just setting the max to a high number, you should figure out a correct value depending on server resources and load requirements.
<Configure id="Server" class="org.eclipse.jetty.server.Server">
  <!-- =========================================================== -->
  <!-- Server Thread Pool                                          -->
  <!-- =========================================================== -->
  <Set name="ThreadPool">
    <New class="org.eclipse.jetty.util.thread.QueuedThreadPool">
      <!-- initial threads set to 50 -->
      <Set name="minThreads">50</Set>
      <!-- the thread pool will grow only up to 768 -->
      <Set name="maxThreads">768</Set>
    </New>
  </Set>
</Configure>
Use something like jVisualVM or BTrace to find out how many threads are in your thread pool. Here's a link to a BTrace script that prints out thread counts: https://github.com/joachimhs/EurekaJ/blob/master/EurekaJ.Scripts/btrace/1.2/btraceScripts/ThreadCounter.java
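If you can reach the Server object (for example with embedded Jetty), a small sketch like this can also report the pool state directly; it assumes the pool is the default QueuedThreadPool:

// Print basic statistics from the server's thread pool
QueuedThreadPool pool = (QueuedThreadPool) server.getThreadPool();
System.out.printf("threads=%d idle=%d queued jobs=%d%n",
        pool.getThreads(), pool.getIdleThreads(), pool.getQueueSize());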