I'm having lots of issues trying to use ActiveMQ, and was wondering if there are any known issues running on SGI hardware, specifically a UV2k. Are there any known issues running on SUSE Linux?
A large number of threads are started when the ActiveMQ service starts, and I get an error message stating "Insufficient threads configured for selectChannelConnector". I tried limiting the JVM thread stack size, with no joy.
ActiveMQ 5.10 snapshot
I haven't heard of UV2k, but it sounds like something with a lot of processors/cores, right?
Jetty, which powers the web GUI for ActiveMQ, uses roughly one connection acceptor per four cores.
Jetty's default thread pool is capped at 256 threads, so with enough cores the acceptors alone can exhaust the pool. A quick Google search shows the UV2k supports "up to 4096 cores" (whatever that means); if that is the number Jetty sees, at one acceptor per four cores it means 1024 acceptors, which is far more than the 256-thread pool can serve.
You can alter the Jetty thread pool by placing this element inside the "server" bean in conf/jetty.xml. I leave it to you to figure out the correct maximum size.
<property name="threadPool">
    <bean id="ThreadPool" class="org.eclipse.jetty.util.thread.QueuedThreadPool">
        <property name="minThreads" value="10"/>
        <property name="maxThreads" value="XXX"/>
    </bean>
</property>
Another thing you can try is to manually set the number of acceptors to a low value, such as 2 (you won't need many for an administrative UI). Look into your Connector bean (same file), and add the property <property name="acceptors" value="2"/>.
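For reference, a hedged sketch of where that property might sit; the connector bean's id, class, and port below are assumptions about a typical ActiveMQ 5.x conf/jetty.xml, so match them against what is actually in your file:

```xml
<bean id="Connector" class="org.eclipse.jetty.server.nio.SelectChannelConnector">
    <!-- default ActiveMQ web console port -->
    <property name="port" value="8161"/>
    <!-- cap acceptor threads instead of letting Jetty derive them from core count -->
    <property name="acceptors" value="2"/>
</bean>
```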
For obvious reasons, I have not tested the above config on the machine you mention, so consider it a "good guess" rather than a confirmed fact.
I am trying to use WSO2 to schedule a polling data call every minute to a REST API my business has, and push that information to our centralized MQTT broker.
I've been reading the documentation for the Streaming Integrator, Micro Integrator, Micro Gateway, and API Manager, and I cannot find any way to schedule REST API calls based on a defined time.
The point of this task is to push data from all our systems into our centralized broker, and afterwards add analysis tools to benefit from the data created by our systems, which at the moment is accessible only to each system itself.
Could someone give me a hint on what the right tools for this would be, and maybe a link to some documentation on how to configure time-based calls, if WSO2 allows it?
You can create a WSO2 EI scheduled task.
You can define a cron expression for the timing, and execute either a sequence or an implementation class.
Example:
<task name="SampleInjectToSequenceTask"
      class="org.apache.synapse.startup.tasks.MessageInjector"
      group="synapse.simple.quartz">
    <trigger interval="5"/>
    <property xmlns:task="http://www.wso2.org/products/wso2commons/tasks"
              name="injectTo"
              value="sequence"/>
    <property xmlns:task="http://www.wso2.org/products/wso2commons/tasks"
              name="sequenceName"
              value="SampleSequence"/>
</task>
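Since the question asks for a call every minute, the interval trigger in a Synapse task definition can also be replaced with a Quartz cron trigger. A hedged sketch (verify the exact attribute against your EI version's task documentation):

```xml
<!-- fires at second 0 of every minute, using Quartz cron syntax -->
<trigger cron="0 0/1 * * * ?"/>
```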
2019-08-01 06:04:43,263 | ERROR | Could not accept connection :
org.apache.activemq.transport.tcp.ExceededMaximumConnectionsException:
Exceeded the maximum number of allowed client connections. See the
'maximumConnections' property on the TCP transport configuration URI
in the ActiveMQ configuration file (e.g., activemq.xml) |
org.apache.activemq.broker.TransportConnector | ActiveMQ Transport
Server Thread Handler:
nio+ssl://b-e13f27f2-1fa3-419f-819c-a24277e973a8-2.mq.us-west-2.amazonaws.com:61617?maximumConnections=100&wireFormat.maxFrameSize=104857600
We are getting the above exception on Amazon MQ. Earlier we were using self-managed ActiveMQ, where we set something like:
<transportConnectors>
    <!-- DOS protection: limit concurrent connections to 1000 and frame size to 100MB -->
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
In Amazon MQ we are unable to find such options, and the broker is throwing this exception. We did check that transportConnector on Amazon MQ supports:
name
updateClusterClients
rebalanceClusterClients
updateClusterClientsOnRemove
Any idea how we can increase the maximum number of connections?
As listed here, that limit can be changed per AWS account.
You will need to open an AWS support ticket requesting a limit increase.
I guess I have to ask: why so many connections?
Large brokers allow 1000 connections, while Micro allows 100. Seeing that your error message mentions 100 connections, are you on a Micro instance? Maybe a Micro instance can't handle the load?
Are the producers/consumers something you control, or is this a third-party app? I would review the code before increasing these limits, if that is something you can do. Connections should be shared as much as possible. Are they being closed correctly when done? Are all your producers opening and maintaining their own connections?
Producer connections should be grouped and shared via the PooledConnectionFactory.
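As a hedged sketch of that pooling (assuming a Spring XML configuration; the bean ids, pool size, and broker endpoint are placeholders, not your actual values):

```xml
<!-- placeholder endpoint: substitute your Amazon MQ broker URL -->
<bean id="amqConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
    <property name="brokerURL" value="ssl://your-broker-endpoint:61617"/>
</bean>

<!-- pool and reuse a small set of connections instead of opening one per producer -->
<bean id="pooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory"
      init-method="start" destroy-method="stop">
    <property name="connectionFactory" ref="amqConnectionFactory"/>
    <property name="maxConnections" value="8"/>
</bean>
```

Producers then obtain connections from the pooled factory, so many producers share a handful of broker connections rather than each holding its own.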
We currently call SOAP web services which send back very large responses.
We use Spring-WS (via WebServiceTemplate) as the JAX-WS client when invoking the web services, and the application runs on JBoss EAP 6.0.
We also currently use SaajSoapMessageFactory. I read in the forums that AxiomSoapMessageFactory should be used rather than SaajSoapMessageFactory (http://docs.spring.io/spring-ws/site/reference/html/common.html) to improve reading performance.
I made the following modification:
Replaced
<bean id="messageFactory" class="org.springframework.ws.soap.saaj.SaajSoapMessageFactory">
    <property name="soapVersion">
        <util:constant static-field="org.springframework.ws.soap.SoapVersion.SOAP_11"/>
    </property>
</bean>
by
<bean id="messageFactory" class="org.springframework.ws.soap.axiom.AxiomSoapMessageFactory">
    <property name="payloadCaching" value="false"/>
</bean>
This change worked fine as expected. But from a performance perspective, I am getting surprising results.
In a test with 50 users concurrently accessing the web service (indirectly, via a screen that in turn invokes the web service), the overall response time (from the moment the button is clicked to the moment the web service response is displayed back on the screen) dropped from ~27 seconds to ~22 seconds: a good 5-second improvement over SaajSoapMessageFactory.
However, when I ran a 100-user test, the response time increased by 2 seconds, and SaajSoapMessageFactory appears to be better in this case.
Can someone explain the reason for this difference in performance, despite AxiomSoapMessageFactory using streaming and avoiding building the full tree?
I am using a tk10x server to listen for requests from a GPS device on my OpenGTS server. By default the tk10x has a timeout of 60000 ms. I want to remove this timeout; what should I do?
Here are a few excerpts from: http://www.opengts.org/FAQ.html
For Tomcat
This can be changed in the Tomcat default "web.xml" file found in the Tomcat directory "$CATALINA_HOME/conf/web.xml". Here is the section of the "web.xml" file that sets the timeout to 30 minutes:
<session-config>
    <session-timeout>30</session-timeout>
</session-config>
You can change this value to any desired length of time. Tomcat should be restarted after this value has been changed. (Note: setting this value too high may cause excessive resources to be consumed by users who have logged in but are not actually using the system.)
Alternative
This can be configured in the "dcservers.xml" file (or "dcservers/dcserver_XXXXX.xml" file where XXXXX is the DCS id) by setting the TCP timeout values to '0', as follows:
<Property key="tcpIdleTimeoutMS">0</Property>
<Property key="tcpPacketTimeoutMS">0</Property>
<Property key="tcpSessionTimeoutMS">0</Property>
This will cause the DCS (where the above properties were set) to always leave the TCP session open. (Note: each connected TCP connection consumes system resources: memory, threads, file handles, etc. Having many such connected TCP sessions may significantly limit the number of devices that can connect to your server.)
I have a web service composed of two Jetty instances (running the same content) load-balanced by an HAProxy. During a test consisting of a medium request rate (fewer than 100 requests per second), with each request having a large body (21 KB), Jetty gets stuck: it doesn't respond to any request.
The only way to bring Jetty up is restarting it.
I didn't find any information in the log files (2011_05_20.stderrout.log, 2011_05_20.log); it seems to stop logging.
Are there any other useful log files that I should enable in the Jetty configuration?
Has anyone ever experienced this weird behaviour?
Could I retrieve some info about thread status from Jetty (I'm not sure whether the request is rejected when all threads are busy)?
Thanks in advance!
How many threads have you specified in jetty.xml? I think by default (at least for embedded Jetty), the maximum number of threads is around 50. You can change this either programmatically or via jetty.xml. Rather than just setting the maximum to a high number, you should figure out a correct value depending on server resources and load requirements.
<Configure id="Server" class="org.eclipse.jetty.server.Server">
    <!-- =========================================================== -->
    <!-- Server Thread Pool                                          -->
    <!-- =========================================================== -->
    <Set name="ThreadPool">
        <New class="org.eclipse.jetty.util.thread.QueuedThreadPool">
            <!-- initial threads set to 50 -->
            <Set name="minThreads">50</Set>
            <!-- the thread pool will grow only up to 768 -->
            <Set name="maxThreads">768</Set>
        </New>
    </Set>
</Configure>
Use something like jVisualVM or BTrace to find out how many threads are in your thread pool. Here's a link to a BTrace script that prints thread counts: https://github.com/joachimhs/EurekaJ/blob/master/EurekaJ.Scripts/btrace/1.2/btraceScripts/ThreadCounter.java