2019-08-01 06:04:43,263 | ERROR | Could not accept connection :
org.apache.activemq.transport.tcp.ExceededMaximumConnectionsException:
Exceeded the maximum number of allowed client connections. See the
'maximumConnections' property on the TCP transport configuration URI
in the ActiveMQ configuration file (e.g., activemq.xml) |
org.apache.activemq.broker.TransportConnector | ActiveMQ Transport
Server Thread Handler:
nio+ssl://b-e13f27f2-1fa3-419f-819c-a24277e973a8-2.mq.us-west-2.amazonaws.com:61617?maximumConnections=100&wireFormat.maxFrameSize=104857600
We are getting the above exception on Amazon MQ. Earlier we were using ActiveMQ, where we configured something like:
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
In Amazon MQ we are unable to find such options, and the broker is throwing the exception. We did check; the transportConnector on Amazon MQ supports:
name
updateClusterClients
rebalanceClusterClients
updateClusterClientsOnRemove
Any idea how we can increase the maximum number of connections?
As listed here, that limit can be changed per AWS account.
You will need to open an AWS support ticket requesting a limit increase.
I guess I have to ask: why so many connections?
A Large instance allows 1000 connections, while a Micro allows 100. Seeing in your error message that the limit is 100, are you on a Micro? Maybe a Micro instance can't handle the load?
Are the producers/consumers something you control, or is this a third-party app? I would review the code before increasing these limits, if that is something you can do. Connections should be shared as much as possible. Are they being closed correctly when done? Are all your producers opening and maintaining their own connections?
Producer connections should be grouped and shared with the PooledConnectionFactory.
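A minimal sketch of what that looks like on the JVM (Scala here; the broker endpoint and pool size are placeholders), assuming the activemq-pool artifact is on the classpath:

import javax.jms.Session
import org.apache.activemq.ActiveMQConnectionFactory
import org.apache.activemq.pool.PooledConnectionFactory

// One pooled factory per application; all producers share its connections.
val cf = new ActiveMQConnectionFactory(
  "ssl://b-xxxxxxxx.mq.us-west-2.amazonaws.com:61617") // placeholder endpoint
val pooled = new PooledConnectionFactory()
pooled.setConnectionFactory(cf)
pooled.setMaxConnections(8) // caps broker connections regardless of producer count

val connection = pooled.createConnection()
connection.start()
val session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)
// ... create producers from the session; closing a pooled connection
// returns it to the pool instead of tearing down the TCP connection.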
I have developed an application that does simple publish-subscribe messaging with the AWS IoT Core service.
As per the requirement of the AWS IoT SDK, I need to call aws_iot_mqtt_yield() frequently.
Following is the description of the function aws_iot_mqtt_yield:
Called to yield the current thread to the underlying MQTT client. This
time is used by the MQTT client to manage PING requests to monitor the
health of the TCP connection as well as periodically check the socket
receive buffer for subscribe messages. Yield() must be called at a
rate faster than the keepalive interval. It must also be called at a
rate faster than the incoming message rate as this is the only way the
client receives processing time to manage incoming messages. This is
the outer function which does the validations and calls the internal
yield above to perform the actual operation. It is also responsible
for client state changes
I am calling this function at a period of 1 second.
As it sends PINGs on the TCP connection, it creates a lot of data usage in the long run when the system is idle most of the time.
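For a rough sense of scale (my own back-of-the-envelope numbers, not from the SDK docs): MQTT PINGREQ and PINGRESP packets are only 2 bytes each, but with TCP/IP headers and TLS record overhead a single keepalive round trip can easily cost on the order of 100-200 bytes on the wire; at one exchange per second that works out to roughly 8-17 MB per day spent purely on keepalives.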
My system also runs on LTE, and paying more for idle time is not acceptable for us.
I tried extending the period from 1 second to 30 seconds to limit our data usage, but it adds up to 30 seconds of latency in receiving messages from the cloud.
My requirement is to achieve fast connectivity with low additional data usage for maintaining the connection with AWS.
We have a pretty standard API server written in Go. The HTTP handler unmarshals the request body (JSON) into a protobuf struct and sends it off to be processed.
The service is deployed as ECS containers on AWS, fronted by an ALB. The service has a pretty high request volume, and we observed that about 0.2% of requests fail with messages like this:
read tcp $ECS_CONTAINER_IP:$PORT->$REMOTE_IP:$REMOTE_RANDOM_PORT: i/o timeout
We tracked it down: the error is returned from the jsonpb.Unmarshal method. Our tracing tells us that all these i/o timeout requests take 5s.
Within the container, we ran ss -t | wc -l to count in-flight connections, and the number is quite reasonable (about 200-300, far lower than the nofile ulimit).
Some quick awk/sort/uniq tells us that the number of in-flight requests coming from the ALBs is roughly balanced.
Any idea how we should proceed from here?
I am planning to use throttling in WSO2 EI 6.4.0. I tested the scenario on my local system and ran into some problems; could anyone help? Thanks in advance.
If we restart the WSO2 EI node, the policy does not work as expected. It starts counting again from scratch (suppose the request limit is 10 per hour and the node processed 5 requests before the restart; after restarting it should accept only the remaining 5 requests, but it accepts 10).
Throttling works at the level of a single WSO2 EI node, but suppose a Linux server is running 10 nodes; how do we enforce the throttling policy across the whole server?
How do we take the client IP into account in throttling? If requests come through an F5 load balancer, I need to consider the originating system's IP, not the F5 server's IP.
If we restart the WSO2 EI node, the policy does not work as expected. It starts counting again from scratch (suppose the request limit is 10 per hour and the node processed 5 requests before the restart; after restarting it should accept only the remaining 5 requests, but it accepts 10).
The throttle mediator does not persist the throttle count. Therefore, if you restart the server, the throttle count is reset and starts from zero. In a production environment, frequent server restarts are not expected.
Throttling works at the level of a single WSO2 EI node, but suppose a Linux server is running 10 nodes; how do we enforce the throttling policy across the whole server?
If you want to maintain the throttle count across all the nodes, you need to cluster them. The throttle mediator uses Hazelcast cluster messages to maintain a global count across the cluster.
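As a rough sketch (hostnames, ports, and the domain below are placeholders for your environment), clustering is enabled in conf/axis2/axis2.xml on each node, for example with the well-known-address (wka) membership scheme:

<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
            enable="true">
    <parameter name="membershipScheme">wka</parameter>
    <parameter name="domain">wso2.ei.domain</parameter>
    <members>
        <member>
            <hostName>node1.example.com</hostName>
            <port>4000</port>
        </member>
        <member>
            <hostName>node2.example.com</hostName>
            <port>4000</port>
        </member>
    </members>
</clustering>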
I am trying to implement an HTTP client in my Akka app in order to consume a 3rd party API.
What I am trying to configure are the timeout and the number of retries in case of failure.
Is the below code the right approach to do it?
import scala.concurrent.duration._
import scala.concurrent.Future
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{HttpRequest, HttpResponse}
import akka.http.scaladsl.settings.ConnectionPoolSettings

implicit val system: ActorSystem = ActorSystem()

val timeoutSettings =
  ConnectionPoolSettings(system)
    .withIdleTimeout(10.minutes)
    .withMaxConnections(3)

val responseFuture: Future[HttpResponse] =
  Http().singleRequest(
    HttpRequest(uri = "https://api.com"),
    settings = timeoutSettings
  )
That is not the right approach. (Below I refer to the settings via the .conf file rather than the programmatic approach, but they correspond easily.)
idle-timeout corresponds, at the pool level, to the
time after which an idle connection pool (without pending requests)
will automatically terminate itself
and, at the akka.http.client level, to
The time after which an idle connection will be automatically closed.
So you'd rather want the connecting-timeout setting.
And for the retries, it's the max-retries setting.
The max-connections setting is only:
The maximum number of parallel connections that a connection pool to a
single host endpoint is allowed to establish
See the official documentation.
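If you prefer to stay programmatic, a minimal sketch along the same lines (the retry count and timeout values are illustrative):

import scala.concurrent.duration._
import akka.actor.ActorSystem
import akka.http.scaladsl.settings.{ClientConnectionSettings, ConnectionPoolSettings}

implicit val system: ActorSystem = ActorSystem()

// Retry failed (idempotent) requests up to 3 times and fail the TCP connect
// attempt after 5 seconds, instead of tuning the idle timeout.
val settings =
  ConnectionPoolSettings(system)
    .withMaxRetries(3)
    .withConnectionSettings(
      ClientConnectionSettings(system).withConnectingTimeout(5.seconds))

Pass it as Http().singleRequest(request, settings = settings), as in the question.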
Consider routes containing all the HTTP services
val routes: Route = ...
I wish to throttle the number of requests, so I used Route.handleFlow(routes) to create a flow and called the throttle method with a finite duration.
Finally, I created the HTTP binding using
Http().bindAndHandle(flowObjectAfterThrottling, hostname, port)
When HTTP requests are fired from a loop, the throttling is not obeyed by Akka.
One possibility is that the HTTP requests being "fired from a loop" may be using separate connections. Each incoming connection is throttled at the appropriate rate, but the aggregate throughput is higher than expected.
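To make that concrete, here is a sketch of the setup the question describes (routes and the rate of 10 requests per second are assumed):

import scala.concurrent.duration._
import akka.http.scaladsl.model.{HttpRequest, HttpResponse}
import akka.http.scaladsl.server.Route
import akka.stream.scaladsl.Flow

// Every accepted connection materializes its own copy of this flow, so the
// throttle limits each connection individually; N concurrent connections can
// still push roughly N * 10 requests per second in aggregate.
val flowObjectAfterThrottling: Flow[HttpRequest, HttpResponse, Any] =
  Route.handleFlow(routes).throttle(elements = 10, per = 1.second)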
Use Configurations Instead
You don't need to write custom code to set rate limits for your Route.
If you are only concerned with consumption of a resource, such as disk or RAM, then you can remove the rate logic and use Akka configuration settings instead:
# In application.conf, under akka.http.server:
akka.http.server {
  # The maximum number of concurrently accepted connections when using the
  # `Http().bindAndHandle` methods.
  max-connections = 1024

  # The maximum number of requests that are accepted (and dispatched to
  # the application) on one single connection before the first request
  # has to be completed.
  pipelining-limit = 16
}
This doesn't provide the ability to set a maximum frequency, but it does at least allow you to specify a maximum concurrency, which is usually sufficient for resource protection.
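The same limits can also be applied programmatically at bind time; a sketch, reusing the names from the question:

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.settings.ServerSettings

implicit val system: ActorSystem = ActorSystem()

// Cap concurrent connections and per-connection pipelining for this binding.
val serverSettings =
  ServerSettings(system)
    .withMaxConnections(1024)
    .withPipeliningLimit(16)

Http().bindAndHandle(flowObjectAfterThrottling, hostname, port, settings = serverSettings)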