It's a really basic setup; I am using slf4j-simple.
I have the following route:
get("/fail", (req, res) -> {
throw new RuntimeException("fail");
}
);
As expected, it returns a 500 Internal Server Error.
However, the logs show nothing about this. How can I get these bubbled exceptions to log?
These are the only logs I see:
[Thread-0] INFO org.eclipse.jetty.util.log - Logging initialized @164ms
[Thread-0] INFO spark.embeddedserver.jetty.EmbeddedJettyServer - == Spark has ignited ...
[Thread-0] INFO spark.embeddedserver.jetty.EmbeddedJettyServer - >> Listening on 0.0.0.0:4567
[Thread-0] INFO org.eclipse.jetty.server.Server - jetty-9.3.z-SNAPSHOT
[Thread-0] INFO org.eclipse.jetty.server.ServerConnector - Started ServerConnector@35eae602{HTTP/1.1,[http/1.1]}{0.0.0.0:4567}
[Thread-0] INFO org.eclipse.jetty.server.Server - Started @259ms
You can implement your logging inside the exception mapper.
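For example, with Spark you can register an exception handler for the exception type (or Exception.class to catch everything) and log the throwable there yourself. This is a minimal sketch, assuming slf4j-simple is already on the classpath; the class name and response body are only placeholders:

import static spark.Spark.exception;
import static spark.Spark.get;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class FailingRouteExample {

    private static final Logger LOG = LoggerFactory.getLogger(FailingRouteExample.class);

    public static void main(String[] args) {
        get("/fail", (req, res) -> {
            throw new RuntimeException("fail");
        });

        // Called for any RuntimeException that bubbles out of a route.
        // Once a handler is registered, the response is up to you,
        // so set the status and body here as well.
        exception(RuntimeException.class, (e, req, res) -> {
            LOG.error("Unhandled exception while handling {} {}", req.requestMethod(), req.pathInfo(), e);
            res.status(500);
            res.body("internal server error");
        });
    }
}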
I'm using WSO2 Integration Studio version 8.0.0.
I tried to write a unit test suite for my app, using a mock service for my endpoint.
But when I try to run the test, it fails as shown in the logs below.
My mock service is set up with port = 9090 (I am not sure whether that is right; I found this port in the WSO2 documentation: https://ei.docs.wso2.com/en/7.2.0/micro-integrator/develop/creating-unit-test-suite/):
The test suite is:
<unit-test>
    <artifacts>
        <test-artifact>
            <artifact>/LmaAPIConfigs/src/main/synapse-config/api/LmaAPI.xml</artifact>
        </test-artifact>
        <supportive-artifacts/>
        <registry-resources/>
        <connector-resources/>
    </artifacts>
    <test-cases>
        <test-case name="TestMock">
            <input>
                <request-path>/currency</request-path>
                <request-method>POST</request-method>
                <request-protocol>http</request-protocol>
                <payload><![CDATA[{"currency": "USD"}]]></payload>
                <properties>
                    <property name="Content-Type" scope="transport" value="application/json"/>
                </properties>
            </input>
            <assertions>
                <assertEquals>
                    <actual>$body</actual>
                    <expected><![CDATA[<jsonObject><r030>840</r030><txt>Долар США</txt><rate>36.5686</rate><cc>USD</cc><exchangedate>]]></expected>
                    <message>not equals</message>
                </assertEquals>
                <assertNotNull>
                    <actual>$body</actual>
                    <message>body is null</message>
                </assertNotNull>
            </assertions>
        </test-case>
    </test-cases>
    <mock-services>
        <mock-service>/LmaAPI/LmaAPIConfigs/test/resources/mock-services/Exchange.xml</mock-service>
    </mock-services>
</unit-test>
The logs from the wso2carbon.log file:
[2022-09-19 08:49:43,467] INFO {org.apache.synapse.unittest.UnitTestingExecutor} - Start processing test-case handler
[2022-09-19 08:49:43,467] INFO {org.apache.synapse.unittest.UnitTestingExecutor} - Unit testing agent checks transport Pass-through HTTP Listener port
[2022-09-19 08:49:43,487] INFO {org.apache.synapse.transport.passthru.core.PassThroughListeningIOReactorManager} - Pass-through EI_INTERNAL_HTTPS_INBOUND_ENDPOINT Listener started on 0:0:0:0:0:0:0:0:9164
[2022-09-19 08:49:43,495] INFO {org.apache.synapse.unittest.SynapseTestcaseDataReader} - Artifact data from descriptor data read successfully
[2022-09-19 08:49:43,495] INFO {org.apache.synapse.unittest.SynapseTestcaseDataReader} - Test case data from descriptor data read successfully
[2022-09-19 08:49:43,497] INFO {org.apache.synapse.unittest.SynapseTestcaseDataReader} - Mock service data from descriptor data read successfully
[2022-09-19 08:49:43,498] INFO {org.apache.synapse.unittest.ConfigModifier} - Mock service creator ready to start service for Exchange
[2022-09-19 08:49:43,530] INFO {org.apache.synapse.unittest.MockServiceCreator} - Mock service started for Exchange in - http://localhost:9090/get-nbu-exchange
[2022-09-19 08:49:43,530] INFO {org.apache.synapse.unittest.ConfigModifier} - Thread waiting for mock service(s) starting
[2022-09-19 08:49:43,609] ERROR {org.apache.synapse.unittest.UnitTestingExecutor} - Failed to get input stream from TCP connection java.io.EOFException
at java.base/java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2842)
at java.base/java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3337)
at java.base/java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:925)
at java.base/java.io.ObjectInputStream.<init>(ObjectInputStream.java:368)
at org.apache.synapse.unittest.RequestHandler.readData(RequestHandler.java:112)
at org.apache.synapse.unittest.RequestHandler.run(RequestHandler.java:74)
at java.base/java.lang.Thread.run(Thread.java:834)
[2022-09-19 08:49:43,609] ERROR {org.apache.synapse.unittest.UnitTestingExecutor} - Error while reading data from received message java.lang.NullPointerException
at org.apache.synapse.unittest.SynapseTestcaseDataReader.readAndStoreArtifactData(SynapseTestcaseDataReader.java:148)
at org.apache.synapse.unittest.RequestHandler.preProcessingData(RequestHandler.java:137)
at org.apache.synapse.unittest.RequestHandler.run(RequestHandler.java:80)
at java.base/java.lang.Thread.run(Thread.java:834)
[2022-09-19 08:49:43,609] ERROR {org.apache.synapse.unittest.UnitTestingExecutor} - Reading Synapse testcase data failed
[2022-09-19 08:49:43,609] ERROR {org.apache.synapse.unittest.UnitTestingExecutor} - Error while running client request in test agent java.net.SocketException: Software caused connection abort: socket write error
at java.base/java.net.SocketOutputStream.socketWrite0(Native Method)
at java.base/java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:110)
at java.base/java.net.SocketOutputStream.write(SocketOutputStream.java:150)
at java.base/java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1883)
at java.base/java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1792)
at java.base/java.io.ObjectOutputStream.writeNonProxyDesc(ObjectOutputStream.java:1287)
at java.base/java.io.ObjectOutputStream.writeClassDesc(ObjectOutputStream.java:1232)
at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1428)
at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179)
at java.base/java.io.ObjectOutputStream.writeFatalException(ObjectOutputStream.java:1583)
at java.base/java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:352)
at org.apache.synapse.unittest.RequestHandler.writeData(RequestHandler.java:238)
at org.apache.synapse.unittest.RequestHandler.run(RequestHandler.java:92)
at java.base/java.lang.Thread.run(Thread.java:834)
[2022-09-19 08:49:43,830] INFO {io.netty.handler.logging.LoggingHandler} - [id: 0x00a2ce58] REGISTERED
[2022-09-19 08:49:43,830] INFO {io.netty.handler.logging.LoggingHandler} - [id: 0x00a2ce58] BIND: localhost/127.0.0.1:9090
[2022-09-19 08:49:43,830] INFO {io.netty.handler.logging.LoggingHandler} - [id: 0x00a2ce58, L:/127.0.0.1:9090] ACTIVE
[2022-09-19 08:49:44,040] INFO {org.apache.synapse.unittest.ConfigModifier} - Mock service(s) are started with given ports
[2022-09-19 08:49:44,042] INFO {org.apache.synapse.unittest.UnitTestingExecutor} - Main test artifact deployment started
[2022-09-19 08:49:44,042] INFO {io.netty.handler.logging.LoggingHandler} - [id: 0x00a2ce58, L:/127.0.0.1:9090] READ: [id: 0x7fabaabd, L:/127.0.0.1:9090 - R:/127.0.0.1:61051]
[2022-09-19 08:49:44,042] INFO {io.netty.handler.logging.LoggingHandler} - [id: 0x00a2ce58, L:/127.0.0.1:9090] READ COMPLETE
[2022-09-19 08:49:44,326] INFO {org.apache.synapse.api.API} - {api:LmaAPI} Initializing API: LmaAPI
[2022-09-19 08:49:44,326] INFO {org.apache.synapse.core.axis2.Axis2SynapseEnvironment} - Continuation call is set to true
[2022-09-19 08:49:44,326] INFO {org.apache.synapse.deployers.APIDeployer} - API named 'LmaAPI' has been deployed from file : LmaAPI
[2022-09-19 08:49:44,326] INFO {org.apache.synapse.unittest.TestingAgent} - Primary test API artifact deployed successfully
[2022-09-19 08:49:44,326] INFO {org.apache.synapse.unittest.UnitTestingExecutor} - Synapse testing agent ready to mediate test cases through deployments
[2022-09-19 08:49:44,326] INFO {org.apache.synapse.unittest.TestingAgent} - 1 Test case(s) ready to execute
[2022-09-19 08:49:44,326] INFO {org.apache.synapse.unittest.UnitTestingExecutor} - Invoking URI - http://localhost:8290/lma/currency
[2022-09-19 08:49:44,467] INFO {org.apache.synapse.mediators.builtin.LogMediator} - {api:LmaAPI} uri.var.currency = USD
[2022-09-19 08:49:44,472] INFO {org.apache.synapse.mediators.builtin.LogMediator} - {api:LmaAPI} To: /lma/currency, MessageID: urn:uuid:898e7062-97b1-4179-a6d0-a3a63106455f, correlation_id: 898e7062-97b1-4179-a6d0-a3a63106455f, Direction: request, LmaAPI_currency = ERROR RESPONSE, Payload: {"currency": "USD"}
[2022-09-19 08:49:44,498] INFO {org.apache.synapse.unittest.Assertor} - Assert Equals - assert property for services started
[2022-09-19 08:49:44,514] INFO {org.apache.synapse.unittest.Assertor} - Service Assert Expression - $body
[2022-09-19 08:49:44,514] INFO {org.apache.synapse.unittest.Assertor} - Service mediated result for Actual - <jsonObject><currency>USD</currency></jsonObject>
[2022-09-19 08:49:44,514] INFO {org.apache.synapse.unittest.Assertor} - Service Assert Expected - <jsonObject><r030>840</r030><txt>ДоларСША</txt><rate>36.5686</rate><cc>USD</cc></jsonObject>
[2022-09-19 08:49:44,514] ERROR {org.apache.synapse.unittest.Assertor} - Service assertEquals for $body expression failed with a message - not equals
[2022-09-19 08:49:44,514] ERROR {org.apache.synapse.unittest.Assertor} - Unit testing failed for the test case - TestMock
[2022-09-19 08:49:44,514] INFO {org.apache.synapse.deployers.APIDeployer} - API named 'LmaAPI' has been undeployed
[2022-09-19 08:49:44,514] INFO {org.apache.synapse.api.API} - {api:LmaAPI} Destroying API: LmaAPI
[2022-09-19 08:49:44,514] INFO {org.apache.synapse.core.axis2.Axis2SynapseEnvironment} - Continuation call is set to false
[2022-09-19 08:49:44,514] INFO {org.apache.synapse.unittest.TestingAgent} - Undeployed all the deployed test and supportive artifacts
[2022-09-19 08:49:44,514] INFO {io.netty.handler.logging.LoggingHandler} - [id: 0x00a2ce58, L:/127.0.0.1:9090] INACTIVE
[2022-09-19 08:49:44,514] INFO {io.netty.handler.logging.LoggingHandler} - [id: 0x00a2ce58, L:/127.0.0.1:9090] UNREGISTERED
[2022-09-19 08:49:44,521] INFO {org.apache.synapse.commons.emulator.core.Emulator} - Emulator shutdown successfully.......
[2022-09-19 08:49:44,521] INFO {org.apache.synapse.unittest.UnitTestingExecutor} - End processing test-case handler
My Synapse artifacts:
My API:
My EP:
The Mock Service port is simply the port you want your mock service to start on; it can be any arbitrary port that is not occupied by another service. So in your case, if no other service is using port 9090, you can use it. As you can see here in the code, a new Emulator is started with this port and the context you provide, to facilitate mocking.
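For reference, the Exchange.xml descriptor referenced under <mock-services> is where that port and context live; the values below match what shows up in your logs. This is only a rough sketch following the mock-service format in the WSO2 unit-test documentation linked in your question; the resource method, sub-context, and response payload are assumptions you would need to adjust:

<mock-service>
    <service-name>Exchange</service-name>
    <port>9090</port>
    <context>/get-nbu-exchange</context>
    <resources>
        <resource>
            <sub-context>/</sub-context>
            <method>GET</method>
            <response>
                <status-code>200</status-code>
                <payload><![CDATA[{"r030":840,"txt":"Долар США","rate":36.5686,"cc":"USD"}]]></payload>
            </response>
        </resource>
    </resources>
</mock-service>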
When you create a mock service, you are mocking an endpoint. So I assume you already have an endpoint defined and are trying to mock it. If that's the case, you need to add that endpoint to the <supportive-artifacts/> section of your test suite, something like the below.
<supportive-artifacts>
    <artifact>PATH_TO_ENDPOINT</artifact>
</supportive-artifacts>
I'm not exactly sure why you are getting Received status code - 202 as the response, but it typically means your integration is unable to run (probably due to the missing endpoint). Also, it's important to note that all the detailed logs are written on the server side, so you won't be able to figure out what's happening just by looking at the Maven log. For example, as you can see here, the server should log a message when your mock service is started. So make sure you check the server-side logs to identify any issues. If you are executing from Integration Studio, the logs are located at <INTEGRATION_STUDIO_HOME>/runtime/microesb/repository/logs/wso2carbon.log
Integration Studio 8.0.0 had some bugs related to unit testing and mock services. AFAIK the issue you observed (receiving a 202 status code) and the one @ycr mentioned (the missing endpoint config in <supportive-artifacts/> [1]) have been fixed in the latest Integration Studio 8.1.0.
Can you try this in the latest updated Integration Studio 8.1.0 pack? You can download the latest version from the official website. Please refer to Get the latest updates to install the latest updates to Integration Studio.
I am doing a simple inner join between two tables, but I keep getting the warning shown below. I saw in other posts that it is OK to ignore the warning, but my jobs end in failure and do not progress.
The tables are pretty big (12 billion rows), but I am only adding three columns from one table to the other.
When I reduce the dataset to a few million rows and run the script in an Amazon SageMaker Jupyter notebook, it works fine. But when I run it on the EMR cluster for the entire dataset, it fails. I even ran the specific Snappy partition that it seemed to fail on, and it worked in SageMaker.
The job has no problem reading from one of the tables; it is the other table that seems to cause the problem:
INFO FileScanRDD: Reading File path: s3a://path/EES_FD_UVA_HIST/date=2019-10-14/part-00056-ddb83da5-2e1b-499d-a52a-cad16e21bd2c-c000.snappy.parquet, range: 0-102777097, partition values: [18183]
20/04/06 15:51:58 WARN S3AbortableInputStream: Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use.
20/04/06 15:51:58 WARN S3AbortableInputStream: Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use.
20/04/06 15:52:03 INFO CoarseGrainedExecutorBackend: Driver commanded a shutdown
20/04/06 15:52:03 INFO MemoryStore: MemoryStore cleared
20/04/06 15:52:03 INFO BlockManager: BlockManager stopped
20/04/06 15:52:03 INFO ShutdownHookManager: Shutdown hook called
This is my code:
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
uvalim=spark.read.parquet("s3://path/UVA_HIST_WITH_LIMITS")
uvaorg=spark.read.parquet("s3a://path/EES_FD_UVA_HIST")
config=uvalim.select('SEQ_ID','TOOL_ID', 'DATE' ,'UL','LL')
uva=uvaorg.select('SEQ_ID', 'TOOL_ID', 'TIME_STAMP', 'RUN_ID', 'TARGET', 'LOWER_CRITICAL', 'UPPER_CRITICAL', 'RESULT', 'STATUS')
uva_config=uva.join(config, on=['SEQ_ID','TOOL_ID'], how='inner')
uva_config.write.mode("overwrite").parquet("s3a://path/Uvaconfig.parquet")
Is there a way to debug this?
Update: Based on Emerson's suggestion:
I ran it with debug logging enabled. It ran for 9 hours and ended in failure before I killed the YARN application.
For some reason the stderr did not have much output.
This is the stderr output:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/mnt/yarn/usercache/hadoop/filecache/301/__spark_libs__1712836156286367723.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
20/04/07 05:04:13 INFO CoarseGrainedExecutorBackend: Started daemon with process name: 5653@ip-10-210-13-51
20/04/07 05:04:13 INFO SignalUtils: Registered signal handler for TERM
20/04/07 05:04:13 INFO SignalUtils: Registered signal handler for HUP
20/04/07 05:04:13 INFO SignalUtils: Registered signal handler for INT
20/04/07 05:04:15 INFO SecurityManager: Changing view acls to: yarn,hadoop
20/04/07 05:04:15 INFO SecurityManager: Changing modify acls to: yarn,hadoop
20/04/07 05:04:15 INFO SecurityManager: Changing view acls groups to:
20/04/07 05:04:15 INFO SecurityManager: Changing modify acls groups to:
20/04/07 05:04:15 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); groups with view permissions: Set(); users with modify permissions: Set(yarn, hadoop); groups with modify permissions: Set()
20/04/07 05:04:15 INFO TransportClientFactory: Successfully created connection to ip-10-210-13-51.ec2.internal/10.210.13.51:35863 after 168 ms (0 ms spent in bootstraps)
20/04/07 05:04:16 INFO SecurityManager: Changing view acls to: yarn,hadoop
20/04/07 05:04:16 INFO SecurityManager: Changing modify acls to: yarn,hadoop
20/04/07 05:04:16 INFO SecurityManager: Changing view acls groups to:
20/04/07 05:04:16 INFO SecurityManager: Changing modify acls groups to:
20/04/07 05:04:16 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); groups with view permissions: Set(); users with modify permissions: Set(yarn, hadoop); groups with modify permissions: Set()
20/04/07 05:04:16 INFO TransportClientFactory: Successfully created connection to ip-10-210-13-51.ec2.internal/10.210.13.51:35863 after 20 ms (0 ms spent in bootstraps)
20/04/07 05:04:16 INFO DiskBlockManager: Created local directory at /mnt1/yarn/usercache/hadoop/appcache/application_1569338404918_1241/blockmgr-2adfe133-fd28-4f25-95a4-2ac1348c625e
20/04/07 05:04:16 INFO DiskBlockManager: Created local directory at /mnt/yarn/usercache/hadoop/appcache/application_1569338404918_1241/blockmgr-3620ceea-8eee-42c5-af2f-6975c894b643
20/04/07 05:04:17 INFO MemoryStore: MemoryStore started with capacity 3.8 GB
20/04/07 05:04:17 INFO CoarseGrainedExecutorBackend: Connecting to driver: spark://CoarseGrainedScheduler@ip-10-210-13-51.ec2.internal:35863
20/04/07 05:04:17 INFO CoarseGrainedExecutorBackend: Successfully registered with driver
20/04/07 05:04:17 INFO Executor: Starting executor ID 1 on host ip-10-210-13-51.ec2.internal
20/04/07 05:04:18 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 34073.
20/04/07 05:04:18 INFO NettyBlockTransferService: Server created on ip-10-210-13-51.ec2.internal:34073
20/04/07 05:04:18 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/04/07 05:04:18 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(1, ip-10-210-13-51.ec2.internal, 34073, None)
20/04/07 05:04:18 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(1, ip-10-210-13-51.ec2.internal, 34073, None)
20/04/07 05:04:18 INFO BlockManager: external shuffle service port = 7337
20/04/07 05:04:18 INFO BlockManager: Registering executor with local external shuffle service.
20/04/07 05:04:18 INFO TransportClientFactory: Successfully created connection to ip-10-210-13-51.ec2.internal/10.210.13.51:7337 after 19 ms (0 ms spent in bootstraps)
20/04/07 05:04:18 INFO BlockManager: Initialized BlockManager: BlockManagerId(1, ip-10-210-13-51.ec2.internal, 34073, None)
20/04/07 05:04:20 INFO CoarseGrainedExecutorBackend: Got assigned task 0
20/04/07 05:04:20 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
20/04/07 05:04:21 INFO TorrentBroadcast: Started reading broadcast variable 0
20/04/07 05:04:21 INFO TransportClientFactory: Successfully created connection to ip-10-210-13-51.ec2.internal/10.210.13.51:38181 after 17 ms (0 ms spent in bootstraps)
20/04/07 05:04:21 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 39.4 KB, free 3.8 GB)
20/04/07 05:04:21 INFO TorrentBroadcast: Reading broadcast variable 0 took 504 ms
20/04/07 05:04:22 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 130.2 KB, free 3.8 GB)
20/04/07 05:04:23 INFO CoarseGrainedExecutorBackend: eagerFSInit: Eagerly initialized FileSystem at s3://does/not/exist in 5155 ms
20/04/07 05:04:25 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 53157 bytes result sent to driver
20/04/07 05:04:25 INFO CoarseGrainedExecutorBackend: Got assigned task 2
20/04/07 05:04:25 INFO Executor: Running task 2.0 in stage 0.0 (TID 2)
20/04/07 05:04:25 INFO Executor: Finished task 2.0 in stage 0.0 (TID 2). 53114 bytes result sent to driver
20/04/07 05:04:25 INFO CoarseGrainedExecutorBackend: Got assigned task 3
20/04/07 05:04:25 INFO Executor: Running task 3.0 in stage 0.0 (TID 3)
20/04/07 05:04:25 ERROR CoarseGrainedExecutorBackend: RECEIVED SIGNAL TERM
20/04/07 05:04:25 INFO DiskBlockManager: Shutdown hook called
20/04/07 05:04:25 INFO ShutdownHookManager: Shutdown hook called
Can you switch to using s3 instead of s3a? I believe s3a is not recommended for use on EMR; a sketch of that change applied to your code follows the documentation link below. Additionally, you can run your job with debug logging enabled:
sc = spark.sparkContext
sc.setLogLevel('DEBUG')
Read the document below, which talks about s3a:
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-file-systems.html
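Applied to the code in the question, the suggestion just amounts to switching the scheme on the S3 paths (a sketch only, using the same placeholder paths from the question):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Same job as in the question, but reading and writing through the EMRFS
# "s3://" scheme instead of "s3a://".
uvalim = spark.read.parquet("s3://path/UVA_HIST_WITH_LIMITS")
uvaorg = spark.read.parquet("s3://path/EES_FD_UVA_HIST")

config = uvalim.select('SEQ_ID', 'TOOL_ID', 'DATE', 'UL', 'LL')
uva = uvaorg.select('SEQ_ID', 'TOOL_ID', 'TIME_STAMP', 'RUN_ID', 'TARGET',
                    'LOWER_CRITICAL', 'UPPER_CRITICAL', 'RESULT', 'STATUS')

uva_config = uva.join(config, on=['SEQ_ID', 'TOOL_ID'], how='inner')
uva_config.write.mode("overwrite").parquet("s3://path/Uvaconfig.parquet")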
So after troubleshooting with the debug logs, I came to the conclusion that it was indeed a memory issue.
The cluster I was using was running out of memory after loading a few days' worth of data. Each day was about 2 billion rows.
So I tried running my script one day at a time, which it seemed to be able to handle.
However, when handling some days where the data was slightly larger (7 billion rows), it gave me a
executor.CoarseGrainedExecutorBackend: RECEIVED SIGNAL TERM
error. This post by Jumpman solved the problem by simply extending the spark.dynamicAllocation.executorIdleTimeout value
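That setting can be raised when the session is built; the value below is arbitrary (a sketch, not the exact configuration that was used):

from pyspark.sql import SparkSession

# Sketch only: keep idle executors around longer so dynamic allocation does not
# reclaim them in the middle of a long join/write stage (the default is 60s).
spark = (
    SparkSession.builder
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.executorIdleTimeout", "300s")
    .getOrCreate()
)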
So thank you @Emerson and @Jumpman!
It's a strange problem.
When I set log4j.category.org.apache.synapse=DEBUG, all is well.
When I change it to log4j.category.org.apache.synapse=INFO, the same proxy service fails.
Here is my configuration:
batchLoadDiagProxy
singleLoadDiagProxy
When the log level is INFO, I get these ERRORs:
[2018-09-19 09:18:50,242] [EI-Core] WARN - PassThroughHttpListener System may be unstable: HTTP ListeningIOReactor encountered a checked exception : too many open files java.io.IOException: too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
at org.apache.http.impl.nio.reactor.DefaultListeningIOReactor.processEvent(DefaultListeningIOReactor.java:170)
at org.apache.http.impl.nio.reactor.DefaultListeningIOReactor.processEvents(DefaultListeningIOReactor.java:153)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:349)
at org.apache.synapse.transport.passthru.core.PassThroughListeningIOReactorManager$1.run(PassThroughListeningIOReactorManager.java:506)
at java.lang.Thread.run(Thread.java:745)
[2018-09-19 09:18:50,271] [EI-Core] ERROR - Axis2Sender Unexpected error during sending message out
java.lang.IllegalStateException: I/O reactor has been shut down
Try opening a command line and, as a superuser, running:
ulimit -n 100000
This will delay the error, but it will not eliminate it. The problem is that INFO outputs more data into different files, and each file handle is not closed before the next is opened, so the OS runs out of file handles quite quickly.
Only enable INFO when necessary for debugging.
I am trying to run the Worker and the Dashboard on the same machine.
The first tool runs correctly, but when I start the second one, the following error is raised:
[2018-03-07 09:59:43,546] INFO {org.wso2.msf4j.internal.websocket.EndpointsRegistryImpl} - Endpoint Registered : /server-stats/{type}
[2018-03-07 09:59:43,636] INFO {org.wso2.carbon.data.provider.DataProviderAPI} - Data Provider Service Component is activated
[2018-03-07 09:59:44,909] INFO {org.wso2.msf4j.internal.websocket.WebSocketServerSC} - All required capabilities are available of WebSocket service component is available.
[2018-03-07 09:59:45,049] INFO {org.wso2.msf4j.internal.MicroservicesServerSC} - All microservices are available
[2018-03-07 09:59:45,346] INFO {org.wso2.transport.http.netty.listener.ServerConnectorBootstrap$HTTPServerConnector} - HTTP(S) Interface starting on host 0.0.0.0 and port 9643
[2018-03-07 09:59:45,939] INFO {org.wso2.carbon.metrics.core.config.model.JmxReporterConfig} - Creating JMX reporter for Metrics with domain 'org.wso2.carbon.metrics'
[2018-03-07 09:59:45,954] INFO {org.wso2.carbon.metrics.core.reporter.impl.AbstractReporter} - Started JMX reporter for Metrics
[2018-03-07 09:59:45,954] INFO {org.wso2.msf4j.analytics.metrics.MetricsComponent} - Metrics Component is activated
[2018-03-07 09:59:45,970] INFO {org.wso2.carbon.databridge.agent.internal.DataAgentDS} - Successfully deployed Agent Server
[2018-03-07 09:59:52,914] ERROR {org.wso2.carbon.kernel.internal.startupresolver.StartupComponentManager} - Runtime Exception occurred while calling onAllRequiredCapabilitiesAvailable of component carbon-datasource-service
com.zaxxer.hikari.pool.PoolInitializationException: Exception during pool initialization: Connection is broken: "java.net.SocketTimeoutException: connect timed out: 169.254.235.125:59336" [90067-196]
at com.zaxxer.hikari.pool.HikariPool.initializeConnections(HikariPool.java:581)
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:152)
at com.zaxxer.hikari.HikariDataSource.<init>(HikariDataSource.java:73)
at org.wso2.carbon.datasource.rdbms.hikari.HikariRDBMSDataSource.getDataSource(HikariRDBMSDataSource.java:56)
at org.wso2.carbon.datasource.rdbms.hikari.HikariDataSourceReader.createDataSource(HikariDataSourceReader.java:74)
at org.wso2.carbon.datasource.core.DataSourceBuilder.buildDataSourceObject(DataSourceBuilder.java:79)
at org.wso2.carbon.datasource.core.DataSourceBuilder.buildDataSourceObject(DataSourceBuilder.java:60)
at org.wso2.carbon.datasource.core.DataSourceBuilder.buildCarbonDataSource(DataSourceBuilder.java:44)
at org.wso2.carbon.datasource.core.DataSourceManager.initDataSources(DataSourceManager.java:153)
at org.wso2.carbon.datasource.core.internal.DataSourceListenerComponent.onAllRequiredCapabilitiesAvailable(DataSourceListenerComponent.java:125)
at org.wso2.carbon.kernel.internal.startupresolver.StartupComponentManager.lambda$notifySatisfiableComponents$7(StartupComponentManager.java:266)
at java.util.ArrayList.forEach(ArrayList.java:1249)
at org.wso2.carbon.kernel.internal.startupresolver.StartupComponentManager.notifySatisfiableComponents(StartupComponentManager.java:252)
at org.wso2.carbon.kernel.internal.startupresolver.StartupOrderResolver$1.run(StartupOrderResolver.java:204)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
Caused by: org.h2.jdbc.JdbcSQLException: Connection is broken: "java.net.SocketTimeoutException: connect timed out: 169.254.235.125:59336" [90067-196]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:168)
at org.h2.engine.SessionRemote.connectServer(SessionRemote.java:457)
at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:367)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:116)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:100)
at org.h2.Driver.connect(Driver.java:69)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:95)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:101)
at com.zaxxer.hikari.pool.HikariPool.addConnection(HikariPool.java:496)
at com.zaxxer.hikari.pool.HikariPool.initializeConnections(HikariPool.java:565)
... 15 more
Caused by: java.net.SocketTimeoutException: connect timed out
at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method)
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:85)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.h2.util.NetUtils.createSocket(NetUtils.java:103)
at org.h2.util.NetUtils.createSocket(NetUtils.java:83)
at org.h2.engine.SessionRemote.initTransfer(SessionRemote.java:115)
at org.h2.engine.SessionRemote.connectServer(SessionRemote.java:453)
... 23 more
Can you please advise?
Thanks.
Can you share the WSO2 SP version you were using when you got this exception?
Also, please check whether the AUTO_SERVER=TRUE config is available in the JDBC URL of the WSO2_METRICS_DB datasource configuration, which can be found in {WSO2_SP_HOME}/conf/worker/deployment.yaml
e.g.:
jdbcUrl: 'jdbc:h2:${sys:carbon.home}/wso2/dashboard/database/metrics;AUTO_SERVER=TRUE'
I configured all datasources in MySQL, and now I can run all SP components.
The issue is related to the H2 database, which does not allow sharing a connection with its default configuration.
I will check the default H2 connection parameters and test again.
I'm trying to configure one Siddhi application on WSO2 Stream Processor with two sources (both files), but that doesn't work (one source works fine). Even when I split the application into two Siddhi applications, it still doesn't work. The logs in both situations are the same, as shown below:
[2018-01-25 08:51:20,583] INFO {org.quartz.impl.StdSchedulerFactory} - Using default implementation for ThreadExecutor
[2018-01-25 08:51:20,586] INFO {org.quartz.simpl.SimpleThreadPool} - Job execution threads will use class loader of thread: Timer-0
[2018-01-25 08:51:20,599] INFO {org.quartz.core.SchedulerSignalerImpl} - Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
[2018-01-25 08:51:20,599] INFO {org.quartz.core.QuartzScheduler} - Quartz Scheduler v.2.3.0 created.
[2018-01-25 08:51:20,600] INFO {org.quartz.simpl.RAMJobStore} - RAMJobStore initialized.
[2018-01-25 08:51:20,601] INFO {org.quartz.core.QuartzScheduler} - Scheduler meta-data: Quartz Scheduler (v2.3.0) 'polling-task-runner' with instanceId 'NON_CLUSTERED'
Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 1 threads.
Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.
[2018-01-25 08:51:20,601] INFO {org.quartz.impl.StdSchedulerFactory} - Quartz scheduler 'polling-task-runner' initialized from an externally provided properties instance.
[2018-01-25 08:51:20,601] INFO {org.quartz.impl.StdSchedulerFactory} - Quartz scheduler version: 2.3.0
[2018-01-25 08:51:20,601] INFO {org.quartz.core.QuartzScheduler} - Scheduler polling-task-runner_$_NON_CLUSTERED started.
[2018-01-25 08:51:20,604] INFO {org.quartz.core.QuartzScheduler} - Scheduler polling-task-runner_$_NON_CLUSTERED started.
[2018-01-25 08:51:20,605] ERROR {org.wso2.carbon.connector.framework.server.polling.PollingTaskRunner} - Exception occurred while scheduling job org.quartz.ObjectAlreadyExistsException: Unable to store Trigger with name: 'scheduledPoll' and group: 'group1', because one already exists with this identification.
at org.quartz.simpl.RAMJobStore.storeTrigger(RAMJobStore.java:415)
at org.quartz.simpl.RAMJobStore.storeJobAndTrigger(RAMJobStore.java:252)
at org.quartz.core.QuartzScheduler.scheduleJob(QuartzScheduler.java:855)
at org.quartz.impl.StdScheduler.scheduleJob(StdScheduler.java:249)
at org.wso2.carbon.connector.framework.server.polling.PollingTaskRunner.start(PollingTaskRunner.java:74)
at org.wso2.carbon.connector.framework.server.polling.PollingServerConnector.start(PollingServerConnector.java:57)
at org.wso2.carbon.transport.remotefilesystem.server.connector.contractimpl.RemoteFileSystemServerConnectorImpl.start(RemoteFileSystemServerConnectorImpl.java:75)
at org.wso2.extension.siddhi.io.file.FileSource.deployServers(FileSource.java:537)
at org.wso2.extension.siddhi.io.file.FileSource.connect(FileSource.java:370)
at org.wso2.siddhi.core.stream.input.source.Source.connectWithRetry(Source.java:130)
at org.wso2.siddhi.core.SiddhiAppRuntime.start(SiddhiAppRuntime.java:335)
at org.wso2.carbon.stream.processor.core.internal.StreamProcessorService.deploySiddhiApp(StreamProcessorService.java:280)
at org.wso2.carbon.stream.processor.core.internal.StreamProcessorDeployer.deploySiddhiQLFile(StreamProcessorDeployer.java:81)
at org.wso2.carbon.stream.processor.core.internal.StreamProcessorDeployer.deploy(StreamProcessorDeployer.java:170)
at org.wso2.carbon.deployment.engine.internal.DeploymentEngine.lambda$deployArtifacts$0(DeploymentEngine.java:291)
at java.util.ArrayList.forEach(ArrayList.java:1249)
at org.wso2.carbon.deployment.engine.internal.DeploymentEngine.deployArtifacts(DeploymentEngine.java:282)
at org.wso2.carbon.deployment.engine.internal.RepositoryScanner.sweep(RepositoryScanner.java:112)
at org.wso2.carbon.deployment.engine.internal.RepositoryScanner.scan(RepositoryScanner.java:68)
at org.wso2.carbon.deployment.engine.internal.DeploymentEngine.start(DeploymentEngine.java:121)
at org.wso2.carbon.deployment.engine.internal.DeploymentEngineListenerComponent.onAllRequiredCapabilitiesAvailable(DeploymentEngineListenerComponent.java:216)
at org.wso2.carbon.kernel.internal.startupresolver.StartupComponentManager.lambda$notifySatisfiableComponents$7(StartupComponentManager.java:266)
at java.util.ArrayList.forEach(ArrayList.java:1249)
at org.wso2.carbon.kernel.internal.startupresolver.StartupComponentManager.notifySatisfiableComponents(StartupComponentManager.java:252)
at org.wso2.carbon.kernel.internal.startupresolver.StartupOrderResolver$1.run(StartupOrderResolver.java:204)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
Does anybody have an idea on how to overcome this?
Thanks.
Thanks for pointing this issue out.
It seems this occurs because two polling tasks are scheduled with the same id.
I have created an issue for this in the Git repository [1]. The fix will be shipped with an update soon.
[1] https://github.com/wso2/product-sp/issues/463
Best Regards!