How to make Armeria exit on an `Address already in use` error?

How do I make sure that my program exits when Armeria fails to start because of an Address already in use error?
I have the following code:
import com.linecorp.armeria.common.HttpRequest;
import com.linecorp.armeria.common.HttpResponse;
import com.linecorp.armeria.server.AbstractHttpService;
import com.linecorp.armeria.server.Server;
import com.linecorp.armeria.server.ServerBuilder;
import com.linecorp.armeria.server.ServiceRequestContext;
import java.util.concurrent.CompletableFuture;
public class TestMain {
    public static void main(String[] args) {
        ServerBuilder sb = Server.builder();
        sb.http(8080);
        sb.service("/greet/{name}", new AbstractHttpService() {
            @Override
            protected HttpResponse doGet(ServiceRequestContext ctx, HttpRequest req) throws Exception {
                String name = ctx.pathParam("name");
                return HttpResponse.of("Hello, %s!", name);
            }
        });
        Server server = sb.build();
        CompletableFuture<Void> future = server.start();
        future.join();
    }
}
When I run it once, everything is fine.
But when I run it a second time I get an Address already in use error, which is of course expected; the program, however, doesn't terminate on its own. This may be how it is supposed to be, but how do I make sure that it terminates upon errors during initialization?
$ gradle run
> Task :run
14:36:04.811 [main] DEBUG io.netty.util.internal.logging.InternalLoggerFactory - Using SLF4J as the default logging framework
14:36:04.815 [main] DEBUG io.netty.util.internal.PlatformDependent0 - -Dio.netty.noUnsafe: false
14:36:04.816 [main] DEBUG io.netty.util.internal.PlatformDependent0 - Java version: 8
14:36:04.817 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
14:36:04.817 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
14:36:04.818 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
...
14:36:05.064 [globalEventExecutor-3-1] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
14:36:05.068 [globalEventExecutor-3-1] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled
14:36:05.068 [globalEventExecutor-3-1] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 0
14:36:05.068 [globalEventExecutor-3-1] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
Exception in thread "main" java.util.concurrent.CompletionException: io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use
at java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:326)
at java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:338)
at java.util.concurrent.CompletableFuture.uniRelay(CompletableFuture.java:911)
at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:953)
at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:739)
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at io.netty.util.concurrent.GlobalEventExecutor$TaskRunner.run(GlobalEventExecutor.java:250)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use
<==========---> 80% EXECUTING [6s]
> :run
^C

It's a bug in Armeria, where Netty event loop threads are not terminated when a Server fails to start up. Here's the fix PR, which should be part of the next release (0.97.0): https://github.com/line/armeria/pull/2288
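Until that release is available, a workaround (a sketch of my own, not an official Armeria recommendation) is to catch the startup failure and exit the JVM explicitly, for example by replacing the last three lines of main with:

Server server = sb.build();
try {
    // join() rethrows the bind failure as a CompletionException.
    server.start().join();
} catch (Throwable t) {
    // Startup failed (e.g. "Address already in use"): log it and force the JVM to exit,
    // since the lingering event loop threads would otherwise keep the process alive.
    t.printStackTrace();
    System.exit(1);
}

System.exit() terminates the process even though the leaked non-daemon Netty threads are still running.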

Related

Using Ray from Flask - init() fails (with core dump)

I'm trying to use Ray from a Flask web application.
The whole thing runs in Docker container.
Ray Version is 0.8.6, Flask 1.1.2
When I start the web application, Ray tries to init twice, it seems, and then the process crashes. I added the memory limitations later on because there were some warnings regarding insufficient shared memory size (the docker-compose setting is "shm_size: '4gb'").
If I start Ray in the same container without using Flask it runs well.
import os
import flask
import ray
from flask import Flask
def create_app(test_config=None):
    app = Flask(__name__, instance_relative_config=True)
    app.config.from_mapping(
        SECRET_KEY='dev',
        DEBUG=True
    )
    # ensure the instance folder exists
    try:
        os.makedirs(app.instance_path)
    except OSError:
        pass
    if ray.is_initialized() == False:
        ray.init(ignore_reinit_error=True,
                 include_webui=False,
                 object_store_memory=1*1024*1014*1024,
                 redis_max_memory=2*1024*1014*1024)
    ray.worker.global_worker.run_function_on_all_workers(setup_ray_logger)

    @app.route('/api/GetAccountRatings', methods=['GET'])
    def GetAccountRatings():
        return ...

    return app
When I start the flask web app with:
export FLASK_APP="mifad.api:create_app()"
export FLASK_ENV=development
flask run --host=0.0.0.0 --port=8084
I get the following error messages:
* Serving Flask app "mifad.api:create_app()" (lazy loading)
* Environment: development
* Debug mode: on
* Running on http://0.0.0.0:8084/ (Press CTRL+C to quit)
* Restarting with stat
Failed to set SIGTERM handler, processes mightnot be cleaned up properly on exit.
* Debugger is active!
* Debugger PIN: 331-620-174
Failed to set SIGTERM handler, processes mightnot be cleaned up properly on exit.
2020-07-06 07:38:10,382 INFO resource_spec.py:212 -- Starting Ray with 59.18 GiB memory available for workers and up to 0.99 GiB for objects. You can adjust these settings with ray.init(memory=<bytes>, object_store_memory=<bytes>).
2020-07-06 07:38:10,610 WARNING services.py:923 -- Redis failed to start, retrying now.
2020-07-06 07:38:10,675 INFO resource_spec.py:212 -- Starting Ray with 59.13 GiB memory available for workers and up to 0.99 GiB for objects. You can adjust these settings with ray.init(memory=<bytes>, object_store_memory=<bytes>).
2020-07-06 07:38:10,781 WARNING services.py:923 -- Redis failed to start, retrying now.
2020-07-06 07:38:11,043 WARNING services.py:923 -- Redis failed to start, retrying now.
2020-07-06 07:38:11,479 ERROR import_thread.py:93 -- ImportThread: Error 111 connecting to 172.29.0.2:44946. Connection refused.
2020-07-06 07:38:11,481 ERROR worker.py:949 -- print_logs: Connection closed by server.
2020-07-06 07:38:11,488 ERROR worker.py:1049 -- listen_error_messages_raylet: Connection closed by server.
2020-07-06 07:38:11,899 ERROR import_thread.py:93 -- ImportThread: Error while reading from socket: (104, 'Connection reset by peer')
2020-07-06 07:38:11,901 ERROR worker.py:1049 -- listen_error_messages_raylet: Connection closed by server.
2020-07-06 07:38:11,908 ERROR worker.py:949 -- print_logs: Connection closed by server.
F0706 07:38:17.390182 4555 4659 service_based_gcs_client.cc:104] Check failed: num_attempts < RayConfig::instance().gcs_service_connect_retries() No entry found for GcsServerAddress
*** Check failure stack trace: ***
# 0x7ff84ae8061d google::LogMessage::Fail()
# 0x7ff84ae81a8c google::LogMessage::SendToLog()
# 0x7ff84ae802f9 google::LogMessage::Flush()
# 0x7ff84ae80511 google::LogMessage::~LogMessage()
# 0x7ff84ae5dde9 ray::RayLog::~RayLog()
# 0x7ff84ac39cea ray::gcs::ServiceBasedGcsClient::GetGcsServerAddressFromRedis()
# 0x7ff84ac39f37 _ZNSt17_Function_handlerIFSt4pairISsiEvEZN3ray3gcs21ServiceBasedGcsClient7ConnectERN5boost4asio10io_contextEEUlvE_E9_M_invokeERKSt9_Any_data
# 0x7ff84ac6ffb7 ray::rpc::GcsRpcClient::Reconnect()
# 0x7ff84ac71da8 _ZNSt17_Function_handlerIFvRKN3ray6StatusERKNS0_3rpc19AddProfileDataReplyEEZNS4_12GcsRpcClient14AddProfileDataERKNS4_21AddProfileDataRequestERKSt8functionIS8_EEUlS3_S7_E_E9_M_invokeERKSt9_Any_dataS3_S7_
# 0x7ff84ac4251d ray::rpc::ClientCallImpl<>::OnReplyReceived()
# 0x7ff84ab96870 _ZN5boost4asio6detail18completion_handlerIZN3ray3rpc17ClientCallManager29PollEventsFromCompletionQueueEiEUlvE_E11do_completeEPvPNS1_19scheduler_operationERKNS_6system10error_codeEm
# 0x7ff84b0b80df boost::asio::detail::scheduler::do_run_one()
# 0x7ff84b0b8cf1 boost::asio::detail::scheduler::run()
# 0x7ff84b0b9c42 boost::asio::io_context::run()
# 0x7ff84ab7db10 ray::CoreWorker::RunIOService()
# 0x7ff84a7763e7 execute_native_thread_routine_compat
# 0x7ff84deed6db start_thread
# 0x7ff84dc1688f clone
F0706 07:38:17.804720 4553 4703 service_based_gcs_client.cc:104] Check failed: num_attempts < RayConfig::instance().gcs_service_connect_retries() No entry found for GcsServerAddress
*** Check failure stack trace: ***
# 0x7fedd65e261d google::LogMessage::Fail()
# 0x7fedd65e3a8c google::LogMessage::SendToLog()
# 0x7fedd65e22f9 google::LogMessage::Flush()
# 0x7fedd65e2511 google::LogMessage::~LogMessage()
# 0x7fedd65bfde9 ray::RayLog::~RayLog()
# 0x7fedd639bcea ray::gcs::ServiceBasedGcsClient::GetGcsServerAddressFromRedis()
# 0x7fedd639bf37 _ZNSt17_Function_handlerIFSt4pairISsiEvEZN3ray3gcs21ServiceBasedGcsClient7ConnectERN5boost4asio10io_contextEEUlvE_E9_M_invokeERKSt9_Any_data
# 0x7fedd63d1fb7 ray::rpc::GcsRpcClient::Reconnect()
# 0x7fedd63d3da8 _ZNSt17_Function_handlerIFvRKN3ray6StatusERKNS0_3rpc19AddProfileDataReplyEEZNS4_12GcsRpcClient14AddProfileDataERKNS4_21AddProfileDataRequestERKSt8functionIS8_EEUlS3_S7_E_E9_M_invokeERKSt9_Any_dataS3_S7_
# 0x7fedd63a451d ray::rpc::ClientCallImpl<>::OnReplyReceived()
# 0x7fedd62f8870 _ZN5boost4asio6detail18completion_handlerIZN3ray3rpc17ClientCallManager29PollEventsFromCompletionQueueEiEUlvE_E11do_completeEPvPNS1_19scheduler_operationERKNS_6system10error_codeEm
# 0x7fedd681a0df boost::asio::detail::scheduler::do_run_one()
# 0x7fedd681acf1 boost::asio::detail::scheduler::run()
# 0x7fedd681bc42 boost::asio::io_context::run()
# 0x7fedd62dfb10 ray::CoreWorker::RunIOService()
# 0x7fedd5ed83e7 execute_native_thread_routine_compat
# 0x7fedd968f6db start_thread
# 0x7fedd93b888f clone
Aborted (core dumped)
What am I doing wrong?
Best regards,
Bernd

Flask-sqlalchemy / uwsgi: DB connection problem when more than one process is used

I have a Flask app running on Heroku with uwsgi server in which each user connects to his own database. I have implemented the solution reported here for a very similar situation. In particular, I have implemented the connection registry as follows:
class DBSessionRegistry():
    _registry = {}

    def get(self, URI, **kwargs):
        if URI not in self._registry:
            current_app.logger.info(f'INFO - CREATING A NEW CONNECTION')
            try:
                engine = create_engine(URI,
                                       echo=False,
                                       pool_size=5,
                                       max_overflow=5)
                session_factory = sessionmaker(bind=engine)
                Session = scoped_session(session_factory)
                a_session = Session()
                self._registry[URI] = a_session
            except ArgumentError:
                raise Exception('Error')
        current_app.logger.info(f'SESSION ID: {id(self._registry[URI])}')
        current_app.logger.info(f'REGISTRY ID: {id(self._registry)}')
        current_app.logger.info(f'REGISTRY SIZE: {len(self._registry.keys())}')
        current_app.logger.info(f'APP ID: {id(current_app)}')
        return self._registry[URI]
In my create_app() I assign a registry to the app:
app.DBregistry = DBSessionRegistry()
and whenever I need to talk to the DB I call:
current_app.DBregistry.get(URI)
where the URI depends on the user. This works nicely if I use uwsgi with a single process. With more processes,
[uwsgi]
processes = 4
threads = 1
sometimes it gets stuck on some requests, returning a 503 error code. I have found that the problem appears when the requests are handled by different processes in uwsgi. This is an excerpt of the log, which I commented to illustrate the issue:
# ... EVERYTHING OK UP TO HERE.
# ALL PREVIOUS REQUESTS HANDLED BY PROCESS pid = 12
INFO in utils: SESSION ID: 139860361716304
INFO in utils: REGISTRY ID: 139860484608480
INFO in utils: REGISTRY SIZE: 1
INFO in utils: APP ID: 139860526857584
# NOTE THE pid IN THE NEXT LINE...
[pid: 12|app: 0|req: 1/1] POST /manager/_save_task =>
generated 154 bytes in 3457 msecs (HTTP/1.1 200) 4 headers in 601
bytes (1 switches on core 0)
# PREVIOUS REQUEST WAS MANAGED BY PROCESS pid = 12
# THE NEXT REQUEST IS FROM THE SAME USER AND TO THE SAME URL.
# SO THERE IS NO NEED FOR CREATING A NEW CONNECTION, BUT INSTEAD...
INFO - CREATING A NEW CONNECTION
# TO THIS POINT, I DON'T UNDERSTAND WHY IT CREATED A NEW CONNECTION.
# THE SESSION ID CHANGES, AS IT IS A NEW SESSION
INFO in utils: SESSION ID: 139860363793168 # <<--- CHANGED
INFO in utils: REGISTRY ID: 139860484608480
INFO in utils: REGISTRY SIZE: 1
# THE APP AND THE REGISTRY ARE UNIQUE
INFO in utils: APP ID: 139860526857584
# uwsgi GIVES UP...
*** HARAKIRI ON WORKER 4 (pid: 11, try: 1) ***
# THE FAILED REQUEST WAS MANAGED BY PROCESS pid = 11
# I ASSUME THIS IS WHY IT CREATED A NEW CONNECTION
HARAKIRI: -- syscall> 7 0x7fff4290c6d8 0x1 0xffffffff 0x4000 0x0 0x0
0x7fff4290c6b8 0x7f33d6e3cbc4
HARAKIRI: -- wchan> poll_schedule_timeout
HARAKIRI !!! worker 4 status !!!
HARAKIRI [core 0] - POST /manager/_save_task since 1587660997
HARAKIRI !!! end of worker 4 status !!!
heroku[router]: at=error code=H13 desc="Connection closed without
response" method=POST path="/manager/_save_task"
DAMN ! worker 4 (pid: 11) died, killed by signal 9 :( trying respawn ...
Respawned uWSGI worker 4 (new pid: 14)
# FROM HERE ON, NOTHINGS WORKS ANYMORE
This behavior is consistent over several attempts: when the pid changes, the request fails. Even with pool_size = 1 in the create_engine call, the issue persists. There is no issue, however, when uwsgi is used with a single process.
I am pretty sure it is my fault; there is something I don't know or don't understand about how uwsgi and/or sqlalchemy work. Could you please help me?
Thanks
What is happening is that you are trying to share memory between processes.
There are some explanations in these posts:
(is it possible to share memory between uwsgi processes running flask app?).
(https://stackoverflow.com/a/45383617/11542053)
You can use an extra layer to store your sessions outside of the app.
For that, you can use uWSGI's SharedArea (https://uwsgi-docs.readthedocs.io/en/latest/SharedArea.html), which is very low level, or you can use other approaches like uWSGI's caching (https://uwsgi-docs.readthedocs.io/en/latest/Caching.html).
Hope it helps.

Selenium Python desired capabilities cannot create a new driver instance

I am trying to use Desired Capabilities in Selenium Python for IE on our 64-bit Windows 2008 machine, as IEDriverServer.exe keeps crashing halfway through the test when I use:
cls.driver = webdriver.Ie(Globals.IEdriver_path)
I want to try Desired Capabilities to see if it works OK this way.
I have the following in my setup:
class BaseTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        desired_caps = {}
        desired_caps['platform'] = 'WINDOWS'
        desired_caps['browserName'] = 'INTERNETEXPLORER'
        # cls.driver = webdriver.Remote('http://192.168.1.103:4444/wd/hub', desired_caps)
        cls.driver = webdriver.Remote('http://127.0.0.1:4444/wd/hub', desired_caps)
        cls.driver = webdriver.Ie(Globals.IEdriver_path)
        cls.driver.get(Globals.URL)
        cls.login_page = login.LoginPage(cls.driver)
I run the Selenium Server jar file as follows:
java -Dwebdriver.ie.driver="C:\\IEDriverServer.exe" -jar
selenium-server-standalone-2.53.0.jar
When I run my Selenium Python test I get the following error:
WebDriverException: Message: The best matching driver provider org.openqa.selenium.ie.InternetExplorerDriver can't create a new driver instance for Capabilities [{browserName=INTERNETEXPLORER, platform=WINDOWS}]
Build info: version: '2.53.0', revision: '35ae25b', time: '2016-03-15 17:00:58'
System info: host: 'JUSTIN-PC', ip: '192.168.1.164', os.name: 'Windows 7', os.arch: 'x86', os.version: '6.1', java.version: '1.8.0_45'
Driver info: driver.version: unknown
Stacktrace:
at org.openqa.selenium.remote.server.DefaultDriverFactory.newInstance (DefaultDriverFactory.java:62)
at org.openqa.selenium.remote.server.DefaultSession$BrowserCreator.call (DefaultSession.java:222)
at org.openqa.selenium.remote.server.DefaultSession$BrowserCreator.call (DefaultSession.java:1)
at java.util.concurrent.FutureTask.run (None:-1)
at org.openqa.selenium.remote.server.DefaultSession$1.run (DefaultSession.java:176)
at java.util.concurrent.ThreadPoolExecutor.runWorker (None:-1)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (None:-1)
at java.lang.Thread.run (None:-1)
If I use:
cls.driver = webdriver.Remote('http://192.168.1.103:4444/wd/hub', desired_caps)
Then I will get the following error:
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond>
How should I set Desired Capabilities in Selenium Python?
Thanks, Riaz
Here is an example to start a remote session with Internet Explorer:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
capabilities = DesiredCapabilities.INTERNETEXPLORER
capabilities.update({'logLevel' : 'ERROR'})
remote_server = "http://127.0.0.1:4444/wd/hub"
driver = webdriver.Remote(remote_server, capabilities)
driver.get('http://stackoverflow.com/')

javax.xml.ws.WebServiceException timeout from soap request using jax-ws

I am getting the following exception when I run my build in Jenkins.
Here is what I did:
I created the Java files by running the following command:
wsimport -keep -verbose http://foobar.com/ws/server?wsdl
I copied the generated files to my project and used them to construct the client stubs.
Here is how my client looks:
public class SoapClient {
    private ServiceSoap soapService;

    // request timeout in milliseconds (90 seconds)
    private static final Integer REQUEST_TIMEOUT_MILLI = 90000;
    // connect timeout in milliseconds (90 seconds)
    private static final Integer CONNECT_TIMEOUT_MILLI = 90000;

    public SoapClient() {
        Service service = new Service();
        service.setHandlerResolver(new JaxWsHandlerResolver());
        soapService = service.getServiceSoap();
        ((BindingProvider) soapService).getRequestContext()
                .put(BindingProviderProperties.REQUEST_TIMEOUT, REQUEST_TIMEOUT_MILLI);
        ((BindingProvider) soapService).getRequestContext()
                .put(BindingProviderProperties.CONNECT_TIMEOUT, CONNECT_TIMEOUT_MILLI);
    }

    public SoapResponse processRequest(String refNum) {
        SoapResponse response = null;
        try {
            response = soapService.requestInfo(refNum);
        }
        catch (Exception ex) {
            LOG.error("error connecting to Soap service, {}", ex.getLocalizedMessage());
            LOG.error(ex.getMessage());
            LOG.error(ex.getCause().toString());
            ex.printStackTrace();
        }
        return response;
    }
}
I have set the connection and request timeouts above and have increased them up to 10 minutes, but I still see the same exception. I have included the jaxws-rt dependency in my pom.xml.
Here it is:
<dependency>
    <groupId>com.sun.xml.ws</groupId>
    <artifactId>jaxws-rt</artifactId>
    <version>2.2.10</version>
</dependency>
Here is the exception I am seeing
2016-05-05 04:43:40,337 [main] ERROR foo.bar.foobar.client.SoapClient - java.net.SocketTimeoutException: Read timed out
javax.xml.ws.WebServiceException: java.net.SocketTimeoutException: Read timed out
at com.sun.xml.ws.transport.http.client.HttpClientTransport.readResponseCodeAndMessage(HttpClientTransport.java:210)
at com.sun.xml.ws.transport.http.client.HttpTransportPipe.createResponsePacket(HttpTransportPipe.java:241)
at com.sun.xml.ws.transport.http.client.HttpTransportPipe.process(HttpTransportPipe.java:232)
at com.sun.xml.ws.transport.http.client.HttpTransportPipe.processRequest(HttpTransportPipe.java:145)
at com.sun.xml.ws.transport.DeferredTransportPipe.processRequest(DeferredTransportPipe.java:110)
at com.sun.xml.ws.api.pipe.Fiber.__doRun(Fiber.java:1136)
at com.sun.xml.ws.api.pipe.Fiber._doRun(Fiber.java:1050)
at com.sun.xml.ws.api.pipe.Fiber.doRun(Fiber.java:1019)
at com.sun.xml.ws.api.pipe.Fiber.runSync(Fiber.java:877)
at com.sun.xml.ws.client.Stub.process(Stub.java:463)
at com.sun.xml.ws.client.sei.SEIStub.doProcess(SEIStub.java:191)
at com.sun.xml.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:108)
at com.sun.xml.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:92)
at com.sun.xml.ws.client.sei.SEIStub.invoke(SEIStub.java:161)
at com.sun.proxy.$Proxy50.requestInfo(Unknown Source)
at
The exception seems to be Sun-related. I have read other posts, but none actually helped me. I have been stuck on this for two days, and any help would be much appreciated.
Thanks
EDIT
I see pretty much the same issue as here, but no solution has been posted.
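Since no answer is recorded here, one thing worth double-checking (a hedged sketch of my own, not a confirmed fix) is whether the timeout properties are actually reaching the JAX-WS runtime that is on the classpath. The BindingProviderProperties constants resolve to plain string keys, and the JDK-bundled RI reads its own com.sun.xml.internal.ws.* variants, so setting both sets of keys rules out a mismatch between the jaxws-rt jar and the JDK implementation:

import java.util.Map;
import javax.xml.ws.BindingProvider;

final class TimeoutConfig {
    // Hypothetical helper: applies the request/connect timeouts via the raw string keys.
    static void applyTimeouts(Object port, int millis) {
        Map<String, Object> ctx = ((BindingProvider) port).getRequestContext();
        ctx.put("com.sun.xml.ws.request.timeout", millis);           // Metro / jaxws-rt
        ctx.put("com.sun.xml.ws.connect.timeout", millis);
        ctx.put("com.sun.xml.internal.ws.request.timeout", millis);  // JDK-bundled RI
        ctx.put("com.sun.xml.internal.ws.connect.timeout", millis);
    }
}

If the SocketTimeoutException still appears only after the full timeout window has elapsed, the service is simply not answering in time, and the fix most likely belongs on the server side rather than in the client.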

Akka Remote: get autogenerated port

I have a Java client, which obtains an autogenerated port. After starting the actor system, I want to access the port.
Config clientConfig = ConfigFactory.parseString("akka.remote.netty.tcp.port = 0")
        .withFallback(ConfigFactory.parseString("akka.remote.netty.tcp.hostname = " + serverHostName))
        .withFallback(ConfigFactory.load("common"));
actorSystem = ActorSystem.create("clientActorSystem", clientConfig);
// how to access the generated port here..!?
The port must already be set, since the log output after ActorSystem.create(...) looks like this:
[INFO] [03/31/2016 14:11:32.042] [main] [akka.remote.Remoting] Starting remoting
[INFO] [03/31/2016 14:11:32.233] [main] [akka.remote.Remoting] Remoting started; listening on addresses :[akka.tcp://actorSystem#localhost:58735]
[INFO] [03/31/2016 14:11:32.234] [main] [akka.remote.Remoting] Remoting now listens on addresses: [akka.tcp://actorSystem#localhost:58735]
If I try to get it via the configuration with actorSystem.settings().config().getValue("akka.remote.netty.tcp.port"), I still get 0 as defined before.
Does anyone have an idea how this port (58735 in the example) can be accessed?
Using Scala, you can get an Option of the port on which the actor system is currently running:
val port = system.provider.getDefaultAddress.port
Hope you will be able to get the same code in Java.
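For reference, a direct Java translation might look like this (a sketch of my own, reusing the clientConfig from the question); the cast to ExtendedActorSystem is what exposes the provider, and as far as I can tell it also sidesteps the access restriction described in the next answer:

import akka.actor.ActorSystem;
import akka.actor.ExtendedActorSystem;

ActorSystem actorSystem = ActorSystem.create("clientActorSystem", clientConfig);
int port = (Integer) ((ExtendedActorSystem) actorSystem)
        .provider().getDefaultAddress().port().get();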
The accepted answer probably worked for older versions of Akka but as of now (version 2.5.x) you will be getting something like:
Error:(22, 18) method provider in trait ActorRefFactory cannot be accessed in akka.actor.ActorSystem
The solution is to use Akka extensions. Here is how I use it:
Example.scala
package example

import akka.actor._

class AddressExtension(system: ExtendedActorSystem) extends Extension {
  val address: Address = system.provider.getDefaultAddress
}

object AddressExtension extends ExtensionId[AddressExtension] {
  def createExtension(system: ExtendedActorSystem): AddressExtension = new AddressExtension(system)

  def hostOf(system: ActorSystem): String = AddressExtension(system).address.host.getOrElse("")
  def portOf(system: ActorSystem): Int = AddressExtension(system).address.port.getOrElse(0)
}

object Main extends App {
  val system = ActorSystem("Main")
  println(AddressExtension.portOf(system))
}
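Since the original question is about a Java client, a translation of the extension above might look like this (my own sketch against the classic Akka 2.5.x Java extension API, not code from the original answer):

import akka.actor.AbstractExtensionId;
import akka.actor.ActorSystem;
import akka.actor.Address;
import akka.actor.ExtendedActorSystem;
import akka.actor.Extension;

public class AddressExtensionJava extends AbstractExtensionId<AddressExtensionJava.Ext> {
    public static final AddressExtensionJava ID = new AddressExtensionJava();

    public static class Ext implements Extension {
        public final Address address;
        Ext(ExtendedActorSystem system) {
            // ExtendedActorSystem exposes the provider, so the bound address is reachable here.
            this.address = system.provider().getDefaultAddress();
        }
    }

    @Override
    public Ext createExtension(ExtendedActorSystem system) {
        return new Ext(system);
    }

    public static int portOf(ActorSystem system) {
        // Address.port() is a Scala Option; fall back to 0 when no remote transport is bound.
        scala.Option<Object> port = ID.get(system).address.port();
        return port.isEmpty() ? 0 : (Integer) port.get();
    }
}

Usage: int port = AddressExtensionJava.portOf(actorSystem);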
}