Stopping a running embedded Jetty server instance at a certain port - jetty

An embedded Jetty server is running at a particular port number. I want to stop that running instance from outside the server process.

Take a look at either the STOP.PORT mechanism or the shutdown handler.
http://wiki.eclipse.org/Jetty/Howto/Secure_Termination
or
http://download.eclipse.org/jetty/stable-7/apidocs/org/eclipse/jetty/server/handler/ShutdownHandler.html
There is example embedded usage in the ShutdownHandler javadoc linked above.
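A minimal sketch along the lines of that javadoc (Jetty 7 API; port 8080 and the token "secret" are placeholders you would choose yourself):

import java.net.HttpURLConnection;
import java.net.URL;
import org.eclipse.jetty.server.Handler;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.HandlerList;
import org.eclipse.jetty.server.handler.ShutdownHandler;

public class StoppableServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);
        HandlerList handlers = new HandlerList();
        // put your real application handler(s) ahead of the ShutdownHandler
        handlers.setHandlers(new Handler[] { new ShutdownHandler(server, "secret") });
        server.setHandler(handlers);
        server.start();
        server.join();
    }

    // Run this from another process to stop the instance listening on `port`.
    static void attemptShutdown(int port, String token) {
        try {
            URL url = new URL("http://localhost:" + port + "/shutdown?token=" + token);
            HttpURLConnection c = (HttpURLConnection) url.openConnection();
            c.setRequestMethod("POST");
            c.getResponseCode();
        } catch (Exception expected) {
            // the server typically exits before replying cleanly
        }
    }
}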
cheers

Related

Are there any known Linux inetd/"wait"-capable web servers with graceful idle shutdown?

I would like to start a web server on-demand as an inetd "tcp/wait" service which shuts itself down after a programmable period of inactivity.
Many web servers already support inetd "tcp/nowait" mode, but this mode has the disadvantage that a new process needs to be forked for every new connection. It is therefore slower and more resource-intensive than running a dedicated server daemon.
A web server supporting inetd's "tcp/wait" would only be launched by inetd for the first request, then serve any number of requests using the same server instance until no requests occurred for some period of idle time, in which case the server instance automatically terminates and lets inetd start it again once the next period of activity starts.
Such a tcp/wait inetd web server should have approximately the same efficiency as a dedicated web server (i.e. one running permanently) during times of activity. However, it will automatically shut down in times of inactivity, saving system resources.
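To make the mechanism concrete, here is a hedged sketch of such a tcp/wait service in Java (the idle period, the reply, and the class name are invented; System.inheritedChannel() is the standard way for a JVM started by inetd to obtain the listening socket):

import java.net.Socket;
import java.net.SocketTimeoutException;
import java.nio.channels.Channel;
import java.nio.channels.ServerSocketChannel;
import java.nio.charset.StandardCharsets;

public class TcpWaitServer {
    static final int IDLE_MS = 60_000; // illustrative idle period before self-termination

    public static void main(String[] args) throws Exception {
        Channel ch = System.inheritedChannel();
        if (!(ch instanceof ServerSocketChannel)) {
            System.err.println("expected to be launched by inetd in tcp/wait mode");
            return;
        }
        ServerSocketChannel listener = (ServerSocketChannel) ch;
        listener.socket().setSoTimeout(IDLE_MS); // accept() now times out when idle
        while (true) {
            try (Socket conn = listener.socket().accept()) {
                // serve one request per connection (a real server would parse HTTP here)
                conn.getOutputStream().write(
                        "HTTP/1.0 200 OK\r\n\r\nhello\r\n".getBytes(StandardCharsets.US_ASCII));
            } catch (SocketTimeoutException idle) {
                return; // no request for IDLE_MS: exit and let inetd relaunch us on demand
            }
        }
    }
}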
Irregular "anti-demand"-driven shutdowns will also clean up any memory leaks from the web server and possibly associated FGCI-services (which would terminate together with the web server).
I know that it is already possible to use systemd's socket activation in combination with lighttpd's -i option to implement what I want.
However, I want a solution that also works without systemd, depending on nothing other than a running Internet superserver, no matter how the latter has been started (inetd/xinetd started by sysvinit, runit, manually, or systemd's socket activation replacing inetd/xinetd).

Remote debug WebLogic cluster

I need to remotely debug a Java EE application running on a WebLogic 10.3.5 cluster. It is important that I debug it whilst clustered, not running on a single box.
I have read docs that state you can either modify the Java options in the start script or set the debug flag in the domain config; however, what I do not understand is how you know which server to connect to when clustered.
My cluster is configured for round robin load balancing so I have no way of knowing which server to connect my debugger to.
Is it possible to connect the remote debugger to the cluster rather than a single server?
It would seem there's no domain or cluster level connection available. One must open a new debugger connection to each server.
It just wasn't obvious in my IDE that clicking debug more than once would open multiple debugger connections.
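For example, you could give each managed server its own JDWP address in its start script (the ports are illustrative; JAVA_OPTIONS is the variable WebLogic start scripts conventionally use), then attach one debugger per port:

JAVA_OPTIONS="${JAVA_OPTIONS} -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8453"

ManagedServer2 would get address=8454, and so on; your IDE then opens one remote-debug session per address.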
You could try something like this:
Create a debug configuration with the host and port number of the Admin Server, which manages the cluster's servers.
Try debugging against the Admin Server.

API to control Linux daemon

What I need is the possibility to control a Linux daemon through some sort of API, for example check whether a certain daemon is running, start/stop/restart it, etc.
Is there any Linux library that provides this functionality?
You could also use D-Bus or SNMP. However, most daemons just write their PID to some file under /var/run/ and accept the SIGTERM signal to stop, and the SIGHUP signal to reload their configuration files (usually under /etc/).
Notice that if you adopt the usual convention that your daemon program mydprog writes its pid in /var/run/mydprog.pid, some other program could read that pid there and check, using kill(2) with a 0 signal, that the daemon process is running. You might also access some pseudo-files under /proc/1234/ (where 1234 is the daemon's pid), notably /proc/1234/status; see proc(5) for more.
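For instance, a minimal Java sketch of that check (ProcessHandle.of, Java 9+, plays the role of kill(2) with signal 0; Files.readString needs Java 11; mydprog is the hypothetical daemon from above):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CheckDaemon {
    public static void main(String[] args) throws IOException {
        // read the pid that the daemon wrote, following the /var/run convention
        long pid = Long.parseLong(Files.readString(Path.of("/var/run/mydprog.pid")).trim());
        boolean alive = ProcessHandle.of(pid).map(ProcessHandle::isAlive).orElse(false);
        System.out.println("mydprog " + (alive ? "is running" : "is not running (stale pidfile?)"));
        // ProcessHandle.of(pid).ifPresent(ProcessHandle::destroy); // would deliver SIGTERM on Linux
    }
}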
You can also design your daemon so that it answers queries with status information, e.g. using some JSON-RPC protocol on a unix(7) or tcp(7) socket. You might consider using HTTP through an HTTP server library like libonion, or any other message-passing or remote procedure call protocol.
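A sketch of that idea in Java 16+ (the socket path and the reply format are invented for illustration):

import java.net.StandardProtocolFamily;
import java.net.UnixDomainSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class StatusSocket {
    public static void main(String[] args) throws Exception {
        Path sock = Path.of("/run/mydprog.sock"); // invented path
        Files.deleteIfExists(sock);
        try (ServerSocketChannel server = ServerSocketChannel.open(StandardProtocolFamily.UNIX)) {
            server.bind(UnixDomainSocketAddress.of(sock));
            while (true) {
                try (SocketChannel client = server.accept()) {
                    // a real daemon might speak full JSON-RPC; this just reports liveness
                    String reply = "{\"status\":\"running\",\"pid\":" + ProcessHandle.current().pid() + "}\n";
                    client.write(ByteBuffer.wrap(reply.getBytes(StandardCharsets.UTF_8)));
                }
            }
        }
    }
}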
The short answer is no.
Some daemons might have an API, but that will be specific to that daemon.
You can run /etc/init.d/<daemon_name> start|stop|status to start, stop, or get the status of most daemons.

Jrun ColdFusion service intermittently fails to start

We occasionally have a problem where we attempt to start the Jrun service and it fails with the following two errors:
error JRun Naming Service unable to start on port 2902
java.net.BindException: Port in use by another service or process: 2902
info No JDBC data sources have been configured for this server (see jrun-resources.xml)
error java.net.BindException: Port in use by another service or process: 8300
We then have to reboot the machine and Jrun comes up with no problem. This is very intermittent - happens perhaps one out of every 10 times we restart Jrun services.
I saw another reference on StackOverflow that if a Windows service takes longer than 30 seconds to start, Windows shuts down the startup process. Perhaps that is the issue here? The logs indeed indicate that these errors are thrown 37+ seconds after the restart command is issued.
We are on a 64bit platform on WinServer 2008.
Thanks!
We've been experiencing a similar problem on some of our servers. Unfortunately, netstat never indicated any sort of actual port conflict for us. My suspicion is that it's related to our recent deployment of a ColdFusion "cumulative hotfix" to our servers. We use the multi-server edition of CF 8.0.1 enterprise with a large number of instances on each machine -- each with its own JVM and its own distinct set of ports. Each CF instance is attached to its own IIS website and runs as its own Windows Service.
Within the past few weeks, we started getting similar "port in use" exceptions on startup, on our 32-bit machines as well as our 64-bit machines, all of which are running Windows Server 2003. I found several possible culprits and tried the following:
In jrun-jms.xml for each CF instance, there's an entry for the RMI transport layer that reads <port>0</port> -- which, according to the JRun documentation, means "choose a random port." I made that non-random and distinct per instance (in the 2600-2650 range) and restarted each instance. Things improved temporarily, perhaps coincidentally.
In the same file, under the entry for the TCPIP transport layer, every instance defaulted to <port>2522</port> -- so I changed those to distinct ports per instance in the 2500-2550 range and restarted each instance. That didn't seem to help at all.
I tried researching whether ports in the 2500-3000 range might be used for any other purpose, and I couldn't find anything obvious, and besides, netstat wasn't telling me that any of my choices were in use.
I found something online about Windows designating ports from 1024 to 5000 as the "dynamic port" range, so I added 10000 to the port numbers I had set in jrun-jms.xml and restarted each instance again. Still didn't help.
I tried changing the port in jndi.properties, also by adding 10000 to the port numbers. Unfortunately this meant wiping out all my wsconfig connections to IIS and creating them again from scratch. I had to edit wsconfig_jvm.config as well, adding -DWSConfig.PortScanStartPort=12900 to java.args, so it could detect my CF instances. (By default it only scans ports 2900-3000. See bpurcell.org for details. It's an old post but still relevant.) So far so good!
My best guess is that Adobe (or MS Windows) changed the way some of its code grabs "random" ports. But all I know for sure so far is that the steps outlined above appear to have fixed the problem.
Have you verified that the services are in fact stopping? Task manager should show no instances of jrun.exe. You can also check to see what is bound to that port by opening a command window and running
netstat -a -b
This will list all your open ports, plus what program is using them. You can also use
netstat -a -o
which does the same thing as the above, but will list the process ID instead of the program name. You can then cross-reference those with Task Manager. You'll need to enable showing PIDs in Task Manager by going to View->Select Columns and making sure PID is checked. My guess would be that the jrun processes are not shutting down in a timely fashion.
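When netstat shows nothing, one more illustrative check is to try binding the suspect ports yourself (2902 and 8300 are the ports from the errors above); whichever bind fails is still held by some process:

import java.io.IOException;
import java.net.ServerSocket;

public class PortProbe {
    public static void main(String[] args) {
        for (int port : new int[] { 2902, 8300 }) {
            try (ServerSocket s = new ServerSocket(port)) {
                System.out.println(port + " is free");
            } catch (IOException e) {
                System.out.println(port + " is in use: " + e.getMessage());
            }
        }
    }
}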

Remote Shutdown without (!) RPC Service

There are different ways of shutting down a computer remotely.
Here are three I know of:
Invoking the Shutdown method of the Win32_OperatingSystem class through a remote WMI connection
Using the Microsoft Windows shutdown.exe
Letting your (whatever).exe copy itself to the system folder on the target machine, register itself as a service, and start it remotely with parameters so that it initiates a local shutdown.
Number 3 is what Sysinternals does, for example. However, it requires that you have file & printer sharing active so that it is able to copy itself to the target and invoke the service.
Number 2 works almost everywhere... but it also needs file & printer sharing enabled, because that activates the RPC service which is needed for remotely invoking the shutdown.
As far as I can tell, even Number 1, the WMI solution, not only needs WMI installed on the target, but also the RPC service enabled.
My problem is:
I need a solution that allows me to shutdown a remote computer without RPC being enabled on it.
Is there a way?
Note: A way within a context of a business solution ;-)
I believe that you can use IPMI for such tasks. It requires hardware support though. We used it for lights-out management over a serial port in a solution a few years ago. We had some issues with the hardware support for soft shutdown since it requires some integration with the OS. From what I remember, you can mimic the hardware reaction to pressing the power button using a network packet sent by an IPMI utility. HTH.
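For example, with the common ipmitool utility (the host and credentials are placeholders; "power soft" requests the OS-mediated orderly shutdown mentioned above):

ipmitool -I lanplus -H bmc.example.com -U admin -P secret chassis power soft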