How to get a Jetty thread dump?

I have an Ubuntu Server 10.10 64-bit machine running a web application on Jetty 6.1.24-6 on Sun's JVM, both installed from the standard Ubuntu repositories.
I'm trying to track down a problem with this server (100% CPU after some time; it might be related to a known bug in NIO's Selector, although switching the connector to the old blocking-IO SocketConnector didn't solve the problem), and I need to take a thread dump.
Unfortunately I'm unable to get the thread dump. I've tried sending SIGQUIT to the process and attaching jstack to it, but neither approach works.
I see no output at all from SIGQUIT (in any of the log files generated by Jetty), and jstack, even when run as root (or jetty) and with "-F", reports that it has attached to the process but then blocks and produces no further output.
How can I obtain the thread dump?

You have to run these commands as the same user the Jetty process runs under. On Ubuntu this user is normally called jetty.
So try
sudo -u jetty jstack <pid>
This will print the thread dump to stdout (your shell).
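For a 100% CPU hunt it helps to capture several dumps a few seconds apart and compare which threads stay busy; a minimal sketch (fill in your own pid):
for i in 1 2 3; do
    sudo -u jetty jstack <pid> > /tmp/jetty-threads-$i.txt
    sleep 10
done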
You can also
sudo -u jetty kill -QUIT <pid>
which will send the thread dump to Jetty's stdout (normally /var/log/jetty/out.log).
To get the pid, use the jps command or ps ax | grep java.
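For example (a sketch; jps ships with the JDK, so it may be missing on a JRE-only install):
jps -l                 # lists JVM pids together with their main class or jar
ps ax | grep '[j]ava'  # the [j] keeps grep itself out of the results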

Did you try VisualVM (/usr/lib/java-6-sun/bin/jvisualvm) with a remote connection? It can capture a thread dump.

Related

How to kill a process in GCP

Oftentimes, a rogue process gets into a busy-spin mode, using up 100% of the CPU. I have a GCP Ubuntu instance with 4 CPU cores and 32 GB of RAM. I still get into this situation of 100% CPU usage, and then I can't even SSH into the VM instance.
Does GCP provide a way of killing the offending process, through a gcloud SDK command or the web console?
As Serhii Rohoza mentioned, GCP does not provide any tool to kill a process.
Instead, you can SSH into your VM instance, figure out which process is eating your CPU, and stop it by executing these commands (see the sketch after this list):
Open a terminal with Ctrl+Alt+T
Execute the command "top"
Note the process using the most CPU
If the process isn't a system process, kill it with "sudo pkill [processname]",
where [processname] is the name of the process you want to kill.
If it is a system process, don't kill it; instead, Google its name and figure out what function it serves in Ubuntu.
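A non-interactive sketch of the same steps, useful when the instance is too loaded for an interactive top (assumes a reasonably recent procps top; the process name is a placeholder):
top -b -n 1 -o %CPU | head -n 12           # one batch snapshot, sorted by CPU
ps -eo pid,pcpu,comm --sort=-pcpu | head   # alternative, sorted by CPU with ps
sudo pkill <processname>                   # only if it is not a system process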

Executable in docker container does not register breakpoints from gdb remote debugging

Remote Setup
I need to debug a complex C++ program which is installed in a docker container, controlled by Kubernetes.
The docker container also provides a gdbserver and exposes the container port 44444.
Host Setup
The gdb client used to control and examine the program runs in another docker container.
This is because the SUSE environment is only available in that container,
not on my Ubuntu 18.04 machine running in a VirtualBox VM.
Local debugging works well
Debugging the program locally in the SUSE docker container works well. The program halts
at the specified breakpoints, and the same breakpoints are specified in remote debugging.
All breakpoints are solely defined in the program's basic source code files, not in any libs.
It has been verified that the executable in the remote docker container is identical
to the one in the host container; it was compiled with debug symbols and unoptimized code (-ggdb -O0).
Problem
The only thing that fails in remote debugging is stopping at the breakpoints defined on the host.
The program in the container is started in the background. When gdbserver attaches to its process id,
the program halts until 'continue' is issued within the gdb host session and forwarded to gdbserver in the remote container.
The program is deployed with basic C++ class files and shared program libraries together with shared project libraries.
It is started with parameters and exits after the job is done.
When the program is started it reads configuration files, connects to a database,
reads database entries, prepares and formats the data into XML-formatted entries, and
writes them into an output file.
HelloWorld remote debugging test works well
To verify that the remote debugging setup and connection via gdbserver port works well,
I created a simple HelloWorld C++ program and copied this into the same remote docker container
and tested the breakpoint behaviour therein.
The remote debug test scenario works when the HelloWorld program is run in the container:
the internal container port 44444 is mapped to the same external port id 44444:
$ kubectl port-forward eric-bss-eb-tools-65c4955565-xdqtx 44444:44444
Forwarding from 127.0.0.1:44444 -> 44444
Forwarding from [::1]:44444 -> 44444
HelloWorld in the remote container is started in the background and sleeps for a few seconds
bash-4.4$ ./HelloWorld &
[1] 1068
gdbserver attaches to the HelloWorld process_id and waits to forward the gdb commands
bash-4.4$ ./gdbserver :44444 --attach 1068 // gdbserver uses the exposed port
Attached; pid = 1068
Listening on port 44444
gdb in the host container is started in the HelloWorld source code folder in TUI mode
$ gdb -tui HelloWorld
reading symbols from HelloWorld...done.
(gdb) b 13
Breakpoint 1 at 0x400b2d: file HelloWorld.cpp, line 13.
(gdb) b 15
Breakpoint 2 at 0x400b37: file HelloWorld.cpp, line 15.
gdb connects to the gdbserver via localhost and (external) port id 44444
(gdb) target remote :44444
(gdb) c
Continuing.
the remote HelloWorld stops at breakpoint 2; variables can be examined;
further gdb commands like 'next' and 'step' can be issued; everything works smoothly
Target program remote debugging doesn't stop at breakpoints
When the target C++ program in the container is debugged with the same scenario, it does not stop at the defined breakpoints:
the workflow is identical to the HelloWorld test scenario, with the exception
that the breakpoints are defined after gdb has made the connection to gdbserver
(target remote :44444).
This has been done as per the advice in the 2nd comment from this answer: (Remote gdb debugging does not stop at breakpoints).
Nevertheless, the breakpoints are still ignored even when they are defined after
the connection to the remote target has been established.
The program in the remote docker container is halted by gdbserver and continues
its execution when gdb issues the 'continue' command, but it does not stop at any of the breakpoints.
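For illustration, the failing session looks roughly like this (file name and line number are placeholders):
(gdb) target remote :44444
(gdb) b MyJob.cpp:42
Breakpoint 1 at 0x...: file MyJob.cpp, line 42.
(gdb) c
Continuing.
[the program runs to completion without ever stopping]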
I tried several hints from other, analogous problem descriptions, but the breakpoints are still ignored.
E.g., using hardware breakpoints, as advised in the answer to the same question here:
(Remote gdb debugging does not stop at breakpoints)
Running the remote docker container with securityContext: privileged=true is forbidden in my environment, hence this could not be tested. See the proposal here:
(gdb does not hit any breakpoints when I run it from inside Docker container)
What am I missing to get remote debugging in a docker container to halt at the defined breakpoints?
Due to a security enhancement in Ubuntu (versions >= 10.10), users are not allowed to ptrace processes that are not a descendant of the debugger.
By default, process A cannot trace a running process B unless B is a direct child of A
(or A runs as root).
Direct debugging is still always allowed, e.g. gdb EXE and strace EXE.
The restriction can be loosened by changing the value of /proc/sys/kernel/yama/ptrace_scope from 1 (= default) to 0 (= tracing allowed for all processes). The security setting can be changed with:
echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
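To make the relaxed setting survive a reboot, edit Ubuntu's persistent sysctl configuration (the file below is the one Ubuntu ships; verify the path on your release):
# /etc/sysctl.d/10-ptrace.conf
kernel.yama.ptrace_scope = 0
and reload it with:
sudo sysctl --system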
HelloWorld remote debugging test works well
How did it happen that remote debugging in the HelloWorld container worked well?
The HelloWorld container was created with USER userName in the Dockerfile,
which is the same user name as the one logged in to Ubuntu.
The Dockerfile used to deploy the development container (with the C++ program to be debugged) defines both a different user name and a different group name from those used in my Ubuntu login.
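A quick way to check for such a mismatch from inside the container (a sketch; <pid> is the process being debugged):
ps -o user,pid,comm -p <pid>             # who owns the target process
id -un                                   # who is trying to attach
cat /proc/sys/kernel/yama/ptrace_scope   # 1 = restricted, 0 = open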
All credit for the description of the ptrace scope belongs to the following post;
see the second answer by Eliah Kagan - thank you for the thorough explanation! - here:
https://askubuntu.com/questions/143561/why-wont-strace-gdb-attach-to-a-process-even-though-im-root
Target program remote debugging doesn't stop at breakpoints
A guess: the target program fork()s and executes most of its code in a child process (and your gdbserver attaches to the parent).
To verify this, insert some printf("%s:%d: pid=%d\n", __FILE__, __LINE__, getpid()); calls into strategic places in the target program. If my guess is correct, you should see the pid change between main() and connect_to_database().
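If the fork guess holds, a reasonably recent gdb/gdbserver pair (roughly 7.12 or newer, where fork events are forwarded over the remote protocol) can be told to follow the child instead; a sketch, assuming your versions support it:
(gdb) target extended-remote :44444
(gdb) set follow-fork-mode child
(gdb) set detach-on-fork off
(gdb) c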

Embedded Jetty keeps running after application exits

I have a Jetty 9 server embedded in my app and found that when I launch the program in Eclipse and then stop it via the Eclipse red button, Jetty stays running in the background.
I have to do a netstat to find which process owns the port and then do a taskkill.
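For reference, the manual cleanup looks like this, assuming Jetty is listening on port 8080 (adjust the port; the PID is the last column of netstat's output):
netstat -ano | findstr :8080
taskkill /PID <pid> /F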
How do I configure Jetty to die when the host app dies?
The Server instance should be told to stop.
server.stop();
Jetty does not know about any host app; it runs on its own daemon thread.
You could also just use java.lang.System.exit(int) to close the JVM and all threads it has running.
Killing the JVM (via the Eclipse red button) should kill all threads on that JVM too. If it doesn't, you have discovered a bug in Eclipse.
But before you go there, you should know that the Eclipse IDE itself has its own Eclipse Jetty server running (used for a number of things internally, and also for serving the documentation / help pages).
So the mere fact that you see an Eclipse Jetty instance running is misleading; it could be the one that the Eclipse IDE itself runs for its own purposes.

How do I run an IPython cell from the command line in the background?

I'm trying to run a big (ML) algorithm that takes about 4 hours in IPython. The problem is that my local machine can't handle this, and hence I have to use AWS. My network isn't very stable and there are frequent disconnects from the server. So, my question is:
How can I run a cell from the command line (over ssh) with the nohup option so that it will continue running even after I disconnect from the IPython server? And how do I go back, fetch the results, and kill the process at the end of it?
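One common approach is to export the notebook to a plain script and detach it with nohup; a sketch (the notebook name is a placeholder):
jupyter nbconvert --to script analysis.ipynb    # produces analysis.py
nohup python analysis.py > run.log 2>&1 &       # survives the ssh disconnect
echo $! > run.pid                               # remember the pid
# later: tail -f run.log to fetch results; kill "$(cat run.pid)" to stop it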

VMWare Workstation won't suspend from command line

I'm trying to automate VMware Workstation on Windows 7 to suspend all VMs before I do a backup job each night. I used to have a script that did this, but I've noticed that it no longer suspends with the same command that used to work.
If I do vmrun list, I get a list of the running VMs with no issue.
If I do vmrun suspend "V:\Virtual Machines\RICHARD-DEV\RICHARD-DEV.vmx", it just hangs and I have to kill the command with CTRL+C.
I've even tried a newer command using -T to specify that it's Workstation, i.e. vmrun -T ws suspend "V:\Virtual Machines\RICHARD-DEV\RICHARD-DEV.vmx", and still no love.
If I have the VM already stopped, I can issue vmrun start "V:\Virtual Machines\RICHARD-DEV\RICHARD-DEV.vmx" and it starts fine.
As well as the suspend command, the stop command also does not work. I'm running VMWare Workstation 11.1.3 build-3206955 on Windows 7.
Any ideas?
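For context, a nightly suspend-all script of the kind described might look like this (a hypothetical sketch, not the original script; vmrun list prints a summary line first, hence skip=1):
@echo off
REM suspend every running VM before the backup job
for /f "skip=1 delims=" %%v in ('vmrun -T ws list') do vmrun -T ws suspend "%%v" soft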
Update:
I installed the latest VMware Tools on the guest, as well as the latest VIX on the host, so everything should be up to date.
I can start a VM with no problem using vmrun -T ws start <path to vmx>, but the command doesn't come back to the command prompt, so I'm assuming it's not getting confirmation from the VM that it is now running.
If I cancel the 'start' command and then try to suspend, I get the same lack of communication from the guest. If I manually suspend the VM, once it's suspended I get an 'Error: vm is not running' and the 'suspend' command finally times out and comes back.
So, it looks to me like there is no communication from vmrun to the guest about its state. Is there a way to debug the communication from the host to the guest using vmrun or other means? Are there ports I need to open in the guest OS?
So, I never did get vmrun to work properly on my main system, although I did get it behaving OK on my laptop, so there is something weird happening on this machine. I also installed a trial of the latest VMware 12, and the same thing happens.
As a workaround, I ended up changing the power-management settings in my guest OS so that it would sleep after 1 hour of inactivity. When this happens, VMware detects it and automatically suspends the guest, which is really what I'm looking for. Not the slickest solution, but it does manage to unlock the files that need to be backed up in my nightly backup.