Debugging a Jetty daemon process in Gradle

Using a JettyRun task, it's easy to debug: simply add something like -Xdebug -Xrunjdwp:transport=dt_socket,address=12233,server=y,suspend=n to your GRADLE_OPTS and attach a debugger to the Gradle process itself.
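For example, something along these lines works for the non-daemon case (using the plugin's standard jettyRun task):
export GRADLE_OPTS="-Xdebug -Xrunjdwp:transport=dt_socket,address=12233,server=y,suspend=n"
gradle jettyRun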
However, if you run a JettyRun task with daemon = true, this doesn't work. Example of one such task:
task jettyRunDaemon(type: JettyRun) {
    contextPath = '/'
    classpath = sourceSets.test.runtimeClasspath
    webAppSourceDirectory = file('src/test/webapp')
    daemon = true
}
I've tried a few other things, such as setting org.gradle.jvmargs to similar values, to no avail. How can I get the debug arguments passed to the daemon process?

I would give org.gradle.jvmargs another shot. Try putting the following into a gradle.properties file:
org.gradle.jvmargs=-XX:MaxPermSize=256M -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=4001
I'm using this with Gradle 1.8 and I'm able to attach and step through code.
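With the daemon JVM listening on that port, you should then be able to attach from your IDE's remote-debug configuration, or from the command line with something like (host and port matching the property above):
jdb -attach localhost:4001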

Related

Run a script to set up the environment in VSCode after connecting to a container or before compiling/debugging

I am working on a Docker-containerized C++ project that has shell scripts (run with "source") to set up the environment for compiling and debugging. They define a lot of environment variables that could change at any time, so it would be hard (and tedious to keep up with) to move them all into the launch.json file; I need to call the scripts before compiling or debugging.
The scripts only need to run once, so if there were a way to run them right after connecting to the container, that would be the best solution; however, I cannot find anything like that.
I have tried using "preLaunchTask" in the launch configuration to run a task before debugging, but it seems that the task's shell is different from the debug shell.
Is there any way to handle this?
For the moment I am using a task to generate a .env file:
printenv > ${workspaceFolder}/.preenv && . ${workspaceFolder}/setupEnv && \
printenv > ${workspaceFolder}/.postenv && \
grep -vFf ${workspaceFolder}/.preenv ${workspaceFolder}/.postenv > ${workspaceFolder}/.env
I have VSCode mount a directory as the container's home directory, and then put a .bash_profile (or another suitable shell startup file) containing the necessary setup in that directory.
My /path/to/.devcontainer/devcontainer.json includes:
"mounts": [
"source=${localWorkspaceFolder}/.devcontainer/home,target=/home/username,type=bind,consistency=delegated"
],
"remoteUser": "username",
"userEnvProbe": "loginShell", // Ensures .bash_profile fires
Then my /path/to/.devcontainer/home/.bash_profile contains the necessary invocations to set my environment.
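For example (the path below is a placeholder for whatever your project's own environment scripts are), /path/to/.devcontainer/home/.bash_profile can be as simple as:
# Source the project's environment setup in every login shell the
# dev container opens (example path, not a real project layout)
source /workspaces/myproject/scripts/setup_env.sh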

My System V init script doesn't return

This is the script content, located in /etc/init.d/myserviced:
#!/lib/init/init-d-script
DAEMON="/usr/local/bin/myprogram.py"
NAME="myserviced"
DESC="The description of my service"
When I start the service (either by calling the script directly or by running sudo service myserviced start), I can see myprogram.py running, but the command does not return to the prompt.
I guess there must be something I have misunderstood, so what is it?
The system is Debian, running on a Raspberry Pi.
After more work, I finally solved this issue. There are two major reasons:
init-d-script actually calls start-stop-daemon, which does not work well with scripts specified via the --exec option. When stopping a script, you should only specify the --name option. However, since init-d-script always fills in the --exec option, it cannot be used with script daemons; I had to write the SysV script myself.
start-stop-daemon won't magically daemonize whatever you hand it, so the executable given to start-stop-daemon should daemonize itself rather than be a regular foreground program.
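As a rough illustration only (the options and paths here are assumptions, not the exact script I ended up with), a hand-written /etc/init.d/myserviced can call start-stop-daemon directly, using --background to do the daemonizing and a PID file (or --name) instead of --exec when stopping:
#!/bin/sh
# Minimal hand-written SysV sketch for a Python daemon.
DAEMON=/usr/local/bin/myprogram.py
NAME=myserviced
PIDFILE=/var/run/$NAME.pid

case "$1" in
  start)
    # --background forks the process for us; --make-pidfile records the
    # child's PID, since the script does not write one itself.
    start-stop-daemon --start --background --make-pidfile \
        --pidfile "$PIDFILE" --startas "$DAEMON"
    ;;
  stop)
    # Stop via the PID file rather than --exec, which does not match
    # interpreted scripts reliably.
    start-stop-daemon --stop --pidfile "$PIDFILE"
    ;;
  *)
    echo "Usage: $0 {start|stop}" >&2
    exit 1
    ;;
esac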

Run a Python script with droneapi without a terminal

I managed to run the examples in the command prompt after running mavproxy.py and loading droneapi. But when I double-click my script, it throws "'local_connect' is not defined". It runs in the terminal as described above, but I cannot run it with just a double-click. So my question is: is there any way to run a script using droneapi with only a double-click?
using Windows 8.1
Thanks in advance
You'll want to look at the Running an App/Example section of the guide. For now, you can only run a DroneKit script by launching it from inside the MAVProxy terminal. For example, after launching:
$ mavproxy.py --master=127.0.0.1:14550
MANUAL> module load droneapi.module.api
DroneAPI loaded
You can use the api start command to run a local script:
MANUAL> api start vehicle_state.py
STABILIZE>
Get all vehicle attribute values:
Location:
Attitude: Attitude:pitch=-0.00405988190323,yaw=-0.0973932668567,roll=-0.00393210304901
Velocity: [0.06, -0.07, 0.0]
GPS: GPSInfo:fix=3,num_sat=10
groundspeed: 0.0
airspeed: 0.0
mount_status: [None, None, None]
Mode: STABILIZE
Armed: False
I think Sony Nguyen is asking about running vehicle_state.py outside the MAVProxy command prompt, just like running a .py file normally.
I'm also looking for a solution as well.
You can only run DroneKit from MAVProxy at the moment (it's structured as a MAVProxy module; there are plans to restructure it). However, if you simply want to avoid loading up MAVProxy and then running code manually, you can use the --cmd flag:
mavproxy.py --cmd="api start app.py"

How do I make a number of looping scripts execute at startup?

I have a few Python scripts, all of them involving while True: and a wait timer, so they run at varying intervals. They do things like monitor a serial port and look for new versions of my code on a remote server. I haven't used cron because some require offsets (e.g. run at ten seconds past the minute) and I wanted to keep things very simple.
Using rc.local, I run hook.py on startup. What can I put in hook.py to run a.py, b.py and c.py simultaneously and continuously? I tried subprocess (with shell=True), but I'm not sure the next line / next subprocess call will execute before the first one finishes, which will never happen. Plus it has some weird behaviour I'm struggling to debug (I can read and write files using their absolute paths if I run a script directly; when subprocess runs it, it can't find the files).
Any suggestions? Just want something simple that can simultaneously execute several new python scripts. Platform is a Raspberry Pi.
Alternatively: if there's code I can put in rc.local that will spawn a new python process for all .py files in a specified directory, that would work too.
This sounds like it would be better suited to spawning via cron instead of infinite while loops.
But if you want to continue running them from rc.local, just put an & at the end of your command:
/usr/bin/python /home/you/command.py &
This runs the command in the background.
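So for the three scripts named in the question (paths assumed), rc.local could simply contain:
# in /etc/rc.local, before the final exit 0
/usr/bin/python /home/you/a.py &
/usr/bin/python /home/you/b.py &
/usr/bin/python /home/you/c.py &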
If you want to run all Python files in a given directory, I would write a bash script like:
for file in /home/you/*.py
do
    /usr/bin/python "$file" &
done
We will need more information about your path issues to tell you more.
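If you'd rather keep the launcher in Python, a minimal hook.py along these lines should also work (the directory and script names are assumptions); subprocess.Popen returns immediately instead of waiting, and passing cwd= avoids the relative-path surprises:
#!/usr/bin/env python
import os
import subprocess

SCRIPT_DIR = "/home/you"            # assumed location of the scripts
SCRIPTS = ["a.py", "b.py", "c.py"]

children = []
for name in SCRIPTS:
    path = os.path.join(SCRIPT_DIR, name)
    # Popen does not block, so all three scripts start in parallel.
    # cwd= makes relative file access inside each script behave as if
    # it had been launched from SCRIPT_DIR by hand.
    children.append(subprocess.Popen(["/usr/bin/python", path], cwd=SCRIPT_DIR))

# Keep hook.py alive until the children exit (they normally never do).
for child in children:
    child.wait()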

Jenkins: perform operations after a failed build

I would like to perform some operations only if the build failed. For example, if the runtime execution has produced a core dump (it doesn't always happen, of course), I want to move it somewhere so that the next day's build won't remove it.
Does anyone know how to perform actions when a build fails?
Try the Groovy Postbuild Plugin. With it you can use the Hudson APIs to check whether the build is a failure or not, and then do the required actions using a Groovy script. For example, you can use the following script to check whether the build is unstable or better:
if (manager.build.result.isBetterOrEqualTo(hudson.model.Result.UNSTABLE))
{
    // do something
}
Well, if it is set up to log to stdout, it will be in the Jenkins log. If not, can you set it up to log to a file in your workspace? Then you can archive it as an artifact based on the name. If you are running on a POSIX system, you can redirect stderr to stdout and direct both to a file in your run command, or pipe them through tee so you get them in both places.
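For example (the script name and log file are placeholders), the job's run command could look like one of these:
# send both stdout and stderr to a file in the workspace for archiving
./run_tests.sh > "$WORKSPACE/build-output.log" 2>&1
# ...or keep them in the console log as well by piping through tee
./run_tests.sh 2>&1 | tee "$WORKSPACE/build-output.log"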