VirtualBox Headless - Linux - virtualbox

I have Virtualbox installed on a machine that I want to run headless.
If I ssh into that machine I'm able to run
VBoxHeadless --startvm "WindowsXP" &
and everything runs great.
I want to run the guest headless.
I put the above command into
/etc/rc.local
but the VM doesn't start after I reboot.
I've written a bash script that starts the guest too, and tried putting a reference to that script into rc.local, but that doesn't work either.
What am I doing wrong? Doesn't rc.local run its commands after all the init.d scripts have run?
Thanks in advance!

You have to run VBoxHeadless as the same user that you used to create the setup. Have you done that?
You can use the su command for this:
su - <username> -c 'VBoxHeadless --startvm "WindowsXP"'
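For example, a minimal /etc/rc.local (a sketch only; "vboxuser" is a placeholder for whichever account owns the VM's configuration, and the nohup/backgrounding keeps rc.local from blocking on the VM):
#!/bin/sh -e
# Start the VM as its owner so VBoxHeadless can find the machine definition
su - vboxuser -c 'nohup VBoxHeadless --startvm "WindowsXP" > /dev/null 2>&1 &'
exit 0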

For Windows users it's the same idea:
"C:\Program Files\Oracle\VirtualBox\VBoxHeadless.exe" --startvm "uuid|name" --vrde off
More info can be found at:
http://www.virtualbox.org/manual/ch07.html#vboxheadless

Related

How to run Jupyter Lab on every startup of an Amazon EC2 instance?

I have already tried the user-data and rc.local methods but neither is working. I am not a pro, so I would like some help on this.
These are the 3 commands I want to run on every startup of the EC2 instance:
tmux (start a tmux session so I don't lose the session when the connection resets)
source pyenv/bin/activate (activate the venv)
jupyter-lab --ip 0.0.0.0 --no-browser --allow-root (run Jupyter Lab)
I'm using an Ubuntu EC2 instance, by the way. Thanks in advance.
If I can achieve this using nohup instead of tmux, I'd be willing to do that as well.
I wasn't able to find a solution anywhere so any help is appreciated, thank you.
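In case it helps anyone later, one way to combine those three commands in rc.local is a single line that backgrounds Jupyter with nohup. This is only a sketch: it assumes the default "ubuntu" user and that the pyenv virtualenv lives in that user's home directory, so adjust the paths for your instance.
su - ubuntu -c 'cd ~ && . pyenv/bin/activate && nohup jupyter-lab --ip 0.0.0.0 --no-browser --allow-root > jupyter.log 2>&1 &'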

Set up EC2 instance to allow debugging on my local machine using PhpStorm

I have an EC2 instance running nginx and a PHP test web application on it.
Currently every time I want to debug using PhpStorm I have to enter this command in my Ubuntu terminal:
sudo ssh -N -R 9000:localhost:9000 -i "devInstanceNginx.pem" ec2-user@test.co.uk
I am wondering what I can do so that I don't have to do that command.
(Note: I will generally only debug from one of 2 devices, if that makes a difference.)
You can create a "Shell script" run configuration with this script; configurations can be added via Run > Edit Configurations.
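The script body is just the tunnel command from the question, e.g. (this assumes the .pem path is reachable from wherever PhpStorm runs the script, and that you drop the sudo by making the key readable by your own user):
#!/bin/bash
# Reverse-tunnel the Xdebug port (9000) from the EC2 instance back to this machine
ssh -N -R 9000:localhost:9000 -i "devInstanceNginx.pem" ec2-user@test.co.uk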

How to run bash script on Google Cloud VM?

I found this auto shutdown script for VM instances on GCP and tried to add that into the VM's metadata.
Here's a link to that shutdown script.
The config sets it so that after 20 mins the idle VM will shut down, but it's been a few hours and it never shut down. Are there any more steps I have to do after adding the script to the VM metadata?
The metadata script I added:
Startup scripts are executed while the VM boots. If you execute your "shutdown script" at boot, there is nothing for it to do yet. Additionally, for this to work a proper service has to be created; the service is what keeps running the script to detect idling and shut the VM down.
So even if the main script, ashutdown, was executed at boot, there was no idling at that point and it did nothing. And since the service wasn't there to run it again, your instance will run indefinitely.
For this to work you need to install everything on the VM in question:
Download all three files to some directory in your vm, for example with curl:
curl -LJO https://raw.githubusercontent.com/GoogleCloudPlatform/ai-platform-samples/master/notebooks/tools/auto-shutdown/ashutdown
curl -LJO https://raw.githubusercontent.com/GoogleCloudPlatform/ai-platform-samples/master/notebooks/tools/auto-shutdown/ashutdown.service
curl -LJO https://raw.githubusercontent.com/GoogleCloudPlatform/ai-platform-samples/master/notebooks/tools/auto-shutdown/install.sh
Make install.sh executable: sudo chmod +x install.sh
Run it: sudo ./install.sh.
This should install & run the ashutdown service in your system.
You can check if it's running with service ashutdown status.
These instructions are for Debian systems, so if you're running CentOS or another flavour of Linux they may differ.
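If you want to see roughly what install.sh does before running it, the manual equivalent is something like the following. This is an assumption based on the file names above, not the actual contents of install.sh, so check the script itself for the real paths.
sudo cp ashutdown /usr/local/bin/                # the idle-detection script
sudo cp ashutdown.service /etc/systemd/system/   # the systemd unit that runs it
sudo systemctl daemon-reload
sudo systemctl enable --now ashutdown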

Running a process with nohup after SSH to a server using Python

I am trying to run a program as a background process in a server (AWS EC2 instance).
I have used boto.manage.cmdshell to obtain an ssh connection to the server.
However, I am having trouble running this command:
"nohup daemon-program param 2>&1 > ./logs/out.log &"
It runs fine if I manually ssh into the machine and run this command.
My console hangs after ssh-ing into the machine and running this command via python script.
If I remove nohup, the program starts and quits when the ssh session ends.
I would like it to run as a bg process even after I quit.
I tried reading about pty and nohup manual, but I seem to have missed something here.
Kindly point me to a (better?) instruction manual or explain why this fails while manual execution succeeds.
TIA!
If anyone is stuck: I ran the command inside byobu and it worked.
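For anyone hitting the same hang who doesn't want byobu: the usual cause is the remote command keeping stdout, stderr and stdin attached to the SSH channel, so the session never sees the streams close. A variant that detaches all three streams is worth a try (a sketch, not tested against boto's cmdshell):
nohup daemon-program param > ./logs/out.log 2>&1 < /dev/null &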

Deploying a new VM with Vagrant and AWS user-data not working

I have a provisioning setup with Vagrant and Puppet that works well locally, and I'm now trying to move it to AWS using vagrant-aws.
As I understand it, I can use the AWS user-data field in Vagrant to run commands on the first boot of a new VM, like so:
aws.user_data = File.read("user_data.txt")
Where user_data.txt contains:
#!/bin/bash
sudo apt-get install -y puppet-common
Then my existing Puppet provisioning scripts should be able to run. However, this errors out during vagrant up with:
[aws] Running provisioner: puppet...
The `puppet` binary appears to not be in the PATH of the guest. This
could be because the PATH is not properly setup or perhaps Puppet is not
installed on this guest. Puppet provisioning can not continue without
Puppet properly installed.
But when I ssh into the machine I see that the user-data did get parsed and Puppet is installed successfully. Is the Puppet provisioner running before the user-data script installs Puppet, maybe? Or is there a better way to install Puppet on a VM before trying to provision?
It is broken, but if you're using Ubuntu there's a workaround which is far simpler than building your own AMI.
Add the following line to your config:
aws.user_data = "#cloud-config\nbootcmd:\n - echo 'manual' > /etc/init/ssh.override\npackages:\n - puppet\nruncmd:\n - [ 'rm', '/etc/init/ssh.override' ]\n - [ 'service', 'ssh', 'start' ]\n"
This tells cloud-init to disable SSH startup early in the boot process and re-enable it once your packages are installed, so Vagrant can only SSH in to run Puppet after the packages are fully installed.
This will probably work for other distros that use cloud-init besides Ubuntu, although it is Upstart-specific, so the commands may need tweaking.
Well, I worked around this by building my own AMI with Puppet and the other things I need installed; it still seems like vagrant-aws is broken, or I'm misunderstanding something else here.