I want to copy a file to a VM using guestcontrol copyto. I have had a lot of trouble with the documentation, which has not been fully updated for the changes in VirtualBox 5, and I have also found that the copy commands broke in VirtualBox 5.
It seems that there will be a fix in the next version, but is there a workaround to use until that time?
I can read a file to stdout using run + cat:
VBoxManage guestcontrol 1-echo --username root --password root run /bin/cat /etc/network/interfaces
but I can't use stdin to write a file as it looks like the signal to close the stdin isn't forwarded to the guest (this creates the file and hangs without writing anything):
echo "Hello" | VBoxManage guestcontrol 1-echo --username root --password root run --no-wait-stdout /usr/bin/tee /test
Related
I have a Docker container running in a small AWS instance with limited disk space. The logs were getting bigger, so I used the commands below to delete the ever-growing log files:
sudo -s -H
find /var -name "*json.log" | grep docker | xargs -r rm
journalctl --vacuum-size=50M
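(In hindsight, truncating the log files in place instead of removing them would have avoided this, since the daemon keeps writing to the same inode; a rough sketch of what I could have run instead:)
# truncate each container log to zero bytes without deleting it
sudo find /var/lib/docker/containers -name "*json.log" -exec truncate -s 0 {} +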
Now I want to see the behaviour of one of the running Docker containers, but it complains that the log file has disappeared (because of the rm command above):
ubuntu@x-y-z:~$ docker logs --follow name_of_running_docker_1
error from daemon in stream: Error grabbing logs: open /var/lib/docker/containers/d9562d25787aaf3af2a2bb7fd4bf00994f2fa1a4904979972adf817ea8fa57c3/d9562d25787aaf3af2a2bb7fd4bf00994f2fa1a4904979972adf817ea8fa57c3-json.log: no such file or directory
I would like to be able to see again what's going on in the running container, so I tried:
sudo touch /var/lib/docker/containers/d9562d25787aaf3af2a2bb7fd4bf00994f2fa1a4904979972adf817ea8fa57c3/d9562d25787aaf3af2a2bb7fd4bf00994f2fa1a4904979972adf817ea8fa57c3-json.log
Then I ran docker logs --follow again, but while interacting with the software that should produce logs, I can see that nothing is happening.
Is there any way to rescue the printing into the log file again without killing (rebooting) the containers?
Is there any way to rescue the printing into the log file again without killing (rebooting) the containers?
Yes, but it's more of a trick than a real solution. You should never interact with /var/lib/docker data directly. As per Docker docs:
part of the host filesystem [which] is managed by Docker (/var/lib/docker/volumes/ on Linux). Non-Docker processes should not modify this part of the filesystem.
For this trick to work, you need to configure your Docker daemon to keep containers alive while the daemon is down (live restore) before first running the container, for example by setting your /etc/docker/daemon.json to:
{
"live-restore": true
}
This requires a daemon restart, e.g. sudo systemctl restart docker.
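As a quick sanity check (assuming a Docker version whose info output exposes the LiveRestoreEnabled field), you can confirm the setting took effect after the restart:
docker info --format '{{.LiveRestoreEnabled}}'   # should print: true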
Then create a container and delete its .log file:
$ docker run --name myhttpd -d httpd:alpine
$ sudo rm $(docker inspect myhttpd -f '{{ .LogPath }}')
# Docker is not happy
$ docker logs myhttpd
error from daemon in stream: Error grabbing logs: open /var/lib/docker/containers/xxx-json.log: no such file or directory
Restart the daemon (with live restore enabled); this causes Docker to re-take management of the container and create the log file again. However, any logs generated before the log file was deleted are lost.
$ sudo systemctl restart docker
$ docker logs myhttpd # works! and log file is created back
Note: this is not a documented or official Docker feature, simply a behaviour I observed in my own experiments with Docker 19.03. It may not work with other Docker versions.
With live restore enabled, the container process keeps running even though the Docker daemon is stopped. On daemon restart, the daemon presumably re-attaches to the still-running process's stdout and stderr and redirects the output to the log file (hence re-creating it).
Below is everything that has been tried and double-checked.
Setup on the local Windows machine:
Xming installed and running.
In ssh_config, ForwardX11 is set to yes.
In the VS Code remote connection config, ForwardX11 is set to yes.
Setup on the GCP Compute Engine instance with Debian Linux 9 and 1 GPU [free tier]:
xauth is installed.
In the sshd_config file the following is set:
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost no
The SSH server has been restarted to ensure the above settings are read.
From the local workstation I run gcloud compute ssh --ssh-flag="-X" tensorflow-2-vm (the instance name) and the response is:
/usr/bin/xauth: file /home/user/.Xauthority does not exist,
So I attempted the following on the remote Compute Engine instance tensorflow-2-vm as user trapti_kalra:
trapti_kalra@tensorflow-2-vm:~$ xauth list
xauth: file /home/trapti_kalra/.Xauthority does not exist
trapti_kalra@tensorflow-2-vm:~$ mv .Xauthority old.Xauthority
mv: cannot stat '.Xauthority': No such file or directory
trapti_kalra@tensorflow-2-vm:~$ touch ~/.Xauthority
trapti_kalra@tensorflow-2-vm:~$ xauth generate :0 . trusted
xauth: (argv):1: unable to open display ":0".
trapti_kalra@tensorflow-2-vm:~$ sudo xauth generate :0 . trusted
xauth: file /root/.Xauthority does not exist
xauth: (argv):1: unable to open display ":0".
So it looks like something is missing; any help will be appreciated. This was working on an EC2 server before I moved to GCP.
Create a new file: touch ~/.Xauthority
Log out and back in again with your ssh session. (I'm using MobaXterm)
Then ssh writes the needed entries into it on login.
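To verify it worked (a sketch; the exact display number depends on X11DisplayOffset), check after logging back in with X forwarding enabled:
xauth list      # should now show an entry like tensorflow-2-vm/unix:10  MIT-MAGIC-COOKIE-1  <hex cookie>
echo $DISPLAY   # should print something like localhost:10.0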
You logged into your Linux server over ssh and got the following error:
.Xauthority does not exist
Solution :
Go into the /etc/ssh/sshd_config file and remove the # sign at the beginning of the three lines below:
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost yes
Then systemctl restart sshd
Login again and you will not get the error.
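If you prefer to script that change (a sketch, assuming GNU sed and a systemd-based distribution), something like this should do it:
sudo sed -i -E 's/^#?[[:space:]]*X11Forwarding .*/X11Forwarding yes/' /etc/ssh/sshd_config
sudo sed -i -E 's/^#?[[:space:]]*X11DisplayOffset .*/X11DisplayOffset 10/' /etc/ssh/sshd_config
sudo sed -i -E 's/^#?[[:space:]]*X11UseLocalhost .*/X11UseLocalhost yes/' /etc/ssh/sshd_config
sudo systemctl restart sshd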
There are many solutions to this problem, and it can also depend on what machine you originate from. If you come from a Linux box, enabling sshd config options like:
X11Forwarding yes
could be enough.
When you use a MacBook, however, the scenario is different. In that case, you need to install XQuartz with brew:
brew install xquartz
And after this start it:
xQuartz &
Once it is running, the XQuartz logo appears in your menu bar; right-click the icon and start a terminal from the Applications menu. In that terminal, run:
echo $DISPLAY
This should give you the output:
:0
When you use another terminal such as iTerm, you can export this value there with export DISPLAY=:0. As long as XQuartz is still running, the other terminal should be able to keep using it.
After this you can SSH into the remote machine and check if the display variable is set:
$: ssh -Y anldisr#my-remote-machine
$: echo $DISPLAY
localhost:11.0
It took me an hour to figure this out; hope it helps someone. :)
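If you don't want to pass -Y on every connection, the same can be configured in ~/.ssh/config (the host name here is just a placeholder):
Host my-remote-machine
    ForwardX11 yes
    ForwardX11Trusted yes   # equivalent to the -Y flag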
This also happened when I added a new user to the remote machine without giving the user sudo privileges during creation.
To resolve it, I used the root user (or another sudo-privileged user) to grant sudo privileges to the new user, then exited the new user's session and ssh'd into the server again.
sudo usermod -aG sudo [newUser]
JupyterLab is running in a GCP Deep Learning VM.
Since a few hours ago, I can't save any changes in JupyterLab.
There are unsaved changes.
"Save Notebook" is greyed out.
Also, if I try to delete a file from the left pane, it gives a 500 error.
The only change I recall making before this broke is the following: I got this error when I tried to do git operations on the command line:
Another git process seems to be running in this repository, e.g.
an editor opened by 'git commit'. Please make sure all processes
are terminated then try again. If it still fails, a git process
may have crashed in this repository earlier:
Therefore I did this
rm -f ./.git/index.lock
And the git command worked correctly. This is the only thing I recall doing prior to this error.
Afterwards, I deleted the repository and recloned it.
And since I had to type sudo every time, I had my user take ownership of the jupyter directory. But this error was there before I made this change.
sudo chown your_username directory
Somehow the group permissions for the jupyter directory had changed to read and execute only. Simply adding write permissions solved this.
drwxr-xr-x 13 praveen jupyter 4096 Feb 9 11:46 jupyter
Command
chmod 771 /home/jupyter/
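If you'd rather not reset the other permission bits, a narrower sketch that only adds group write should achieve the same thing:
sudo chmod -R g+w /home/jupyter
ls -ld /home/jupyter   # should now show drwxrwxr-x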
This has been useful to me:
chmod -R 777 /folder_name. The -R (or --recursive) option makes it recursive.
Or if you want to make all the files in the current directory have all permissions type:
chmod -R 777 ./
Boiling my issue down to the simplest case, I'm using Compute Engine with the following startup-script:
#! /bin/bash
sudo useradd -m drupal
su drupal
cd /home/drupal
touch test.txt
I can confirm that the drupal user exists after this script runs, and so does the test file. I expect the owner of the test file to be 'drupal' (hence the su), but when I use this as a startup script I can confirm that root is the owner of the file,
meaning my
su drupal
did not work. sudo su drupal does not make any difference either. I'm using Google Container-Optimized OS, but the same happens on a Debian 8 image.
sudo su is not a command run within a shell -- it starts a new shell.
That new shell is no longer running your script, and the old shell that is running the script waits for the new one to exit before it continues.
The sudo su command will start a new shell. The old shell waits for the new one to exit and then continues executing the rest of the code.
Your script is running in the 'old' shell, which means these commands:
cd /home/drupal
touch test.txt
are still executed as root and thus the owner of these files is root as well.
You can modify your script to this:
#! /bin/bash
sudo useradd -m drupal
sudo -u drupal bash -c 'cd ~/; touch text2.txt'
and it should work.
The -u flag executes the command as the specified user, in this case 'drupal'.
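An equivalent approach, if you'd rather keep su, is to pass the commands with -c so no interactive shell is started (a sketch, untested on Container-Optimized OS):
#! /bin/bash
sudo useradd -m drupal
# run the command as drupal; the script itself keeps executing in the original shell
sudo su - drupal -c 'touch ~/test.txt'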
I wrote some stuff underneath, but it looks like this should work:
how to run script as another user without password
The other option would be to ssh into your own machine as the other user; you can use sshpass to send the password, or install your own public key for that user.
When I test a similar script:
su [my username]
touch test.txt
It actually logs in as me, and doesn't finish until I press Ctrl-D.
Further testing reveals that the only way to own the file is if I invoke the script from the shell, i.e.:
su me
touch test.txt
./test2.sh
test2.sh:
touch test2.txt
gives both files to root, even if I own both scripts.
It follows that everything you do is owned by you; you can't create something on behalf of someone else this way.
I tried using VBoxManage guestproperty wait <vmname> ... but what looked like obvious patterns didn't work. I'm writing a script which imports a new VM, configures it, launches it, takes a snapshot, and then closes it, and obviously I need to know when the VM is running before taking the final two steps.
Thanks.
You can use showvminfo output:
for Linux:
VBoxManage showvminfo "vm_name" | grep State
for Windows:
VBoxManage showvminfo "vm_name" | findstr State
See the thread below:
https://unix.stackexchange.com/questions/28611/how-to-automatically-start-and-shut-down-virtualbox-machines
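For the original goal of waiting until the VM is actually running, a small polling loop around the machine-readable output is one option (a sketch; "vm_name" is a placeholder):
#!/bin/bash
# poll once per second until the VM reports the "running" state
until VBoxManage showvminfo "vm_name" --machinereadable | grep -q '^VMState="running"'; do
    sleep 1
done
echo "VM is running"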