In Google Cloud Platform's Ubuntu 16.04.1 instance, the output of my startup script was written to /var/log/startupscript.log.
Since they upgraded to 16.04.2, I can't find the logs anymore.
Any ideas?
UPDATE from the official documentation:
Startup script output is written to the following log files:
CentOS and RHEL: /var/log/messages
Debian: /var/log/daemon.log
Ubuntu 14.04, 16.04, and 16.10: /var/log/syslog
SLES 11 and 12: /var/log/messages
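On Ubuntu, for example, you can pull just the startup-script entries out of syslog. In my logs the guest agent tags them as startup-script, but verify the tag on your image:
# Show only the startup-script lines from syslog (tag may vary by image)
grep -i 'startup-script' /var/log/syslog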
The correct answer these days is to use journalctl:
sudo journalctl -u google-startup-scripts.service
You can re-run a startup script like this:
sudo google_metadata_script_runner startup
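To watch the output live while re-running it, you can follow the journal for the same unit in a second terminal (the -f flag just tails the log):
sudo journalctl -u google-startup-scripts.service -f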
See also: https://cloud.google.com/compute/docs/instances/startup-scripts/linux
There are two ways to search for the log file (there are probably a lot more, but these are the two I know):
locate -i startupscript.log (you may need to refresh the locate index with sudo updatedb first for this option to work well).
As root: find / -iname startupscript.log -print
I have an EMR 5.28.1 cluster running in AWS, but I forgot to install some Python libraries as part of the bootstrap action. Now that the cluster is running, I am simply attempting to add a step via the EMR console. Here are my settings:
JAR: s3://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar
Main class: None
Arguments: s3://xxxx/install_python_libraries.sh
Unfortunately, I get the following error.
Cannot run program "s3://xxxxx/install_python_libraries.sh" (in directory "."): error=2, No such file or directory
I am not sure what I am doing wrong. The shell script looks like this.
#!/bin/bash -xe
# Non-standard and non-Amazon Machine Image Python modules:
sudo pip-3.6 install boto3
sudo pip-3.6 install xmltodict
I also tried this by simply using 'command-runner.jar', but I get the same error. Can you please help me figure out the problem so I can do this via the console? I would like to install the libraries on all nodes, master and core.
Thanks
The issue is the xxx.sh file's EOL/carriage-return type, i.e. its line endings.
In other words, if it uses Windows line endings ("\r\n"), it will not work and you get the "file not found" error.
Convert it to the Unix type ("\n") using something like Notepad++ and it will run fine.
(In Notepad++: Edit > EOL Conversion > Unix (LF), hit save, and try again.)
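If you would rather fix the line endings from a shell than from an editor, a sed one-liner does the same job (dos2unix works too, if it is installed); install_python_libraries.sh is the script from the question:
# Strip the Windows carriage returns in place
sed -i 's/\r$//' install_python_libraries.sh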
1. Summarize the problem
I am following this simple tutorial from Red Hat Developers to get a simple Node/Express container working.
I cannot get a container to run under a CentOS 7 VM on GCE.
I have a CentOS 7 GCE virtual machine, where I have Docker installed.
I am able to successfully build and run Docker containers and push them to Google's container registry with no problem.
Now I am trying to build podman/buildah containers, and do the same.
I have buildah/podman installed. When I run this:
podman build -t hello-world-nodejs .
I get the following error message:
cannot clone: Invalid argument
user namespaces are not enabled in /proc/sys/user/max_user_namespaces
Error: could not get runtime: cannot re-exec process
Any ideas?
Additionally, if there are any guides into getting this image into Google's container registry, and running under Cloud Run, it would be greatly appreciated.
Ultimately the destination for some containers is a cloud service.
2. Provide background including what you've already tried
I have tried doing a web search for a solution, but nothing I have found has solved the problem so far.
3. Show some code
podman build -t hello-world-nodejs .
4. Describe expected and actual results including any error messages
I can create and run Docker images/containers on this GCE VM; I am trying to do the same with buildah/podman.
The following solved this issue for me:
sudo bash -c 'echo 10000 > /proc/sys/user/max_user_namespaces'
sudo bash -c "echo $(whoami):110000:65536 > /etc/subuid"
sudo bash -c "echo $(whoami):110000:65536 > /etc/subgid"
And then, if you encounter errors related to lchown, run the following:
sudo rm -rf ~/.{config,local/share}/containers /run/user/$(id -u)/{libpod,runc,vfs-*}
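Note that writing to /proc/sys/user/max_user_namespaces does not survive a reboot. A minimal sketch of making it persistent, assuming your image reads /etc/sysctl.d at boot (stock CentOS 7 does):
# Persist the user-namespace limit across reboots
echo "user.max_user_namespaces=10000" | sudo tee /etc/sysctl.d/99-userns.conf
sudo sysctl -p /etc/sysctl.d/99-userns.conf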
I have spun up a CentOS 7 VM on GCE and got the same issue. The issue is caused by user namespaces not being enabled in the kernel by default. You have two options: either run podman as root (or with sudo), or enable user namespaces in your CentOS VM (the hard way).
According to the post here, rootless containers rely on user namespaces and on the allocation of the uid and gid ranges that are required to make them work securely in your environment.
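If you just want to get unblocked quickly, the rootful option is a one-liner; it is the same build command from the question, run with sudo:
sudo podman build -t hello-world-nodejs .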
Stack Overflow is probably not the best place to ask this question. It's better to ask on the Server Fault site, since it's a server problem rather than a coding problem.
Currently, the Full Remote Mode of CLion only supports Linux as a remote host OS. Is it possible to have a FreeBSD remote host?
Yes, you can!
However, note that I'm recalling these steps retrospectively, so I have probably missed a step or two. Should you encounter any problems, please feel free to leave a comment below.
Rent a FreeBSD server, of course :)
Update your system to the latest release. Otherwise, you may get weird errors like "libdl.so.1" not found when installing packages. The one I'm using is FreeBSD 12.0-RELEASE-p3.
Create a user account. Don't forget to make it a member of wheel, and uncomment the %wheel ALL=(ALL) ALL line in /usr/local/etc/sudoers.
Set up SSH. This step is especially tricky, because we need to use both public-key and password authentication.
Due to a known bug, in some cases, the remote host must use password authentication, or you'll get an error when setting up the toolchain. You can enable password authentication by setting PasswordAuthentication yes in /etc/ssh/sshd_config, followed by a sudo /etc/rc.d/sshd restart.
It appears that CLion synchronizes files between the local and remote host with rsync over SSH. For reasons I cannot explain, this process will hang forever if the host server doesn't support passphrase-less SSH key login. Follow this answer to create an SSH key as an additional way of authentication.
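For reference, a minimal sketch of generating such a key and installing it on the host; the key path is the default, and the user and hostname are placeholders:
# Generate a passphrase-less key and copy it to the FreeBSD host
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
ssh-copy-id -i ~/.ssh/id_ed25519.pub youruser@your-freebsd-host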
CLion assumes the remote host OS to be Linux, so we must fix some incompatibilities between GNU/Linux and FreeBSD.
Install GNU utilities with sudo pkg install coreutils.
Rename the BSD utility stat with sudo mv /usr/bin/stat /usr/bin/_stat.
Create a "new" file /usr/bin/stat with the content in Snippet 1. This hack exploits the fact that CLion sets the environment variable JETBRAINS_REMOTE_RUN to 1 before running commands on the remote server.
Do sudo chmod a+x /usr/bin/stat to make it executable.
Again, rename the BSD utility ls with sudo mv /bin/ls /bin/_ls.
Create a "new" file /bin/ls with the content in Snippet 2, like before.
Lastly, sudo chmod a+x /bin/ls.
Install the dependencies with sudo pkg install rsync cmake gcc gdb gmake.
Now you can follow the official instructions, and connect to your shiny FreeBSD host!
Snippet 1
#!/bin/sh
# Dispatch to BSD stat normally, but to GNU stat when CLion runs commands remotely.
if [ -z "$JETBRAINS_REMOTE_RUN" ]
then
    exec "/usr/bin/_stat" "$@"
else
    exec "/usr/local/bin/gnustat" "$@"
fi
Snippet 2
#!/bin/sh
# Dispatch to BSD ls normally, but to GNU ls (gls) when CLion runs commands remotely.
if [ -z "$JETBRAINS_REMOTE_RUN" ]
then
    exec "/bin/_ls" "$@"
else
    exec "/usr/local/bin/gls" "$@"
fi
Additionally, you need to fix one more incompatibility between GNU/Linux and FreeBSD.
Check that gtar is installed; if not, install it with sudo pkg install gtar.
Rename the BSD utility tar with sudo mv /usr/bin/tar /usr/bin/_tar.
Create a "new" file /usr/bin/tar with the content in Snippet 3, like before.
Lastly, sudo chmod a+x /usr/bin/tar
Snippet 3
#!/bin/sh
# Dispatch to BSD tar normally, but to GNU tar (gtar) when CLion runs commands remotely.
if [ -z "$JETBRAINS_REMOTE_RUN" ]
then
    exec "/usr/bin/_tar" "$@"
else
    exec "/usr/local/bin/gtar" "$@"
fi
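A quick way to sanity-check the wrappers, assuming the renames above are in place: without the variable they should dispatch to the BSD tools, with it to the GNU ones:
# Without JETBRAINS_REMOTE_RUN: BSD syntax should work
stat -f '%N' /etc/hosts
# With JETBRAINS_REMOTE_RUN set: the GNU tool should answer
JETBRAINS_REMOTE_RUN=1 stat --version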
Starting with CLion 2020.1, the instructions regarding gnustat and "ls" are not relevant anymore, because CLion 2020.1 includes the proper fixes in the jsch-nio library (https://github.com/lucastheisen/jsch-nio/commit/410cf5cbb489114b5da38c7c05237f6417b9125b).
Starting with CLion 2020.2, the tar --dereference option is not used anymore, so the instructions regarding gtar (GNU tar) are also not relevant.
According to this page, one can pull the kernel sources from the following location in Google Cloud Storage.
gs://cos-tools/<build-number>/
I am trying to find the source for a running instance of the Container-Optimized OS, but I have not found documentation describing how to extract a build number from the running instance. The output of uname -r is 4.4.111+ but I do not know how to map this to a build number that I can use for pulling the source.
How does one find the build number?
Inside the running COS instance, you can find the version in /etc/lsb-release.
$ cat /etc/lsb-release | grep CHROMEOS_RELEASE_VERSION
CHROMEOS_RELEASE_VERSION=10452.101.0
Then, on a machine with gsutil installed and configured:
$ gsutil ls gs://cos-tools/10452.101.0/
gs://cos-tools/10452.101.0/kernel-src.tar.gz
gs://cos-tools/10452.101.0/kernel-src.tar.gz.md5
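Once you have the build number, pulling and unpacking the kernel source is straightforward; this is just a sketch, so substitute your own version and make sure gsutil is configured:
# Substitute the version you read from /etc/lsb-release (or /etc/os-release, see below)
BUILD_ID=10452.101.0
gsutil cp gs://cos-tools/${BUILD_ID}/kernel-src.tar.gz .
tar -xzf kernel-src.tar.gz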
The best way would be to look at /etc/os-release:
$ cat /etc/os-release | grep BUILD_ID
BUILD_ID=12607.7.0
See this Chromium OS design doc for more details on the meaning of all the fields in /etc/lsb-release and /etc/os-release in Chromium OS, and on whether you can rely on them or not. Container-Optimized OS is based on Chromium OS.
So I'm having some adventures with the vagrant-aws plugin, and I'm now stuck on the issue of syncing folders. This is necessary to provision the machines, which is the ultimate goal. However, running vagrant provision on my machine yields
[root@vagrant-puppet-minimal vagrant]# vagrant provision
[default] Rsyncing folder: /home/vagrant/ => /vagrant
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mkdir -p '/vagrant'
I'm almost positive the error occurs because ssh-ing in manually and running that command yields 'permission denied' (obviously, a non-root user is trying to make a directory in the root directory). I tried ssh-ing as root, but that seems like bad practice (and Amazon doesn't like it). How can I change the folder to be rsynced with vagrant-aws? I can't seem to find the setting for that. Thanks!
Most likely you are running into the known vagrant-aws issue #72: Failing with EC2 Amazon Linux Images.
Edit 3 (Feb 2014): Vagrant 1.4.0 (released Dec 2013) and later versions now support the boolean configuration parameter config.ssh.pty. Set the parameter to true to force Vagrant to use a PTY for provisioning. Vagrant creator Mitchell Hashimoto points out that you must not set config.ssh.pty on the global config; you must set it on the node config directly.
This new setting should fix the problem, and you shouldn't need the workarounds listed below anymore. (But note that I haven't tested it myself yet.) See Vagrant's CHANGELOG for details -- unfortunately the config.ssh.pty option is not yet documented under SSH Settings in the Vagrant docs.
Edit 2: Bad news. It looks as if even a boothook will not run "fast enough" (to update /etc/sudoers.d/ for !requiretty) before Vagrant tries to rsync. During my testing today I started seeing sporadic "mkdir -p /vagrant" errors again when running vagrant up --no-provision. So we're back to the previous point, where the most reliable fix seems to be a custom AMI image that already includes the patch to /etc/sudoers.d.
Edit: Looks like I found a more reliable way to fix the problem. Use a boothook to perform the fix. I manually confirmed that a script passed as a boothook is executed before Vagrant's rsync phase starts. So far it has been working reliably for me, and I don't need to create a custom AMI image.
Extra tip: And if you are relying on cloud-config, too, you can create a Mime Multi Part Archive to combine the boothook and the cloud-config. You can get the latest version of the write-mime-multipart helper script from GitHub.
Usage sketch:
$ cd /tmp
$ wget https://raw.github.com/lovelysystems/cloud-init/master/tools/write-mime-multipart
$ chmod +x write-mime-multipart
$ cat boothook.sh
#!/bin/bash
SUDOERS_FILE=/etc/sudoers.d/999-vagrant-cloud-init-requiretty
echo "Defaults:ec2-user !requiretty" > $SUDOERS_FILE
echo "Defaults:root !requiretty" >> $SUDOERS_FILE
chmod 440 $SUDOERS_FILE
$ cat cloud-config
#cloud-config
packages:
- puppet
- git
- python-boto
$ ./write-mime-multipart boothook.sh cloud-config > combined.txt
You can then pass the contents of 'combined.txt' to aws.user_data, for instance via:
aws.user_data = File.read("/tmp/combined.txt")
Sorry for not mentioning this earlier, but I am literally troubleshooting this right now myself. :)
Original answer (see above for a better approach)
TL;DR: The most reliable fix is to "patch" a stock Amazon Linux AMI image, save it and then use the customized AMI image in your Vagrantfile. See below for details.
Background
A potential workaround is described (and linked in the bug report above) at https://github.com/mitchellh/vagrant-aws/pull/70/files. In a nutshell, add the following to your Vagrantfile:
aws.user_data = "#!/bin/bash\necho 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty && chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty\nyum install -y puppet\n"
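For readability, this is the same user_data written out as a plain script; it is just the one-liner above with the \n escapes expanded:
#!/bin/bash
echo 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty && chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty
yum install -y puppet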
Most importantly this will configure the OS to not require a tty for user ec2-user, which seems to be the root of the problem. I /think/ that the additional installation of the puppet package is not required for the actual fix (although Vagrant may use Puppet for provisioning the machine later, depending on how you configured Vagrant).
My experience with the described workaround
I have tried this workaround but Vagrant still occasionally fails with the same error. It might be a "race condition" where Vagrant happens to run its rsync phase faster than cloud-init (which is what aws.user_data is passing information to) can prepare the workaround for #72 on the machine for Vagrant. If Vagrant is faster you will see the same error; if cloud-init is faster it works.
What will work (but requires more effort on your side)
What definitely works is to run the command on a stock Amazon Linux AMI image, and then save the modified image (= create an image snapshot) as a custom AMI image of yours.
# Start an EC2 instance with a stock Amazon Linux AMI image and ssh-connect to it
$ sudo su - root
$ echo 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty
$ chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty
# Note: Installing puppet is mentioned in the #72 bug report but I /think/ you do not need it
# to fix the described Vagrant problem.
$ yum install -y puppet
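Once the instance is patched, you can snapshot it into your own AMI. With the AWS CLI that is roughly the following; the instance ID and image name are placeholders:
# Create a custom AMI from the patched instance (ID and name are placeholders)
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "amazon-linux-requiretty-patched" --no-reboot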
You must then use this custom AMI image in your Vagrantfile instead of the stock Amazon one. The obvious drawback is that you are not using a stock Amazon AMI image anymore -- whether this is a concern for you or not depends on your requirements.
What I tried but didn't work out
For the record: I also tried to pass a cloud-config to aws.user_data that included a bootcmd to set !requiretty in the same way as the embedded shell script above. According to the cloud-init docs, bootcmd is run "very early" in the startup cycle of an EC2 instance, the idea being that bootcmd instructions would run earlier than Vagrant tries to run its rsync phase. But unfortunately I discovered that the bootcmd feature is not implemented in the outdated cloud-init version of Amazon's current Linux AMIs (e.g. ami-05355a6c has cloud-init 0.5.15-69.amzn1, but bootcmd was only introduced in 0.6.1).