Docker Desktop context selection not updating the active kubectl context

I'm running Docker Desktop for Windows with WSL2 integration. I'm also running minikube. The Docker Desktop GUI correctly shows both known contexts (docker-desktop and minikube).
I've selected the 'minikube' context because that's the one I want to use.
However, when I go into the WSL terminal and run kubectl config get-contexts, I only see the docker-desktop context, which is listed as the active one:
admin@RODDY01-PC:/mnt/c/workspace$ kubectl config get-contexts
CURRENT   NAME             CLUSTER          AUTHINFO         NAMESPACE
*         docker-desktop   docker-desktop   docker-desktop
Running which kubectl reveals that I'm using the kubectl that docker-desktop installed when doing the WSL integration:
admin@RODDY01-PC:/mnt/c/workspace$ which kubectl
/usr/local/bin/kubectl
admin@RODDY01-PC:/mnt/c/workspace$ ls -la /usr/local/bin/kubectl
lrwxrwxrwx 1 root root 55 Aug 1 17:49 /usr/local/bin/kubectl -> /mnt/wsl/docker-desktop/cli-tools/usr/local/bin/kubectl
(In Windows, where kubectl is bundled as part of Docker at C:\Program Files\Docker\Docker\resources\bin\kubectl.exe, it correctly shows both contexts and indicates minikube is the active one; it's just the Linux/WSL2 kubectl instance that's not picking up the active and available contexts.)
How can I get Docker Desktop to properly update the kubectl active context in WSL2?
Version info: Docker Desktop 3.5.2; Windows 10 Professional; Debian 10 on WSL2.

The problem turned out to be that I actually had two kube config files.
Windows, which was correctly being updated by the GUI, had a config located at C:\Users\<username>\.kube\config.
WSL also had one, at ~/.kube/config. This one was NOT up-to-date, and was actually missing the entire minikube context definition.
I solved this by setting an environment variable in my ~/.bashrc file to point $KUBECONFIG at the Windows kube config file. A colleague with a similar issue instead deleted the Linux file and created a symlink to the Windows one; both solutions work.
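For reference, a minimal sketch of both approaches (the Windows username and mount path are placeholders; adjust them to your setup):
# Option 1: in ~/.bashrc, point kubectl at the Windows config
export KUBECONFIG=/mnt/c/Users/<username>/.kube/config
# Option 2: replace the WSL config with a symlink to the Windows one
rm ~/.kube/config
ln -s /mnt/c/Users/<username>/.kube/config ~/.kube/config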
Tangentially related: the minikube configuration includes certificate references that use Windows-style paths; it was necessary to embed these certificates into the configuration file for them to work properly across both operating systems.
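One way to do the embedding (a sketch, assuming you run it against the already-correct Windows config): kubectl can inline the referenced certificate files, and minikube can be told to embed them up front.
kubectl config view --raw --flatten > ~/.kube/config-embedded   # writes a self-contained config with the certs inlined
minikube start --embed-certs                                    # or have minikube embed certs when (re)creating the cluster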

Related

"puppet agent --test" on client machine aren't getting manifest from the Puppet master server

Issue
So I have two AWS instances: a Puppet master and a Puppet client. When I run sudo puppet agent --test on my client, the tasks defined in my master's manifest are not applied to the client instance.
Where I am right now
puppetmaster is installed on the master instance
puppet is installed on the client instance
The master just finished signing my client's certificate; no errors were displayed
The master has a /etc/puppet/manifests/site.pp
The client's puppet.conf file has a server=dns_of_master line
My Puppet version is 5.4.0. I'm using the default manifest configuration.
Here's the guide that I'm following: https://www.digitalocean.com/community/tutorials/getting-started-with-puppet-code-manifests-and-modules. The only changes are the site.pp content and that I'm using AWS.
If it helps, here's my AWS instances' AMI: ami-06d51e91cea0dac8d
Details
Here's the content of my master's /etc/puppet/manifests/site.pp:
node default {
  package { 'nginx':
    ensure => installed,
  }
  service { 'nginx':
    ensure  => running,
    require => Package['nginx'],
  }
  file { '/tmp/hello_world':
    ensure  => present,
    content => 'Hello, World!',
  }
}
The file has permissions of 777.
Here's the output when I run sudo puppet agent --test (this is after I ran sudo puppet agent --enable):
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Retrieving locales
Info: Caching catalog for my_client_dns
Info: Applying configuration version '1578968015'
Notice: Applied catalog in 0.02 seconds
I have looked at other Stack Overflow posts with this issue. I can tell that my catalog is not being applied because of the missing resource status messages and the very short apply time. Unfortunately, the solutions didn't apply to my case:
My site.pp is named correctly and is in the correct path, /etc/puppet/manifests
I didn't touch my master's puppet.conf file
I tried restarting the server with sudo systemctl, but nothing changed (some quick sanity checks are sketched below)
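Some sanity checks that help narrow this down (a sketch; these are standard Puppet commands, and the exact paths they print depend on your packaging):
sudo puppet config print server          # on the client: should print the master's DNS name
sudo puppet config print manifest        # on the master: shows which manifest path is actually in use
sudo puppet agent --test --noop --debug  # on the client: verbose dry run showing which catalog gets compiled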
So I have fixed the issue. The guide that I was following targets an older version of Ubuntu (16.04, rather than the 18.04 I was using), which requires a different AMI than the one I used to create the instances.

How do I provide credentials to the docker awslogs driver using Docker for Mac?

I'm trying to use the docker awslogs driver and getting the following error:
"docker: Error response from daemon: Failed to initialize logging
driver: NoCredentialProviders: no valid providers in chain.
Deprecated."
According to this GitHub comment, I need to set the AWS_SHARED_CREDENTIALS_FILE environment variable for the docker daemon, but I'm not sure how to do that when using Docker for Mac.
The command I'm using to start the container is:
docker run -d \
  --log-driver=awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group=my-log-group \
  my-image
Version information:
Docker for Mac 1.12.1-rc1-beta23 build 11375
OS X El Capitan 10.11.6
With boot2docker, you would need to modify /var/lib/boot2docker/profile in order to add this variable.
See "Docker daemon config file on boot2docker".
It does persist across reboots of the TinyCore-based VM, and the docker daemon then takes it into account.
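For example, something along these lines in that profile (a sketch; the variable names are the standard AWS SDK ones, and the values/paths are placeholders):
export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
# or, matching the GitHub comment above, point at a shared credentials file instead:
export AWS_SHARED_CREDENTIALS_FILE=/path/to/credentials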
With the new xhyve-based Docker for Mac, the idea should be the same.
/var/lib/boot2docker/profile does exist as well, as shown in this answer.
The official docker daemon doc points to:
--config-file=/etc/docker/daemon.json Daemon configuration file
So try and modify this file.
By default, the comments mention:
~/Library/Containers/com.docker.docker/Data/database/com.docker.driver.amd64-linux/etc/docker/daemon.json
Using information taken from this answer: Docker daemon config path under mac os
You can connect to the VM layer that runs the docker daemon using:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
And you can modify /etc/docker/daemon.json to add the needed variables there.
Once you make your changes, you can just run:
service docker restart
from within the moby terminal to restart the docker daemon.
Do note that if you restart Docker from your Mac, the changes will not persist.
On a side note, if you encounter a login screen when connecting with the screen command, try username: root to access the system.
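Putting those steps together, the whole round trip looks roughly like this (a sketch; the tty path and service name are the ones quoted above and may differ between Docker for Mac releases):
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
# inside the VM (log in as root if prompted):
vi /etc/docker/daemon.json     # make the changes here
service docker restart         # restart the daemon so it picks them up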

How can I get cadvisor (Docker) working with AWS/Debian?

I have an AWS instance set up (Debian) onto which I've installed Docker.
I can successfully run the hello-world container, as well as run ubuntu as recommended in the Docker install validation.
I want to run cadvisor. So I ran the recommended quick-start script:
sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:rw \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  google/cadvisor:latest
That gave me no error, but when I do a sudo docker ps nothing is there; it's as if the container started up and then died or otherwise shut itself down.
I tried adding --logtostderr to the end to see what I could see, and got:
I0108 19:19:55.308016 00001 storagedriver.go:89] Caching 60 recent stats in memory; using "" storage driver
I0108 19:19:55.308353 00001 manager.go:78] cAdvisor running in container: "/docker/e3b5ede6f6def6b36d7682814aefc2b414defaea065ccf977a1a2542a80c310c"
F0108 19:19:55.337891 00001 cadvisor.go:76] Failed to create a Container Manager: failed to get cache information for node 0: open /sys/devices/system/cpu/cpu1/cache: no such file or directory
Do I need to do something different for a Debian system?
If you look at the docker command and the error: we are explicitly mounting the host's /sys directory into the container (--volume=/sys:/sys:ro), and the error is complaining about a file in a sub-directory of it, /sys/devices/system/cpu/cpu1/cache. So if that file/folder does not exist on your host VM, it won't exist inside the container either.
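A quick way to confirm that on the host itself (a sketch):
ls /sys/devices/system/cpu/cpu1/cache || echo "cpu1 cache info is not exposed on this host"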
I have tested both the Ubuntu and Amazon Linux standard AMIs, and they both seem to have the file mentioned. I don't see Debian among the standard AMIs, so I have no easy way to test it, but I suspect the image you are using is missing the required kernel modules or settings. Why not use one of the standard Amazon AMIs?
There was a bug we fixed in cAdvisor. The latest version of cAdvisor should work just fine with Debian on AWS or anywhere.
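If you originally pulled an older image, refreshing it looks roughly like this (a sketch; the container name is the one from the question above):
sudo docker pull google/cadvisor:latest   # fetch the fixed image
sudo docker rm -f cadvisor                # remove the old container if it is still registered
# ...then re-run the quick-start command from the question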

Vagrant Rsync Error before provisioning

So I'm having some adventures with the vagrant-aws plugin, and I'm now stuck on the issue of syncing folders. This is necessary to provision the machines, which is the ultimate goal. However, running vagrant provision on my machine yields
[root@vagrant-puppet-minimal vagrant]# vagrant provision
[default] Rsyncing folder: /home/vagrant/ => /vagrant
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mkdir -p '/vagrant'
I'm almost positive the error occurs because ssh-ing in manually and running that command yields 'permission denied' (obviously: a non-root user is trying to make a directory in the root of the filesystem). I tried ssh-ing as root, but that seems like bad practice (and Amazon doesn't like it). How can I change the folder to be rsynced with vagrant-aws? I can't seem to find the setting for that. Thanks!
Most likely you are running into the known vagrant-aws issue #72: Failing with EC2 Amazon Linux Images.
Edit 3 (Feb 2014): Vagrant 1.4.0 (released Dec 2013) and later versions now support the boolean configuration parameter config.ssh.pty. Set the parameter to true to force Vagrant to use a PTY for provisioning. Vagrant creator Mitchell Hashimoto points out that you must not set config.ssh.pty on the global config, you must set it on the node config directly.
This new setting should fix the problem, and you shouldn't need the workarounds listed below anymore. (But note that I haven't tested it myself yet.) See Vagrant's CHANGELOG for details -- unfortunately the config.ssh.pty option is not yet documented under SSH Settings in the Vagrant docs.
Edit 2: Bad news. It looks as if even a boothook will not reliably run (to update /etc/sudoers.d/ for !requiretty) before Vagrant tries to rsync. During my testing today I started seeing sporadic "mkdir -p /vagrant" errors again when running vagrant up --no-provision. So we're back to the previous point where the most reliable fix seems to be a custom AMI image that already includes the applied patch to /etc/sudoers.d.
Edit: Looks like I found a more reliable way to fix the problem. Use a boothook to perform the fix. I manually confirmed that a script passed as a boothook is executed before Vagrant's rsync phase starts. So far it has been working reliably for me, and I don't need to create a custom AMI image.
Extra tip: And if you are relying on cloud-config, too, you can create a Mime Multi Part Archive to combine the boothook and the cloud-config. You can get the latest version of the write-mime-multipart helper script from GitHub.
Usage sketch:
$ cd /tmp
$ wget https://raw.github.com/lovelysystems/cloud-init/master/tools/write-mime-multipart
$ chmod +x write-mime-multipart
$ cat boothook.sh
#!/bin/bash
SUDOERS_FILE=/etc/sudoers.d/999-vagrant-cloud-init-requiretty
echo "Defaults:ec2-user !requiretty" > $SUDOERS_FILE
echo "Defaults:root !requiretty" >> $SUDOERS_FILE
chmod 440 $SUDOERS_FILE
$ cat cloud-config
#cloud-config
packages:
- puppet
- git
- python-boto
$ ./write-mime-multipart boothook.sh cloud-config > combined.txt
You can then pass the contents of 'combined.txt' to aws.user_data, for instance via:
aws.user_data = File.read("/tmp/combined.txt")
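Once the instance is up, one way to confirm the boothook actually removed the tty requirement (a sketch; the host name is a placeholder): running sudo over ssh without a pty fails with "sorry, you must have a tty to run sudo" while requiretty is still in effect.
ssh ec2-user@<ec2-instance> 'sudo true && echo "requiretty is disabled for ec2-user"'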
Sorry for not mentioning this earlier, but I am literally troubleshooting this right now myself. :)
Original answer (see above for a better approach)
TL;DR: The most reliable fix is to "patch" a stock Amazon Linux AMI image, save it and then use the customized AMI image in your Vagrantfile. See below for details.
Background
A potential workaround is described (and linked in the bug report above) at https://github.com/mitchellh/vagrant-aws/pull/70/files. In a nutshell, add the following to your Vagrantfile:
aws.user_data = "#!/bin/bash\necho 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty && chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty\nyum install -y puppet\n"
Most importantly this will configure the OS to not require a tty for user ec2-user, which seems to be the root of the problem. I /think/ that the additional installation of the puppet package is not required for the actual fix (although Vagrant may use Puppet for provisioning the machine later, depending on how you configured Vagrant).
My experience with the described workaround
I have tried this workaround but Vagrant still occasionally fails with the same error. It might be a "race condition" where Vagrant happens to run its rsync phase before cloud-init (which is what aws.user_data passes information to) has applied the workaround for #72 on the machine. If Vagrant is faster you will see the same error; if cloud-init is faster it works.
What will work (but requires more effort on your side)
What definitely works is to run the command on a stock Amazon Linux AMI image, and then save the modified image (= create an image snapshot) as a custom AMI image of yours.
# Start an EC2 instance with a stock Amazon Linux AMI image and ssh-connect to it
$ sudo su - root
$ echo 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty
$ chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty
# Note: Installing puppet is mentioned in the #72 bug report but I /think/ you do not need it
# to fix the described Vagrant problem.
$ yum install -y puppet
You must then use this custom AMI image in your Vagrantfile instead of the stock Amazon one. The obvious drawback is that you are not using a stock Amazon AMI image anymore -- whether this is a concern for you or not depends on your requirements.
What I tried but didn't work out
For the record: I also tried to pass a cloud-config to aws.user_data that included a bootcmd to set !requiretty in the same way as the embedded shell script above. According to the cloud-init docs, bootcmd is run "very early" in the startup cycle of an EC2 instance -- the idea being that bootcmd instructions would run earlier than Vagrant tries to run its rsync phase. But unfortunately I discovered that the bootcmd feature is not implemented in the outdated cloud-init version of current Amazon Linux AMIs (e.g. ami-05355a6c has cloud-init 0.5.15-69.amzn1, but bootcmd was only introduced in 0.6.1).

Why could VirtualBox not find a registered machine named Windows_7?

I'm trying to change TCP/UDP of a virtual machine using VBoxManage setextradata.
Whenever I type the command:
sudo VBoxManage setextradata Windows_7 "VBoxInternal/Devices/pcnet/0/LUN#0/Config/guestEmule_TCP/Protocol" TCP
I get the following error:
VirtualBox Command Line Management Interface Version 2.1.4
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.
[!] FAILED calling a->virtualBox->FindMachine(Bstr(a->argv[0]), machine.asOutParam()) at line 3688!
[!] Primary RC = VBOX_E_OBJECT_NOT_FOUND (0x80BB0001) - Object corresponding to the supplied arguments does not exist
[!] Full error info present: true , basic error info present: true
[!] Result Code = VBOX_E_OBJECT_NOT_FOUND (0x80BB0001) - Object corresponding to the supplied arguments does not exist
[!] Text = Could not find a registered machine named 'Windows_7'
[!] Component = VirtualBox, Interface: IVirtualBox, {339abca2-f47a-4302-87f5-7bc324e6bbde}
[!] Callee = IVirtualBox, {339abca2-f47a-4302-87f5-7bc324e6bbde}
The virtual machine was created using the GUI. Any ideas?
$ cd /Users/marco/Library/VirtualBox/Machines/Windows_7
$ ls
Logs Windows_7.xml
Windows 7.xml.1.5-macosx.bak
$ sudo vboxmanage registervm Windows_7.xml
VirtualBox Command Line Management Interface Version 2.1.4
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.
[!] FAILED calling a->virtualBox->OpenMachine(Bstr(a->argv[0]), machine.asOutParam()) at line 762!
[!] Primary RC = NS_ERROR_FAILURE (0x80004005) - Operation failed
[!] Full error info present: true , basic error info present: true
[!] Result Code = NS_ERROR_FAILURE (0x80004005) - Operation failed
[!] Text = Could not lock the settings file '/var/root/Library/VirtualBox/Windows_7.xml' (VERR_FILE_NOT_FOUND)
[!] Component = Machine, Interface: IMachine, {ea6fb7ea-1993-4642-b113-f29eb39e0df0}
[!] Callee = IVirtualBox, {339abca2-f47a-4302-87f5-7bc324e6bbde}
It fails because you are using sudo. VirtualBox is designed to be run by any user (in the vboxusers group), and sudo runs the command as the root user, whose VirtualBox configuration is empty.
You can check that by typing:
sudo VBoxManage -nologo list vms # Should print only a newline
VBoxManage -nologo list vms # Detailed information about all your VMs
!! WINDOWS ONLY!!
If you are not on an admin account and are trying to modify your VM in an administrator cmd window, type these commands:
cd "C:\Program Files\Oracle\VirtualBox"
VBoxManage registervm "C:\Users\Your Name Here\VirtualBox VMs\Your VM name here\Your VM name here.vbox"
Now run your VirtualBox modify commands, or whatever else you are doing, and it should work!
Not a direct answer, but just to put it out there for other people searching for it:
On Mac OS X, you can tell VirtualBox to load VMs from another user's home directory, provided the file permissions allow it, or if you are running VirtualBox as the root user using sudo (e.g. if you absolutely have to access your host's web server on port 80).
The way to do this is to set VBOX_USER_HOME appropriately, e.g.
VBOX_USER_HOME=/Users/the_other_user/Library/VirtualBox
If you want to run VBoxHeadless under root, use:
sudo VBOX_USER_HOME=/Users/your_user_id/Library/VirtualBox nohup \
VBoxHeadless -s "IE10 - Win7" </dev/null &>/dev/null &
I had a similar error message, whenever I used sudo to start VBoxSDL:
Error: machine with the given name not found!
Check if this VM has been corrupted and is now inaccessible.
And similar to ypocat's answer, I solved it for Ubuntu using a small script like this:
#!/bin/bash
export VBOX_USER_HOME=/home/username/.config/VirtualBox
VBoxSDL --startvm nameOfVM
You can use it whenever you need to start your VM as root.
SOLUTION_1:
Missing virtualization support might be the reason. Intel systems have Intel VT-x (AMD systems have AMD-V), so make sure it is enabled. You can enable it from the boot screen: go into the BIOS setup, look for a System Configuration (or similar) tab, and enable the virtualization technology option there.
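On a Linux host you can also quickly check whether the CPU exposes hardware virtualization at all (a sketch):
egrep -c '(vmx|svm)' /proc/cpuinfo   # 0 means no VT-x/AMD-V flags are visible to the OS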
SOLUTION_2:
Open a terminal, or on Windows a cmd window run as administrator, and run SC START VBOXDRV. If it says the service is already running, try SC STOP VBOXDRV and then SC START VBOXDRV.
RealScar's solution helped me, in addition to the other commands, and it worked on Ubuntu 20.04 too.
I had the problem initially indicated (VirtualBox unable to find a registered machine). I was getting no results after typing sudo VBoxManage -nologo list vms, so I manually registered the existing machine by typing sudo vboxmanage registervm /home/user/VirtualBox\ VMs/machinenamefolder/machinename.vbox. It worked great.
Note: I was setting up a Cuckoo malware sandbox analysis environment.
REASON: In the above case it is a UUID mismatch (what the configuration points to versus what was actually generated). This is typically the result of either improperly editing the vbox config files or accidentally deleting the associated images/configs.
SOLUTION:
As a FIRST step, correct the UUID mappings.
For example, the UUIDs can be regenerated so that the mappings match again (1).
Or, if you edited the .vbox/.vmdk/VirtualBox.xml files by hand, the mappings in those files should be corrected (2).
As a SECOND step, re-register the virtual machine with the UI.
For example, if you can't open the VM from the VirtualBox GUI or the terminal: remove the "inaccessible" entry from the GUI first, then open the folder of the virtual machine and open the <machine name>.vbox file with VirtualBox; it will get registered, provided the UUID mappings have already been resolved. Otherwise, follow the errors you get while attempting to register and make the necessary changes in the .vbox file.
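A minimal sketch of that flow on the command line (paths are examples only; VBoxManage internalcommands sethduuid assigns a fresh UUID to a disk image, after which the matching <HardDisk uuid="..."> entry in the .vbox file must be updated before re-registering):
VBoxManage internalcommands sethduuid "$HOME/VirtualBox VMs/Windows_7/Windows_7.vmdk"   # note the new UUID it prints
# update the corresponding uuid attribute in Windows_7.vbox, then:
VBoxManage registervm "$HOME/VirtualBox VMs/Windows_7/Windows_7.vbox"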