I am trying to start minikube using the minikube start command and this is the error I am getting. I even installed the latest version of VirtualBox, but it still gives me this error.
Can someone please tell me why this is happening?
Follow the advice of the error message. Did you try installing virtualbox-dkms and the linux headers?
$ sudo apt update
$ sudo apt install virtualbox-dkms linux-headers-generic
Follow the instructions in the docs, if you aren't already: https://kubernetes.io/docs/tasks/tools/install-minikube/
First of all, I would recommend installing the newest version of Minikube (currently 1.5.2) and kubectl.
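For reference, a minimal sketch of installing that Minikube release on Linux (the URL follows the standard release layout; adjust the version as needed):
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v1.5.2/minikube-linux-amd64
$ chmod +x minikube && sudo mv minikube /usr/local/bin/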
Second, check whether your machine supports virtualization. This can be done with the command egrep -q 'vmx|svm' /proc/cpuinfo && echo yes || echo no.
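Run it directly in a terminal; the second variant (my addition) also prints which flag the CPU exposes:
$ egrep -q 'vmx|svm' /proc/cpuinfo && echo yes || echo no
$ egrep -o 'vmx|svm' /proc/cpuinfo | sort -u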
If the answer is no:
If you are running within a VM, your hypervisor does not allow nested virtualization. You will need to use the None (bare-metal) driver.
If you are running on a physical machine, ensure that your BIOS has hardware virtualization enabled.
Minikube sets VirtualBox as the default driver, but you can use others. Here, under Hypervisor Setup, you can see that KVM or None can also be used as the driver on Linux.
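For example, if KVM is set up on the host, a sketch of picking it explicitly (assumes libvirt and the kvm2 driver are installed):
$ minikube start --vm-driver=kvm2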
Solutions:
1. As the Minikube output advised, try the following:
- $ sudo apt-get install virtualbox-dkms linux-headers-generic
- run $ sudo modprobe vboxdrv
- reinstall VirtualBox
2. If there is no virtualization option on your laptop, you can use Minikube with the --vm-driver=none flag.
$ sudo minikube start --vm-driver=none
If you use this option, you might need to specify --cpus=X and --memory=XXXX, as the defaults request fewer resources (see the example below).
Another thing is that the none driver provides limited isolation and may reduce system security and reliability. More info can be found here.
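For example, a sketch of the none driver with explicit (illustrative) resource values:
$ sudo minikube start --vm-driver=none --cpus=2 --memory=4096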
$ minikube start
minikube v1.5.2 on Ubuntu 18.04
Creating virtualbox VM (CPUs=2, Memory=2000MB, Disk=20000MB)
...
$ sudo minikube start --vm-driver=none
minikube v1.5.2 on Ubuntu 18.04
Running on localhost (CPUs=2, Memory=7470MB, Disk=9749MB) ...
After a successful installation, don't forget to execute the commands mentioned in the output:
sudo mv /home/<your_user>/.kube /home/<your_user>/.minikube $HOME
sudo chown -R $USER $HOME/.kube $HOME/.minikube
I have a dilemma. I am trying to set up the Microsoft sqlsrv drivers for PHP and a Laravel project so that it can connect to an RDS service and run my migrations; however, the Microsoft page states that the supported Ubuntu Server versions are 18, 20 and 21. The following snippet contains the exact commands for an Ubuntu server from the official Microsoft page.
if ! [[ "18.04 20.04 21.04" == *"$(lsb_release -rs)"* ]];
then
echo "Ubuntu $(lsb_release -rs) is not currently supported.";
exit;
fi
sudo su
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
curl https://packages.microsoft.com/config/ubuntu/$(lsb_release -rs)/prod.list > /etc/apt/sources.list.d/mssql-release.list
exit
sudo apt-get update
sudo ACCEPT_EULA=Y apt-get install -y msodbcsql18
# optional: for bcp and sqlcmd
sudo ACCEPT_EULA=Y apt-get install -y mssql-tools18
echo 'export PATH="$PATH:/opt/mssql-tools18/bin"' >> ~/.bashrc
source ~/.bashrc
# optional: for unixODBC development headers
sudo apt-get install -y unixodbc-dev
Trying to run the commands without the if statement installs "something", but it ends up with errors. Moreover, the pdo_sqlsrv and sqlsrv extensions do show up in PHP's extension list (running "php -m" shows that they are indeed loaded), but if I try to run the migration it shows an alert that the ODBC driver is missing.
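As a quick check (my addition, assuming unixODBC is installed), you can list the ODBC drivers that are actually registered; if msodbcsql18 installed correctly, an "ODBC Driver 18 for SQL Server" entry should appear:
$ odbcinst -q -d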
What makes me think this is not working is that my EC2 instance runs Ubuntu 22, which would explain it, since the drivers are not supported for that version at the moment and are not installed properly. The options I see are to either somehow downgrade the Ubuntu version on my EC2 server, or create a new instance with a version that supports the sqlsrv drivers. I don't know if there is a third option to make the installation work on this version, but I assume the previous two are the more sensible ones.
My question is: is it possible or recommended to downgrade the Ubuntu version of an EC2 server, or should I create a new instance with a compatible version?
One of the main benefits of the cloud is resource provisioning speed.
It takes seconds to create a new EC2 instance, so it's much easier and quicker to just create a new instance from one of the available Ubuntu 20.04 LTS or Ubuntu 18.04 LTS AMIs.
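For example, a minimal sketch with the AWS CLI; the AMI ID, key pair and security group are placeholders you would substitute for your region:
$ aws ec2 run-instances --image-id <ubuntu-20.04-ami-id> --instance-type t3.micro --key-name <your-key-pair> --security-group-ids <sg-id>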
I'm trying to build an image for use on EC2 instances in an AWS Batch job. I need to use Ubuntu 18.04 because the goal is to run some Fortran software that I can only get to compile on Ubuntu 18.04. I have the Fortran software and some Python scripts running well on a manually started Ubuntu 18.04 EC2 instance.
Now I'm trying to build an image with Docker (that I'll eventually apply to hundreds or thousands of EC2 instances), but I have to get the CloudWatch agent (CWA) installed and started, and I can't get CWA to start during the Docker build. CWA starts and runs fine in my manual EC2 development instance (Ubuntu 18.04). I initially had problems with CWA on that instance because CWA uses systemctl, so I had to manually install systemd, and that worked after a reboot. But I'm not able to replicate this in my Docker build; I always get the error:
System has not been booted with systemd as init system (PID 1). Can't operate.
unknown init system
The command '/bin/sh -c sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c file:amazon-cloudwatch-agent.json' returned a non-zero code: 1
I tried starting from an Ubuntu 18.04 image that is supposed to have systemd already installed, and tried rebooting my EC2 instance; same error. Here's the source: https://hub.docker.com/r/jrei/systemd-ubuntu
I looked for other ideas, e.g.: Docker System has not been booted with systemd as init system
... but couldn't figure out how to make it work in a Docker build.
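From that image's page, systemd is only meant to run when the container is started at runtime, roughly like the command below, not during docker build, where each RUN step runs under /bin/sh (the exact flags are my recollection of its docs and may be off):
$ docker run -d --name systemd-ubuntu --tmpfs /tmp --tmpfs /run --tmpfs /run/lock -v /sys/fs/cgroup:/sys/fs/cgroup:ro jrei/systemd-ubuntu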
So,
am I using the Ubuntu 18.04 image (that has systemd) in my build wrong? How should it be used in a Docker build?
is there another way to start CloudWatchAgent in Ubuntu 18.04 that gets around the systemd problem?
would it work/is there a way to restart the operating system inside the Docker container, during the docker build stage?
am I stuck and will have to try recompile everything on a different Ubuntu or AMI like Amazon Linux?
Or is there something else I'm missing?
Here's my Docker file:
#version with systemd already installed
FROM jrei/systemd-ubuntu@sha256:1b65424e0ec4f6772576b55c49e1470ba506504d1033e9da5795785b1d6a4d88 as ubuntu-base
RUN apt-get update && apt-get install -y \
sudo \
wget \
python3-pip
RUN sudo apt-get -y install libgfortran3
RUN sudo pip3 install boto3
RUN wget https://s3.us-east-2.amazonaws.com/amazoncloudwatch-agent-us-east-2/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb
RUN sudo dpkg -i -E ./amazon-cloudwatch-agent.deb
COPY . .
RUN cp amazon-cloudwatch-agent.json /opt/aws/amazon-cloudwatch-agent/etc/
ENV ECS_AVAILABLE_LOGGING_DRIVERS=awslogs
RUN sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c file:amazon-cloudwatch-agent.json
RUN mkdir -p cpseqlogs
CMD python3 cpsequence.py
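One direction I have not verified would be to drop the failing RUN step and start the agent when the container starts, alongside the Python job. A minimal sketch, assuming the .deb package provides the /opt/aws/amazon-cloudwatch-agent/bin/start-amazon-cloudwatch-agent launcher that AWS's own container image uses (the path would need checking inside the image). A wrapper script, start.sh:
#!/bin/sh
# launch the CloudWatch agent in the background, then run the main job in the foreground
/opt/aws/amazon-cloudwatch-agent/bin/start-amazon-cloudwatch-agent &
exec python3 cpsequence.py
Then, in the Dockerfile, remove the RUN ...amazon-cloudwatch-agent-ctl... line and replace the final CMD with:
COPY start.sh /usr/local/bin/start.sh
RUN chmod +x /usr/local/bin/start.sh
CMD ["/usr/local/bin/start.sh"]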
Thanks for any suggestions, ideas, or tips (I'm fairly new to Docker, but not totally green on linux).
Background:
I created a sandbox VM with VirtualBox on my macOS. It correctly spins up a VM (running CentOS 7) which I can access.
Inside this sandbox VM, I want to spin up several VMs in order to test Ansible playbooks with Kitchen CI & Vagrant, so I installed VirtualBox by downloading it from the following link: https://download.virtualbox.org/virtualbox/5.2.8/VirtualBox-5.2-5.2.8_121009_el7-1.x86_64.rpm
After the installation I executed the command:
[david@vmkitchen-env ansible-test]# VBoxManage --version
It returned:
WARNING: The vboxdrv kernel module is not loaded. Either there is no module
available for the current kernel (3.10.0-693.2.1.el7.x86_64) or it
failed to load. Please recompile the kernel module and install it
by
sudo /sbin/vboxconfig
You will not be able to start VMs until this problem is fixed.
5.2.8r121009
I installed the Development tools, but I keep getting the same issue.
I don't think I need to recompile any kernel module. Any idea?
Thanks in advance for your help.
So, after searching on the internet, and not just on the VirtualBox website, I found the solution, and I was right: I did not need to compile any module.
The following is the reference to the CentOS wiki page:
https://wiki.centos.org/HowTos/Virtualization/VirtualBox
In a few words, I had to install the dkms and kernel-devel packages. To do so, I needed to install the EPEL repository, but personally I prefer to install and enable the IUS repository.
The following are the set of commands that worked for me:
yum groupinstall "Development tools"
yum install https://centos7.iuscommunity.org/ius-release.rpm
yum install dkms
yum install kernel-devel
reboot
After the machine had rebooted, I was able to get VirtualBox working fine.
I verified by the command line:
[david@vmkitchen-env ansible-test]# VBoxManage --version
And it returned the correct value:
5.2.8r121009
The steps below fixed the issue for me.
1. sudo /sbin/vboxconfig
vboxdrv.sh: Stopping VirtualBox services.
vboxdrv.sh: Starting VirtualBox services.
vboxdrv.sh: Building VirtualBox kernel modules.
This system is currently not set up to build kernel modules.
Please install the Linux kernel "header" files matching the current kernel
for adding new hardware support to the system.
The distribution packages containing the headers are probably:
kernel-devel kernel-devel-3.10.0-957.10.1.el7.x86_64
2. Download the kernel-devel package matching the running kernel; this mirror has it:
wget https://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/kernel-devel-3.10.0-957.10.1.el7.x86_64.rpm
3. yum localinstall kernel-devel-3.10.0-957.10.1.el7.x86_64.rpm -y
4. sudo /sbin/vboxconfig
Issue resolved.
On Fedora 36, I only had to run
sudo /sbin/vboxconfig
I am new to AWS.
I created a Linux free tier instance and it's up and running; I am able to access it via SSH (PuTTY), as I am a Windows user.
Now I want to RDP to the Linux instance to see the interface, but I am unable to do so; I cannot find any option for that.
As per my understanding from reading online forums, it is not possible to RDP to a Linux instance on AWS.
Can anyone give their expert opinion on whether it is possible to RDP to the Linux instance? If not, is there any way I can access a graphical interface for a Linux instance in AWS, or do I just have to work with the command line interface from my local machine?
Amazon had a page that described how to do this with various Linux versions, but they took it down. Here are the steps for Ubuntu 16.04. I just did this on a new Ubuntu EC2 instance and it worked fine.
sudo apt update && sudo apt upgrade
sudo sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
sudo /etc/init.d/ssh restart
sudo passwd ubuntu
sudo apt install xrdp xfce4 xfce4-goodies tightvncserver -y
echo xfce4-session > /home/ubuntu/.xsession
sudo cp /home/ubuntu/.xsession /etc/skel
sudo sed -i '0,/-1/s//ask-1/' /etc/xrdp/xrdp.ini
sudo service xrdp restart
RDP is a proprietary protocol developed by Microsoft. Linux servers do not come with a GUI; you need to SSH into the Linux box and then install packages to enable desktop GUI functionality.
Here is an article from AWS
https://aws.amazon.com/premiumsupport/knowledge-center/connect-ec2-centos-windows/
Note: Amazon Linux does not provide any desktop GUI functionality
Try installing the GNOME packages, i.e.:
$ sudo yum -y groupinstall "Server with GUI"
$ sudo systemctl enable xrdp; sudo systemctl start xrdp
Now try logging in from your Windows machine using Windows Remote Desktop or another RDP client.
When I try to build my Dockerfile, it cannot connect to the network, or at least DNS resolution fails:
Sending build context to Docker daemon 15.95 MB
Sending build context to Docker daemon
Step 0 : FROM ruby
---> eeb85dfaa855
Step 1 : RUN apt-get update -qq && apt-get install -y build-essential libpq-dev
---> Running in ec8cbd41bcff
W: Failed to fetch http://httpredir.debian.org/debian/dists/jessie/InRelease
W: Failed to fetch http://httpredir.debian.org/debian/dists/jessie-updates/InRelease
W: Failed to fetch http://security.debian.org/dists/jessie/updates/InRelease
W: Failed to fetch http://httpredir.debian.org/debian/dists/jessie/Release.gpg Could not resolve 'httpredir.debian.org'
W: Failed to fetch http://httpredir.debian.org/debian/dists/jessie-updates/Release.gpg Could not resolve 'httpredir.debian.org'
W: Failed to fetch http://security.debian.org/dists/jessie/updates/Release.gpg Could not resolve 'security.debian.org'
W: Some index files failed to download. They have been ignored, or old ones used instead.
Reading package lists...
Building dependency tree...
Reading state information...
E: Unable to locate package build-essential
INFO[0001] The command "/bin/sh -c apt-get update -qq && apt-get install -y build-essential libpq-dev" returned a non-zero code: 100
But if I run exactly the same command via docker run it works:
docker run --name="test" ruby /bin/sh -c 'apt-get update -qq && apt-get install -y build-essential libpq-dev'
Does anybody have an idea why docker build does not work? I have tried all the DNS-related tips on Stack Overflow, like starting Docker with --dns 8.8.8.8, etc.
Thanks in advance
Check what networks are available on your host with the below command:
docker network ls
Then pick one that you know is working; the host network could be a good candidate.
Now, assuming you are in the directory where your Dockerfile is located, build your image appending the --network flag, and change <image-name> to yours:
docker build . -t <image-name> --no-cache --network=host
Docker definitely seems to have some network issues. I managed to fix this problem with
systemctl restart docker
... which is basically just the unix-level 'restart-the-daemon' command in Debian 8.
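If a daemon restart alone doesn't help, another thing worth trying (my addition, in the same spirit as the --dns flag the asker already mentions) is pinning DNS servers in the daemon configuration, assuming your distribution's Docker daemon reads /etc/docker/daemon.json:
$ cat /etc/docker/daemon.json
{
  "dns": ["8.8.8.8", "8.8.4.4"]
}
$ sudo systemctl restart docker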
I had a similar problem, but as I was running Amazon Linux I had no systemctl. I solved it using:
sudo service docker restart
My docker build also failed while trying to run apt-get upgrade, with exactly the same errors. I was using docker-machine on Mac OS X, and a simple docker-machine restart default solved the issue. No idea what initially caused this, though.
Another case of the above reported behaviour - this time building a docker image from Jenkins:
[...]
Step 3 : RUN apt-get update && apt-get install -y curl libapache2-mod-proxy-html
---> Running in ea7aca5dea9b
Err http://security.debian.org jessie/updates InRelease
Err http://security.debian.org jessie/updates Release.gpg
Could not resolve 'security.debian.org'
Err http://httpredir.debian.org jessie InRelease
[...]
In my case it turned out that DNS wasn't reachable from within the container - but it still was from the Docker host!? (The container's resolver configuration was okay!)
After restarting the Docker machine (a complete reboot - a restart of docker.service didn't do the trick) it's been working again.
So some activity of mine (or of a colleague) must have broken the Docker networking. Maybe some firewalld modification?
I'm still investigating, as I'm not sure which activity may have corrupted the Docker networking...
I had the exact same issue with a Raspberry Pi.
Starting/stopping the service did not help, but re-installing the package (dpkg -i docker-hypriot_1.10.3-1_armhf.deb && service docker start in my case) immediately solved the situation: apt-get update managed to resolve and reach the servers.
There must be some one-shot actions in the installation process...
I also faced the same issue today. My workaround was to restart the docker-machine. In my case, it runs on VirtualBox.
Once you power it off and then restart the machine, http://security.debian.org seems to resolve again.
Hope this helps.
A couple of suggestions, though I'm not sure whether they will work. Can you change the ...apt-get install -y... to ...apt-get install -yqq...?
Also, has the image that you're trying to build from changed?