Vagrantfile with multiple VMs and providers

I am trying to write a Vagrantfile with multiple machines backed by multiple providers. I specifically want to be able to spawn more than one of those machines in one go. Basically, I want to run the command:
vagrant up vb_vm aws_vm
I am aware of the --provider flag, but this would apply to all machines being spawned, so not applicable in my case.
This is my (very trimmed down but still valid) Vagrantfile:
Vagrant.configure(2) do |config|
  config.vm.define 'vb_vm' do |vb_vm|
    vb_vm.vm.box = 'ubuntu/trusty64' # from hashicorp
    vb_vm.vm.provider :virtualbox do |v|
    end
  end
  config.vm.define 'aws_vm' do |aws_vm|
    aws_vm.vm.box = "aws/dummy"
    aws_vm.vm.box_url = 'https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box'
    aws_vm.vm.provider :aws do |a, override|
      a.access_key_id = 'something'
      a.secret_access_key = 'something'
      a.ami = 'something'
    end
  end
end
A vagrant box list shows that the boxes used for each definition are of the right type:
aws/dummy (aws, 0)
ubuntu/trusty64 (virtualbox, 20150928.0.0)
But vagrant status gives me this (note that I do have the lxc plugin available, which has become the default):
Current machine states:
aws_vm not created (aws)
vb_vm not created (lxc)
So this shows that spawning multiple machines with multiple providers is indeed possible, but the choice of provider is wrong.
I am aware of the tricks to set up the default provider, but this only makes things worse (virtualbox used everywhere, aws not used at all...)
I am aware of older Stack Overflow questions as well, but they relate to a much older version of Vagrant.
So the question is: how do I make sure that each box defined uses its proper provider?

The trick is to create each VM with its own provider.
For example, I've defined a quick Vagrantfile (minimized) with a box for each provider:
Vagrant.configure(2) do |config|
  config.vm.define "db" do |db|
    db.vm.box = "..."
    db.vm.hostname = "db"
  end
  config.vm.define "app", primary: true do |app|
    app.vm.box = "..."
    app.vm.hostname = "app"
    app.ssh.forward_agent = true
    app.ssh.forward_x11 = true
    app.vm.provider "vmware_fusion" do |vm|
      vm.vmx["memsize"] = "4096"
    end
  end
end
First, I create each VM separately:
fhenri@machine:~/project/examples/vagrant/multimachine$ vagrant up db --provider=virtualbox
Bringing machine 'db' up with 'virtualbox' provider...
....
fhenri@machine:~/project/examples/vagrant/multimachine$ vagrant up app
Bringing machine 'app' up with 'vmware_fusion' provider...
....
Then I halt everything, and the next time I run vagrant up:
fhenri@machine:~/project/examples/vagrant/multimachine$ vagrant up
Bringing machine 'db' up with 'virtualbox' provider...
Bringing machine 'app' up with 'vmware_fusion' provider...
and status looks good
fhenri@machine:~/project/examples/vagrant/multimachine$ vagrant status
Current machine states:
db running (virtualbox)
app running (vmware_fusion)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
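Applied to the machine names from the question, the same approach would look roughly like this (a sketch; vb_vm and aws_vm are the names from the original Vagrantfile):
# Create each machine once with its intended provider
vagrant up vb_vm --provider=virtualbox
vagrant up aws_vm --provider=aws
# Afterwards Vagrant remembers each machine's provider, so a plain
# "vagrant up" (or "vagrant up vb_vm aws_vm") brings both up correctly
vagrant up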


"puppet agent --test" on client machine aren't getting manifest from the Puppet master server

Issue
So I have two AWS instances: a Puppet master and a Puppet client. When I run sudo puppet agent --test on my client, the tasks defined in my master's manifest are not applied to the client instance.
Where I am right now
puppetmaster is installed on the master instance
puppet is installed on client instance
Master just finished signing my client's certificate. No errors were displayed
Master has a /etc/puppet/manifests/site.pp
Client's puppet.conf file has a server=dns_of_master line
My Puppet version is 5.4.0. I'm using the default manifest configuration.
Here's the guide that I'm following: https://www.digitalocean.com/community/tutorials/getting-started-with-puppet-code-manifests-and-modules. The only changes are the site.pp content and that I'm using AWS.
If it helps, here's my AWS instances' AMI: ami-06d51e91cea0dac8d
Details
Here's the content of my master's /etc/puppet/manifests/site.pp:
node default {
  package { 'nginx':
    ensure => installed
  }
  service { 'nginx':
    ensure  => running,
    require => Package['nginx']
  }
  file { '/tmp/hello_world':
    ensure  => present,
    content => 'Hello, World!'
  }
}
The file has permissions of 777.
Here's the output when I run sudo puppet agent --test. This is after I ran sudo puppet agent --enable:
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Retrieving locales
Info: Caching catalog for my_client_dns
Info: Applying configuration version '1578968015'
Notice: Applied catalog in 0.02 seconds
I have looked at other StackOverflow posts with this issue. I know that my catalog is not getting applied due to the lack of status messages and the quick time. Unfortunately, the solutions didn't apply to my case:
My site.pp is named correctly and in the correct file path /etc/puppet/manifests
I didn't touch my master's puppet.conf file
I tried restarting the server with sudo systemctl but nothing happens
So I have fixed the issue. The guide that I was following required an older version of Ubuntu (16.04, rather than the 18.04 I was using). This needs a different AMI than the one I used to create the instances.
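If you want to double-check for this kind of mismatch, one way (a sketch, assuming apt-based Ubuntu instances and the package names mentioned above) is to compare the Puppet versions on both nodes:
# On the master
puppet --version
apt-cache policy puppetmaster
# On the client
puppet --version
apt-cache policy puppet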

Requirements for launching Google Cloud AI Platform Notebooks with custom docker image

On AI Platform Notebooks, the UI lets you select a custom image to launch. If you do so, you're greeted with an info box saying that the container "must follow certain technical requirements":
I assume this means they have a required entrypoint, exposed port, jupyterlab launch command, or something, but I can't find any documentation of what the requirements actually are.
I've been trying to reverse engineer it without much luck. I ran nmap against a standard instance and saw that it had port 8080 open, but setting my image's CMD to run Jupyter Lab on 0.0.0.0:8080 did not do the trick. When I click "Open JupyterLab" in the UI, I get a 504.
Does anyone have a link to the relevant docs, or experience with doing this in the past?
There are two ways you can create custom containers:
Building a Derivative Container
If you only need to install additional packages, you should create a Dockerfile derived from one of the standard images (for example, FROM gcr.io/deeplearning-platform-release/tf-gpu.1-13:latest), then add RUN commands to install packages using conda/pip/jupyter.
The conda base environment has already been added to the path, so there is no need to conda init/conda activate unless you need to set up another environment. Additional scripts/dynamic environment variables that need to be run prior to bringing up the environment can be added to /env.sh, which is sourced as part of the entrypoint.
For example, let’s say that you have a custom built TensorFlow wheel that you’d like to use in place of the built-in TensorFlow binary. If you need no additional dependencies, your Dockerfile will be similar to:
Dockerfile.example
FROM gcr.io/deeplearning-platform-release/tf-gpu:latest
RUN pip uninstall -y tensorflow-gpu && \
    pip install /path/to/local/tensorflow.whl
Then you’ll need to build and push it somewhere accessible to your GCE service account.
PROJECT="my-gcp-project"
docker build . -f Dockerfile.example -t "gcr.io/${PROJECT}/tf-custom:latest"
gcloud auth configure-docker
docker push "gcr.io/${PROJECT}/tf-custom:latest"
Building Container From Scratch
The main requirement is that the container must expose a service on port 8080.
The sidecar proxy agent that executes on the VM will ferry requests to this port only.
If using Jupyter, you should also make sure your jupyter_notebook_config.py is configured as follows:
c.NotebookApp.token = ''
c.NotebookApp.password = ''
c.NotebookApp.open_browser = False
c.NotebookApp.port = 8080
c.NotebookApp.allow_origin_pat = (
    '(^https://8080-dot-[0-9]+-dot-devshell\.appspot\.com$)|'
    '(^https://colab\.research\.google\.com$)|'
    '((https?://)?[0-9a-z]+-dot-datalab-vm[\-0-9a-z]*.googleusercontent.com)')
c.NotebookApp.allow_remote_access = True
c.NotebookApp.disable_check_xsrf = False
c.NotebookApp.notebook_dir = '/home'
This disables notebook token-based auth (auth is instead handled through OAuth login on the proxy), and allows cross-origin requests from three sources: Cloud Shell web preview, Colab (see this blog post), and the Cloud Notebooks service proxy. Only the third is required for the notebook service; the first two support alternate access patterns.
To complete Zain's answer, below you can find a minimal example using the official Jupyter image, inspired by this repo: https://github.com/doitintl/AI-Platform-Notebook-Using-Custom-Container
Dockerfile
FROM jupyter/base-notebook:python-3.9.5
EXPOSE 8080
ENTRYPOINT ["jupyter", "lab", "--ip", "0.0.0.0", "--allow-root", "--config", "/etc/jupyter/jupyter_notebook_config.py"]
COPY jupyter_notebook_config.py /etc/jupyter/
jupyter_notebook_config.py
(almost the same as Zain's, but with an extra pattern enabling communication with the kernel; the communication didn't work without it)
c.NotebookApp.ip = '*'
c.NotebookApp.token = ''
c.NotebookApp.password = ''
c.NotebookApp.open_browser = False
c.NotebookApp.port = 8080
c.NotebookApp.allow_origin_pat = '(^https://8080-dot-[0-9]+-dot-devshell\.appspot\.com$)|(^https://colab\.research\.google\.com$)|((https?://)?[0-9a-z]+-dot-datalab-vm[\-0-9a-z]*.googleusercontent.com)|((https?://)?[0-9a-z]+-dot-[\-0-9a-z]*.notebooks.googleusercontent.com)|((https?://)?[0-9a-z\-]+\.[0-9a-z\-]+\.cloudshell\.dev)|((https?://)ssh\.cloud\.google\.com/devshell)'
c.NotebookApp.allow_remote_access = True
c.NotebookApp.disable_check_xsrf = False
c.NotebookApp.notebook_dir = '/home'
c.Session.debug = True
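The image can then be built and pushed the same way as in Zain's answer (a sketch; the project and image names below are placeholders):
PROJECT="my-gcp-project"
docker build . -t "gcr.io/${PROJECT}/custom-jupyter:latest"
gcloud auth configure-docker
docker push "gcr.io/${PROJECT}/custom-jupyter:latest"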
And finally, keep this page in mind while troubleshooting: https://cloud.google.com/notebooks/docs/troubleshooting

_tds.InterfaceError when trying to connect to Azure Data Warehouse through Python 2.7 and ctds

I'm trying to connect a Python 2.7 script to Azure SQL Data Warehouse.
The coding part is done and the test cases work in our development environment. We're coding in Python 2.7 on macOS and connecting to ADW via ctds.
The problem appears when we deploy on our Azure Kubernetes pod (running Debian 9).
When we try to instantiate a connection this way:
# init a connection
self._connection = ctds.connect(
    server='myserver.database.windows.net',
    port=1433,
    user="my_user@myserver.database.windows.net",
    timeout=1200,
    password="XXXXXXXX",
    database="my_db",
    autocommit=True
)
we get an exception that only prints the user name
my_user@myserver.database.windows.net
the type of the exception is
_tds.InterfaceError
The deployed code is exactly the same, and so are the requirements.
The documentation we found for this exception is almost non-existent.
Do you recognize it? Do you know how we can get around it?
We also tried our old AWS EC2 instances and AWS Kubernetes (which run the same OS as the Azure ones), and it doesn't work there either.
We managed to connect to ADW via sqlcmd, so that proves the pod can in fact connect (I guess).
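For reference, a connectivity check with sqlcmd might look roughly like this (a sketch; the server, database, and user values are the placeholders from the snippet above):
sqlcmd -S myserver.database.windows.net -d my_db -U my_user@myserver -P 'XXXXXXXX' -Q "SELECT 1"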
EDIT: SOLVED. JUST CHANGED TO PYODBC
def connection(self):
    """:rtype: pyodbc.Connection"""
    if self._connection is None:
        env = ''  # whichever way you have to identify it
        # init a connection
        driver = '/usr/local/lib/libmsodbcsql.17.dylib' if env == 'dev' else '{ODBC Driver 17 for SQL Server}'  # my dev env is macOS and my prod is Debian 9
        connection_string = 'Driver={driver};Server=tcp:{server},{port};Database={db};Uid={user};Pwd={password};Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;'.format(
            driver=driver,
            server='myserver.database.windows.net',
            port=1433,
            db='mydb',
            user='myuser@myserver',
            password='XXXXXXXXXXXX'
        )
        self._connection = pyodbc.connect(connection_string, autocommit=True)
    return self._connection
As Ron says, pyodbc is recommended because it enables you to use a Microsoft-supported ODBC Driver.
I'm going to go ahead and guess that ctds is failing on redirect, and you need to force your server into "proxy" mode. See: Azure SQL Connectivity Architecture
E.g.:
# Get SQL Server ID
sqlserverid=$(az sql server show -n sql-server-name -g sql-server-group --query 'id' -o tsv)
# Set URI
id="$sqlserverid/connectionPolicies/Default"
# Get current connection policy
az resource show --ids $id
# Update connection policy
az resource update --ids $id --set properties.connectionType=Proxy

Vagrant managed docker container doesn't start

I've been trying to write a Vagrantfile to start up my docker container to run a small web app I've been writing. However, when I try vagrant up I eventually get an error saying:
The container started either never left the "stopped" state or
very quickly reverted to the "stopped" state. This is usually
because the container didn't execute a command that kept it running,
and usually indicates a misconfiguration.
If you meant for this container to not remain running, please
set the Docker provider configuration "remains_running" to "false":
config.vm.provider "docker" do |d|
d.remains_running = false
end
I'm very new to vagrant so I'm not really sure what the best way to try and fix the problem is.
My Vagrantfile contains:
Vagrant.configure("2") do |config|
config.vm.synced_folder "thelibrary", "/thelibrary"
config.vm.provider "docker" do |d|
d.image = "django-dev"
d.has_ssh = false
d.ports = ["8000:8000"]
d.cmd = ["python", "/thelibrary/manage.py", "runserver", "0.0.0.0:8000"]
end
end
I'm not sure why it says the command doesn't keep running. I can run the docker container with the same command and it will spin up my django app without any issues.
I had the same problem, but adding the option
d.create_args = ["-i"]
solved it.
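For reference, a minimal sketch of where that option goes, reusing the Vagrantfile from the question:
Vagrant.configure("2") do |config|
  config.vm.synced_folder "thelibrary", "/thelibrary"
  config.vm.provider "docker" do |d|
    d.image = "django-dev"
    d.has_ssh = false
    d.ports = ["8000:8000"]
    d.cmd = ["python", "/thelibrary/manage.py", "runserver", "0.0.0.0:8000"]
    d.create_args = ["-i"]  # keep STDIN open so the container is not seen as stopped
  end
end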
I spent the day trying to get the docker machine running... finally got it working. Here is what I have in my Vagrantfile; hope this can at least get you started:
config.vm.provider :docker do |d|
  d.image = "paintedfox/postgresql"
  d.name = "db"
  d.cmd = ["/sbin/my_init", "--enable-insecure-key"]
end
vagrant status returns me this:
Current machine states:
dev running (docker)
Another solution that you can try is to remove all your existing images and start fresh; it could be that your image is broken.
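If you want to go that route, a minimal sketch of the cleanup (note: this removes all containers and images on the host, so use with care):
# Stop and remove all containers
docker rm -f $(docker ps -aq)
# Remove all images
docker rmi $(docker images -q)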

Running pre-import customizations in Vagrant

I need to do some customization on the created VM either before importing it or just before running it for the first time. For instance, I need to clear stale NAT port forwarding rules that tend to be left behind by a box with the same name, and remove some disk controllers (reattach existing disks to an IDE controller instead of SATA for compatibility with older OS revisions that do not understand SATA), etc.
There are pre-boot and pre-import events in the Vagrant code, but I wonder if there's any way of running some virtualbox/vagrant commands before booting the created VM?
Yes, for running VBoxManage commands, see the "VBoxManage Customizations" chapter in the docs. The commands are run in the pre-boot phase by default, but you can also specify the phase as the first argument:
Vagrant.configure("2") do |config|
# ...
config.vm.provider "virtualbox" do |v|
v.customize "pre-boot", ["modifyvm", :id, "--cpus", 2]
end
end
But I think the problem is that you don't have an easy and reliable way to get the disk image path.
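If you do need the disk path outside of Vagrant, one manual approach (a sketch; the VM name, controller name, and disk path below are placeholders) is to query VirtualBox directly:
# List registered disk images with their paths and UUIDs
VBoxManage list hdds
# Inspect the VM's storage controllers and attached media
VBoxManage showvminfo <vm-name> --machinereadable | grep -i storage
# Reattach a disk to an IDE controller instead of SATA
# (assumes a controller named "IDE" already exists on the VM)
VBoxManage storageattach <vm-name> --storagectl "IDE" --port 0 --device 0 --type hdd --medium /path/to/disk.vdi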