VirtualBox Headless: not starting via systemd [closed]

I already tried the suggestions in this topic, but they didn't solve it.
I have placed a file called vbox.service under /lib/systemd/system/vbox.service with the following content:
[Unit]
Description=Virtualbox Headless VM
[Service]
ExecStart=/usr/bin/VBoxHeadless --startvm 4decf7c1-7eda-461c-92aa-835d2405a22e
ExecStop=/usr/bin/VBoxManage controlvm 4decf7c1-7eda-461c-92aa-835d2405a22e poweroff
User=my_user
[Install]
WantedBy=muti-user.target
If I start and stop it via
sudo systemctl start vbox and sudo systemctl stop vbox, everything works fine.
Then I entered the following:
sudo systemctl enable vbox, but it won't start at boot.
Here is the output:
sudo systemctl status vbox
vbox.service - Virtualbox Headless VM
Loaded: loaded (/usr/lib/systemd/system/vbox.service; enabled)
Active: inactive (dead)
CGroup: name=systemd:/system/vbox.service
Jan 05 02:38:59 exia pulseaudio[1428]: [pulseaudio] main.c: Daemon startup failed.
Jan 05 02:40:08 exia systemd[1]: Started Virtualbox Headless VM.
Jan 05 02:42:02 exia systemd[1]: Stopping Virtualbox Headless VM...
Jan 05 02:42:02 exia VBoxManage[1546]: 0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Jan 05 02:42:02 exia VBoxHeadless[1375]: Oracle VM VirtualBox Headless Interface 4.2.6_OSE
Jan 05 02:42:02 exia VBoxHeadless[1375]: (C) 2008-2012 Oracle Corporation
Jan 05 02:42:02 exia VBoxHeadless[1375]: All rights reserved.
Jan 05 02:42:02 exia VBoxHeadless[1375]: VRDE server is listening on port 3389.
Jan 05 02:42:02 exia VBoxHeadless[1375]: VRDE server is inactive.
Jan 05 02:42:02 exia systemd[1]: Stopped Virtualbox Headless VM.
/usr/bin/VBoxHeadless --startvm 4decf7c1-7eda-461c-92aa-835d2405a22e works fine
Any ideas?

This page on the Arch Linux wiki contains a slightly more flexible systemd .service file and some additional options which may hold the key.
That page also mentions that VirtualBox 4.2 has a built-in auto-start mechanism which may work better.
/etc/systemd/system/vboxvmservice@.service
[Unit]
Description=VBox Virtual Machine %i Service
Requires=systemd-modules-load.service
After=systemd-modules-load.service
[Service]
User=user
Group=vboxusers
ExecStart=/usr/bin/VBoxHeadless -s %i
ExecStop=/usr/bin/VBoxManage controlvm %i savestate
[Install]
WantedBy=multi-user.target
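Because this is a template unit (the @ in the file name), you enable one instance per VM and systemd substitutes the instance name for %i. A usage sketch, assuming a hypothetical VM named win7 registered under that user:
sudo systemctl daemon-reload
sudo systemctl enable vboxvmservice@win7.service
sudo systemctl start vboxvmservice@win7.service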

Your service is not being enabled for auto-start at boot because the [Install] section names a non-existent target (muti-user.target). The correct spelling is WantedBy=multi-user.target.
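For completeness, the corrected section:
[Install]
WantedBy=multi-user.target
Then re-register the unit (reenable removes and recreates the symlinks against the correct target):
sudo systemctl daemon-reload
sudo systemctl reenable vbox.service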

If after running sudo systemctl enable vbox the symlink /etc/systemd/system/multi-user.target.wants/vbox.service is not created, you can go ahead and create it manually with
sudo ln -sf /lib/systemd/system/vbox.service /etc/systemd/system/multi-user.target.wants/vbox.service
That should fix the problem.
Several similar problems have been solved that way.
Another alternative that worked for me was to put the .service file in /etc/systemd/system/ instead of /lib/systemd/system/. Doing this allowed systemctl enable to work properly.
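A minimal sketch of that alternative, assuming the unit currently lives under /lib/systemd/system:
sudo mv /lib/systemd/system/vbox.service /etc/systemd/system/vbox.service
sudo systemctl daemon-reload
sudo systemctl enable vbox.service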

Related

How can I get Jenkins to run on a different port?

I'm having trouble changing my Jenkins port, as I want to use port 8080 for a different service. Here's what I've tried so far.
Currently running on Amazon Linux:
Jenkins version: Jenkins 2.332.1
I've tried editing the config file /etc/sysconfig/jenkins to:
JENKINS_PORT="7777"
After I restart Jenkins, however, the port does not change:
● jenkins.service - Jenkins Continuous Integration Server
Loaded: loaded (/usr/lib/systemd/system/jenkins.service; disabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/jenkins.service.d
└─override.conf
Active: active (running) since Tue 2022-04-05 15:52:24 UTC; 1min 33s ago
Main PID: 1017 (java)
Tasks: 36
Memory: 500.6M
CGroup: /system.slice/jenkins.service
└─1017 /usr/bin/java -Djava.awt.headless=true -jar /usr/share/java/jenkins.war --webroot=%C/jenkins/war --httpPort=8080
Apr 05 15:53:38 ip-172-0-2-240.eu-west-1.compute.internal jenkins[1017]: at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
Apr 05 15:53:38 ip-172-0-2-240.eu-west-1.compute.internal jenkins[1017]: at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
What am I missing here?
Check the service's start command:
/usr/bin/java -Djava.awt.headless=true -jar /usr/share/java/jenkins.war --webroot=%C/jenkins/war --httpPort=8080
Edit the service, changing --httpPort=8080 to the desired port, then run daemon-reload and restart the service.
Also, ensure the Security Group is configured for that port.
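Since this Jenkins version is managed by systemd (note the Drop-In: override.conf in the status output above), a drop-in override is a cleaner way to change the port than editing the unit under /usr/lib directly. A sketch, assuming the unit honors the JENKINS_PORT environment variable, as recent Jenkins packages do:
sudo systemctl edit jenkins
# in the editor that opens, add:
[Service]
Environment="JENKINS_PORT=7777"
# then apply:
sudo systemctl daemon-reload
sudo systemctl restart jenkins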
There is a different fix in this link https://cdmana.com/2022/03/202203242138366513.html which suggests editing JENKINS_PORT in /usr/lib/systemd/system/jenkins.service and then calling service jenkins start.

Failed to start Redis In-Memory Data Store. Ubuntu 18.04

I am trying to install Redis on my AWS server, which runs Ubuntu 18.04. I am following the steps to install Redis from a DigitalOcean article.
When I run the sudo systemctl status redis command I get the error below.
[screenshot]
I tried to edit the /etc/systemd/system/redis.service file and added Type=forking under the [Service] section, but I am still getting the same error.
Can anyone suggest how I can get this fixed?
Thanks in advance.
Based on the same DigitalOcean tutorial, it's actually running fine.
If you run sudo systemctl restart redis.service, you get (note "failed" on the last line):
● redis.service - Redis In-Memory Data Store
Loaded: loaded (/etc/systemd/system/redis.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2021-06-28 12:03:11 +03; 1min 0s ago
Process: 20428 ExecStart=/usr/local/bin/redis-server /etc/redis/redis.conf (code=exited, status=
Main PID: 20428 (code=exited, status=203/EXEC)
Jun 28 12:03:11 XYZ systemd[1]: redis.service: Service hold-off time over, scheduling restar
Jun 28 12:03:11 XYZ systemd[1]: redis.service: Scheduled restart job, restart counter is at
Jun 28 12:03:11 XYZ systemd[1]: Stopped Redis In-Memory Data Store.
Jun 28 12:03:11 XYZ systemd[1]: redis.service: Start request repeated too quickly.
Jun 28 12:03:11 XYZ systemd[1]: redis.service: Failed with result 'exit-code'.
Jun 28 12:03:11 XYZ systemd[1]: Failed to start Redis In-Memory Data Store.
But if you run sudo service redis-server status, you get (note "running" on the third line):
● redis-server.service - Advanced key-value store
Loaded: loaded (/lib/systemd/system/redis-server.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2021-06-28 11:50:13 +03; 19min ago
Docs: http://redis.io/documentation,
man:redis-server(1)
Process: 19278 ExecStop=/bin/kill -s TERM $MAINPID (code=exited, status=0/SUCCESS)
Process: 19371 ExecStart=/usr/bin/redis-server /etc/redis/redis.conf (code=exited, status=0/SUCC
Main PID: 19382 (redis-server)
Tasks: 4 (limit: 4915)
CGroup: /system.slice/redis-server.service
└─19382 /usr/bin/redis-server 127.0.0.1:6379
Jun 28 11:50:13 XYZ systemd[1]: Starting Advanced key-value store...
Jun 28 11:50:13 XYZ systemd[1]: redis-server.service: Can't open PID file /var/run/redis/red
Jun 28 11:50:13 XYZ systemd[1]: Started Advanced key-value store.
After searching for hours, it seems it's just a difference between systemctl and service and nothing more; the actual Redis server is running fine. Correct me if that's not the case. Here's the link: https://askubuntu.com/questions/903354/difference-between-systemctl-and-service-commands
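You can see the two different units involved (the one created by the tutorial and the one shipped by the Ubuntu package) with, for example:
systemctl list-unit-files | grep -i redis
ls -l /etc/systemd/system/redis.service /lib/systemd/system/redis-server.service
The paths here are the ones shown in the two status outputs above; adjust if yours differ.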
You can even check that Redis is working by running redis-cli ping, which should print PONG.
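For example:
$ redis-cli ping
PONG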
I also encountered this problem, so I checked again.
Finally, I found that when I had set permissions on /var/lib/redis, I entered the wrong command, leaving the redis account with no access to /var/lib/redis. Running
sudo chown redis:redis /var/lib/redis
sudo systemctl restart redis
succeeded.
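To verify the ownership afterwards (assuming the default data directory), check:
ls -ld /var/lib/redis
# should show the directory owned by redis:redis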

Deploying Django Channels: how to keep Daphne running after exiting shell on web server

As practice, I'm trying to deploy Andrew Godwin's multichat example with Django Channels 2.1.1 on DigitalOcean Ubuntu 16.04.4. However, I don't know how to exit the Ubuntu server without Channels' Daphne server stopping.
Right now, almost my entire familiarity with deployment comes from this tutorial, and that's how I've deployed the site. But according to Channels' documentation, I have to run one of these three in production to start Daphne:
daphne myproject.asgi:application
daphne -p 8001 myproject.asgi:application
daphne -b 0.0.0.0 -p 8001 myproject.asgi:application
So, in addition to following the DigitalOcean tutorial, I ran these as well. The third one worked for me and the site ran well. However, if I exit the shell on Ubuntu, Daphne stops too.
The tutorial has Gunicorn access a sock file (--bind unix:/home/sammy/myproject/myproject.sock), and in my research so far I've seen on a few sites published before 2018 that somehow, somewhere, a daphne.sock file is generated. So I guess Channels deploys similarly? But I haven't seen any details about how this is done.
How do I deploy the multichat example so I can exit the Ubuntu web server without Daphne stopping?
Update 6 May 2018, 8pm CET:
I tried kagronick's solution below and created a systemd file at /etc/systemd/system/daphne_seb.service:
[Unit]
Description=daphne daemon for seb
After=network.target
[Service]
User=root
Group=www-data
WorkingDirectory=/home/seb/seb
ExecStart=/home/seb/env_seb/bin/python /home/seb/seb/manage.py daphne -b 0.0.0.0 -p 8001 multichat.asgi:application
Restart=on-failure
[Install]
WantedBy=multi-user.target
I did systemctl daemon-reload and systemctl start daphne_seb.service and it ran for a couple of seconds. Then, systemctl status daphne_seb.service said Unknown command: 'daphne'.
I tried restarting it. Then I rebooted the OS. Now status says:
daphne_seb.service - daphne daemon for seb
Loaded: loaded (/etc/systemd/system/daphne_seb.service; enabled; vendor preset: enabled)
Active: inactive (dead) (Result: exit-code) since Sun 2018-05-06 19:33:31 UTC; 1s ago
Process: 2459 ExecStart=/home/seb/env_seb/bin/python /home/seb/seb/manage.py daphne -b 0.0.0.0 -p 8001 multichat.asgi:application (code=exited, status=1
Main PID: 2459 (code=exited, status=1/FAILURE)
May 06 19:33:31 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: daphne_seb.service: Main process exited, code=exited, status=1/FAILURE
May 06 19:33:31 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: daphne_seb.service: Unit entered failed state.
May 06 19:33:31 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: daphne_seb.service: Failed with result 'exit-code'.
May 06 19:33:31 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: daphne_seb.service: Service hold-off time over, scheduling restart.
May 06 19:33:31 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: Stopped daphne daemon for seb.
May 06 19:33:31 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: daphne_seb.service: Start request repeated too quickly.
May 06 19:33:31 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: Failed to start daphne daemon for seb.
I checked the filepaths. They're correct. I also tried adding Environment=DJANGO_SETTINGS_MODULE=multichat.settings, which I saw here. But that didn't help either.
Update 6 May 2018, 10pm CET:
Then, I read here that only 5 restarts are allowed within a 10-second period. So I removed Restart=on-failure to start the service myself.
This brought me back to Unknown command: 'daphne'. This solution suggested to me that I shouldn't be pointing to Python but to Daphne in my virtualenv, so I changed it: ExecStart=/home/seb/env_seb/bin/daphne. I also removed /home/seb/seb/manage.py based on the same solution. This gave me a new problem:
daphne_seb.service - daphne daemon for seb
Loaded: loaded (/etc/systemd/system/daphne_seb.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2018-05-06 19:55:24 UTC; 5s ago
Process: 2903 ExecStart=/home/seb/env_seb/bin/daphne daphne -b 0.0.0.0 -p 8001 multichat.asgi:application (code=exited, status=2)
Main PID: 2903 (code=exited, status=2)
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 daphne[2903]: [-v VERBOSITY] [-t HTTP_TIMEOUT] [--access-log ACCESS_LOG]
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 daphne[2903]: [--ping-interval PING_INTERVAL] [--ping-timeout PING_TIMEOUT]
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 daphne[2903]: [--application-close-timeout APPLICATION_CLOSE_TIMEOUT]
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 daphne[2903]: [--ws-protocol [WS_PROTOCOLS [WS_PROTOCOLS ...]]]
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 daphne[2903]: [--root-path ROOT_PATH] [--proxy-headers]
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 daphne[2903]: application
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 daphne[2903]: daphne: error: unrecognized arguments: multichat.asgi:application
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: daphne_seb.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: daphne_seb.service: Unit entered failed state.
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: daphne_seb.service: Failed with result 'exit-code'.
I checked my multichat.asgi:application and it's in the right place, so I can't figure out why it says error: unrecognized arguments: multichat.asgi:application.
I use systemd for this. You would create a unit file at /etc/systemd/system/daphne.service
with contents like:
[Unit]
Description=My Daphne Service
After=network.target
[Service]
Type=simple
User=wwwrunn
WorkingDirectory=/srv/myapp
ExecStart=/path/to/my/virtualenv/bin/python /path/to/daphne -b 0.0.0.0 -p 8001 myproject.asgi:application
Restart=on-failure
[Install]
WantedBy=multi-user.target
I have all the parts of my application configured this way. It allows them to resume if they ever crash. Systemd can handle all the logging. And they are all started in the right order. Use Type=simple so you don't need to do any forking or writing of PID files. It will just manage the one process. I run a few of these to spread out the load.
After you put this file in place you would run the following commands:
Make the system aware of the service
sudo systemctl daemon-reload
Start the service
sudo systemctl start daphne.service
Make it run on reboot
sudo systemctl enable daphne.service
You will need to edit some of the parameters to fit your needs. If you called the file something other than daphne.service you'll need to change it in the commands. The ExecStart line will need to be changed to your python path and the namespace where your application lives.
Your logs will now show up in journalctl. To see the logs for this service you would run journalctl -u daphne.service. If you want to tail the logs, put -f at the end for "follow". journalctl has a bunch of options that you can find with --help or by looking up resources online. The same goes for systemd unit files.
Of course, you should know that Daphne shouldn't be exposed directly to the web. Nginx, Apache, or another web server should be proxying to Daphne and serving your static assets. (You could also use a CDN for that.)
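A minimal nginx location block for proxying WebSocket traffic to Daphne, as a sketch assuming Daphne listens on port 8001 as in the unit above (all names and paths are placeholders):
location / {
    proxy_pass http://127.0.0.1:8001;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}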
My question is in fact the same as this one. Questions like that usually get pointed to this solution. I did not recognise the question as the same as mine at first because (i) I wasn't familiar with the vocabulary of deployment with Channels yet and (ii) a lot of these questions and their solutions mention things that have already been deprecated in Channels 2. I've lost the page but, for instance, Channels 2 doesn't require us to run python manage.py runworker.
There's also the issue of upstart versus systemd. Several solutions use an upstart script, but I had a vague sense that on Ubuntu 16.04.4 I should use systemd. I googled and found that systemd had indeed replaced upstart, according to articles like this one.
kagronick provides the correct systemd solution, but I had to edit it to make it work for me:
# /etc/systemd/system/daphne_seb.service
[Unit]
Description=daphne daemon for seb
After=network.target
[Service]
User=root
Group=www-data
WorkingDirectory=/home/seb/seb
ExecStart=/home/seb/env_seb/bin/daphne -b 0.0.0.0 -p 8001 multichat.asgi:application
# I turned this off for testing purposes.
# Still not sure if should use 'on-failure' or 'always'.
# Restart=on-failure
[Install]
WantedBy=multi-user.target
And then in shell:
systemctl daemon-reload
systemctl start daphne_seb.service
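And, as in kagronick's answer, to make it start on reboot and to follow its logs:
systemctl enable daphne_seb.service
journalctl -u daphne_seb.service -f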

How to debug a failed systemctl service (code=exited, status=217/USER)?

I'm trying to add my first service on RHEL 7 (running in AWS/EC2), but the service is not configured correctly, as I get:
[ec2-user@ip-172-30-1-96 ~]$ systemctl status clouddirectd.service -l
● clouddirectd.service - CloudDirect Daemon
Loaded: loaded (/usr/lib/systemd/system/clouddirectd.service; enabled; vendor preset: disabled)
Active: activating (auto-restart) (Result: exit-code) since Tue 2018-01-09 16:09:42 EST; 8s ago
Main PID: 10064 (code=exited, status=217/USER)
Jan 09 16:09:42 ip-172-30-1-96.us-west-1.compute.internal systemd[1]: clouddirectd.service: main process exited, code=exited, status=217/USER
Jan 09 16:09:42 ip-172-30-1-96.us-west-1.compute.internal systemd[1]: Unit clouddirectd.service entered failed state.
Jan 09 16:09:42 ip-172-30-1-96.us-west-1.compute.internal systemd[1]: clouddirectd.service failed.
Also:
[ec2-user@ip-172-30-1-96 ~]$ systemctl is-active clouddirectd
activating
[ec2-user@ip-172-30-1-96 ~]$ sudo systemctl list-units --type service --all | grep clouddirectd
clouddirectd.service loaded activating auto-restart CloudDirect Daemon
And my unit file is:
[ec2-user@ip-172-30-1-96 ~]$ cat /usr/lib/systemd/system/clouddirectd.service
[Unit]
Description=CloudDirect Daemon
After=network.target
[Service]
Environment=AWS_SHARED_CREDENTIALS_FILE=/etc/sonar/.aws/credentials
#ExecStart=/usr/lib/sonar/clouddirect/virtualenv/bin/python /usr/bin/sonar/clouddirectd -c /etc/sonar/clouddirect/clouddirectd.conf
ExecStart=/usr/lib/sonar/clouddirect/virtualenv/bin/python /usr/bin/clouddirect -c /etc/sonar/clouddirect.conf
# #PERM# allow group write permission on newly created files
UMask=0007
#User=clouddirectd
User=clouddirect
Group=sonar
KillSignal=SIGINT
TimeoutStopSec=60min
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target
Can you suggest how to debug this systemctl service so it won't keep dying and auto restarting?
The error 217 indicates that the user did not exist at the time the service tried to start. In your case, the user specified in your service is clouddirect.
Main PID: 10064 (code=exited, status=217/USER)
Jan 09 16:09:42 ip-172-30-1-96.us-west-1.compute.internal systemd[1]: clouddirectd.service: main process exited, code=exited, status=217/USER
This could be the case if that is not the actual user name (for example, if it has a typo). It can also happen if the user comes from an external user store (e.g. LDAP or Active Directory) and the service that allows the Linux server to access the external user store is not up yet. For example, vasd.service starts a product used to allow Linux to authenticate against Active Directory; if vasd.service is not up and you have specified a user that is only available in Active Directory, you would want to add that service to your After= line. For example:
After=network.target vasd.service
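A quick way to check whether the user actually resolves on the box is (getent also consults LDAP/AD through NSS, while /etc/passwd alone does not):
id clouddirect
getent passwd clouddirect
If these return nothing, the 217/USER is explained.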
There are two parts to the question: how to diagnose a 217/USER, and how to fix it. I'll just focus on the former.
For the 217/USER there are some good pointers here:
https://www.reddit.com/r/linuxquestions/comments/oaya49/systemd_service_not_starting_with_status217/
217 doesn't always mean it's a user problem; it just means the process exited with status 217. It may or may not be.
You could use journalctl to see which services come up after yours does at boot.
It's possible that "network users" aren't yet available at the time the service is started during boot; you can fix that by adding After=nss-user-lookup.target (see https://systemd.io/UIDS-GIDS/), though that's not the case here, since it still fails after a manual restart, which happens later. systemd expects the specified user to be available when the service starts. So "system users" (for early-running processes) need to be available on the local box, while later-started processes can use "network users".
You could also try changing to the group and username (and environment) that you think systemd is running the service as, then running the command manually to see what happens: https://serverfault.com/questions/410577/execute-a-command-from-another-group
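For example, a sketch of running the question's ExecStart by hand as the configured user and group (using env to reproduce the unit's environment variable), to surface the real error:
sudo -u clouddirect -g sonar env AWS_SHARED_CREDENTIALS_FILE=/etc/sonar/.aws/credentials /usr/lib/sonar/clouddirect/virtualenv/bin/python /usr/bin/clouddirect -c /etc/sonar/clouddirect.conf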
Kind of wish systemd output more debug so you could tell what it is running more easily...
In certain bizarre cases you may need to specify both User= and Group= https://superuser.com/a/1452367/39364
In our case, running "vintela status" showed the message "SELinux may not be configured correctly"; sure enough, after disabling SELinux it started working as expected, with no more 217. [RHEL 8]

docker-machine create with digitalocean driver: ssh command error

I'm using Docker Tools on Windows.
The create command was working perfectly last week and I managed to create a number of machines on DigitalOcean. Then I tried today with no success. I repeated the same command with different regions and I always get the same result:
λ docker-machine create -d digitalocean --digitalocean-access-token=MYTOKEN --digitalocean-region=ams2 vmname
Running pre-create checks...
Creating machine...
(fernu) Creating SSH key...
(fernu) Creating Digital Ocean droplet...
(fernu) Waiting for IP address to be assigned to the Droplet...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Error creating machine: Error running provisioning: ssh command error:
command : sudo systemctl -f start docker
err : exit status 1
output : Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
If I execute the suggested command:
root@fernu:~# systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/docker.service.d
└─10-machine.conf
Active: inactive (dead) (Result: exit-code) since Fri 2017-06-30 20:56:13 UTC; 8min ago
Docs: https://docs.docker.com
Process: 4943 ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --storage-driver aufs --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=digitalocean (code=exited, status=1/FAILURE)
Main PID: 4943 (code=exited, status=1/FAILURE)
Jun 30 20:56:13 fernu systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 30 20:56:13 fernu systemd[1]: Failed to start Docker Application Container Engine.
Jun 30 20:56:13 fernu systemd[1]: docker.service: Unit entered failed state.
Jun 30 20:56:13 fernu systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 30 20:56:13 fernu systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Jun 30 20:56:13 fernu systemd[1]: Stopped Docker Application Container Engine.
Jun 30 20:56:13 fernu systemd[1]: docker.service: Start request repeated too quickly.
Jun 30 20:56:13 fernu systemd[1]: Failed to start Docker Application Container Engine.
Any help would be appreciated
Update
It's working with Ubuntu 14.04:
--digitalocean-image=ubuntu-14-04-x64, so it seems like a problem with the default image (ubuntu-16-04-x64).
This seems to be hitting a lot of people. TL;DR: There is a bug in docker-machine v0.12.0 and this issue can be resolved by upgrading.
Logging in to the DigitalOcean instance and running journalctl -xe provides more information:
-- Unit docker.service has begun starting up.
Jul 07 20:03:52 docker-sandbox docker[4930]: `docker daemon` is not supported on Linux. Please run `do
Jul 07 20:03:52 docker-sandbox systemd[1]: docker.service: Main process exited, code=exited, status=1/
Jul 07 20:03:52 docker-sandbox systemd[1]: Failed to start Docker Application Container Engine.
-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
The key here is docker daemon is not supported on Linux. A bug in docker-machine's version comparison code caused an incorrect systemd unit file to be produced (located at /etc/systemd/system/docker.service.d/10-machine.conf) on certain versions of Ubuntu.
A fix has been committed and a new release (v0.12.1) was made.
You can grab the latest release at: https://github.com/docker/machine/releases/tag/v0.12.1
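If you cannot upgrade docker-machine right away, a manual workaround on the droplet is to correct the generated drop-in yourself; a sketch, assuming the installed Docker version ships the newer dockerd binary:
sudo nano /etc/systemd/system/docker.service.d/10-machine.conf
# change the ExecStart line from `/usr/bin/docker daemon ...` to `/usr/bin/dockerd ...`
sudo systemctl daemon-reload
sudo systemctl restart docker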