How am I supposed to add two sources to the new Inbound Rule for the GitHub webhook I created, with the Type being Custom TCP, the Protocol TCP, the Port Range 8080, and, most importantly, the Source being Anywhere with the two custom source CIDR blocks 0.0.0.0/0 and ::/0, as seen below in a screenshot of the specific reference from the course itself:
Then, when I try to do the same for my EC2 instance's Inbound Rules, I can only add one source for the port range of 8080, as seen below...
As a result, I cannot have both 0.0.0.0/0 and ::/0 attached to port range 8080 simultaneously, as seen again below for clarification.
When I try to do so, I am forced into creating two separate Inbound Rules, both Custom TCP on Port Range 8080, with the Source of one pointing to 0.0.0.0/0 and the other to ::/0, independent of each other rather than under one unified Inbound Rule.
As a result, I believe this is why I am unable to establish a webhook from my GitHub repo to my AWS EC2 instance.
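(For reference, both ranges can live in a single rule at the API level; a minimal AWS CLI sketch, where the security-group ID is a placeholder:

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --ip-permissions 'IpProtocol=tcp,FromPort=8080,ToPort=8080,IpRanges=[{CidrIp=0.0.0.0/0}],Ipv6Ranges=[{CidrIpv6=::/0}]'

In practice, two separate rules for 0.0.0.0/0 and ::/0 are functionally equivalent to one combined rule, so this split is unlikely to be what breaks the webhook.)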
For more clarification: the course itself is walking me through how to properly set up a webhook from GitHub to my newly created AWS EC2 instance, in conjunction with the directions here in the very detailed and well-written Strapi documentation. However, after I have followed these directions and properly created a webhook.js file within the NodeWebHooks directory over my root SSH session, this is the error I get when I try to establish the connection, which is most probably because of the inability to set up my Inbound Rule properly per the additional directive of the course.
As I continue to attempt to follow the documentation with two separate Inbound Rules on Port 8080, with sources of 0.0.0.0/0 and ::/0 respectively, in accordance with the guide, in my root SSH session I enter:
echo $PATH
and copy and paste the resulting path into the file opened with:
sudo nano /etc/systemd/system/webhook.service
placing it in the proper spot in the script, which as code is:
[Unit]
Description=Github webhook
After=network.target
[Service]
Environment=PATH=/home/ubuntu/.npm-global/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
Type=simple
User=ubuntu
ExecStart=/usr/bin/nodejs /home/ubuntu/NodeWebHooks/webhook.sj
Restart=on-failure
[Install]
WantedBy=multi-user.target
Then, when I save and exit that file, in order to enable it, I do:
sudo systemctl enable webhook.service
and receive the appropriate desired response of "Created symlink ... etc.", which confirms that webhook.service has been created.
However, then when I run the following:
sudo systemctl start webhook
sudo systemctl status webhook
I get an error which, as code, is the following response:
ubuntu@ip-172-31-3-74:~$ sudo systemctl start webhook
ubuntu@ip-172-31-3-74:~$ sudo systemctl status webhook
● webhook.service - Github webhook
Loaded: loaded (/etc/systemd/system/webhook.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2021-10-02 22:45:47 UTC; 7s ago
Process: 27129 ExecStart=/usr/bin/nodejs /home/ubuntu/NodeWebHooks/webhook.sj (code=exited, status=1/FAILURE)
Main PID: 27129 (code=exited, status=1/FAILURE)
Oct 02 22:45:47 ip-172-31-3-74 systemd[1]: webhook.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 22:45:47 ip-172-31-3-74 systemd[1]: webhook.service: Failed with result 'exit-code'.
Oct 02 22:45:47 ip-172-31-3-74 systemd[1]: webhook.service: Service hold-off time over, scheduling restart.
Oct 02 22:45:47 ip-172-31-3-74 systemd[1]: webhook.service: Scheduled restart job, restart counter is at 5.
Oct 02 22:45:47 ip-172-31-3-74 systemd[1]: Stopped Github webhook.
Oct 02 22:45:47 ip-172-31-3-74 systemd[1]: webhook.service: Start request repeated too quickly.
Oct 02 22:45:47 ip-172-31-3-74 systemd[1]: webhook.service: Failed with result 'exit-code'.
Oct 02 22:45:47 ip-172-31-3-74 systemd[1]: Failed to start Github webhook.
The issue is most likely that the webhook.service file references, under [Service], /home/ubuntu/NodeWebHooks/webhook.sj, when the file is probably named /home/ubuntu/NodeWebHooks/webhook.js. It's the little things that get you!
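If so, a minimal sketch of the fix (assuming the file really is named webhook.js):

sudo nano /etc/systemd/system/webhook.service
# correct the ExecStart line to read:
# ExecStart=/usr/bin/nodejs /home/ubuntu/NodeWebHooks/webhook.js
sudo systemctl daemon-reload
sudo systemctl restart webhook
sudo systemctl status webhook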
Related
I'm having some trouble trying to change my Jenkins port, as I was hoping to use port 8080 for a different service.
Currently running on Amazon Linux:
Jenkins version: Jenkins 2.332.1
I've tried editing the config file /etc/sysconfig/jenkins to:
JENKINS_PORT="7777"
After I restart Jenkins, however, the port does not change:
● jenkins.service - Jenkins Continuous Integration Server
Loaded: loaded (/usr/lib/systemd/system/jenkins.service; disabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/jenkins.service.d
└─override.conf
Active: active (running) since Tue 2022-04-05 15:52:24 UTC; 1min 33s ago
Main PID: 1017 (java)
Tasks: 36
Memory: 500.6M
CGroup: /system.slice/jenkins.service
└─1017 /usr/bin/java -Djava.awt.headless=true -jar /usr/share/java/jenkins.war --webroot=%C/jenkins/war --httpPort=8080
Apr 05 15:53:38 ip-172-0-2-240.eu-west-1.compute.internal jenkins[1017]: at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
Apr 05 15:53:38 ip-172-0-2-240.eu-west-1.compute.internal jenkins[1017]: at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
What am I missing here?
Check the service starting command:
/usr/bin/java -Djava.awt.headless=true -jar /usr/share/java/jenkins.war --webroot=%C/jenkins/war --httpPort=8080
Edit the service by changing --httpPort=8080 to the desired port, then call daemon-reload and restart the service, as sketched below.
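For example, a sketch of that edit using a drop-in override (port 7777 taken from the question):

sudo systemctl edit jenkins
# in the editor, reset and replace ExecStart:
# [Service]
# ExecStart=
# ExecStart=/usr/bin/java -Djava.awt.headless=true -jar /usr/share/java/jenkins.war --webroot=%C/jenkins/war --httpPort=7777
sudo systemctl daemon-reload
sudo systemctl restart jenkins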
Also, ensure the Security Group is configured for that port
There is a different fix at https://cdmana.com/2022/03/202203242138366513.html, which suggests editing JENKINS_PORT in /usr/lib/systemd/system/jenkins.service and then calling service jenkins start.
I have installed Redis on my AWS server. I have followed this: https://www.digitalocean.com/community/tutorials/how-to-install-secure-redis-centos-7
$ systemctl start redis.service
$ systemctl enable redis
-> Created symlink /etc/systemd/system/multi-user.target.wants/redis.service → /usr/lib/systemd/system/redis.service.
$ systemctl status redis.service
● redis.service - Redis persistent key-value database
Loaded: loaded (/usr/lib/systemd/system/redis.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/redis.service.d
└─limit.conf
Active: failed (Result: exit-code) since Wed 2020-08-26 02:28:25 UTC; 10s ago
Main PID: 5012 (code=exited, status=1/FAILURE)
Aug 26 02:28:25 ip-xxx-xx-xx-xx.ap-southeast-2.compute.internal systemd[1]: Starting Redis persistent key-value database...
Aug 26 02:28:25 ip-xxx-xx-xx-xx.ap-southeast-2.compute.internal systemd[1]: Started Redis persistent key-value database.
Aug 26 02:28:25 ip-xxx-xx-xx-xx.ap-southeast-2.compute.internal systemd[1]: redis.service: Main process exited, code=exited, status=1/FAILURE
Aug 26 02:28:25 ip-xxx-xx-xx-xx.ap-southeast-2.compute.internal systemd[1]: redis.service: Failed with result 'exit-code'.
And when I check the /var/log/redis/redis.log this is what I see:
5012:C 26 Aug 2020 02:28:25.574 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
5012:C 26 Aug 2020 02:28:25.574 # Redis version=5.0.3, bits=64, commit=00000000, modified=0, pid=5012, just started
5012:C 26 Aug 2020 02:28:25.574 # Configuration loaded
5012:C 26 Aug 2020 02:28:25.574 * supervised by systemd, will signal readiness
5012:M 26 Aug 2020 02:28:25.575 # Could not create server TCP listening socket 127.0.0.1:6379: bind: Address already in use
And upon checking the ports:
$ netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 2812/redis-server *
tcp6 0 0 :::6379 :::* LISTEN 2812/redis-server *
This shows that port 6379 is indeed already in use by a redis-server process.
So why can't it start, then?
Do I need to add any inbound/outbound rules in AWS? Please help.
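(For what it's worth, the bind error is local rather than an AWS rule issue; if the listener at PID 2812 was started outside systemd, which the PID mismatch with the failed unit suggests, a minimal way to clear the conflict before retrying is:

redis-cli shutdown               # or: sudo kill 2812, the PID from netstat above
sudo systemctl start redis.service
sudo systemctl status redis.service
)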
UPDATE
Running the ExecStart command /usr/bin/redis-server /etc/redis.conf --supervised systemd on the terminal returns bash: /etc/redis.conf: Permission denied. Looks like I need to give the right permission to the /etc/redis.conf file.
$ ls -l /etc/redis.conf
-rw-r-----. 1 redis redis 62189 Aug 26 03:04 /etc/redis.conf
So what permission do I need to give here? Who should own this file?
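(One way to check, assuming the unit runs as User=redis, the CentOS package default: the -rw-r----- redis:redis permissions are fine for the service itself, and the manual run fails only because it was executed as the login user. Re-running the ExecStart command as the redis user reproduces what systemd does:

sudo -u redis /usr/bin/redis-server /etc/redis.conf --supervised systemd
)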
As practice, I'm trying to deploy Andrew Godwin's multichat example with Django Channels 2.1.1 on DigitalOcean Ubuntu 16.04.4. However, I don't know how to exit the Ubuntu server without Channels' Daphne server stopping.
Right now, almost my entire familiarity with deployment comes from this tutorial, and that's how I've deployed the site. But according to Channels' documentation, I have to run one of these three in production to start Daphne:
daphne myproject.asgi:application
daphne -p 8001 myproject.asgi:application
daphne -b 0.0.0.0 -p 8001 myproject.asgi:application
So, in addition to following the DigitalOcean tutorial, I ran these as well. The third one worked for me and the site ran well. However, if I exit the shell on Ubuntu, Daphne stops too.
The tutorial has Gunicorn access a sock file (--bind unix:/home/sammy/myproject/myproject.sock), and in my research so far I've seen on a few sites published before 2018 that somehow, somewhere, a daphne.sock file is generated. So I guess Channels deploys similarly? But I haven't seen any details about how this is done.
How do I deploy the multichat example so I can exit the Ubuntu web server without Daphne stopping?
Update 6 May 2018, 8pm CET:
I tried kagronick's solution below and created a systemd file at /etc/systemd/system/daphne_seb.service:
[Unit]
Description=daphne daemon for seb
After=network.target
[Service]
User=root
Group=www-data
WorkingDirectory=/home/seb/seb
ExecStart=/home/seb/env_seb/bin/python /home/seb/seb/manage.py daphne -b 0.0.0.0 -p 8001 multichat.asgi:application
Restart=on-failure
[Install]
WantedBy=multi-user.target
I did systemctl daemon-reload and systemctl start daphne_seb.service and it ran for a couple of seconds. Then, systemctl status daphne_seb.service said Unknown command: 'daphne'.
I tried restarting it. Then I rebooted the OS. Now status says:
daphne_seb.service - daphne daemon for seb
Loaded: loaded (/etc/systemd/system/daphne_seb.service; enabled; vendor preset: enabled)
Active: inactive (dead) (Result: exit-code) since Sun 2018-05-06 19:33:31 UTC; 1s ago
Process: 2459 ExecStart=/home/seb/env_seb/bin/python /home/seb/seb/manage.py daphne -b 0.0.0.0 -p 8001 multichat.asgi:application (code=exited, status=1
Main PID: 2459 (code=exited, status=1/FAILURE)
May 06 19:33:31 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: daphne_seb.service: Main process exited, code=exited, status=1/FAILURE
May 06 19:33:31 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: daphne_seb.service: Unit entered failed state.
May 06 19:33:31 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: daphne_seb.service: Failed with result 'exit-code'.
May 06 19:33:31 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: daphne_seb.service: Service hold-off time over, scheduling restart.
May 06 19:33:31 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: Stopped daphne daemon for seb.
May 06 19:33:31 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: daphne_seb.service: Start request repeated too quickly.
May 06 19:33:31 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: Failed to start daphne daemon for seb.
I checked the filepaths. They're correct. I also tried adding Environment=DJANGO_SETTINGS_MODULE=multichat.settings, which I saw here. But that didn't help either.
Update 6 May 2018, 10pm CET:
Then, I read here that only 5 restarts are allowed within a 10-second period. So I removed Restart=on-failure to start the service myself.
This brought me back to Unknown command: 'daphne'. This solution suggested to me that I shouldn't be pointing to Python but to Daphne in my virtualenv, so I changed it: ExecStart=/home/seb/env_seb/bin/daphne. I also removed /home/seb/seb/manage.py based on the same solution. This gave me a new problem:
daphne_seb.service - daphne daemon for seb
Loaded: loaded (/etc/systemd/system/daphne_seb.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2018-05-06 19:55:24 UTC; 5s ago
Process: 2903 ExecStart=/home/seb/env_seb/bin/daphne daphne -b 0.0.0.0 -p 8001 multichat.asgi:application (code=exited, status=2)
Main PID: 2903 (code=exited, status=2)
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 daphne[2903]: [-v VERBOSITY] [-t HTTP_TIMEOUT] [--access-log ACCESS_LOG]
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 daphne[2903]: [--ping-interval PING_INTERVAL] [--ping-timeout PING_TIMEOUT]
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 daphne[2903]: [--application-close-timeout APPLICATION_CLOSE_TIMEOUT]
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 daphne[2903]: [--ws-protocol [WS_PROTOCOLS [WS_PROTOCOLS ...]]]
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 daphne[2903]: [--root-path ROOT_PATH] [--proxy-headers]
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 daphne[2903]: application
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 daphne[2903]: daphne: error: unrecognized arguments: multichat.asgi:application
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: daphne_seb.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: daphne_seb.service: Unit entered failed state.
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: daphne_seb.service: Failed with result 'exit-code'.
I checked my multichat.asgi:application and it's in the right place, so I can't figure out why it says error: unrecognized arguments: multichat.asgi:application.
I use systemd for this. You would set up a unit file at /etc/systemd/system/daphne.service with contents like:
[Unit]
Description=My Daphne Service
After=network.target
[Service]
Type=simple
User=wwwrunn
WorkingDirectory=/srv/myapp
ExecStart=/path/to/my/virtualenv/bin/python /path/to/daphne -b 0.0.0.0 -p 8001 myproject.asgi:application
Restart=on-failure
[Install]
WantedBy=multi-user.target
I have all the parts of my application configured this way. It allows them to resume if they ever crash. Systemd can handle all the logging. And they are all started in the right order. Use Type=simple so you don't need to do any forking or writing of PID files. It will just manage the one process. I run a few of these to spread out the load.
After you put this file in place you would run the following commands:
Make the system aware of the service
sudo systemctl daemon-reload
Start the service
sudo systemctl start daphne.service
Make it run on reboot
sudo systemctl enable daphne.service
You will need to edit some of the parameters to fit your needs. If you called the file something other than daphne.service you'll need to change it in the commands. The ExecStart line will need to be changed to your python path and the namespace where your application lives.
Your logs will now show up in journalctl. To see logs for this service, you would do journalctl -u daphne.service. If you want to tail the logs, put -f at the end for "follow". journalctl has a bunch of options that you can find with --help or by looking up resources online; the same goes for systemd unit files.
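For example:

journalctl -u daphne.service        # all logs for this unit
journalctl -u daphne.service -f     # follow ("tail") the logs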
Of course, you should know that Daphne shouldn't be exposed directly to the web. Nginx, Apache, or another web server should be proxying to Daphne and serving your static assets. (You could also use a CDN for that.)
My question is in fact the same as this one. Questions like that usually get pointed to this solution. I did not recognise the question as the same as mine at first because (i) I wasn't familiar with the vocabulary of deployment with Channels yet, and (ii) a lot of these questions and their solutions mention things that have already been deprecated for Channels 2. I've lost the page but, for instance, Channels 2 doesn't require us to do python manage.py runworker.
There's also the issue of upstart versus systemd. Several solutions use an upstart script, but I had a vague sense that, since I'm on Ubuntu 16.04.4, I should use systemd. I googled and found that systemd had indeed replaced upstart, according to articles like this one.
@kagronick provides the correct systemd solution, but I had to edit it to make it work for me:
# /etc/systemd/system/daphne_seb.service
[Unit]
Description=daphne daemon for seb
After=network.target
[Service]
User=root
Group=www-data
WorkingDirectory=/home/seb/seb
ExecStart=/home/seb/env_seb/bin/daphne -b 0.0.0.0 -p 8001 multichat.asgi:application
# I turned this off for testing purposes.
# Still not sure if should use 'on-failure' or 'always'.
# Restart=on-failure
[Install]
WantedBy=multi-user.target
And then in the shell:
systemctl daemon-reload
systemctl start daphne_seb.service
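Once it behaves, sudo systemctl enable daphne_seb.service makes it start on boot, per kagronick's commands above.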
I'm trying to add my first service on RHEL 7 (which resides in AWS/EC2), but the service is not configured correctly, as I get:
[ec2-user@ip-172-30-1-96 ~]$ systemctl status clouddirectd.service -l
● clouddirectd.service - CloudDirect Daemon
Loaded: loaded (/usr/lib/systemd/system/clouddirectd.service; enabled; vendor preset: disabled)
Active: activating (auto-restart) (Result: exit-code) since Tue 2018-01-09 16:09:42 EST; 8s ago
Main PID: 10064 (code=exited, status=217/USER)
Jan 09 16:09:42 ip-172-30-1-96.us-west-1.compute.internal systemd[1]: clouddirectd.service: main process exited, code=exited, status=217/USER
Jan 09 16:09:42 ip-172-30-1-96.us-west-1.compute.internal systemd[1]: Unit clouddirectd.service entered failed state.
Jan 09 16:09:42 ip-172-30-1-96.us-west-1.compute.internal systemd[1]: clouddirectd.service failed.
Also:
[ec2-user@ip-172-30-1-96 ~]$ systemctl is-active clouddirectd
activating
[ec2-user@ip-172-30-1-96 ~]$ sudo systemctl list-units --type service --all | grep clouddirectd
clouddirectd.service loaded activating auto-restart CloudDirect Daemon
And my unit file is:
[ec2-user@ip-172-30-1-96 ~]$ cat /usr/lib/systemd/system/clouddirectd.service
[Unit]
Description=CloudDirect Daemon
After=network.target
[Service]
Environment=AWS_SHARED_CREDENTIALS_FILE=/etc/sonar/.aws/credentials
#ExecStart=/usr/lib/sonar/clouddirect/virtualenv/bin/python /usr/bin/sonar/clouddirectd -c /etc/sonar/clouddirect/clouddirectd.conf
ExecStart=/usr/lib/sonar/clouddirect/virtualenv/bin/python /usr/bin/clouddirect -c /etc/sonar/clouddirect.conf
# #PERM# allow group write permission on newly created files
UMask=0007
#User=clouddirectd
User=clouddirect
Group=sonar
KillSignal=SIGINT
TimeoutStopSec=60min
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target
Can you suggest how to debug this systemctl service so it won't keep dying and auto-restarting?
The error 217 indicates that the user did not exist at the time the service tried to start. In your case, the user specified in your service is clouddirect.
Main PID: 10064 (code=exited, status=217/USER)
Jan 09 16:09:42 ip-172-30-1-96.us-west-1.compute.internal systemd[1]: clouddirectd.service: main process exited, code=exited, status=217/USER
This can happen if that is not the actual user name (for example, if it has a typo). It can also happen if the user comes from an external user store (e.g. LDAP or Active Directory) and the service that lets the Linux server reach that external user store is not up yet. For example, vasd.service starts a product used to allow Linux to authenticate against Active Directory; if vasd.service is not up and you have specified a user that is only available in Active Directory, you would want to add that service to your After= line. For example:
After=network.target vasd.service
There are two parts to the question: how to diagnose a 217/USER, and how to fix it. I'll focus on the former.
For the 217/USER, there are some good pointers here:
https://www.reddit.com/r/linuxquestions/comments/oaya49/systemd_service_not_starting_with_status217/
217 doesn't always mean it's a user problem; it just means the process exited with status 217. It may or may not be...
You could use journalctl to check the logs and see which services seem to come up after it does during boot.
It's possible that "network users" aren't yet available at the time the service is started during boot; you can fix that by adding After=nss-user-lookup.target (https://systemd.io/UIDS-GIDS/), though that's not the case here, since it still fails after a manual restart, which happens later. systemd expects the specified user to be available when the service starts. So "system users" (which run early-started processes) need to be available on the local box; later-started processes can run as "network users".
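For example, in the unit file:

[Unit]
After=network.target nss-user-lookup.target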
You could also try changing your group and username (and environment) to what you think systemd is running, and run the command manually to see what happens: https://serverfault.com/questions/410577/execute-a-command-from-another-group
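A sketch of that manual run using the names from the unit file in the question (env passes along the variable the unit sets):

sudo -u clouddirect -g sonar env AWS_SHARED_CREDENTIALS_FILE=/etc/sonar/.aws/credentials \
    /usr/lib/sonar/clouddirect/virtualenv/bin/python /usr/bin/clouddirect -c /etc/sonar/clouddirect.conf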
I kind of wish systemd output more debug information so you could tell what it is running more easily...
In certain bizarre cases you may need to specify both User= and Group=: https://superuser.com/a/1452367/39364
In our case, running vintela status showed the message "SELinux may not be configured correctly", and sure enough, after disabling SELinux it started working as expected, no more 217. [RHEL 8]
I'm using Docker tools on Windows.
The create command was working perfectly last week, and I managed to create a number of machines on DigitalOcean. Then I tried today with no success. I repeated the same command with different regions, and I always get the same result:
λ docker-machine create -d digitalocean --digitalocean-access-token=MYTOKEN --digitalocean-region=ams2 vmname
Running pre-create checks...
Creating machine...
(fernu) Creating SSH key...
(fernu) Creating Digital Ocean droplet...
(fernu) Waiting for IP address to be assigned to the Droplet...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Error creating machine: Error running provisioning: ssh command error:
command : sudo systemctl -f start docker
err : exit status 1
output : Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
If I execute the suggested command:
root@fernu:~# systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/docker.service.d
└─10-machine.conf
Active: inactive (dead) (Result: exit-code) since Fri 2017-06-30 20:56:13 UTC; 8min ago
Docs: https://docs.docker.com
Process: 4943 ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --storage-driver aufs --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=digitalocean (code=exited, status=1/FAILURE)
Main PID: 4943 (code=exited, status=1/FAILURE)
Jun 30 20:56:13 fernu systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 30 20:56:13 fernu systemd[1]: Failed to start Docker Application Container Engine.
Jun 30 20:56:13 fernu systemd[1]: docker.service: Unit entered failed state.
Jun 30 20:56:13 fernu systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 30 20:56:13 fernu systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Jun 30 20:56:13 fernu systemd[1]: Stopped Docker Application Container Engine.
Jun 30 20:56:13 fernu systemd[1]: docker.service: Start request repeated too quickly.
Jun 30 20:56:13 fernu systemd[1]: Failed to start Docker Application Container Engine.
Any help would be appreciated.
Update
It's working with Ubuntu 14:
--digitalocean-image=ubuntu-14-04-x64, so it seems like a problem with the default image (ubuntu-16-04-x64).
This seems to be hitting a lot of people. TL;DR: There is a bug in docker-machine v0.12.0 and this issue can be resolved by upgrading.
Logging in to the DigitalOcean instance and running journalctl -xe provides more information:
-- Unit docker.service has begun starting up.
Jul 07 20:03:52 docker-sandbox docker[4930]: `docker daemon` is not supported on Linux. Please run `do
Jul 07 20:03:52 docker-sandbox systemd[1]: docker.service: Main process exited, code=exited, status=1/
Jul 07 20:03:52 docker-sandbox systemd[1]: Failed to start Docker Application Container Engine.
-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
The key here is docker daemon is not supported on Linux. A bug in docker-machine's version comparison code caused an incorrect systemd unit file to be produced (located at /etc/systemd/system/docker.service.d/10-machine.conf) on certain versions of Ubuntu.
A fix has been committed and a new release (v0.12.1) was made.
You can grab the latest release at: https://github.com/docker/machine/releases/tag/v0.12.1
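A sketch of the upgrade on Windows, assuming a bash-style shell (e.g. Git Bash) and that docker-machine lives under $HOME/bin on your PATH:

curl -L https://github.com/docker/machine/releases/download/v0.12.1/docker-machine-Windows-x86_64.exe \
    -o "$HOME/bin/docker-machine.exe"
docker-machine version    # should now report 0.12.1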