Mosquitto MQTT service failed to restart after adding SSL configuration - amazon-web-services

I'm trying to configure SSL access to my Mosquitto bridge on an Amazon EC2 Ubuntu 18 server. I followed the steps described in the Mosquitto TLS docs and ended up with the following files:
ca.crt
ca.key
ca.srl
client.crt
client.csr
client.key
server.crt
server.csr
server.key
in a temporary directory.
Then I copied three files:
sudo cp ca.crt /etc/mosquitto/ca_certificates/
sudo cp server.key /etc/mosquitto/certs/
sudo cp server.crt /etc/mosquitto/certs/
Then I added the following section to the configuration file:
listener 8883
cafile /etc/mosquitto/ca_certificates/ca.crt
keyfile /etc/mosquitto/certs/server.key
certfile /etc/mosquitto/certs/server.crt
Then I wanted to restart mosquitto:
sudo service mosquitto restart
This doesn't work and responds with
> Job for mosquitto.service failed because the control process exited with error code.
> See "systemctl status mosquitto.service" and "journalctl -xe" for details.
I tried both, and they only said that the configuration is wrong.
I tried commenting out different lines, and the following structure lets the service restart:
listener 8883
cafile /etc/mosquitto/ca_certificates/ca.crt
keyfile /etc/mosquitto/certs/server.key
#certfile /etc/mosquitto/certs/server.crt
Unfortunately, certfile is necessary for the configuration to work. I checked the example configuration and the docs, and certfile is a legal and required parameter.
How can I solve this issue?

I'm running Mosquitto on an Ubuntu server and also ran into Mosquitto failing to start after adding SSL certificates and configuration. I got a standalone certificate from Let’s Encrypt using the Certbot tool.
Version information:
Ubuntu 18.04.5 LTS,
Mosquitto 2.0.4 (MQTT v5.0/v3.1.1/v3.1 broker) and
Certbot 1.11.0.
In the original, failing configuration, Mosquitto was configured to use certificates in the /etc/letsencrypt... location.
My solution was to move the certificate files from /etc/letsencrypt/archive/ into the /etc/mosquitto/ folder and point the respective certificate entries in the Mosquitto configuration to this location.
The most relevant debugging information for this problem is available in the log file /var/log/mosquitto/mosquitto.log.
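If the log alone is not detailed enough, running the broker in the foreground usually prints the exact TLS error. A minimal sketch, assuming the default configuration path:
sudo systemctl stop mosquitto
# run in verbose foreground mode; TLS/certificate errors are printed directly to the terminal
sudo mosquitto -c /etc/mosquitto/mosquitto.conf -v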
Further info about troubleshooting
Playing around with ownership did not have any effect in this case. The final configuration, with certificates in the /etc/mosquitto/certs folder, worked regardless of whether the owner of the files and the containing folder was mosquitto or root.
I also tried skipping the symbolic links in .../live/... and using the files in the /etc/letsencrypt/archive/... location directly instead; that did not work.
I did not check whether some individual file was causing the issue, I just moved them all. Afterwards I tried symlinking just one of the files from .../mosquitto/certs, only to find that Mosquitto would fail to start. For this server setup to run, I need to keep the certificate files in the .../mosquitto/certs folder.
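Putting the above together, a rough sketch of the move (the domain directory and the "1" suffixes under /etc/letsencrypt/archive/ are placeholders; adjust them to your own certificate set):
sudo cp /etc/letsencrypt/archive/example.com/chain1.pem /etc/mosquitto/ca_certificates/
sudo cp /etc/letsencrypt/archive/example.com/cert1.pem /etc/mosquitto/certs/
sudo cp /etc/letsencrypt/archive/example.com/privkey1.pem /etc/mosquitto/certs/
Then point the listener at the copies:
listener 8883
cafile /etc/mosquitto/ca_certificates/chain1.pem
certfile /etc/mosquitto/certs/cert1.pem
keyfile /etc/mosquitto/certs/privkey1.pem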

Changing the certificate/key permissions fixed the issue for me.
E.g.
sudo chmod 744 raspberrypi.crt
sudo chmod 644 raspberrypi.key
As per this discussion:
https://github.com/owntracks/tools/issues/6
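Applied to the broker files from the question above, that would look roughly like this (assuming the broker runs as the mosquitto user, which is the default for the Ubuntu packages):
sudo chown mosquitto: /etc/mosquitto/certs/server.key /etc/mosquitto/certs/server.crt
sudo chmod 640 /etc/mosquitto/certs/server.key
sudo chmod 644 /etc/mosquitto/certs/server.crt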

Related

AWS Managed AD SSL Certificates export

I am trying to explore AD integration. I was able to successfully complete the setup as described in the AWS blog post and verified that the SSL connection works fine from the "Management box".
Based on my understanding, ldp.exe from the Management box works because that box is joined to this AD and the certificates are propagated properly.
I have a use case where another Linux box (which can't be joined to AD) should use LDAPS over SSL to do some user searches. For this to work, I need to export the SSL certificate and install it on the Linux box. I couldn't quite figure out how to find and export the certificates in this example. Are those certificates available on the RootCA (or) SubordinateCA, and how do I export them? I appreciate any help.
I'm assuming you generated the SSL cert in AWS via Amazon Certificate Services (ACS). Although ACS won't allow you to export the private key, you shouldn't need it. All you need to do is import the public certificate into the certificate trust store that your Linux box uses when it connects to the AD server. I can't tell you exactly how to do that (it depends on the application), but you should be able to extract the public cert using openssl: point openssl at the AD server and have it output the public cert.
I'm pretty sure this is the openssl command line that would do that:
openssl s_client -showcerts -connect activedirectory.yourdomain.com:636
You can download the certificate from the LDAPS endpoint and install it as follows.
Install openldap client
sudo yum install -y openldap-clients
• Download and Add Server Certificate to the openldap cert path
openssl s_client -connect <LDAPSURL>:636 -showcerts </dev/null 2>/dev/null | openssl x509 -outform PEM > server.crt
• Configure LDAP Details
vi /etc/openldap/ldap.conf
BASE dc=corp,dc=example,dc=com
URI ldaps://corp.example.com
TLS_CACERT /etc/openldap/certs/server.crt
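To check the LDAPS connection end to end, you can copy the downloaded certificate into the path referenced above and run a test search (the bind user, filter and domain below are placeholders):
sudo cp server.crt /etc/openldap/certs/server.crt
# simple-bind test search against the AD domain; -W prompts for the bind password
ldapsearch -x -H ldaps://corp.example.com -D "svc_ldap@corp.example.com" -W \
  -b "dc=corp,dc=example,dc=com" "(sAMAccountName=jdoe)"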

Facing SSL certificate problem error in adding material in GoCD pipeline in windows

Facing the below issue.
Error performing command: --- Command ---
git ls-remote http://gbs05291:******#git...pro/scm/fbkpla/gocd-mobileapp.git refs/heads/InvestmentApp_GoCDTest
--- Environment ---
{}
--- INPUT ----
--- EXIT CODE (128) ---
--- STANDARD OUT ---
--- STANDARD ERR ---
STDERR: fatal: unable to access 'http://*********repoIP**/scm/fbkpla/gocd-mobileapp.git/': SSL certificate problem: self signed certificate in certificate chain
1. Tried adding the certificate to the keystore on the GoCD server with the below command:
keytool -importcert -file "C:\Users\Desktop\BitBucket.cer" -keystore "C:\Program Files (x86)\Go Server\config\keystore"
2. Tried git config --global http.sslVerify false
Please note: I am able to clone the same repo from Git Bash.
Download the certificate, convert it into a .pem file and add the .pem certificate to the git config at either the system, global or local level, depending on the requirement. This will resolve the SSL self-signed certificate problem.
Converting a DER-encoded .crt to a .pem file using OpenSSL:
openssl x509 -inform der -in /certificate.crt -out /certificate.pem
add certificate to git config:
git config --system http.sslCAInfo /certificate.pem
Please go through this link. Hopefully you will find a solution to your problem.
Using your own SSL certificates on the Server
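If you don't already have the certificate as a file, one way to dump everything the server presents is via openssl (the host name below is a placeholder); copy the self-signed root block into its own .pem file and point git at it:
openssl s_client -showcerts -connect bitbucket.example.com:443 </dev/null 2>/dev/null > chain.txt
# copy the last BEGIN/END CERTIFICATE block (typically the self-signed root) from chain.txt into bitbucket-root.pem, then:
git config --global http.sslCAInfo /full/path/to/bitbucket-root.pem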

Host key verification failed in google compute engine based mpich cluster

TLDR:
I have 2 google compute engine instances, I've installed mpich on both.
When I try to run a sample I get Host key verification failed.
Detailed version:
I've followed this tutorial in order to get this task done: http://mpitutorial.com/tutorials/running-an-mpi-cluster-within-a-lan/.
I have 2 Google Compute Engine VMs with Ubuntu 14.04 (the Google Cloud account is a trial one, btw). I've downloaded this version of mpich on both instances: http://www.mpich.org/static/downloads/3.3rc1/mpich-3.3rc1.tar.gz and I installed it using these steps:
./configure --disable-fortran
sudo make
sudo make install
This is the way the /etc/hosts file looks on the master-node:
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
169.254.169.254 metadata.google.internal metadata
10.128.0.3 client
10.128.0.2 master
10.128.0.2 linux1.us-central1-c.c.ultimate-triode-161918.internal linux1 # Added by Google
169.254.169.254 metadata.google.internal # Added by Google
And this is the way the /etc/hosts file looks on the client-node:
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
169.254.169.254 metadata.google.internal metadata
10.128.0.2 master
10.128.0.3 client
10.128.0.3 linux2.us-central1-c.c.ultimate-triode-161918.internal linux2 # Added by Google
169.254.169.254 metadata.google.internal # Added by Google
The rest of the steps involved adding a user named mpiuser on both nodes, configuring passwordless SSH authentication between the nodes, and configuring a shared cloud directory between the nodes.
The configuration worked till this point. I've downloaded this file https://raw.githubusercontent.com/pmodels/mpich/master/examples/cpi.c to /home/mpiuser/cloud/mpi_sample.c, compiled it this way:
mpicc -o mpi_sample mpi_sample.c
and issued this command on the master node while logged in as the mpiuser:
mpirun -np 2 -hosts client,master ./mpi_sample
and I got this error:
Host key verification failed.
What's wrong? I've tried to troubleshoot this problem over more than 2 days but I can't get a valid solution.
Add package-lock.json to the .gcloudignore file and deploy it again.
It turned out that my passwordless SSH wasn't configured properly. I created 2 new instances and did the following to get working passwordless SSH and thus a working version of that sample. The following steps were executed on Ubuntu Server 18.04.
First, by default, instances on Google Cloud have the PasswordAuthentication setting turned off. On the client server do:
sudo vim /etc/ssh/sshd_config
and change PasswordAuthentication no to PasswordAuthentication yes. Then
sudo systemctl restart ssh
Generate an SSH key on the master server with:
ssh-keygen -t rsa -b 4096 -C "user.mail@server.com"
Copy the generated ssh key from the master server to the client
ssh-copy-id client
Now you have fully functional passwordless SSH from master to client. However, mpich still failed.
The additional step I did was to copy the public key into the ~/.ssh/authorized_keys file, on both master and client. So execute this command on both servers:
sudo cat .ssh/id_rsa.pub >> .ssh/authorized_keys
Then make sure the /etc/ssh/sshd_config files from both the client and server have the following configurations:
PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM no
Restart the ssh service from both client and master
sudo systemctl restart ssh
And that's it, mpich works smoothly now.
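For completeness, a quick way to verify the setup from the master node as mpiuser (StrictHostKeyChecking=accept-new needs OpenSSH 7.6+, which Ubuntu 18.04 ships; the host names are the ones from /etc/hosts above):
# log in once in each direction so the host keys end up in ~/.ssh/known_hosts
ssh -o StrictHostKeyChecking=accept-new client hostname
ssh -o StrictHostKeyChecking=accept-new master hostname
# then re-run the sample
mpirun -np 2 -hosts client,master ./mpi_sample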

UniFi Controller issue with SSL from GoDaddy on EC2 instance

Scenario
I have AWS set up for a UniFi controller. I've been able to access it with https://myserverip:8443; I bypass the "This connection is not secure" warning and use the controller normally.
Now I need to install an SSL certificate to get the hotspot payment system going.
I have a FQDN with GoDaddy, so I created a subdomain unifi.mydomain.com that points to the elastic IP, and I log on with https://unifi.mydomain.com:8443.
I bought the SSL certificate from GoDaddy and added the subdomain to that certificate.
I log on to my AWS instance with SSH and generate my CSR with the following commands:
cd /usr/lib/unifi
sudo java -jar lib/ace.jar new_cert unifi.mydomain.com "My Company Name" City State CC
Then I do
cd /var/lib/unifi
more unifi_certificate.csr.pem
Once I get that, I copy and paste it on GoDaddy, download the cert files, go back to AWS and copy the files with FileZilla to /usr/lib/unifi.
Then I run the following command
sudo java -jar lib/ace.jar import_cert unifi_mydomain_com.crt bundlecert.crt
They import correctly; I restart the unifi service and reboot the EC2 instance.
When I go to any of the above addresses I get the following:
This site can’t provide a secure connection ERR_SSL_PROTOCOL_ERROR
I've tried different browsers, incognito mode, a VPN, etc. I believe it's just a matter of the SSL setup or my server.
Check your system.properties, which sits in /var/lib/unifi/; open the file with vim or your text editor of choice.
Have a look at your HTTPS options; the important ones are the ciphers and protocols.
The protocols you need are TLSv1 and potentially SSLv2Hello; there should be no other SSL protocols in there.
The ciphers you ideally want are TLS ones, for example TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA.
If you are having issues, throw them all in. CAUTION! Only use this in a demo/test environment:
unifi.https.ciphers=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,SSL_RSA_WITH_RC4_128_SHA
Remember once you have edited the system.properties you need to restart the controller.
sudo service unifi restart
There is lots of help on the UniFi pages:
UniFi - SSL Certificate Error
UniFi - Explaining the config.properties File
UniFi - system.properties File Explanation
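To confirm what the controller actually negotiates after the restart, a quick check from any machine that can reach port 8443 (the domain is the one from the question):
openssl s_client -connect unifi.mydomain.com:8443 </dev/null
# look at the "Certificate chain" section and the negotiated "Protocol" / "Cipher" lines in the output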

How can I test https connections with Django as easily as I can non-https connections using 'runserver'?

I have an application that uses "secure" cookies and want to test its functionality without needing to set up a complicated SSL-enabled development server. Is there any way to do this as simply as I can test non-encrypted requests using ./manage.py runserver?
It's not as simple as the built in development server, but it's not too hard to get something close using stunnel as an SSLifying middleman between your browser and the development server. Stunnel allows you to set up a lightweight server on your machine that accepts connections on a configured port, wraps them with SSL, and passes them along to some other server. We'll use this to open a stunnel port (8443) and pass along any traffic it receives to a Django runserver instance.
First you'll need stunnel, which can be downloaded from the stunnel website or may be provided by your platform's package system (e.g.: apt-get install stunnel). I'll be using version 4 of stunnel (e.g.: /usr/bin/stunnel4 on Ubuntu); version 3 will also work, but has different configuration options.
First create a directory in your Django project to hold the necessary configuration files and SSLish stuff.
mkdir stunnel
cd stunnel
Next we'll need to create a local certificate and key to be used for the SSL communication. For this we turn to openssl.
Create the key:
openssl genrsa 2048 > stunnel.key
Create the certificate that uses this key (this will ask you for a bunch of information that will be included in the certificate - just answer with whatever feels good to you):
openssl req -new -x509 -nodes -sha1 -days 365 -key stunnel.key > stunnel.cert
Now combine these into a single file that stunnel will use for its SSL communication:
cat stunnel.key stunnel.cert > stunnel.pem
Create a config file for stunnel called dev_https with the following contents:
pid=
cert = stunnel/stunnel.pem
sslVersion = SSLv3
foreground = yes
output = stunnel.log
[https]
accept=8443
connect=8001
TIMEOUTclose=1
This file tells stunnel what it needs to know. Specifically, you're telling it not to use a pid file, where the certificate file is, what version of SSL to use, that it should run in the foreground, where it should log its output, and that it should accept connection on port 8443 and shuttle them along to port 8001. The last parameter (TIMEOUTclose) tells it to automatically close the connection after 1 second has passed with no activity.
Now pop back up to your Django project directory (the one with manage.py in it):
cd ..
Here we'll create a script named runserver that will run stunnel and two django development servers (one for normal connections, and one for SSL connections):
stunnel4 stunnel/dev_https &
python manage.py runserver&
HTTPS=1 python manage.py runserver 8001
Let's break this down, line-by-line:
Line 1: Starts stunnel and points it to the configuration file we just created. This has stunnel listen on port 8443, wrap any connections it receives in SSL, and pass them along to port 8001.
Line 2: Starts a normal Django runserver instance (on port 8000)
Line 3: Starts another Django runserver instance (on port 8001) and configures it to treat all incoming connections as if they were being performed using HTTPS.
Make the runserver script we just created executable with:
chmod a+x runserver
Now when you want to run your development server just execute ./runserver from your project directory. To try it out, just point your browser to http://localhost:8000 for normal HTTP traffic, and https://localhost:8443 for HTTPS traffic. Note that your browser will almost definitely complain about the certificate used and require you to add an exception or otherwise explicitly instruct the browser to continue browsing. This is because you created your own certificate and it isn't trusted by the browser to be telling the truth about who it is. This is fine for development, but obviously won't cut it for production.
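If you prefer the command line for a quick smoke test, curl works too (the -k flag skips verification of the self-signed certificate):
curl -k https://localhost:8443/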
Unfortunately, on my machine this runserver script doesn't exit out nicely when I hit Ctrl-C. I have to manually kill the processes - anyone have a suggestion to fix that?
Thanks to Michael Gile's post and django-weave's wiki entry for the reference material.
I would recommend using the django-sslserver package.
The current package on PyPI only supports up to Django version 1.5.5 but a patch has been committed via 5d4664c. With this fix the system runs well and is a pretty simple and straightforward solution for testing https connections.
UPDATE:
Since I posted my answer the commit above has been merged into the master branch and a new release has been pushed to PyPI. So there shouldn't be any need to specify the 5d4664c commit for that specific fix.
just install
sudo pip install django-sslserver
Include sslserver in INSTALLED_APPS:
INSTALLED_APPS = (
    ...
    "sslserver",
    ...
)
now you can run
python manage.py runsslserver 0.0.0.0:8888
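If you want to point runsslserver at your own certificate instead of its default development one, the --certificate and --key options (also used in a later answer below) take explicit paths; the file names here are placeholders:
python manage.py runsslserver --certificate /path/to/cert.crt --key /path/to/cert.key 0.0.0.0:8888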
Similar to django-sslserver you could use RunServerPlus from django-extensions
It has dependencies on Werkzeug (so you get access to the excellent Werkzeug debugger) and pyOpenSSL (only required for ssl mode) so to install run:
pip install django-extensions Werkzeug pyOpenSSL
Add it to INSTALLED_APPS in your projects settings.py file:
INSTALLED_APPS = (
    ...
    'django_extensions',
    ...
)
Then you can run the server in ssl mode with:
./manage.py runserver_plus --cert /tmp/cert
This will create a cert file at /tmp/cert.crt and a key file at /tmp/cert.key which can then be reused for future sessions.
There is a bunch of extra stuff included in django-extensions that you may find of use so it is worth having a quick flick through the docs.
Sign up at https://ngrok.com/. You can use it to test https. This might help people who just want to quickly test https.
One way is using ngrok.
Install ngrok.
download link: https://ngrok.com/download
Issue following command on terminal
ngrok http 8000
It will start an ngrok session and list two URLs: one mapped to http://localhost:8000 and a second mapped to https://localhost:8000. Use either URL; it will map to your local server.
The second method is as per the solution provided by @EvanGrim: https://stackoverflow.com/a/8025645/9384511
The first method is preferred if you only need to test an https URL a couple of times. But if your requirement says https is mandatory, it is better to follow the second method.
I faced the following issues with @EvanGrim's solution:
1. When accessing the https URL via requests, you will get a subjectAltName missing warning. This should not be a problem; your URL should work fine.
2. You will not be able to access the https URL via Android Volley. It will throw an error along the lines of "your IP or localhost is not verified".
I solved these issues by adding subjectAltName to the SSL certificate. Below are the steps I followed to run a local https server without the above two issues.
Install stunnel. The stunnel version used is version 4 (/usr/bin/stunnel4). For Ubuntu execute:
sudo apt-get install stunnel
Create directory stunnel in your Django Project.
Copy openssl.cnf from /etc/ssl/openssl.cnf or /usr/lib/ssl/openssl.cnf to the stunnel directory. Avoid editing the originals directly unless you know what you are doing.
Search for the [ req ] or [req] section. In the req section set
x509_extensions = v3_ca
req_extensions = v3_req
x509_extensions and req_extensions should be present. If not, add them. If commented, uncomment them.
Search for [v3_req] section. In this section add
subjectAltName = @alt_names
Search for [v3_ca] section. In this section add
subjectAltName = @alt_names
We need to add a new section, alt_names. This can be added anywhere; I added it after the [v3_ca] section.
For localhost and your IP address, add the following:
[alt_names]
DNS.1 = localhost
IP.1 = x.x.x.x
IP.2 = 127.0.0.1
To create key execute
openssl genrsa 1024 > stunnel.key
To create the certificate, execute either the first or the second command.
If you have directly edited /etc/ssl/openssl.cnf or /usr/lib/ssl/openssl.cnf:
openssl req -new -x509 -nodes -sha1 -days 365 -key stunnel.key > stunnel.cert
If you have made a copy of openssl.cnf in the stunnel directory:
openssl req -new -x509 -nodes -sha256 -days 365 -key stunnel.key -config openssl.cnf > stunnel.cert
To create the combined PEM file (key plus certificate), execute:
cat stunnel.key stunnel.cert > stunnel.pem
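To double-check that the subjectAltName entries actually made it into the certificate, you can inspect it before wiring up stunnel:
openssl x509 -in stunnel.cert -noout -text | grep -A1 "Subject Alternative Name"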
Create a config file for stunnel called dev_https.config with the following contents.
#not to use pid
pid=
#path of certificate file
cert = stunnel/stunnel.pem
#version of SSL to use
sslVersion = TLSv1.1
foreground = yes
#log output path
output = stunnel.log
[https]
#listen on port 8443
accept=8443
#tunnel the connection to port 8001
connect=8001
# and close connection automatically after one second
TIMEOUTclose=1
Please note that sslVersion depends on the OpenSSL version used by your Python interpreter. Mine is OpenSSL 1.1.1; for this version use TLSv1.1. Find out your OpenSSL version and set the corresponding value.
You can find out the OpenSSL version by executing
python -c "import ssl; print(ssl.OPENSSL_VERSION)"
Create a script called runserver and add the contents below. This script should reside in the same directory as manage.py.
This script will run stunnel and two Django environments: one for http connections and one for https connections.
The issue of runserver not exiting cleanly (as mentioned by Evan) is solved by the trap statement, which calls the cleanup function.
#!/bin/bash
# kill the background stunnel and runserver processes when the script exits
function cleanup()
{
    echo "Cleaning up"
    kill $stunnel_pid
    kill $py_server_bg
    echo "Cleaned"
}
# register the cleanup before starting anything so it also runs on Ctrl-C
trap cleanup EXIT

stunnel4 stunnel/dev_https.config &
stunnel_pid=$!
echo "stunnel pid: $stunnel_pid"
python manage.py runserver 0.0.0.0:8000 &
py_server_bg=$!
echo "BG py server pid: $py_server_bg"
HTTPS=1 python manage.py runserver 0.0.0.0:8001
Execute
chmod a+x runserver
From your Django project directory execute:
./runserver
Now if you execute a Python requests call, the subjectAltName missing warning will not be displayed. Your Android Volley https request will also execute fine.
import json
import requests

data = {'login_id': 'someid'}
headers = {'content-type': 'application/json'}
url = "https://localhost:8443/myauth/check-id/"
r = requests.post(url, data=json.dumps(data), headers=headers, verify='stunnel/stunnel.pem')
I would like to thank @EvanGrim for the solution. Thanks also to @Friek, @Utku and @dr. Sybren; based on their comments I implemented the cleanup function.
It can be done in one line with socat:
socat openssl-listen:8443,fork,reuseaddr,cert=server.pem,verify=0 tcp:localhost:8000
where 8443 is the port to listen on for incoming HTTPS connections, server.pem is a self-signed server certificate, and localhost:8000 is the debug HTTP server launched as usual.
More details: http://www.dest-unreach.org/socat/doc/socat-openssltunnel.html
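If you don't already have server.pem, a self-signed one can be produced the same way as in the stunnel answers above (the values entered at the prompts don't matter for local testing):
openssl req -new -x509 -nodes -days 365 -newkey rsa:2048 -keyout server.key -out server.crt
cat server.key server.crt > server.pem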
For those looking for a foregrounded version of the stunnel option for debugging purposes:
stunnel.pem is a certificate generated as in Evan Grim's top-voted answer.
Listen on all local interfaces on port 443 and forward to port 80 on localhost
sudo stunnel -f -p stunnel.pem -P ~/stunnel.pid -r localhost:80 -d 443
sudo is only necessary for incoming ports (-d [host:]port) under 1024
Handle SSL/TLS with a proxy such as Nginx rather than Django. Nginx can be set up to listen on port 443 and then forward requests to your Django dev server (typically http://127.0.0.1:8000). An Nginx configuration for this might look like the following:
server {
    listen 443 ssl;
    server_name django-dev.localhost;
    ssl_certificate /etc/ssl/certs/nginx_chain.pem;
    ssl_certificate_key /etc/ssl/private/nginx.pem;
    location / {
        proxy_pass http://127.0.0.1:8000/;
        proxy_set_header Host $host;
    }
}
You'll also need to map django-dev.localhost to 127.0.0.1 and add django-dev.localhost to ALLOWED_HOSTS in settings.py. On Linux, you'll need to add the following line to /etc/hosts:
127.0.0.1 django-dev.localhost
You'll then be able to reach your dev site by going to https://django-dev.localhost in your browser (you will need to bypass your browser's security warning).
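For the certificate pair referenced in the config above, a self-signed one with a matching name can be generated roughly like this (the -addext flag needs OpenSSL 1.1.1 or newer):
sudo openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout /etc/ssl/private/nginx.pem \
  -out /etc/ssl/certs/nginx_chain.pem \
  -subj "/CN=django-dev.localhost" \
  -addext "subjectAltName=DNS:django-dev.localhost"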
Go to sslforfree.com and download the certificate files.
Download the verification file (e.g. 3246328748324897329847324234.txt).
You need to add this to views.py:
from django.http import HttpResponse

def read_file(request):
    f = open('7263713672163767218367687767.txt', 'r')
    file_content = f.read()
    f.close()
    return HttpResponse(file_content, content_type="text/plain")
Then add this route to urls.py:
path('.well-known/pki-validation/', read_file)
Install the requirements:
pip install django-extensions
pip install pyOpenSSL
pip install werkzeug
./manage.py runserver_plus 0:443 --cert-file certificate.crt --key-file private.key
I have a very simple solution for this; just follow the steps below and you will be able to run your Django project/app on an https:// secure server.
Step 1: Install OpenSSL on your system. You can get the executable installer at https://slproweb.com/products/Win32OpenSSL.html. I recommend installing Win64 OpenSSL v1.1.1L, as it is more stable than 3.0.0.
Step 2: After successful installation, navigate to C:\Program Files\OpenSSL-Win64\bin (or wherever you installed OpenSSL). There you will find openssl.exe; run it as administrator.
Step 3: Enter the following command at the OpenSSL prompt to begin generating a certificate and private key:
req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout privateKey.key -out certificate.crt
Step 4: You will then be prompted to enter the applicable Distinguished Name (DN) information, totaling seven fields.
Step 5: Once completed, you will find the certificate.crt and privateKey.key files created under the \OpenSSL\bin\ directory.
Step 6: Install django-sslserver in your system.
Step 7: Execute the below command in your IDE terminal:
python manage.py runsslserver --certificate "C:/Program Files/OpenSSL-Win64/bin/certificate.crt" --key "C:/Program Files/OpenSSL-Win64/bin/privateKey.key" 127.0.0.1:8000
Congratulations, your Django project is now running on https.
I hope this helps you; if you face any difficulty, feel free to ask.
I tried to add a comment to this answer, where the author asks about killing the processes, but code formatting in comments is limited. Here is my modified runserver.bash:
#!/bin/bash
# Script to kill previous instances of dev servers,
# then start the 2 dev servers, one of which is https.
#
cd "$(dirname "$0")"
echo $PWD
. bin/activate
if [[ "$1" =~ "kill" ]]; then
if [[ -n `ps -ef | egrep 'python manage.py runserver [1-9,\.]*' | grep -v grep | awk '{ print $2}'` ]]; then
kill -15 `ps -ef | egrep 'python manage.py runserver [1-9,\.]*' | grep -v grep | awk '{ print $2}'`
fi
else
if [[ -n `ps -ef | egrep 'python manage.py runserver [1-9,\.]*' | grep -v grep | awk '{ print $2}'` ]]; then
kill -15 `ps -ef | egrep 'python manage.py runserver [1-9,\.]*' | grep -v grep | awk '{ print $2}'`
fi
if [[ -z `ps -ef | egrep 'stunnel4 stunnel/dev_https' | grep -v grep | awk '{ print $2}'` ]]; then
stunnel4 stunnel/dev_https &
fi
python manage.py runserver 192.168.1.183:8006 &
HTTPS=1 python manage.py runserver 192.168.1.183:8001
fi
Based on Ryabchenko Alexander's answer, this is how I do it in Django for local tests:
from django.conf import settings
from oauthlib.oauth2 import LegacyApplicationClient
from requests_oauthlib import OAuth2Session

token_url = '{}/o/token/'.format(settings.BOQ_APP_URL)
oauth = OAuth2Session(
    client=LegacyApplicationClient(client_id=settings.BOQ_APP_CLIENT_ID),
)

# disable cert verification for local tests
if settings.DEBUG:
    oauth.verify = False

token = oauth.fetch_token(
    token_url=token_url,
    username=settings.BOQ_APP_USERNAME,
    password=settings.BOQ_APP_PASSWORD,
    client_secret=settings.BOQ_APP_CLIENT_SECRET,
)
print(token)