I am attempting to build and test some bokeh components in the Google Cloud AI notebook environment. I have been using this cloud instance for several months with little to no issues but I cannot seem to get the tutorial, https://docs.bokeh.org/en/latest/docs/user_guide/notebook.html, working. Has anyone successfully worked with a bokeh server in this environment?
The tutorial does not seem to work for my use case. Several JavaScript errors are returned when following and adapting this method.
import urllib.parse

from bokeh.application import Application
from bokeh.application.handlers import FunctionHandler
from bokeh.io import show

def remote_jupyter_proxy_url(port):
    """
    Callable to configure Bokeh's show method when a proxy must be
    configured.

    If port is None we're asking about the URL for the origin header.
    """
    base_url = URL OF AI NOTEBOOK
    host = urllib.parse.urlparse(base_url).netloc

    # If port is None we're asking for the URL origin,
    # so return the public hostname.
    if port is None:
        return host

    service_url_path = NOT A JUPYTER HUB SO UNCLEAR WHAT SHOULD BE HERE
    proxy_url_path = 'proxy/%d' % port

    user_url = urllib.parse.urljoin(base_url, service_url_path)
    full_url = urllib.parse.urljoin(user_url, proxy_url_path)
    print(full_url)
    return full_url

# modify_doc is defined as in the tutorial
handler = FunctionHandler(modify_doc)
app = Application(handler)
show(app, notebook_url=remote_jupyter_proxy_url)
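Not a fix for the Mixed Content error itself, but when debugging remote_jupyter_proxy_url it helps to know exactly how urljoin composes the pieces, since a missing trailing slash silently drops a path segment. A small sketch with hypothetical URLs:

```python
import urllib.parse

# Hypothetical values standing in for the real notebook URL and Bokeh port:
base_url = "https://my-notebook.notebooks.googleusercontent.com/user/me/"
port = 5006

# urljoin keeps the whole base path only if it ends with a slash:
full_url = urllib.parse.urljoin(base_url, "proxy/%d" % port)
print(full_url)  # https://my-notebook.notebooks.googleusercontent.com/user/me/proxy/5006

# Without the trailing slash, urljoin replaces the last path segment:
print(urllib.parse.urljoin("https://host/user/me", "proxy/5006"))  # https://host/user/proxy/5006
```

So if the proxied URL printed by the function looks truncated, check whether base_url and service_url_path end with a slash.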
When patching together the previous code, I receive the following message when inspecting the notebook:
panellayout.js:213 Mixed Content: The page at 'URL' was loaded over HTTPS, but requested an insecure script 'URL'. This request has been blocked; the content must be served over HTTPS.
I am implementing a react native application using Expo and testing it on my iOS device using Expo Go. I have a Django rest framework backend running on my local machine that I can access using my browser via http://localhost:8000 - using localhost in my react native app does not work during my fetch request. For instance:
let response = await fetch(BACKEND_URL + "/shft/auth/token/obtain/", {
  method: "POST",
  body: JSON.stringify(data),
  headers: {
    "Content-Type": "application/json",
  },
});
returns
Network request failed
at node_modules\whatwg-fetch\dist\fetch.umd.js:null in setTimeout$argument_0
at node_modules\react-native\Libraries\Core\Timers\JSTimers.js:null in _allocateCallback$argument_0
at node_modules\react-native\Libraries\Core\Timers\JSTimers.js:null in _callTimer
at node_modules\react-native\Libraries\Core\Timers\JSTimers.js:null in callTimers
at node_modules\react-native\Libraries\BatchedBridge\MessageQueue.js:null in __callFunction
at node_modules\react-native\Libraries\BatchedBridge\MessageQueue.js:null in __guard$argument_0
at node_modules\react-native\Libraries\BatchedBridge\MessageQueue.js:null in __guard
at node_modules\react-native\Libraries\BatchedBridge\MessageQueue.js:null in callFunctionReturnFlushedQueue
I have tried setting BACKEND_URL to localhost:8000 and my public IP via expo-constants
import Constants from "expo-constants";
const { manifest } = Constants;
const uri = `http://${manifest.debuggerHost.split(":").shift()}:8000`;
But neither seems to work. I have enabled the corsheaders middleware in my DRF project and placed the @csrf_exempt decorator on my APIView's dispatch method, and this error persists. I also added localhost:19000 to my CORS whitelist, which is where Expo seems to host its local server. What could be the problem here? Both the Expo server and the Django server are running locally on my laptop, and otherwise the API works in Django tests.
Using curl on my API endpoints via localhost also works, though the external IP returned by expo-constants does not, but that may be because I am sending from localhost.
I found a fix using information from these two posts: this stack overflow and this blog. Neither of these solutions worked for me out of the box but a little networking logic makes it all come together. The core issue here is that the Django server is hosted locally but the Expo Go App is running on a separate mobile device (despite the expo server being hosted locally as well). I've tested each of the following steps to make sure they are all necessary.
Set up Django CORS using the corsheaders app and the corresponding middleware. There are guides on how to do this, but basically: pip install django-cors-headers, then add it to your INSTALLED_APPS and MIDDLEWARE in settings.py.
INSTALLED_APPS = [
...,
'corsheaders',
]
# make sure CorsMiddleware comes before CommonMiddleware
MIDDLEWARE = [
...,
'corsheaders.middleware.CorsMiddleware',
'django.middleware.common.CommonMiddleware',
...,
]
Also use the @csrf_exempt decorator on the views you must be able to access (there are security implications to this, but I am using JWT authentication anyway for this mobile app, so it is okay in my case; I would do my due diligence here). There are a few ways to do this for function and class views:
from django.utils.decorators import method_decorator
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def my_view(request):
    ...

# or for class based views
class my_view(APIView):
    @csrf_exempt
    def dispatch(self, request, *args, **kwargs):
        return super().dispatch(request, *args, **kwargs)

# or my preferred way for easy copy paste
@method_decorator(csrf_exempt, name='dispatch')
class my_view(APIView):
    ...
Expose the Django server over the LAN. This step is necessary so that your device can reach the Django server over the local network. It may not be necessary when using an emulator (though be aware that the Android emulator, for instance, reaches the host's localhost via a specially designated IP address, so check for this).
# bind to 0.0.0.0 to expose the server on the LAN
python manage.py runserver 0.0.0.0:8000
Find your local machine's IP address. The blog above uses the mobile device's IP, but that did not work for me, which makes sense given that both servers are running on my laptop. On Windows I opened the command prompt, entered ipconfig, and found the IPv4 Address line, which contains my address.
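If you would rather not read the address off ipconfig/ifconfig every time, the LAN IP can also be looked up programmatically. A small sketch (8.8.8.8 is just an arbitrary public address used for route selection; no packets are actually sent):

```python
import ipaddress
import socket

def get_lan_ip():
    # Connecting a UDP socket makes the OS pick the outbound interface;
    # we then read back that interface's address.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    except OSError:
        # No route available (e.g. offline); fall back to loopback.
        return "127.0.0.1"
    finally:
        s.close()

print(get_lan_ip())  # e.g. 10.0.0.1
```

This is handy for wiring the same address into both settings.py and the frontend config during development.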
Use this IP to direct your API calls and add it to your CORS whitelist. For example, if your local machine's IP is 10.0.0.1 and Expo is on port 19000 while Django is on port 8000, add the following.
# Django settings.py
ALLOWED_HOSTS = ['10.0.0.1',]
CORS_ORIGIN_WHITELIST = [
'http://10.0.0.1:19000',
]
// Wherever you define the backend URL for the frontend, in my case config.js
export const BACKEND_URL = "http://10.0.0.1:8000"
// Then make requests
fetch(BACKEND_URL + "/api/endpoint/", ...
I am working on a Django REST Framework web application; I have a Django server running in an AWS EC2 Linux box at a particular IP:PORT, with URLs (APIs) I can call for specific functionality.
From a Windows machine, as well as from another local Linux machine (not the AWS EC2 instance), I can call those APIs successfully and get the desired results.
But the problem arises when I try to call the APIs from within the same EC2 Linux box.
A simple code I wrote to test the call of one API from the same AWS EC2 Linux box:
import requests

vURL = 'http://<ipaddress>:<port>/myapi/'
vSession = requests.Session()
vSession.headers = {'Content-Type': 'application/json', 'Accept': 'application/json'}
vResponse = vSession.get(vURL)
if vResponse.status_code == 200:
    print('JSON: ', vResponse.json())
else:
    print('GET Failed: ', vResponse)
vSession.close()
This script is returning GET Failed: <Response [403]>.
I am sure there are no authentication-related issues on the EC2 instance, because the same script gets an actual response on other local Linux machines (not AWS EC2) and on a Windows machine.
It seems that calling the API (whose URL contains the same IP:PORT of the same AWS EC2 machine) from the same machine is somehow being restricted, by AWS security policies, a firewall, or something else.
Maybe I have to make some changes in settings.py, though I have already incorporated all the required settings there as far as I know, like:
ALLOWED_HOSTS
CORS_ORIGIN_WHITELIST
Mentioning corsheaders in INSTALLED_APPS list
Mentioning corsheaders.middleware.CorsMiddleware in MIDDLEWARE list
For example, below are the CORS settings that I have incorporated in settings.py:
CORS_ORIGIN_ALLOW_ALL = True
CORS_ALLOW_CREDENTIALS = True
CORS_ALLOW_METHODS = ('GET', 'PUT', 'POST', 'DELETE')
CORS_ORIGIN_WHITELIST = (
< All the IP address that calls this
application are listed here,
this list includes the IP of the
AWS EC2 machine also >
)
Does anyone have any ideas regarding this issue? Please help me to understand the reason of this issue and how to fix this.
Thanks in advance.
As per the discussion in the comments, it's clear that inbound and outbound traffic are fine.
So check the proxy settings, such as the no_proxy environment variable, on the AWS Linux box itself.
Command to check environment variables: set (or env).
Since inbound and outbound are allowed, try setting the no_proxy value with your IP address appended to it.
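To sketch what that looks like from Python (203.0.113.10 is a hypothetical instance IP; substitute your own): both urllib and requests honour the no_proxy variable, so appending the instance's own addresses keeps same-machine calls from being routed through a proxy.

```python
import os
import urllib.request

# Hypothetical public IP of the EC2 instance:
MY_IP = "203.0.113.10"

# Exempt the instance's own addresses from any configured HTTP(S) proxy.
existing = os.environ.get("no_proxy", "")
os.environ["no_proxy"] = ",".join(filter(None, [existing, "localhost", "127.0.0.1", MY_IP]))

print(urllib.request.getproxies())  # proxies picked up from the environment
print(os.environ["no_proxy"])       # e.g. localhost,127.0.0.1,203.0.113.10
```

Alternatively, with requests you can set session.trust_env = False to ignore proxy environment variables entirely for that session.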
Please let me know if this helped.
Thanks.
I deployed an app successfully following this link.
After deployment, I am having trouble connecting to Cloud SQL. In my IPython notebook, before I deploy my app, I can use the following statement to connect to my cloud instance using Google SDK:
cloud_sql_proxy.exe -instances="project_name:us-east1:instance_name"=tcp:3306
After entering the above, I get a notification in Google Cloud Shell
"listening on 127.0.0.1:3306 for project_name:us-east1:instance_name
ready for new connections"
I then use my IPython notebook to test the connection:
host = '127.0.0.1'  # also changed to the IP address of my Google Cloud SQL instance
user = 'cloud_sql_user'
password = 'cloud_sql_password'

conn1 = pymysql.connect(host=host,
                        user=user,
                        password=password,
                        db='mydb')
cur1 = conn1.cursor()
Local test results: Can connect to Cloud SQL from IPython and query cloud database. Next step: deploy
gcloud app deploy
Result: App Deployed. However, upon navigating to my website and typing in names into the input field, it takes me to a new URL and I get the error:
OperationalError at /search/
(2003, "Can't connect to MySQL server on '127.0.0.1' (timed out)")
My main questions are:
How can I get PyMySQL to query the cloud database after deployment?
Do I need Gunicorn if I'm using Windows and need to connect to their cloud database?
Is SQLAlchemy needed for me? I'm not using an ORM. The online instructions aren't really that clear. My local host computer is on Windows 7, Python 3 and Django.
Edit: I edited the file based on the suggestion by the user below. I still get the error 'connection timed out'
Found it. Change the socket in your PyMySQL connection to unix_socket = "your Cloud SQL instance connection name". Let host be 'localhost', user = 'your Cloud SQL username', and password = 'your Cloud SQL password'.
Edit: don't forget the /cloudsql/ prefix in front of the connection string name.
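A sketch of what that connection ends up looking like, reusing the question's credentials and connection-string name (adjust to your own project). On App Engine, the Cloud SQL proxy exposes a Unix socket under /cloudsql/ rather than a TCP host:

```python
# Hypothetical instance connection name from the question:
CONNECTION_NAME = "project_name:us-east1:instance_name"

conn_kwargs = dict(
    unix_socket="/cloudsql/" + CONNECTION_NAME,
    user="cloud_sql_user",
    password="cloud_sql_password",
    db="mydb",
)
print(conn_kwargs["unix_socket"])  # /cloudsql/project_name:us-east1:instance_name

# On App Engine you would then connect with:
# conn = pymysql.connect(**conn_kwargs)
```

Locally (through cloud_sql_proxy) you keep host='127.0.0.1' instead of unix_socket; only the deployed app uses the socket path.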
This post is already a bit old, I hope you already solved this!
You check if you're in production like this :
if os.getenv('GAE_INSTANCE'):
In the documentation, they manage it this way :
if os.getenv('SERVER_SOFTWARE', '').startswith('Google App Engine'):
I think that because this condition is wrong, you are overwriting the DATABASES['default']['HOST'] value with '127.0.0.1' even in production.
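To make the direction of the override concrete, here is a settings.py sketch along those lines (the connection name and credentials are placeholders taken from the question, not real values):

```python
import os

# Production default: the Cloud SQL socket path (hypothetical connection name).
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "mydb",
        "USER": "cloud_sql_user",
        "PASSWORD": "cloud_sql_password",
        "HOST": "/cloudsql/project_name:us-east1:instance_name",
    }
}

if not os.getenv("GAE_INSTANCE"):
    # Not running on App Engine: talk to the local cloud_sql_proxy over TCP.
    DATABASES["default"]["HOST"] = "127.0.0.1"
```

The key point is that the production value is the default and the local proxy host is the exception, so a wrong environment check cannot leave 127.0.0.1 in place after deployment.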
Hope this will be the answer you were looking for!
I am trying to write unittests for my Flask API endpoints. I want the test cases to connect to the dev server which is running on a different port 5555.
Here is what I am doing in setUp() to make a test_client.
import flask_app

flask_app.app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+mysqldb://root:@localhost/mvp_test_db'
flask_app.app.config['TESTING'] = True
flask_app.app.config['SERVER_NAME'] = '192.168.2.2'  # the IP of the dev server
flask_app.app.config['SERVER_PORT'] = 5555
self.app_client = flask_app.app.test_client()
Then when I make a request using app_client like -
r = self.app_client.post('/API/v1/dummy_api', data = {'user_id' : 1})
I get a 404 when I print r and the request never comes to the dev server (no logs printed). I am not able to inspect the URL to which the connection is being attempted above. Any ideas?
app.test_client() does not send requests through network interfaces; all requests are simulated and processed inside Werkzeug's routing system, so SERVER_NAME and SERVER_PORT will not make it talk to a dev server on another port. To have the '/API/v1/dummy_api' URL handled, you need to register a view for it on the app, and make sure the module that registers it is imported. The application settings and the test settings are otherwise almost identical.
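A minimal sketch of that point, using a made-up Flask app (route and payload are hypothetical): a view registered on the app answers through the test client with no server listening anywhere, and anything not registered gets the 404 you are seeing.

```python
from flask import Flask

app = Flask(__name__)

@app.route("/API/v1/dummy_api", methods=["POST"])
def dummy_api():
    # Handled entirely in-process by Werkzeug's routing.
    return "ok"

client = app.test_client()

r = client.post("/API/v1/dummy_api", data={"user_id": 1})
print(r.status_code)  # 200, even though no server is running

r404 = client.post("/API/v1/not_registered", data={})
print(r404.status_code)  # 404, the route is unknown to this app object
```

If you genuinely want to exercise the dev server on port 5555 over the network, use an HTTP client such as requests against http://192.168.2.2:5555 instead of test_client().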
I've been trying to set up a django contact form on a simple blogging application which is currently hosted on Google cloud platform.
The app seems to work locally, it sends an email out and redirects the end user to a completed page, however when I push it to the production server it tries to send an email for around 30s then times out and I get redirected to a 404.
I've checked my nginx error and access logs as well as my Gunicorn log, and it seems that a Gunicorn worker times out after 30 seconds, hence the 404. Initially I thought this happens because port 587 is locked on the Google Cloud network; however, even when I open said port it still fails.
My settings.py:
EMAIL_HOST = "send.one.com"
EMAIL_PORT = 587
EMAIL_HOST_USER = "postman@email.co.uk"
EMAIL_HOST_PASSWORD = "password"
DEFAULT_FROM_EMAIL = "postman@email.co.uk"
SERVER_EMAIL = "postman@email.co.uk"
I've tried to run it through TLS by using
EMAIL_USE_TLS = True
But no success.
Has anyone encountered this problem before?
App Engine blocks this kind of connection, because it opens an outbound socket.
You have 2 options if you want to send mails on Google App Engine:
Directly use the Mail API provided by Google App Engine. It is a set of APIs that handles mailing on App Engine; for reference you can look here: Google App Engine Mail API for Python.
Use a Django wrapper for the GAE Mail API, like the one built into rocket_engine, or a standalone one.
rocket engine
standalone email backend for Django on GAE
If you go this way, you can send email the same way Django normally does; it's more like a plugin installed into your Django project, and your settings will then work without any change.