I'm very new to GCP and I've tried everything I can find to make this connection work; my app connects to any other Postgres service, but not to Cloud SQL.
I've started to think the problem is in my Flask code.
So far I have followed this guide:
https://cloud.google.com/sql/docs/postgres/connect-run#public-ip-default
and watched some Youtube Tutorials.
Maybe my problem is in the Cloud SQL Auth proxy.
My code looks like this:
import os
from os.path import join, dirname
from flask import Flask
from flask_cors import CORS
from flask_restful import Api
from flask_jwt_extended import JWTManager
from flask_bcrypt import Bcrypt
from flask_migrate import Migrate
from constants import ACTIVE_SCHEDULER
from db import db
from ma import ma
from resources.routes import initialize_routes, initialize_errors, initialize_cli
from sched import init_scheduler
app = Flask(__name__)
db_user = os.environ.get("DB_USER")
db_pass = os.environ.get("DB_PASS")
db_name = os.environ.get("DB_NAME")
unix_socket_path = os.environ.get("INSTANCE_UNIX_SOCKET")
db_url = 'postgresql+pg8000://{user}:{pw}@/{db}?unix_sock={path}/.s.PGSQL.5432'.format(user=db_user, pw=db_pass, db=db_name, path=unix_socket_path)
app.config["SQLALCHEMY_DATABASE_URI"] = db_url
app.config["SECRET_KEY"] = os.environ.get('JWT_SECRET_KEY', '')
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
app.config["PROPAGATE_EXCEPTIONS"] = True
and the error that I always get is:
"error": "(pg8000.exceptions.InterfaceError) Can't create a connection to host 172.17.0.1 and port 5432 (timeout is None and source_address is None). (Background on this error at: https://sqlalche.me/e/14/rvf5)"
I'm out of ideas, and I hope someone knows a guide or a solution to my problem. Maybe I'm missing some access for the service account, but I think I followed the instructions correctly. I even tried to connect privately following the TCP instructions, and the result is almost the same.
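A minimal sketch of an alternative connection path using the Cloud SQL Python Connector, in case the unix-socket URL keeps failing (the instance connection name "project:region:instance" is a placeholder, not from the original setup):

from google.cloud.sql.connector import Connector

connector = Connector()

def getconn():
    # "project:region:instance" is a placeholder instance connection name
    return connector.connect(
        "project:region:instance",
        "pg8000",
        user=db_user,
        password=db_pass,
        db=db_name,
    )

app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql+pg8000://"
app.config["SQLALCHEMY_ENGINE_OPTIONS"] = {"creator": getconn}

With this approach the connector handles the socket/TLS details, so the URL no longer needs the unix_sock query parameter at all.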
I'm using Flask in a project on an embedded system and I'm having performance issues. I'm running gunicorn with one eventlet worker, started with:
gunicorn -b 0.0.0.0 --worker-class eventlet -w 1 'app:create_app()'
The problem I'm facing is that, when the MQTT messages start to arrive more frequently, the application uses almost all the CPU I have available. My initial thought was that I wasn't handling the messages efficiently, but even after removing my handler and simply receiving the messages, the problem persists.
I have another Python application that subscribes to the same information with the paho client, and it doesn't have this issue, so I'm assuming I'm missing something in my Flask application and the problem is not the data itself.
My code is:
import eventlet
eventlet.monkey_patch()  # patch the standard library before anything else uses it

import logging
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_login import LoginManager, current_user
from flask_socketio import SocketIO
from flask_mqtt import Mqtt

# logger used by the MQTT callbacks below
loggerMqtt = logging.getLogger(__name__)
# USERS DB
db_alchemy = SQLAlchemy()

# socketIO
socketio = SocketIO(cors_allowed_origins="*", async_mode='eventlet')

# MQTT
mqtt_client = Mqtt()
'''
APPLICATION CREATION
'''
def create_app():
    app = Flask(__name__)
    if app.config["ENV"] == "production":
        app.config.from_object("config.ProductionConfig")
    else:
        app.config.from_object("config.DevelopmentConfig")

    # USERS DB
    db_alchemy.init_app(app)

    # LoginManager
    login_manager = LoginManager()
    login_manager.login_view = "auth.login"
    login_manager.init_app(app)

    # SOCKETIO
    socketio.init_app(app)

    # FLASK-MQTT
    app.config['MQTT_BROKER_URL'] = 'localhost'
    app.config['MQTT_BROKER_PORT'] = 1883
    app.config['MQTT_KEEPALIVE'] = 20
    app.config['MQTT_TLS_ENABLED'] = False
    mqtt_client.init_app(app)

    return app
# MQTT
@mqtt_client.on_connect()
def mqtt_on_connect():
    mqtt_client.subscribe('testTopic/#', 0)

@mqtt_client.on_disconnect()
def mqtt_on_disconnect():
    loggerMqtt.warning(' > Disconnected from broker')

@mqtt_client.on_subscribe()
def mqtt_on_subscribe(client, obj, mid, granted_qos):
    pass

@mqtt_client.on_message()
def mqtt_on_message(client, userdata, message):
    pass
    # mqtt_topicSplitter(client, userdata, message)
As you can see, my handler mqtt_topicSplitter is commented out, but I'm still having performance issues. I've tried adding a sleep call (eventlet.sleep(0.1)) in the on_message handler, which solved the CPU consumption problem but resulted in my application constantly being kicked from the broker.
I also tried using other workers (gevent, asyncio, ...) without success. Using the Flask development server is not an option, since it is not recommended for production.
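One pattern that might help (a sketch under assumptions, not code from the original post: the queue hand-off and the background task are mine) is to keep on_message trivially cheap and do the real work in an eventlet-friendly background task, so the paho network loop is never blocked:

from eventlet.queue import Queue

message_queue = Queue()

@mqtt_client.on_message()
def mqtt_on_message(client, userdata, message):
    # just enqueue; never block the paho network loop
    message_queue.put((message.topic, message.payload))

def process_messages():
    while True:
        topic, payload = message_queue.get()  # green-blocks, yielding to other greenlets
        # real handling would go here, e.g. a call like mqtt_topicSplitter(topic, payload)

socketio.start_background_task(process_messages)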
I'm sorry if I wasn't clear; I'm not an expert, so please feel free to ask me any questions.
Thanks in advance.
I am not able to open the URL http://127.0.0.1:5000/movie from the browser.
Every time it gives a 404. The only time it worked was with the URL from the hello-world example.
I am trying to run a recommender system deployment solution with flask, based on https://medium.com/analytics-vidhya/build-a-movie-recommendation-flask-based-deployment-8e2970f1f5f1.
I have tried uninstalling Flask and installing it again, but nothing seems to work.
Thank you!
from flask import Flask, request, jsonify
from flask_cors import CORS
import recommendation

app = Flask(__name__)
CORS(app)

@app.route('/movie', methods=['GET'])
def recommend_movies():
    res = recommendation.results(request.args.get('title'))
    return jsonify(res)

if __name__ == '__main__':
    app.run(port=5000, debug=True)
Use http://127.0.0.1:5000/movie and be careful not to write a '/' after movie, like http://127.0.0.1:5000/movie/. The route is defined without a trailing slash, so Flask returns a 404 for the trailing-slash variant instead of redirecting.
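If you want both forms to work, Werkzeug's strict_slashes option can be turned off per route (a small sketch, not from the original code):

@app.route('/movie', methods=['GET'], strict_slashes=False)
def recommend_movies():
    res = recommendation.results(request.args.get('title'))
    return jsonify(res)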
The Django app runs on the local server, but does not work on the remote one.
The remote server has no GUI and does not present the user with an authorization link; it prints the link to the console instead.
from __future__ import print_function
from apiclient import discovery
from httplib2 import Http
from oauth2client import file, client, tools
import datetime
import os
import json
SCOPES = 'https://www.googleapis.com/auth/calendar'
from .models import Aim
try:
    import argparse
    flags = tools.argparser.parse_args([])
except ImportError:
    flags = None

def calendar_authorization(username):
    store = open('app/static/secret_data/' + username + '.json', 'w')
    store.close()
    store = file.Storage('app/static/secret_data/' + username + '.json')
    creds = store.get()
    if not creds or creds.invalid:
        flow = client.flow_from_clientsecrets('app/client_secret.json', SCOPES)
        flags.noauth_local_webserver = True
        print("________flow_______")
        print(flow.__dict__)
        creds = tools.run_flow(flow, store, flags)
        print("________creds_______")
        print(creds.__dict__)
In the local version I use client_secret.json, obtained from the OAuth 2.0 client IDs. I suspect that I may have the wrong settings there.
I also found information recommending service account keys (which I don't use now), but I didn't find a good setup guide for them.
How do I set a service account up and use it in the code for authorization? I did not understand how the service account key is used in the code.
What could be wrong?
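For what it's worth, a minimal sketch of service-account authorization with the same oauth2client library the code already uses (the key file path is a placeholder, not from the original project); note that a service account only sees its own calendar unless the target calendar is shared with the service account's email or domain-wide delegation is configured:

from httplib2 import Http
from oauth2client.service_account import ServiceAccountCredentials
from apiclient import discovery

SCOPES = 'https://www.googleapis.com/auth/calendar'

# placeholder path to the JSON key downloaded for the service account
creds = ServiceAccountCredentials.from_json_keyfile_name(
    'app/service_account_key.json', SCOPES)
service = discovery.build('calendar', 'v3', http=creds.authorize(Http()))
events = service.events().list(calendarId='primary').execute()

The advantage on a headless server is that no interactive browser flow is needed at all.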
I am trying to deploy a machine learning model on an AWS EC2 instance using Flask. These are sklearn fitted Random Forest models, pickled using joblib. When I host Flask on localhost and load them into memory, everything runs smoothly. However, when I deploy on the Apache2 server using mod_wsgi, joblib works only sometimes (i.e. the models sometimes load) and the other times the server just hangs. There is no error in the logs. Any ideas would be appreciated.
Here is the relevant code that I am using:
from flask import Flask, jsonify, request, render_template
from datetime import datetime
from sklearn.externals import joblib
import pickle as pkl
import os

app = Flask(__name__, template_folder="/home/ubuntu/flaskapp/")

log = lambda msg: app.logger.info(msg, extra={'worker_id': "request.uuid"})

# Logger
import logging
handler = logging.FileHandler('/home/ubuntu/app.log')
handler.setLevel(logging.ERROR)
app.logger.addHandler(handler)

@app.route('/')
def host_template():
    return render_template('Static_GUI.html')

def load_models(path):
    model_arr = [0] * len(os.listdir(path))
    for filename in os.listdir(path):
        f = open(path + "/" + filename, 'rb')
        model_arr[int(filename[2:])] = joblib.load(f)
        print("Classifier ", filename[2:], " added.")
        f.close()
    return model_arr

partition_limit = 30

print("Dictionaries being loaded.")
dict_file_path = "/home/ubuntu/Dictionaries/VARR"
dictionaries = pkl.load(open(dict_file_path, "rb"))
print("Dictionaries Loaded.")

print("Begin loading classifiers.")
model_path = "/home/ubuntu/RF_Models/"
classifier_arr = load_models(model_path)
print("Classifiers Loaded.")

if __name__ == '__main__':
    log("/home/ubuntu/print.log")
    print("Starting API")
    app.run(debug=True)
I was stuck with this for quite some time. Posting the answer in case someone runs into this problem. Using print statements and looking at the logs, I narrowed the problem down to the joblib.load statement. I found this helpful blog post: http://blog.rtwilson.com/how-to-fix-flask-wsgi-webapp-hanging-when-importing-a-module-such-as-numpy-or-matplotlib
The idea of using a global application group fixed the problem. That forces the app to run in the main interpreter, just as the top comment on that blog page mentions.
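For reference, a sketch of the relevant directives in the Apache site configuration (the daemon process name and paths are placeholders, not from the original setup):

WSGIDaemonProcess flaskapp python-home=/home/ubuntu/venv
WSGIProcessGroup flaskapp
WSGIApplicationGroup %{GLOBAL}

WSGIApplicationGroup %{GLOBAL} runs the application in the main Python interpreter, which avoids the hang that some C extensions (like numpy inside sklearn) hit in mod_wsgi sub-interpreters.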
I want to create a real-time Twitter streaming application using Tornado and Django. The problem is that I am not able to understand the role of Tornado here, and how I would use Django's views.py and models.py in the Tornado web server.
Below is the request-response cycle of Django; could anybody explain to me how the Tornado web server plays its role here?
A few questions:
1- What will be the role of the urls.py file in Django, since we will be routing all the URLs from Tornado itself?
2- How will I connect to models.py to fetch rows for my Tornado application?
I am looking into this github project link
Tornado fits roughly in the "web server" and "wsgi" parts of this diagram, and adds another section for Tornado RequestHandlers attached to the web server. When you create your tornado.web.Application, you will send some URLs to Tornado RequestHandlers and some to the Django WSGIContainer (which will in turn use the Django urls.py).
Using Django models from Tornado code is more challenging; my code from the last time I tried doing this is at https://gist.github.com/bdarnell/654157 (but note that this is quite old, and I don't know whether it still works).
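A minimal sketch of that URL split (the handler and paths are illustrative, not from the gist):

import tornado.web
import tornado.wsgi
from django.core.wsgi import get_wsgi_application

class StreamHandler(tornado.web.RequestHandler):
    # hypothetical Tornado-native handler for the real-time endpoints
    def get(self):
        self.write("streaming endpoint")

# wrap the regular Django WSGI app so Tornado can serve everything else,
# letting Django's urls.py resolve those URLs as usual
wsgi_app = tornado.wsgi.WSGIContainer(get_wsgi_application())

application = tornado.web.Application([
    (r'/stream/.*', StreamHandler),
    (r'.*', tornado.web.FallbackHandler, dict(fallback=wsgi_app)),
])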
This is tornado_main.py, stored at the same level as manage.py ... I've tested it with Django 1.8 ...
# coding=utf-8
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "django_project_dir.settings")
import django
django.setup()
from django.core.urlresolvers import reverse_lazy
from django.contrib.auth.models import User
from tornado.options import options, define, parse_command_line
import logging
import tornado.httpserver
import tornado.ioloop
import tornado.web
import tornado.websocket
import tornado.wsgi

define('port', type=int, default=8004)
# tornado.options.options['log_file_prefix'].set(
#     '/var/www/myapp/logs/tornado_server.log')
tornado.options.parse_command_line()

class SomeHandler(tornado.websocket.WebSocketHandler):
    pass

def main():
    logger = logging.getLogger(__name__)
    tornado_app = tornado.web.Application(
        [
            (r'/some_url/(?P<user_id>[0-9]+)', SomeHandler),
        ],
        debug=True
    )
    logger.info("Tornado server starting...")
    server = tornado.httpserver.HTTPServer(tornado_app)
    server.listen(options.port)
    try:
        tornado.ioloop.IOLoop.instance().start()
    except KeyboardInterrupt:
        tornado.ioloop.IOLoop.instance().stop()
        logger.info("Tornado server has stopped")

if __name__ == '__main__':
    main()