Flask and neomodel: ModelDefinitionMismatch

I am getting a neomodel.exceptions.ModelDefinitionMismatch while trying to build a simple Flask app connected to the bolt port of Neo4j.
import json

from flask import Flask, jsonify, request
from neomodel import StringProperty, StructuredNode, config


class Item(StructuredNode):
    __primarykey__ = "name"

    name = StringProperty(unique_index=True)


def create_app():
    app = Flask(__name__)
    config.DATABASE_URL = 'bolt://neo4j:test@db:7687'

    @app.route('/', methods=['GET'])
    def get_all_items():
        return jsonify({'items': [item.name for item in Item.nodes]})

    @app.route('/', methods=['POST'])
    def create_item():
        item = Item()
        item.name = json.loads(request.data)['name']
        item.save()
        return jsonify({'item': item.__dict__})

    return app
Doing a POST request works; I can check in the database that the items actually are added! But the GET request returns:
neomodel.exceptions.ModelDefinitionMismatch: Node with labels Item does not resolve to any of the known objects
I'm using Python 3.7.0, Flask 1.0.2 and neomodel 3.3.0.
Update
To give the full context: I run the application in a Docker container with docker-compose, in debug mode.
The Dockerfile:
FROM continuumio/miniconda3
COPY . /app
RUN pip install Flask gunicorn neomodel==3.3.0
EXPOSE 8000
CMD gunicorn --bind=0.0.0.0:8000 "app.app:create_app()"
The docker-compose file:
# for local development
version: '3'

services:
  db:
    image: neo4j:latest
    environment:
      NEO4J_AUTH: neo4j/test
    networks:
      - neo4j_db
    ports:
      - '7474:7474'
      - '7687:7687'

  flask_svc:
    build: .
    depends_on:
      - 'db'
    entrypoint:
      - flask
      - run
      - --host=0.0.0.0
    environment:
      FLASK_DEBUG: 1
      FLASK_APP: app.app.py
    ports:
      - '5000:5000'
    volumes:
      - '.:/app'
    networks:
      - neo4j_db

networks:
  neo4j_db:
    driver: bridge
And I run it with:
docker-compose up --build -d

Try using neomodel==3.2.9.
I had a similar issue and rolled back the version to get it to work.
Here's the commit that broke things.
It looks like that commit introduced _NODE_CLASS_REGISTRY on neomodel.Database, an object which is meant to be a singleton. But with Flask it is not necessarily a singleton, because Flask keeps instantiating new Database instances with an empty _NODE_CLASS_REGISTRY.
I am not sure how to get this to work with 3.3.0.
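If you want to confirm that this is what is happening in your app, here is a minimal debugging sketch. It leans on the neomodel 3.3.x internals described above (the module-level db object and its private _NODE_CLASS_REGISTRY attribute) and a hypothetical import path matching the question's layout, so treat it as diagnostic only:

# Hedged debugging sketch, assuming neomodel 3.3.x internals.
# neomodel exposes a module-level Database instance as `db`; in 3.3.0 the
# label resolver consults db._NODE_CLASS_REGISTRY when inflating nodes.
# If Item is missing from the registry at request time, inflation fails
# with ModelDefinitionMismatch.
from neomodel import db
from app.app import Item  # hypothetical import path, matching the question

print(db._NODE_CLASS_REGISTRY)  # expect something like {frozenset({'Item'}): <class 'Item'>}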

Related

When I run the Docker image I'm unable to find the content in the browser

I've built a Docker image using the command docker build -t flask-app-testing . and later run it with docker run --name test-flask -p 5060:5060 flask-app-testing, using port 5060. It generates a URL; if I click it, the browser should show the content from the Flask app, but instead of the desired page it shows:
This site can't be reached. The webpage at http://localhost:5060/ might be temporarily down or it may have moved permanently to a new web address.
ERR_UNSAFE_PORT
The app.py file is
import uuid

from flask import Flask

instanceId = uuid.uuid4().hex
app = Flask(__name__)


@app.route("/")
def get_instance_id():
    return f"instance {instanceId}"


if __name__ == "__main__":
    app.run(port=5060, host="0.0.0.0")
The Dockerfile is:
FROM python:3.9.12
WORKDIR /app
COPY . .
RUN pip3 install -r requirements.txt
CMD ["gunicorn", "--bind=0.0.0.0:5060","app:app"]
requirements.txt:
flask
gunicorn
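For what it's worth, ERR_UNSAFE_PORT is a browser-side refusal rather than a container problem: 5060 is the SIP port and is on Chrome's unsafe-port list, so Chrome blocks pages served there even when the container is healthy. A hedged sketch of a workaround, assuming you simply move to a port outside that list (8080 here is an arbitrary choice):

# Hedged sketch: serve on (or map to) a port the browser allows, e.g. 8080.
# With Docker you could keep the container port and remap the host side:
#   docker run --name test-flask -p 8080:5060 flask-app-testing
# and browse to http://localhost:8080/ instead. When running app.py directly,
# the equivalent change is:
if __name__ == "__main__":
    app.run(port=8080, host="0.0.0.0")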

ConnectionError: HTTPConnectionPool(host='172.30.0.3', port=5051): Max retries exceeded with url for flask docker

I'm a beginner in Docker and Flask and am trying to build an arithmetic microservice. I'm trying to build separate Flask applications for addition, subtraction, etc., and communicate with them from my landing service. For now I am only trying to implement the addition service.
Landing service code:
import requests
from flask import Flask, flash, render_template, request

app = Flask(__name__)
app.secret_key = 'dev'  # assumed; flash() requires a secret key


@app.route('/', methods=['POST', 'GET'])
def index():
    # Only process the code if the Submit button has been clicked.
    if request.form.get("submit_button") is not None:
        number_1 = int(request.form.get("first"))
        number_2 = int(request.form.get('second'))
        operation = request.form.get('operation')
        result = 0
        if operation == 'add':
            # This is the call that fails under docker-compose.
            response = requests.get(f"http://localhost:5051/?operator_1={number_1}&operator_2={number_2}")
            result = response.text
        flash(f'The result of operation {operation} on {number_1} and {number_2} is {result}')
    return render_template('index.html')


if __name__ == '__main__':
    app.run(
        debug=True,
        port=5050,
        host="0.0.0.0"
    )
Addition service code:
from flask import Flask, request
from flask_restful import Api, Resource  # assumed: Resource/Api come from Flask-RESTful

app = Flask(__name__)
api = Api(app)


class Addition(Resource):
    def get(self):
        if request.args.get('operator_1') is not None and request.args.get('operator_2') is not None:
            operator_1 = int(request.args.get('operator_1'))
            operator_2 = int(request.args.get('operator_2'))
            return operator_1 + operator_2


api.add_resource(Addition, '/')

if __name__ == '__main__':
    app.run(debug=True,
            port=5051,
            host="0.0.0.0")
Docker compose file
version: '3.3'  # version of compose format

services:
  landing-service:
    build: ./landing  # path is relative to docker-compose.yml location
    hostname: landing-service
    ports:
      - 5050:5050  # host:container
    networks:
      sample:
        aliases:
          - landing-service

  addition-service:
    build: ./addition  # path is relative to docker-compose.yml location
    hostname: addition-service
    ports:
      - 5051:5051  # host:container
    networks:
      sample:
        aliases:
          - addition-service

networks:
  sample:
I am getting requests.exceptions.ConnectionError: HTTPConnectionPool(host='172.30.0.3', port=5051): Max retries exceeded with url when I use docker-compose up. The applications work when I run them from the command line, and my landing service is able to communicate with localhost:5051, but I am unable to get them to communicate with each other when I use docker-compose.
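For readers hitting the same wall: inside a Compose network, localhost refers to the calling container itself, so the landing service cannot reach the addition service that way; services on a shared network are reachable by service name or alias instead. A hedged sketch of the assumed fix in the landing service, reusing the addition-service alias declared in the compose file above:

# Hedged sketch: on the shared 'sample' network, Docker's embedded DNS
# resolves the alias 'addition-service' to the right container, whereas
# 'localhost' points back at the landing-service container itself.
response = requests.get(
    f"http://addition-service:5051/?operator_1={number_1}&operator_2={number_2}"
)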

How do I run redis on Travis CI?

I am practicing unit testing using Django.
In items/tests.py
class NewBookSaleTest(SetUpLogInMixin):
    def test_client_post_books(self):
        send_post_data_post = self.client.post(
            '/booksale/',
            data={
                'title': 'Book_A',
            }
        )
        new_post = ItemPost.objects.first()
        self.assertEqual(new_post.title, 'Book_A')
In views/booksale.py
class BookSale(LoginRequiredMixin, View):
    login_url = '/login/'

    def get(self, request):
        [...]

    def post(self, request):
        title = request.POST.get('title')
        saler = request.user
        created_bookpost = ItemPost.objects.create(
            user=saler,
            title=title,
        )
        # redis + celery task queue
        auto_indexing = UpdateIndexTask()
        auto_indexing.delay()
        return redirect(
            [...]
        )
When I run the unit tests, a Redis connection error is raised:
redis.exceptions.ConnectionError
I know that the error goes away when redis-server and Celery are running, but when the unit tests run in Travis CI I can't start redis-server and Celery there.
So I found this link and tried inserting this code into .travis.yml:
language: python

python:
  - 3.5.1

addons:
  postgresql: "9.5.1"

install:
  - pip install -r requirement/development.txt

service:
  - redis-server

# command to run tests
script:
  - pep8
  - python wef/manage.py makemigrations users items
  - python wef/manage.py migrate
  - python wef/manage.py collectstatic --settings=wef.settings.development --noinput
  - python wef/manage.py test users items --settings=wef.settings.development
but it shows the same error.
So I found another link suggesting:
before_script:
  - sudo redis-server /etc/redis/redis.conf --port 6379 --requirepass 'secret'
but it shows the same error...
How can I run redis-server in Travis CI?
If you have not solved the problem yet, here is a solution.
Remove the service line.
Redis is provided by the test environment as a default component, so
service:
  - redis-server
will be translated as:
service redis start
In this problem, we want to customize Redis to add password auth, so we don't need Travis CI to start the Redis service; just use before_script.
In the end, your .travis.yml should look like this:
language: python

python:
  - 3.5.1

addons:
  postgresql: "9.5.1"

install:
  - pip install -r requirement/development.txt

before_script:
  - sudo redis-server /etc/redis/redis.conf --port 6379 --requirepass 'secret'

# command to run tests
script:
  - pep8
  - python wef/manage.py makemigrations users items
  - python wef/manage.py migrate
  - python wef/manage.py collectstatic --settings=wef.settings.development --noinput
  - python wef/manage.py test users items --settings=wef.settings.development
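Once the build reaches the script phase, Redis should be listening on port 6379 with the password set. A quick sanity-check sketch (hedged: it assumes the redis Python client is installed in the build and reuses the 'secret' password from before_script):

# Hedged sanity check, assuming redis-py is installed in the Travis build:
import redis

r = redis.StrictRedis(host='localhost', port=6379, password='secret')
r.ping()  # raises redis.exceptions.ConnectionError if the server isn't reachable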

Docker Compose - adding volume using Amazon S3

I'm using docker-compose v2 to build my containers (Django and Nginx).
I'm wondering how to store the static and media files. At the beginning I stored them as a volume on the machine, but the machine crashed and I lost the data (or at least, I didn't know how to recover it).
I thought it would be better to store them on Amazon S3, but there are no guides for that (maybe that means something :) ).
This is my docker-compose file. I tried to add the needed fields (name, key, secret, ...) but with no success so far.
Is this the right way?
Thanks!
version: '2'

services:
  web:
    build:
      context: ./web/
      dockerfile: Dockerfile
    expose:
      - "8000"
    volumes:
      - ./web:/code
      - static-data:/www/static
      - media-data:/www/media
    env_file: devEnv

  nginx:
    build: ./nginx/
    ports:
      - "80:80"
    volumes:
      - static-data:/www/static
      - media-data:/www/media
    volumes_from:
      - web
    links:
      - web:web

volumes:
  static-data:
    driver: local
  media-data:
    driver: s3
Here is an example of how to upload files to S3 (for backup) from a container; it could be done from the host OS as well, since the container's volume is mounted on the host.
In this script I download the media from S3 to a local container/server. After that, I use pyinotify to watch the static/media directory for modifications. If any change occurs, the file is uploaded to S3 using the command subprocess.Popen(upload_command.split(" ")).
I think you can adapt this script to your problem too.
Before you test this script, you should set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the environment variables of the OS.
For more details, see the s4cmd documentation.
#!-*- coding:utf-8 -*-
import pyinotify
import os
import subprocess
from single_process import single_process

# You must have set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
# in environment variables
PROJECT_DIR = os.getcwd()
MEDIA_DIR = os.path.join(PROJECT_DIR, "static/media")
AWS_BUCKET_NAME = os.environ.get("AWS_BUCKET_NAME", '')

S4CMD_DOWNLOAD_MEDIA = "s4cmd get --sync-check --recursive s3://%s/static/media/ static/" % (AWS_BUCKET_NAME)
UPLOAD_FILE_TO_S3 = "s4cmd sync --sync-check %(absolute_file_dir)s s3://" + AWS_BUCKET_NAME + "/%(relative_file_dir)s"

# Download all media from S3
subprocess.Popen(S4CMD_DOWNLOAD_MEDIA.split(" ")).wait()


class ModificationsHandler(pyinotify.ProcessEvent):
    def process_IN_CLOSE_WRITE(self, event):
        try:
            dir = event.path
            file_name = event.name
            absolute_file_dir = os.path.join(dir, file_name)
            relative_dir = dir.replace(PROJECT_DIR, "")
            relative_file_dir = os.path.join(relative_dir, file_name)
            if relative_file_dir.startswith("/"):
                relative_file_dir = relative_file_dir[1:]
            print("\nSending file %s to S3" % absolute_file_dir)
            param = {}
            param.update(absolute_file_dir=absolute_file_dir)
            param.update(relative_file_dir=relative_file_dir)
            upload_command = UPLOAD_FILE_TO_S3 % param
            print(upload_command)
            subprocess.Popen(upload_command.split(" "))
        except Exception as e:
            # log exceptions
            print("Some problem:", e)


@single_process
def main():
    handler = ModificationsHandler()
    wm = pyinotify.WatchManager()
    notifier = pyinotify.Notifier(wm, handler)
    print("\nListening for changes in: " + MEDIA_DIR)
    if MEDIA_DIR:
        wdd = wm.add_watch(MEDIA_DIR, pyinotify.IN_CLOSE_WRITE, auto_add=True, rec=True)
    notifier.loop()


if __name__ == "__main__":
    main()

Django and Celery: Tasks not imported

I am using Django with Celery to run two tasks in the background related to contacts/email parsing.
Structure is:
project
    /api
    /core
        tasks.py
    settings.py
settings.py file contains:
BROKER_URL = 'django://'
BROKER_BACKEND = "djkombu.transport.DatabaseTransport"

# celery
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
BROKER_VHOST = "/"

sys.path.append(os.path.dirname(os.path.basename(__file__)))

CELERY_IMPORTS = ['project.core.tasks']

import djcelery
djcelery.setup_loader()

# ....

INSTALLED_APPS = (
    #...
    'kombu.transport.django',
    'djcelery',
)
tasks.py contains:
from celery.task import Task
from celery.registry import tasks


class ParseEmails(Task):
    #...


class ImportGMailContactsFromGoogleAccount(Task):
    #...


tasks.register(ParseEmails)
tasks.register(ImportGMailContactsFromGoogleAccount)
Also, I added this in wsgi.py:
os.environ["CELERY_LOADER"] = "django"
Now, I have this app hosted on a WebFaction server. On localhost this runs fine, but on the WebFaction server, where the Django app is deployed behind Apache, I get:
2013-01-23 17:25:00,067: ERROR/MainProcess] Task project.core.tasks.ImportGMailContactsFromGoogleAccount[df84e03f-9d22-44ed-a305-24c20407f87c] raised exception: Task of kind 'project.core.tasks.ImportGMailContactsFromGoogleAccount' is not registered, please make sure it's imported.
But the tasks show up as registered. If I run
python2.7 manage.py celeryd -l info
I obtain:
-------------- celery#web303.webfaction.com v3.0.13 (Chiastic Slide)
---- **** -----
--- * *** * -- [Configuration]
-- * - **** --- . broker: django://localhost//
- ** ---------- . app: default:0x1e55350 (djcelery.loaders.DjangoLoader)
- ** ---------- . concurrency: 8 (processes)
- ** ---------- . events: OFF (enable -E to monitor this worker)
- ** ----------
- *** --- * --- [Queues]
-- ******* ---- . celery: exchange:celery(direct) binding:celery
--- ***** -----
[Tasks]
. project.core.tasks.ImportGMailContactsFromGoogleAccount
. project.core.tasks.ParseEmails
I thought it could be a relative import error, but I assumed the changes in settings.py and wsgi.py would prevent that.
I am also thinking the multiple Python versions supported by WebFaction could have something to do with this; however, I installed all the libraries for Python 2.7 and I am running Django under 2.7 as well, so there should be no problem with that.
Running on localhost with celeryd -l info, the tasks also show up in the list when I start the worker, and it doesn't output the error when I call the task - it runs perfectly.
Thank you
I had the same issue in a new Ubuntu 12.04 / Apache / mod_wsgi / Django 1.5 / Celery 3.0.13 production environment. Everything works fine on my Mac OS X 10.8 laptop and my old server (which has Celery 3.0.12), but not on the new server.
It seems there is some issue in Celery:
https://github.com/celery/celery/issues/1150
My initial solution was changing my Task-class-based task to a @task-decorator-based one, from something like this:
class CreateInstancesTask(Task):
    def run(self, pk):
        management.call_command('create_instances', verbosity=0, pk=pk)

tasks.register(CreateInstancesTask)
to something like this:
@task()
def create_instances_task(pk):
    management.call_command('create_instances', verbosity=0, pk=pk)
Now this task seems to work, but of course I have to do some further testing...
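For completeness, a hedged usage sketch: the decorator-based task is queued the same way as the class-based one, through Celery's standard calling API (the pk value here is just an example):

# Assumed usage, via Celery's standard calling API:
create_instances_task.delay(42)  # enqueue asynchronously
create_instances_task.apply_async(args=[42], countdown=10)  # same, with options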