Problem running okta-jwt-verifier in a Docker container - Flask

I am writing a Flask API where token verification is done via the okta-jwt-verifier package.
I have this code to verify tokens:
import asyncio

from flask import g
from okta_jwt_verifier import AccessTokenVerifier, JWTVerifier

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)


def is_access_token_valid(token, issuer, client_id):
    jwt_verifier = JWTVerifier(issuer=issuer, client_id=client_id,
                               audience='api://default', leeway=60)
    try:
        verified_token = jwt_verifier.verify_access_token(token)
        parsed = jwt_verifier.parse_token(token)
        g.decoded_token = parsed
        loop.run_until_complete(verified_token)
        return True
    except Exception:
        print("Exception")
        return False
It works great when I run this on my machine, but when I run it inside a Docker container (still on my machine; my docker-compose.yml has two services, flask and db (PostgreSQL)), the process fails at loop.run_until_complete(verified_token). I am not sure how to work around that issue. Please help if you have any ideas! Thanks in advance!
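One variant I am experimenting with, shown here only as a minimal sketch (it assumes the root cause is the module-level loop being created at import time, before the container's worker process is ready), lets asyncio.run() create and close a fresh loop inside the function instead:

import asyncio

from flask import g
from okta_jwt_verifier import JWTVerifier


def is_access_token_valid(token, issuer, client_id):
    jwt_verifier = JWTVerifier(issuer=issuer, client_id=client_id,
                               audience='api://default', leeway=60)
    try:
        # asyncio.run() creates a fresh event loop for this call and closes it
        # afterwards, so no loop has to survive across forks or threads.
        asyncio.run(jwt_verifier.verify_access_token(token))
        g.decoded_token = jwt_verifier.parse_token(token)
        return True
    except Exception:
        print("Exception")
        return False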

Related

Error in event loop with Flask, gremlin python and uWSGI

I'm running into a problem when using Flask with a gremlin database (it's an Amazon Neptune database) and using uWSGI. Everything works fine in my unit tests which use the test_client provided by Flask. However, in production we use uWSGI and there I get the following error:
There is no current event loop in thread 'uWSGIWorker4Core1'.
My app code is creating a connection to the database before a request and assigning it to the Flask g object. During teardown, the database connection is removed. The error happens when the app is trying to close the connection.
from flask import Flask, g
from gremlin_python.structure.graph import Graph
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal

app = Flask(__name__, instance_relative_config=True)


@app.before_request
def _db_connect():
    if not hasattr(g, 'graph_conn'):
        g.graph_conn = DriverRemoteConnection(app.config['DATABASE_HOST'], 'g')
        g.gg = traversal().withRemote(g.graph_conn)


# This hook ensures that the connection is closed when we've finished
# processing the request.
@app.teardown_appcontext
def _db_close(exc):
    if hasattr(g, 'graph_conn'):
        g.graph_conn.close()  # <- ERROR THROWN AT THIS LINE
        del g.graph_conn
The uWSGI config does use multiple threads:
[uwsgi]
http = 0.0.0.0:3031
manage-script-name = true
module = dogmaserver:app
processes = 4
threads = 2
offload-threads = 2
stats = 0.0.0.0:9191
But my understanding of how Flask's g object works was that it all stays on the same thread. Can anyone let me know what I'm missing?
I'm using Flask 1.0.2, gremlinpython 3.4.11 and uWSGI 2.0.17.1.
I used a workaround: removing the threads configuration option in uWSGI, which leaves only a single thread per process.
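For reference, this is what the config above looks like with that workaround applied (only the threads line removed, everything else unchanged):

[uwsgi]
http = 0.0.0.0:3031
manage-script-name = true
module = dogmaserver:app
processes = 4
offload-threads = 2
stats = 0.0.0.0:9191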

Celery's pytest fixtures (celery_worker and celery_app) does not work

I'm trying to write a Celery (v4.2.1) integration test for my Django (v2.2.3) application.
There are a bunch of outdated articles about this around, but none of them seem to use anything from the latest Celery testing documentation - https://docs.celeryproject.org/en/v4.2.1/userguide/testing.html#fixtures
It seems like Celery comes with two fixtures for testing, celery_app and celery_worker, which should let you actually run a worker in a background thread of your tests.
As the docs say, I've added
@pytest.fixture(scope='session')
def celery_config():
    return {
        'broker_url': 'memory://',
        'result_backend': 'rpc'
    }
into my conftest.py
I've wrapped my test function with
@pytest.mark.celery_app
@pytest.mark.celery_worker
Usually I wrap my celery tasks with
from celery import shared_task


@shared_task
def blabla(...):
    pass
but I've even tried to replace it with
from myapp.celery import celery


@celery.task
def blabla():
    pass
What else... I run my celery task via apply_async, providing an eta argument.
I've tried tons of ways, but the celery fixtures do not affect how things work: the task call goes to the actual Redis instance and is picked up by a worker in a separate process (if I run one), and hence my assert_called fails, along with my attempts to access objects that are in the testing database.
This way it does not load fixtures.
This way it does not use the specified fixtures, because they have to appear in the method arguments, and that breaks the test by exceeding the number of arguments.
I thought that the Celery pytest plugin might be missing altogether, but that's not true - I tried to register it explicitly:
Though the fixtures are available to pytest:
But I've dug into their source code, added some wild prints there, and I don't see them logged.
OP here, I've figured it out and wrote an article - https://medium.com/@scythargon/how-to-use-celery-pytest-fixtures-for-celery-intergration-testing-6d61c91775d9
Main key:
@pytest.mark.usefixtures('celery_session_app')
@pytest.mark.usefixtures('celery_session_worker')
class MyTest:
    def test(self):
        assert mul.delay(4, 4).get(timeout=10) == 16
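(For context, mul above is assumed to be an ordinary shared task along these lines; its exact definition isn't in the post, so treat this as a placeholder:)

from celery import shared_task


@shared_task
def mul(x, y):
    # placeholder task used by the test above
    return x * y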
For how many developers use these tools, I'm surprised at how lacking the docs are on this topic. I struggled with this for about half a day and then found @scythargon's discussion. I solved it slightly differently, so I'm throwing my answer into the mix for posterity (it's very close to the OP's method):
tasks.py
from celery import shared_task


@shared_task
def add(x, y):
    return x + y


@shared_task()
def multiply(x, y):
    return x * y
conftest.py
import pytest

pytest_plugins = ('celery.contrib.pytest', )


@pytest.fixture(scope='session')
def celery_config():
    return {
        'broker_url': 'redis://localhost:8001',
        'result_backend': 'redis://localhost:8001'
    }
tests.py
from api.app.tasks import add, multiply


def test_celery_worker_initializes(celery_app, celery_worker):
    assert True


def test_celery_tasks(celery_app, celery_worker):
    assert add.delay(4, 4).get(timeout=5) == 8
    assert multiply.delay(4, 4).get(timeout=5) == 16
As an added bonus, my redis broker and backend are running in Docker (as part of a swarm):
docker-compose.yml
version: "3.9"
services:
. . .
redis:
image: redis:alpine
networks:
swarm-net:
aliases:
- redis
ports:
- "8001:6379"
. . .

Django celery crontab not working when CELERY_TIMEZONE='Asia/Calcutta'

I am trying to schedule a task in Django using Celery. Everything works fine when CELERY_TIMEZONE='UTC' but it doesn't work when I change it to CELERY_TIMEZONE='Asia/Calcutta'.
# settings.py
CELERY_TIMEZONE = 'UTC'
CELERY_ENABLE_UTC = True

# tasks.py
@periodic_task(run_every=crontab(day_of_month="1-31", hour=6, minute=8), name="newtask1")
def elast():
    print "test"
This works just fine but when I change my settings to
CELERY_TIMEZONE = 'Asia/Calcutta'
CELERY_ENABLE_UTC = False

# tasks.py
@periodic_task(run_every=crontab(day_of_month="1-31", hour=11, minute=38), name="newtask1")
def elast():
    print "test"
This doesn't work. I can't seem to figure out the issue. Am I missing something? Any help would be appreciated.
Configure Celery to use a custom time zone. The timezone value can be any time zone supported by the pytz library.
Kindly refer to the Celery reference guide.
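A minimal sketch of what that might look like in the newer app-based configuration style (the project name 'proj' and the dotted task path 'myapp.tasks.elast' are placeholders, and the beat_schedule entry stands in for the old periodic_task decorator):

# celery.py (sketch)
from celery import Celery
from celery.schedules import crontab

app = Celery('proj')                     # 'proj' is a placeholder project name
app.conf.timezone = 'Asia/Calcutta'      # any zone supported by pytz works here
app.conf.enable_utc = False

app.conf.beat_schedule = {
    'newtask1': {
        'task': 'myapp.tasks.elast',     # placeholder dotted path to the task
        'schedule': crontab(day_of_month='1-31', hour=11, minute=38),
    },
}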

How to call task properly?

I configured django-celery in my application. This is my task:
import logging

from celery.decorators import task
import simplejson as json
import requests

logger = logging.getLogger(__name__)  # module-level logger used at the end of the task


@task
def call_api(sid):
    try:
        results = requests.put(
            'http://localhost:8000/api/v1/sids/' + str(sid) + "/",
            data={'active': '1'}
        )
        json_response = json.loads(results.text)
    except Exception, e:
        print e
    logger.info('Finished call_api')
When I add in my view:
call_api.apply_async(
    (instance.service.id,),
    eta=instance.date
)
celeryd shows me:
Got task from broker: my_app.tasks.call_api[755d50fd-0f0f-4861-9a18-7f4e4563290a]
Task my_app.tasks.call_api[755d50fd-0f0f-4861-9a18-7f4e4563290a] succeeded in 0.00513911247253s: None
so it should be good, but nothing happens... There is no call to, for example:
http://localhost:8000/api/v1/sids/1/
What am I doing wrong?
Are you running celery as a separate process?
For example, in Ubuntu, run it using the command
sudo python manage.py celeryd
Until you run celery (or django-celery) as a separate process, the jobs will be stored in the database (or queue, or whatever persistence mechanism you have configured, generally in settings.py).

django generator function not working with real server

I have some code written in Django/Python. The principle is that the HTTP response is a generator function. It spits the output of a subprocess onto the browser window line by line. This works really well when I am using the Django test server. When I use the real server it fails; basically it just beachballs when you press submit on the previous page.
import subprocess

from django.http import HttpResponse
from django.views.decorators.http import condition


@condition(etag_func=None)
def pushviablah(request):
    if 'hostname' in request.POST and request.POST['hostname']:
        hostname = request.POST['hostname']
        command = "blah.pl --host " + hostname + " --noturn"
        return HttpResponse(stream_response_generator(hostname, command), mimetype='text/html')


def stream_response_generator(hostname, command):
    proc = subprocess.Popen(command.split(), 0, None, subprocess.PIPE, subprocess.PIPE, subprocess.PIPE)
    yield "<pre>"
    var = 1
    while (var == 1):
        for line in proc.stdout.readline():
            yield line
Does anyone have any suggestions on how to get this working on the real server? Or even how to debug why it is not working?
I discovered that the generator function is actually running, but it has to complete before the HttpResponse puts a page onscreen. I don't want to have to wait for it to complete before the user sees output; I would like the user to see output as the subprocess progresses.
I'm wondering if this issue could be related to something in apache2 rather than django.
@evolution did you use gunicorn to deploy your app? If yes, then you have created a service. I am having a similar kind of issue, but with LibreOffice. As far as I have researched, I have found that PATH is overriding the command path present in your subprocess. I do not have a solution yet. If you bind your app with gunicorn in the terminal, then your code will also work.
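(To illustrate the PATH point above, a minimal sketch only: pinning an absolute path to the script and an explicit environment when starting the subprocess, so the PATH of the gunicorn/uwsgi service does not decide what gets run. The '/usr/local/bin/blah.pl' path and the PATH value are placeholders.)

import os
import subprocess


def start_blah(hostname):
    # Absolute path to the script and an explicit PATH, so the service's
    # environment cannot shadow the command the subprocess should run.
    cmd = ["/usr/local/bin/blah.pl", "--host", hostname, "--noturn"]  # placeholder path
    env = dict(os.environ, PATH="/usr/local/bin:/usr/bin:/bin")       # placeholder PATH
    return subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, env=env)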