Can't communicate between django and selenium docker containers - django

I'm trying to set up a CI environment where I can test my Django application with Selenium, with both running in Docker.
My test is set up as follows:
from time import sleep

from django.contrib.staticfiles.testing import StaticLiveServerTestCase
from selenium.webdriver.remote.webdriver import WebDriver

class MySeleniumTests(StaticLiveServerTestCase):
    port = 8000

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.selenium = WebDriver("http://selenium:4444", desired_capabilities={'browserName': 'chrome'})
        cls.selenium.implicitly_wait(10)

    @classmethod
    def tearDownClass(cls):
        cls.selenium.quit()
        super().tearDownClass()

    def test_login(self):
        self.selenium.get('%s:%s%s' % ('http://web', self.port, '/'))
        greeting = self.selenium.find_element_by_id("greeting")
        self.assertEqual(greeting.text, 'hello world')
I then try to run this on GitLab with this CI setup in my .gitlab-ci.yml:
image:
  name: docker/compose:1.26.2
  entrypoint: ['/bin/sh', '-c']

services:
  - docker:dind

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2

stages:
  - test

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY

build:
  stage: test
  script:
    - docker build --tag django .
    - docker network create selenium-net
    - docker run -d --network selenium-net --name selenium selenium/standalone-chrome:4.0.0-alpha-6-20200730
    - docker run --network selenium-net --name web --expose 8000 django dindselenium/manage.py test myapp
On my local machine the WebDriver setup succeeds, but Selenium then fails to connect to the web app. In the CI environment I can't even connect to Selenium from the web app.
I've set up an example repo here: https://gitlab.com/oskarpersson/dind-selenium/, and an example of a failing job: https://gitlab.com/oskarpersson/dind-selenium/-/jobs/705523165
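
For reference, StaticLiveServerTestCase binds its server to localhost by default, so another container cannot reach it even over a shared Docker network. Since the test already targets http://web:8000, a minimal sketch of one likely missing piece, binding the live server to all interfaces, is shown below; this is a hedged guess against the local symptom above, not a confirmed fix for the CI failure:

class MySeleniumTests(StaticLiveServerTestCase):
    host = "0.0.0.0"  # bind the live server to all interfaces, not just localhost
    port = 8000       # must match the port exposed to the Selenium container
    ...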

Related

Django is unable to use the SMTP server in the test environment

I am running a Django (4.1) app in Docker. As part of our test suite, I would like to make use of a development SMTP server which is also running in a Docker container (see docker-compose.yml below). I am using a Selenium driver to run the tests against a Django LiveServer instance. I am also using 1secmail as a temporary mailbox for the tests.
The SMTP server is working nicely with Django (i.e. Django is able to send emails with it) only if Django is not running from a LiveServer. Whenever I try to programmatically test a scenario where SMTP is involved (with a Selenium driver), the email never gets sent (see the failing test below).
The question is simple: how do I make Django and the SMTP server talk to each other in the test environment?
My docker-compose.yml
version: '3.8'

services:
  myapp:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./:/usr/src/app/
    ports:
      - 8009:8000
    env_file:
      - ./.env.dev
    links:
      - selenium-chrome
      - dev-smtp-server

  myapp-db:
    image: postgres:14-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=blabla
      - POSTGRES_PASSWORD=blabla
      - POSTGRES_DB=blabla

  selenium-chrome:
    image: selenium/standalone-chrome
    ports:
      - 4444:4444 # actual Selenium
      - 5900:5900 # VNC server

  dev-smtp-server:
    image: bytemark/smtp
    restart: always

volumes:
  postgres_data:
My test fixtures
...
from pytest_django.live_server_helper import LiveServer

@pytest.fixture(scope="session")
def test_server() -> LiveServer:
    host = socket.gethostbyname(socket.gethostname())
    if host not in settings.ALLOWED_HOSTS:
        settings.ALLOWED_HOSTS.append(host)
    server = LiveServer(host)
    yield server
    server.stop()

@pytest.fixture
def driver():
    options = webdriver.ChromeOptions()
    driver = webdriver.Remote(
        command_executor="http://selenium-chrome:4444/wd/hub",  # connect to the Selenium container defined in docker-compose
        options=options,
    )
    yield driver
    driver.quit()

@pytest.fixture
def forgot_my_password_page_driver(driver: Remote, test_server: LiveServer):
    forgotMyPasswordURL = reverse("password_reset")
    driver.get(f"{test_server.url}{forgotMyPasswordURL}")
    return driver

@pytest.fixture
def email_address_for_which_forgotten_password_form_has_just_been_filed(
    forgot_my_password_page_driver, db
):
    emailInput = forgot_my_password_page_driver.find_element(By.NAME, "email")
    emailAddress = OneSecMailAPI.get_random_email_address()
    emailInput.send_keys(emailAddress)
    sendButton = forgot_my_password_page_driver.find_element(
        By.CSS_SELECTOR, "input[type='submit']"
    )
    sendButton.click()
    return emailAddress
The failing test
def test_forgot_my_password_form_sends_an_email_to_actually_reset_your_password(
    email_address_for_which_forgotten_password_form_has_just_been_filed,
):
    emailAddress = email_address_for_which_forgotten_password_form_has_just_been_filed
    emailDomain = emailAddress.split("@")[1]
    emailReceived = False
    loopCounter = 0
    while not emailReceived and loopCounter < 5:
        messages = OneSecMailAPI.get_messages(login=emailAddress, domain=emailDomain)
        if len(messages) == 0:
            time.sleep(1)
            loopCounter = loopCounter + 1
        else:
            emailReceived = True
    assert emailReceived
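
A detail worth checking here: Django's test bootstrap (setup_test_environment(), which pytest-django invokes) replaces EMAIL_BACKEND with the in-memory locmem backend, so during a test run mail is captured in django.core.mail.outbox rather than handed to any SMTP server. A minimal sketch of a fixture that restores a real SMTP backend pointed at the compose service (dev-smtp-server is the service name from the docker-compose.yml above; port 25 is an assumption based on the image's default):

import pytest

@pytest.fixture
def real_smtp_backend(settings):
    # pytest-django's `settings` fixture rolls these overrides back after each test.
    settings.EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
    settings.EMAIL_HOST = "dev-smtp-server"  # resolves on the compose network
    settings.EMAIL_PORT = 25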

GCP Vertex.ai custom container endpoint hosting giving format error

After spending multiple hours, I created this sample model endpoint code that I want to host on Vertex AI. These are the three files I have:
1. Dockerfile
FROM python:3.7
USER root
ADD . /home/model-server/
WORKDIR /home/model-server/
RUN pip3 install --upgrade pip
RUN pip install -r ./requirements.txt
CMD exec gunicorn -b :5000 --max-requests 1 --graceful-timeout 300 -t 600 main:app
2. main.py -- contains the Flask app
import json

from flask import Flask, request, Response, jsonify

app = Flask(__name__)
app.config['JSON_SORT_KEYS'] = False

@app.route('/health', methods=['GET'])
def health_check():
    return Response(response=json.dumps({"status": 'healthy'}), status=200, mimetype="application/json")

@app.route('/vertex_predict/', methods=['POST'])
def main():
    request_json = request.get_json()
    request_instances = request_json['instances']
    output = {'predictions': [
        {
            'result': "this is working with static output"
        }
    ]}
    return jsonify(output)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
3. requirements.txt
pandas==1.1.1
numpy==1.21.5
flask==2.2.2
gunicorn==20.1.0
This is pretty simple code, which we are able to push to Artifact Registry, then:
1. Import it as a model in the Vertex AI Model Registry
2. Deploy it as an endpoint
I tried multiple times, but after almost 20 minutes the deployment fails, showing this error every time:
jsonPayload: {
  levelname: "ERROR"
  logtag: "F"
  message: "exec /bin/sh: exec format error"
}
The same Flask app works fine on my local machine, so I'm not sure what is wrong on Vertex AI. Can someone please help me here or suggest the next steps?
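
For what it's worth, "exec format error" at container start usually means the image's binaries were built for a different CPU architecture than the machine running them; for example, an image built on an Apple Silicon laptop is arm64 by default, while Vertex AI prediction nodes run linux/amd64. A hedged first step (the arm64 build host is an assumption) is to rebuild with the target platform pinned before pushing:

docker buildx build --platform linux/amd64 -t IMAGE_URI --push .

Here IMAGE_URI stands in for the Artifact Registry tag. Separately, note that Vertex AI custom containers are expected to serve on the port and predict/health routes declared at model upload (port 8080 by default, surfaced to the container as AIP_HTTP_PORT), so the hard-coded :5000 and /vertex_predict/ route may also need to match the model's container spec.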

Celery tasks don't run in docker container

In Django, I want to perform a Celery task (let's say adding 2 numbers) when a user uploads a new file to /media. What I've done is to use signals, so that when the associated Upload object is saved, the Celery task fires.
Here's my code and Docker configuration:
signals.py
from django.db.models.signals import post_save
from django.dispatch import receiver

from core.models import Upload
from core.tasks import add_me

def upload_save(sender, instance, signal, *args, **kwargs):
    print("IN UPLOAD SIGNAL")  # <----- LOGS PRINT UP TO HERE, IN CONTAINERS
    add_me.delay(10)

post_save.connect(upload_save, sender=Upload)  # My post save signal
tasks.py
from celery import shared_task

@shared_task(ignore_result=True, max_retries=3)
def add_me(upload_id):
    print('In celery')  # <----- This is not printed when in Docker!
    return upload_id + 20
views.py
class UploadView(mixins.CreateModelMixin, generics.GenericAPIView):
    serializer_class = UploadSerializer

    def post(self, request, *args, **kwargs):
        serializer = UploadSerializer(data=request.data)
        print("SECOND AFTER")
        print(request.data)  # <------ I can see my file name here
        if serializer.is_valid():
            print("THIRD AFTER")  # <------ This is printed OK in all cases
            serializer.save()
            print("FOURTH AFTER")  # <----- But this is not printed when in Docker!
            return response.Response(
                {"Message": "Your file was uploaded"},
                status=status.HTTP_201_CREATED,
            )
        return response.Response(
            {"Message": "Failure", "Errors": serializer.errors},
            status=status.HTTP_403_FORBIDDEN,
        )
docker-compose.yml
version: "3.8"
services:
db:
# build: ./database_docker/
image: postgres
ports:
- "5432:5432"
environment:
POSTGRES_DB: test_db
POSTGRES_USER: test_user
POSTGRES_PASSWORD: test_pass
# volumes:
# - media:/code/media
web:
build: ./docker/
command: bash -c "python manage.py migrate --noinput && python manage.py runserver 0.0.0.0:8000"
volumes:
- .:/code
- media:/code/media
ports:
- "8000:8000"
depends_on:
- db
rabbitmq:
image: rabbitmq:3.6.10
volumes:
- media:/code/media
worker:
build: ./docker/
command: celery -A example_worker worker --loglevel=debug -n worker1.%h
volumes:
- .:/code
- media:/code/media
depends_on:
- db
- rabbitmq
volumes:
media:
Dockerfile
FROM python:latest
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN pip3 install -r requirements.txt
COPY . /code/
WORKDIR /code
Everything works OK when not in Docker.
The problem is that when I deploy the above in Docker and try to upload a file, the request never finishes, even though the file is uploaded to the media folder (confirmed by accessing its contents in both the web and worker containers).
More specifically, it seems that the Celery task is never executed (or never finishes?) and the code after serializer.save() is never reached.
When I remove the signal (thus no Celery task is fired) everything is OK. Can someone please help me?
I just figured it out. Turns out that I need to add the following in the __init__.py of my application.
from .celery import app as celery_app
__all__ = ("celery_app",)
Don't know why everything is running smoothly without this piece of code when I'm not using containers...
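
For context, the .celery module imported above is the conventional app module from Celery's "First steps with Django" guide; a minimal sketch, with proj standing in for the real project package:

# proj/celery.py
import os

from celery import Celery

# Set the default settings module before the app is created.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

app = Celery('proj')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

A plausible explanation for the Docker-only failure: without the __init__.py import, the web container never configures a Celery app, so add_me.delay() publishes through the default app's default broker (amqp://localhost), which may happen to exist on a development machine but not inside the web container.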

flask app is running on docker but browser shows this website is not accessible

I'm new to Docker. I've developed an app using Flask and it's working fine. When I tried deploying the app on Docker, the console says that it's running on http://127.0.0.1:5000/, but when I try to open this address in the browser it shows me "this website is not accessible".
Here's my Dockerfile:
FROM python:3.6
RUN mkdir /web
WORKDIR /web
ADD . /web/
RUN pip install -r requirements.txt
ENV FLASK_ENV="docker"
EXPOSE 5000
CMD ["python", "mongo.py"]
My docker-compose.yaml file:
version: '3'
services:
  # Define the Flask web application
  flaskapp:
    # Build the Dockerfile that is in the web directory
    build: ./web
    # Always restart the container regardless of the exit status; try and restart the container indefinitely
    restart: always
    # Expose port 8000 to other containers (not to the host of the machine)
    expose:
      - "8000"
    # Mount the web directory within the container at /home/flask/app/web
    # volumes:
    #   - ./web:/home/flask/app/web
    # Don't create this container until the redis and mongo containers (below) have been created
    depends_on:
      - redis
      - mongo
    # Link the redis and mongo containers together so that they can talk to one another
    links:
      - redis
      - mongo
    # Pass environment variables to the flask container (this debug level lets you see more useful information)
    environment:
      FLASK_DEBUG: 1
    # Deploy with 3 replicas in the case of failure of one of the containers (only in Docker Swarm)
    deploy:
      mode: replicated
      replicas: 3

  # Define the redis Docker container
  redis:
    # use the redis:alpine image: https://hub.docker.com/_/redis/
    image: redis:alpine
    restart: always
    deploy:
      mode: replicated
      replicas: 3

  # Define the redis NGINX forward proxy container

  # Define the mongo database
  mongo:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: root

  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: root
    # Expose port 8081 to other Docker containers
    expose:
      - "8081"
My mongo.py file:
from flask import Flask, jsonify, request
#from flask.ext.pymongo import PyMongo
from flask_pymongo import PyMongo

app = Flask(__name__)
app.config['MONGO_DBNAME'] = 'databasename'
#app.config['MONGO_URI'] = 'mongodb://username:password@hostname:port/databasename'
app.config['MONGO_URI'] = 'mongodb://root:root@localhost:27017/databasename'
mongo = PyMongo(app)

@app.route('/framework/', methods=['GET'])
def get_all_frameworks():
    framework = mongo.db.framework
    output = []
    for q in framework.find():
        output.append({'name': q['name'], 'language': q['language']})
    return jsonify({'result': output})

@app.route('/framework/find/<name>', methods=['GET'])
def get_one_framework(name):
    framework = mongo.db.framework
    q = framework.find_one({'name': name})
    if q:
        output = {'name': q['name'], 'language': q['language']}
    else:
        output = 'No results found'
    return jsonify({'result': output})

@app.route('/framework/', methods=['POST'])
def add_framework():
    framework = mongo.db.framework
    name = request.json['name']
    language = request.json['language']
    framework_id = framework.insert({'name': name, 'language': language})
    new_framework = framework.find_one({'_id': framework_id})
    output = {'name': new_framework['name'], 'language': new_framework['language']}
    return jsonify({'result': output})

@app.route('/framework/update/', methods=['POST'])
def update_framework():
    framework = mongo.db.framework
    name = request.json['name']
    language = request.json['language']
    myquery = {"name": name}
    newvalues = {"$set": {"language": language}}
    q = framework.update_one(myquery, newvalues)
    if q:
        output = {'name': q['name'], 'language': q['language']}
    else:
        output = 'No results found'
    return jsonify({'result': output})

@app.route('/framework/delete/<name>', methods=['GET'])
def delete_one_framework(name):
    framework = mongo.db.framework
    #name = request.json['name']
    myquery = {"name": name}
    q = framework.delete_one(myquery)
    if q:
        output = 'element deleted successfully '
    else:
        output = 'No results found'
    return jsonify({'result': output})

if __name__ == '__main__':
    app.run(debug=False)
    app.run(threaded=True)
    app.run(host='0.0.0.0')
To deploy my app I've used the commands:
docker build -t moapp4 .
docker run -p 5000:5000 moapp4
This shows:
 * Serving Flask app "mongo" (lazy loading)
 * Environment: docker
 * Debug mode: off
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
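
Two details in the files above would produce this exact symptom, offered as observations rather than a confirmed diagnosis. First, only the first app.run(...) call in mongo.py ever executes (the two calls after it are unreachable until the server exits), and it binds to 127.0.0.1, which the published port (-p 5000:5000) cannot reach from the host. A minimal sketch collapsing the three calls into one that binds to all interfaces:

if __name__ == '__main__':
    # Host 0.0.0.0 makes the server reachable through `docker run -p 5000:5000`;
    # debug/threaded mirror the intent of the original three calls.
    app.run(host='0.0.0.0', port=5000, debug=False, threaded=True)

Second, MONGO_URI points at localhost, which inside the flaskapp container is the container itself; under docker-compose the hostname would normally be the service name, i.e. mongodb://root:root@mongo:27017/databasename.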

Celery task.delay blocked in docker container

I use Celery in my Django project. It works well on my MacBook and in a CentOS VM. When I run it in a Docker container, the request that calls the add.delay() method (add is a task) always blocks.
I created a demo project on github: https://github.com/fengyouchao/proj_test
My task:
@shared_task
def add(x, y):
    return x + y
My view:
def index(request):
    a = int(request.GET.get('a', 1))
    b = int(request.GET.get('b', 2))
    add.delay(a, b)
    return HttpResponse("Hello world")

def hello(request):
    return HttpResponse("hello")
In the demo project I created three services in docker-compose.yml:
web - the service that runs "manage.py runserver 0.0.0.0:8000"
celery - the service that runs "celery"
rabbitmq - the service that runs rabbitmq-server
Run services
docker-compose up
Test
curl localhost:8000 # blocked
curl localhost:8000/hello # OK
Run the Django project on the current system (using the same rabbitmq-server in the Docker container):
manage.py runserver 0.0.0.0:18000
Test
curl localhost:18000 # OK , and the "celery" service printed task logs
This problem has been bothering me for a long time, and I don't know where the problem is. I hope someone can help me. Thanks!
I just came across a similar issue.
I am using a rabbitmq container as a broker, so I added CELERY_BROKER_URL to settings.py.
When I ran add.delay() in the manage.py Django shell inside the container, it got stuck, but it worked fine in production.
So I made the following change, and it started working:
app = Celery('app', broker="amqp://rabbitmq")
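
Equivalently, the broker can stay out of code and live in settings instead, assuming the app is created with app.config_from_object('django.conf:settings', namespace='CELERY') and the compose service is named rabbitmq:

# settings.py
CELERY_BROKER_URL = "amqp://guest:guest@rabbitmq:5672//"  # default RabbitMQ credentials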
I have faced the same issue, and fixed it by importing the app created in proj/proj/celery.py in my proj/proj/__init__.py, like this:
from __future__ import absolute_import, unicode_literals
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app
__all__ = ('celery_app',)
You can see more information in Celery's "First steps with Django" documentation.
Hope it helps!