I'm trying to deploy a Flask app on Google App Engine (the app.yaml is deployed with gcloud app deploy). I have it all configured, but when I try to deploy it, it fails with ModuleNotFoundError: No module named 'main'. This is my file structure:
└── gpt-redteam-api/
├── app.yaml
├── main.py
└── ...other files
and I am deploying it from inside gpt-redteam-api. Is this a common problem/are there any elementary fixes I’m missing?
For your issue, you need to add a custom entrypoint so App Engine knows how to start the app in main.py.
Entrypoint: Optional. Overrides the default startup behavior by executing the entrypoint command when your app starts. For your app to receive HTTP requests, the entrypoint element should contain a command which starts a web server that listens on port 8080.
Configure the entrypoint field as follows.
This line tells gunicorn to look for the variable named app in the module main (i.e. main.py):
entrypoint: gunicorn -b:$PORT main:app
If your application file is named app.py instead, you could either rename it to main.py or update this line to:
entrypoint: gunicorn -b:$PORT app:app
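Putting it together, a minimal app.yaml sketch for the structure above (assuming the Flask object in main.py is named app, and that gunicorn is listed in requirements.txt):

runtime: python39
entrypoint: gunicorn -b :$PORT main:app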
For more information, see the app.yaml reference documentation for the entrypoint element.
I'm building a Flask application, located in a subdirectory within my project called myapp. Running gunicorn --bind 0.0.0.0:$PORT myapp.app:app works fine, no errors, regardless of what FLASK_APP is set to. When I try to use Flask's CLI, however, it fails to find my app (reasonable), but when I set FLASK_APP to myapp.app, it appears to double up the import path, resulting in an import error:
FLASK_APP=myapp.app flask run
* Serving Flask app 'myapp.app' (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
Usage: flask run [OPTIONS]
Try 'flask run --help' for help.
Error: While importing 'myapp.myapp.app', an ImportError was raised.
How can I solve this? Is this a bug in Flask?
There was an __init__.py in my project directory that was causing this.
I dug into Flask's source code, in cli.py, and found the following code in prepare_import:
# move up until outside package structure (no __init__.py)
while True:
    path, name = os.path.split(path)
    module_name.append(name)
    if not os.path.exists(os.path.join(path, "__init__.py")):
        break
Because I had an __init__.py in my project directory (also called myapp), this made Flask move up one level past my project and try the import from its parent directory, resulting in myapp.myapp.app.
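To illustrate, the layout looked roughly like this (names assumed from the description above):

myapp/                    <- project directory
├── __init__.py           <- the culprit: it makes the project dir itself look like a package
└── myapp/
    ├── __init__.py
    └── app.py            <- defines the Flask object named app

Deleting the top-level __init__.py lets FLASK_APP=myapp.app resolve to myapp/app.py as intended.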
I've been trying to deploy my Django website, but I keep getting the same 502 Bad Gateway nginx error every time, and I'm not sure how to troubleshoot it.
When I run the program using python manage.py runserver, it works perfectly fine, but I get the error once I run gcloud app deploy. Looking at the logs I haven't found any errors, just one warning stating that the container called exit(1).
I've tried one solution from Google's documentation: https://cloud.google.com/endpoints/docs/openapi/troubleshoot-response-errors and I added resources: memory_gb: 4 to my app.yaml file, but I still end up with the error.
I am not sure if this is of much help, but my app.yaml currently looks like this:
runtime: python39

handlers:
# This configures Google App Engine to serve the files in the app's static
# directory.
- url: /static
  static_dir: static/

# This handler routes all requests not caught above to your main app. It is
# required when static routes are defined, but can be omitted (along with
# the entire handlers section) when there are no static files defined.
- url: /.*
  script: auto

resources:
  memory_gb: 4
And my .gcloudignore looks like this:
# This file specifies files that are *not* uploaded to Google Cloud Platform
# using gcloud. It follows the same syntax as .gitignore, with the addition of
# "#!include" directives (which insert the entries of the given .gitignore-style
# file at that point).
#
# For more information, run:
# $ gcloud topic gcloudignore
#
.gcloudignore
# If you would like to upload your .git directory, .gitignore file or files
# from your .gitignore file, remove the corresponding line
# below:
.git
.gitignore
secret_variables.py
.venv
Pipfile
Pipfile.lock
# Python pycache:
__pycache__/
# Ignored by the build system
/setup.cfg
Thank you in advance for the help
I cloned the project and tried running the code locally after running pip install -r requirements.txt. This is the error I got:
ModuleNotFoundError: No module named 'django_filters'
This means that this requirement needs to be added before the code is deployed to GAE.
Add django-filter to the requirements.txt file to fix it.
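For example, the relevant lines in requirements.txt would look something like this (unpinned here; pin the versions your project actually uses):

Django
django-filter  # the PyPI package is django-filter, though the module you import is django_filters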
I'm trying to migrate a Django application from Google Kubernetes Engine to Google Cloud Run, which is fully managed. Basically, with Cloud Run, you containerize your application into a single Dockerfile and Google does the rest.
I have a Dockerfile which at one point calls a bash script via ENTRYPOINT.
But I need to start both Nginx and Gunicorn. The Google Cloud Run documentation suggests starting Gunicorn like this:
CMD gunicorn -b :$PORT foo.wsgi
(let's say my Django app is named "foo")
But I also need to start Nginx via:
CMD ["nginx", "-g", "daemon off;"]
And since only one CMD is allowed per Dockerfile, I'm not sure how to combine them.
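One common pattern (a sketch, not from the original thread; the script name start.sh is assumed) is to wrap both commands in a small shell script and make that script the single CMD:

#!/bin/sh
# start.sh: nginx daemonizes into the background by default,
# then gunicorn runs in the foreground as the container's main process
nginx
exec gunicorn -b :$PORT foo.wsgi

and in the Dockerfile:

COPY start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]

Note that with this sketch the container only watches gunicorn; if nginx dies, nothing restarts it.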
To get around some of these difficulties, I looked into building from a base image that already has both working, and I came across this one:
https://github.com/tiangolo/meinheld-gunicorn-docker
But the paths don't quite match mine. Quoting from the documentation of that repo:
You don't need to clone the GitHub repo. You can use this image as a
base image for other images, using this in your Dockerfile:
FROM tiangolo/meinheld-gunicorn:python3.7
COPY ./app /app
It will expect a file at /app/app/main.py.
And will expect it to contain a variable app with your "WSGI"
application.
My wsgi.py file ends up at /app/foo/foo/wsgi.py and contains an application named application
But if I understand that documentation correctly, when it says it will expect the WSGI application to be named app and to be located at /app/app/main.py, it's saying that I need to revise the path and the variable name: when the image builds, it needs to know that my application is called application and that it lives at /app/foo/foo/wsgi.py rather than /app/app/main.py.
I assume that I can fix the app vs. application variable name by adding a line like app = application to my wsgi.py file, but I'm not sure how to correct the path that Docker expects.
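For reference, that alias could look like this (a sketch of foo/foo/wsgi.py; only the last line is new):

import os

from django.core.wsgi import get_wsgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "foo.settings")

application = get_wsgi_application()
app = application  # alias for images that expect a variable named app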
Can someone explain to me how to adapt this to my needs?
(Or any other way to get it working)
I keep getting an error that says no module named backend; backend is the directory where my webapp2 application is.
My folder structure:
/project
    /backend
        /env                # python virtual env libraries
        main.py             # my main entry point where the webapp2 app instance is
        requirements.txt
    app.yaml
My app.yaml:
service: default

handlers:
- url: /dist
  static_dir: dist
- url: /.*
  script: backend.main.app

libraries:
- name: webapp2
  version: latest
- name: jinja2
  version: latest
Before, my app.yaml was in backend, but I decided to move it to the root. Now when I run dev_appserver.py in the root, I keep getting ImportError: No module named backend.
I created the virtualenv and installed the requirements.txt packages inside the backend directory.
EDIT: I am unsure if this makes a difference, but I had already deployed my application while the app.yaml was inside the backend folder. I am guessing this should not matter, since I am only trying to test locally by moving the app.yaml to my project root and running dev_appserver.py app.yaml, but it does not work when I do this.
The directory containing the app.yaml file for a GAE service is the service's top-level directory. The content of this directory is what will be uploaded to GAE when you deploy the service. All paths referenced in the service's code or configurations are relative to this top level dir. So moving the app.yaml file around without updating the related code and configurations accordingly will break the app's functionality.
You don't seem to grasp the meaning of the script: statement very well. From the Handlers element documentation:
A script: directive must be a python import path, for example,
package.module.app that points to a WSGI application. The last
component of a script: directive using a Python module path is the
name of a global variable in the module: that variable must be a WSGI
app, and is usually called app by convention.
Note: just like for a Python import statement, each subdirectory that
is a package must contain a file named __init__.py
So, assuming your app.yaml file is located in your project dir, the
script: backend.main.app
would mean:
- having an __init__.py file inside the backend dir, to make backend a package
- having a main.py file in the backend dir, with an app variable pointing to your webapp2 application
According to your description, you don't meet the first of these conditions: there is no __init__.py in the backend dir.
My recommendation:
- keep the app.yaml inside the backend dir (which doesn't need to be a python package dir)
- update its script: line to match your code; assuming the app variable for your webapp2 app is actually in main.py, the line would be (see the app.yaml sketch after this list):
  script: main.app
- run the app locally by explicitly passing the app.yaml file as an argument (in general a good habit, and also the only way to run apps with multiple services and/or a dispatch.yaml file):
  dev_appserver.py backend/app.yaml
- store your service's python dependencies inside a backend/lib directory (to follow the naming convention), separated from your virtualenv packages
- store the env virtualenv dir outside the backend directory, to prevent unnecessarily uploading it to GAE when deploying the service (and potential interference with the app's operation); the goal of the virtualenv is to properly emulate the GAE sandbox locally so that you can run the development server correctly
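A sketch of the resulting backend/app.yaml (assuming the first-generation python27 runtime, since webapp2 and the libraries element belong to it, and assuming dist lives inside backend):

runtime: python27
api_version: 1
threadsafe: true
service: default

handlers:
- url: /dist
  static_dir: dist
- url: /.*
  script: main.app

libraries:
- name: webapp2
  version: latest
- name: jinja2
  version: latest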
Potentially of interest for structuring a multi-service app: Can a default service/module in a Google App Engine app be a sibling of a non-default one in terms of folder structure?
My Problem:
When using gunicorn as my WSGI HTTP server, foreman fails to find the Django (wsgi?) application.
Application Structure:
In my Django application, I have things structured like this:
<git_repository_root>/
    <django_project_root>/
        <configuration_root>/
The <git_repository_root> contains the project management and deployment-related things (requirements.txt, Procfile, fabfile.py, etc.).
The <django_project_root> contains my Django apps and application logic.
Finally, the <configuration_root> contains my settings.py and wsgi.py.
What I have tried:
My Procfile should look like this (according to the Heroku Docs):
web: gunicorn myapp.wsgi
When running foreman start with this project layout, I get an error:
ImportError: Import by filename is not supported.
What works:
If I move my Procfile from <git_repository_root> to <django_project_root>, it works locally. After pushing to Heroku (note: Heroku sees <git_repository_root>), I can't scale any workers / add processes. I get the following:
Scaling web dynos... failed
! No such process type web defined in Procfile.
I believe I want Procfile in my <git_repository_root> anyway though - so why isn't it working? I also tried changing the Procfile to:
web: gunicorn myapp/myapp.wsgi
but no luck. Any help would be much appreciated!
Treat the entry in your Procfile like a bash command. You can cd into your <django_project_root> and then run the server.
For example, your Procfile (which should be in your <git_repository_root>) might look something like this:
web: cd <django_project_root> && gunicorn --env DJANGO_SETTINGS_MODULE=<configuration_root>.settings <configuration_root>.wsgi
Move your Procfile back to <git_repository_root> and use:
web: gunicorn <django_project_root>.myapp:myapp
replacing the final "myapp" with the name of your WSGI application variable; presumably it is indeed "myapp".
... and read the error message: it is telling you that you can't import your worker app by filename (myapp.wsgi), so of course saying dirname/myapp.wsgi won't work either. You need Python module:variable syntax.
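For a typical Django layout that would look like, for example (module path assumed, not from the original thread):
web: gunicorn myapp.wsgi:application
where myapp.wsgi is the dotted module path to wsgi.py and application is the WSGI callable Django generates there.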