I'm trying to deploy a simple Django web app to Azure App Service using a CI/CD pipeline (the most basic one offered by Microsoft for app deployment, with no changes from me). However, I'm getting the following error:
2021-03-08T16:55:51.172914117Z File "", line 219, in _call_with_frames_removed
2021-03-08T16:55:51.172918317Z File "/home/site/wwwroot/deytabank_auth/wsgi.py", line 13, in
2021-03-08T16:55:51.172923117Z from django.core.wsgi import get_wsgi_application
2021-03-08T16:55:51.172927017Z ModuleNotFoundError: No module named 'django'
I checked other threads and tried doing all the things mentioned, but it did not help, or I am missing something:
In wsgi.py I added:
import os
import sys
sys.path.append(os.path.dirname(os.path.abspath(__file__)) + '/..' )
sys.path.append(os.path.dirname(os.path.abspath(__file__)) + '/../licenses_api')
sys.path.append(os.path.dirname(os.path.abspath(__file__)) + '/../deytabank_auth')
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'deytabank_auth.settings')
application = get_wsgi_application()
But I am still getting the same error, where Django is not recognized. I can see that requirements.txt is being installed successfully, and it has all the necessary libraries there (including Django).
My CI/CD yaml file looks like this:
# Python to Linux Web App on Azure
# Build your Python project and deploy it to Azure as a Linux Web App.
# Change python version to one that's appropriate for your application.
# https://learn.microsoft.com/azure/devops/pipelines/languages/python
trigger:
- develop

variables:
  # Azure Resource Manager connection created during pipeline creation
  azureServiceConnectionId: '***'

  # Web app name
  webAppName: 'DeytabankAuth'

  # Agent VM image name
  vmImageName: 'ubuntu-latest'

  # Environment name
  environmentName: 'DeytabankAuth'

  # Project root folder. Point to the folder containing manage.py file.
  projectRoot: $(System.DefaultWorkingDirectory)

  # Python version: 3.7
  pythonVersion: '3.7'

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: BuildJob
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: UsePythonVersion@0
      inputs:
        versionSpec: '$(pythonVersion)'
      displayName: 'Use Python $(pythonVersion)'

    - script: |
        python -m venv antenv
        source antenv/bin/activate
        python -m pip install --upgrade pip
        pip install setup
        pip install -r requirements.txt
      workingDirectory: $(projectRoot)
      displayName: "Install requirements"

    - task: ArchiveFiles@2
      displayName: 'Archive files'
      inputs:
        rootFolderOrFile: '$(projectRoot)'
        includeRootFolder: false
        archiveType: zip
        archiveFile: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
        replaceExistingArchive: true

    - upload: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
      displayName: 'Upload package'
      artifact: drop

- stage: Deploy
  displayName: 'Deploy Web App'
  dependsOn: Build
  condition: succeeded()
  jobs:
  - deployment: DeploymentJob
    pool:
      vmImage: $(vmImageName)
    environment: $(environmentName)
    strategy:
      runOnce:
        deploy:
          steps:
          - task: UsePythonVersion@0
            inputs:
              versionSpec: '$(pythonVersion)'
            displayName: 'Use Python version'

          - task: AzureWebApp@1
            displayName: 'Deploy Azure Web App : DeytabankAuth'
            inputs:
              azureSubscription: $(azureServiceConnectionId)
              appName: $(webAppName)
              package: $(Pipeline.Workspace)/drop/$(Build.BuildId).zip
Maybe I need to configure something in the Azure App Service? But I am not sure exactly what.
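For reference, one App Service setting that this error often comes down to on Linux plans: the zip produced by ArchiveFiles@2 carries the antenv folder, but App Service does not activate that venv at runtime, so the site only sees Django if build automation (Oryx) re-installs requirements.txt during deployment. Below is a minimal sketch of a step that could be added to the Deploy stage before AzureWebApp@1 to turn that on; AzureAppServiceSettings@1 is a standard Azure DevOps task, but whether this is the missing piece for this particular app is an assumption:

- task: AzureAppServiceSettings@1
  displayName: 'Enable build automation on the App Service (hypothetical fix)'
  inputs:
    azureSubscription: $(azureServiceConnectionId)
    appName: $(webAppName)
    # SCM_DO_BUILD_DURING_DEPLOYMENT makes App Service run pip install -r requirements.txt itself
    appSettings: |
      [
        { "name": "SCM_DO_BUILD_DURING_DEPLOYMENT", "value": "true", "slotSetting": false }
      ]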
I have met this issue before, and the problem might be your deployment method. I am not sure which one you use, but the classic Deployment Center is being deprecated; try using the new Deployment Center.
I checked your workflow against the one that worked on my side, and there is nothing different. So I will post the steps that worked on my side for you to refer to.
1. Check your project locally to make sure it runs successfully.
2. Create a new web app (this is to make sure there is no damage to your existing web app) and navigate to the Deployment Center page.
3. Go to your GitHub account and open the GitHub Actions page to see the log.
4. Test your web app and check the file structure on the Kudu site: https://{yourappname}.scm.azurewebsites.net/wwwroot/
5. You can test by clicking the Browse button.
If you want to run commands, go to this site: https://{yourappname}.scm.azurewebsites.net/DebugConsole
By the way, I am posting this link in case you need to deploy using DevOps.
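For context, the new Deployment Center wires the repository up with a GitHub Actions workflow. A trimmed sketch of what the generated deploy step typically looks like; the secret name is a placeholder, and the exact workflow Azure generates for a given app may differ:

- name: Deploy to Azure Web App
  uses: azure/webapps-deploy@v2
  with:
    app-name: 'DeytabankAuth'
    # hypothetical secret name holding the app's publish profile
    publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
    package: .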
A possible reason for this issue is that you don't have Django installed.
In the Microsoft-hosted agent ubuntu-latest, Django is not pre-installed. That is, you need to install it manually:
pip install Django==3.1.7
See this document for detailed information about installing Django.
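If you go this route, the install can be folded into the existing venv step of the build stage. A sketch, assuming the same antenv layout as the pipeline above (the version pin mirrors the advice above and is otherwise arbitrary):

- script: |
    python -m venv antenv
    source antenv/bin/activate
    python -m pip install --upgrade pip
    # explicit install, mirroring the advice above; requirements.txt should normally cover this
    pip install Django==3.1.7
    pip install -r requirements.txt
  workingDirectory: $(projectRoot)
  displayName: "Install requirements"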
Related
I am trying to create a CI/CD pipeline to build a war file and deploy it to a Tomcat container from GitLab.
I am using a Maven image to build the project. Once the war file is created, I would like to copy it to some folder so that from there it can be copied to the Tomcat server's webapps directory, in a container.
My current approach and goal is to use a Dockerfile in my project: from a Tomcat image, run Tomcat using the project war. I tried using ADD in the Dockerfile, but the directory path where the war resides, $CI_PROJECT_DIR, is not where the Dockerfile is looking.
The following is the ".gitlab-ci.yml" file.
stages:
  # Build project
  - build
  - package
  # Build and deploy mailService
  #- deploy

variables:
  # Variables that can be used throughout the pipeline are defined here.
  # Maven variables
  MAVEN_CLI_OPTS: '-s /appdir/.m2/settings.xml'
  MAVEN_PATH: '/appdir/opt/apache-maven-3.8.4/bin'
  IMAGE_PATH: 'gitlab-registry.gs.mil/gteam-development/docker'

project-build:
  image: ${IMAGE_PATH}/maven
  services:
    - tomcat:latest
  stage: build
  script:
    - ${MAVEN_PATH}/mvn ${MAVEN_CLI_OPTS} clean package
    - ls -als
    - ls -als target

build docker image:
  stage: package
  image: docker
  services:
    - docker:dind
  script:
    - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
    - docker build -t $CI_REGISTRY_IMAGE .
    - docker push $CI_REGISTRY_IMAGE
  tags:
    - dind
The following is the "Dockerfile" I am using to run Tomcat using the image. I need to copy my war file to the Tomcat webapps folder and build another image.
FROM tomcat:latest
LABEL maintainer="Jacquelyne Wilson"
# ADD $CI_PROJECT/target/geoint-rfi-data-api.war /usr/local/tomcat/webapps/
EXPOSE 8080
CMD ["catalina.sh", "run"]
NOTE: These images are in our GitLab Container Registry. Hope this is enough information.
This is my first experience creating a GitLab CI pipeline. My apologies if my terms and approach are not ideal.
You can provide the WAR file built in your project-build job as an artifact, so it is available in the following job. By the way, you should probably not use spaces in the job name.
This could look like the following:
project-build:
  image: ${IMAGE_PATH}/maven
  services:
    - tomcat:latest
  stage: build
  artifacts:
    paths:
      - /path/to/your/war
  script:
    - ${MAVEN_PATH}/mvn ${MAVEN_CLI_OPTS} clean package
    - ls -als
    - ls -als target
And in the docker build job, you can then copy the WAR file from the artifact to the docker build context path so it can be ADDed to your image.
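A sketch of what that could look like, reusing the job from the question; it assumes the artifacts path in project-build is set to target/geoint-rfi-data-api.war (the name taken from the commented-out ADD line), so the file lands in the build context where a plain ADD can find it:

build docker image:
  stage: package
  image: docker
  services:
    - docker:dind
  # pull in the WAR artifact produced by the project-build job
  dependencies:
    - project-build
  script:
    - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
    # the artifact is restored relative to the project dir, so the Dockerfile can
    # use: ADD target/geoint-rfi-data-api.war /usr/local/tomcat/webapps/
    - docker build -t $CI_REGISTRY_IMAGE .
    - docker push $CI_REGISTRY_IMAGE
  tags:
    - dind

Note that a Dockerfile does not expand GitLab CI variables such as $CI_PROJECT_DIR at build time; referencing the WAR by a path relative to the build context avoids the lookup problem described in the question.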
Hope this helps :)
Best regards
Andreas
I currently have an Azure DevOps pipeline to build and deploy a Next.js application via the Serverless Framework.
Upon reaching the AWSPowerShellModuleScript@1 task, I get these errors:
[warning]MSG:UnableToDownload «https:...» «»
[warning]Unable to download the list of available providers. Check your internet connection.
[warning]Unable to download from URI 'https:...' to ''.
[error]No match was found for the specified search criteria for the provider 'NuGet'. The package provider requires 'PackageManagement' and 'Provider' tags. Please check if the specified package has the tags.
[error]No match was found for the specified search criteria and module name 'AWSPowerShell'. Try Get-PSRepository to see all available registered module repositories.
[error]The specified module 'AWSPowerShell' was not loaded because no valid module file was found in any module directory.
I do have the AWS Toolkit installed, and it's visible when I go to Manage Extensions within Azure DevOps.
My pipeline:
trigger: none

stages:
- stage: develop_build_deploy_stage
  pool:
    name: Default
    demands:
    - msbuild
    - visualstudio
  jobs:
  - job: develop_build_deploy_job
    steps:
    - checkout: self
      clean: true

    - task: NodeTool@0
      displayName: Install Node
      inputs:
        versionSpec: '12.x'

    - script: |
        npm install
        npx next build
      displayName: Install Dependencies and Build

    - task: CopyFiles@2
      inputs:
        Contents: 'build/**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'

    - task: PublishBuildArtifacts@1
      displayName: Publish Artifact
      inputs:
        pathtoPublish: $(Build.ArtifactStagingDirectory)
        artifactName: dev_artifacts

    - task: AWSPowerShellModuleScript@1
      displayName: Deploy to Lambda@Edge
      inputs:
        awsCredentials: '###'
        regionName: '###'
        scriptType: 'inline'
        inlineScript: 'npx serverless --package dev_artifacts'
I know I can use the ubuntu vmImage and then make use of the awsShellScript but the build agent I have available to me doesn't support bash.
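A common cause of this error pattern is that Windows PowerShell on the agent defaults to an older TLS version and therefore cannot reach the PowerShell Gallery to bootstrap the NuGet provider and the AWSPowerShell module. A sketch of a step that could run before AWSPowerShellModuleScript@1 to pre-install the prerequisites; whether this matches the agent's actual network situation is an assumption:

- task: PowerShell@2
  displayName: 'Pre-install NuGet provider and AWSPowerShell (hypothetical workaround)'
  inputs:
    targetType: inline
    script: |
      # the PowerShell Gallery requires TLS 1.2
      [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
      Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force -Scope CurrentUser
      Install-Module -Name AWSPowerShell -Force -Scope CurrentUser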
I am building a Django-based application on App Engine. I have created a Postgres Cloud SQL instance. I created a cloudbuild.yaml file with a Cloud Build trigger.
django = v2.2
psycopg2 = v2.8.4
GAE runtime: python37
The cloudbuild.yaml:
steps:
- name: 'python:3.7'
  entrypoint: python3
  args: ['-m', 'pip', 'install', '-t', '.', '-r', 'requirements.txt']
- name: 'python:3.7'
  entrypoint: python3
  args: ['./manage.py', 'migrate', '--noinput']
- name: 'python:3.7'
  entrypoint: python3
  args: ['./manage.py', 'collectstatic', '--noinput']
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy"]
timeout: "3000s"
The deployment goes alright, and the app can connect to the database. But when I try to load a page, I get the following error:
"...import psycopg2 as Database
File "/srv/psycopg2/__init__.py", line 50, in
from psycopg2._psycopg import (  # noqa
ImportError: libpython3.7m.so.1.0: cannot open shared object file: No such file or directory"
Another interesting thing: if I deploy my app with 'gcloud app deploy' (not through Cloud Build), everything is alright; I am not getting the error above, and my app can communicate with the database.
I am pretty new with gcloud, so maybe I missed something basic here.
But my questions are:
- What is missing from my cloudbuild.yaml to make it work?
- Do I pip install my dependencies to the correct place?
- From the perspective of this error, what is the difference between the Cloud Build based deployment and the manual one?
From what I see you're using Cloud Build to run gcloud app deploy.
This command commits your code and configuration files to App Engine. As explained here, App Engine runs in a Google-managed environment that automatically handles the installation of the dependencies specified in the requirements.txt file and executes the entrypoint you defined in your app.yaml. This has the benefit of not having to manually trigger the installation of dependencies. The first two steps of your cloudbuild.yaml do not affect App Engine's runtime, since its configuration is managed by the aforementioned files once they're deployed.
The purpose of Cloud Build is to import source code from a variety of repositories and build binaries or images according to your specifications. It can be used to build Docker images and push them to a repository, download a file to be included in a Docker build, or package a Go binary and upload it to Cloud Storage. Furthermore, the gcloud builder is intended to run gcloud commands through a build pipeline, for example to create account permissions or configure firewall rules when these are required steps for another operation to succeed.
Since you're not automating a build pipeline but trying to deploy an App Engine application, Cloud Build is not the product you should be using. The way to go when deploying to App Engine is to simply run the gcloud app deploy command and let Google's environment take care of the rest for you.
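Under that reading, a plausible explanation of the ImportError itself: the first build step (pip install -t .) vendored a psycopg2 compiled inside the python:3.7 builder image into the app directory, and that copy, linked against the builder's libpython3.7m, shadowed the one App Engine would have installed from requirements.txt. A sketch of the reduced cloudbuild.yaml this answer implies (running migrate would then need to be handled separately, since it needs Cloud SQL connectivity from the build):

steps:
# deploy only; App Engine installs requirements.txt and runs the app.yaml entrypoint itself
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy"]
timeout: "3000s"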
Isn't this Quickstart describing exactly what the OP was trying to do?
https://cloud.google.com/source-repositories/docs/quickstart-triggering-builds-with-source-repositories
I myself was hoping to automate deployment of a Django webapp to an AppEngine "standard" instance.
I have created an automated selenium test script which works perfectly fine.
My task now is to set up Gitlab CI and try to automatically run this selenium script when I make a push to git.
Is it possible to make the Selenium script execute automatically and inform the user whether the script runs successfully or fails?
Thank you
How to automatically run automation tests on GitLab CI with Selenium and SpecFlow in a .NET project?
If this is something you are looking for, then here it is.
The core part is to set up the gitlab-ci.yml file. Here is how the sample gitlab-ci.yml should look:
image: please give your own docker which can download .net stuff

variables:
  DOCKER_DRIVER: overlay2
  SOURCE_CODE_DIRECTORY: 'src'
  BINARIES_DIRECTORY: 'bin'
  OBJECTS_DIRECTORY: 'obj'
  NUGET_PACKAGES_DIRECTORY: '.nuget'

stages:
  - Build
  - Test

before_script:
  - 'dotnet restore ${SOURCE_CODE_DIRECTORY}/TestProject.sln --packages ${NUGET_PACKAGES_DIRECTORY}'

Build:
  stage: Build
  script:
    - 'dotnet build $SOURCE_CODE_DIRECTORY/TestProject.sln --no-restore'
  except:
    - tags
  artifacts:
    paths:
      - '${SOURCE_CODE_DIRECTORY}/*/${BINARIES_DIRECTORY}'
      - '${SOURCE_CODE_DIRECTORY}/*/${OBJECTS_DIRECTORY}'
      - '${NUGET_PACKAGES_DIRECTORY}'
    expire_in: 2 hr

Test:
  stage: Test
  services:
    - selenium/standalone-chrome:latest
  script:
    - 'export MSBUILDSINGLELOADCONTEXT=1'
    - 'export selenium_remote_url=http://selenium__standalone-chrome:4444/wd/hub/'
    - 'export PATH=$PATH:${SOURCE_CODE_DIRECTORY}/chromedriver.exe'
    - 'dotnet test $SOURCE_CODE_DIRECTORY/ExpressTestProject.sln --no-restore'
  artifacts:
    paths:
      - '${SOURCE_CODE_DIRECTORY}/chromedriver.exe'
      - '${SOURCE_CODE_DIRECTORY}/*/${BINARIES_DIRECTORY}'
      - '${SOURCE_CODE_DIRECTORY}/*/${OBJECTS_DIRECTORY}'
      - '${NUGET_PACKAGES_DIRECTORY}'
That's it. When you set up your project with this .gitlab-ci.yml, 90% of your job is done.
The tests will run automatically in GitLab whenever you commit something to your source tree or TFS.
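GitLab marks the pipeline failed whenever dotnet test exits non-zero and notifies the committer according to the project's notification settings, which covers the "inform the user" part of the question. If the job should not run on every single push, GitLab's rules keyword can narrow it down; a sketch of an optional addition to the Test job (the branch policy chosen here is an assumption):

Test:
  stage: Test
  rules:
    # run the Selenium tests for merge requests and for pushes to the default branch only
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'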
Thanks
I don't think my GAE Python app has a flexible environment, because I created it many years ago. Now I want to try and create a module that has a runtime other than Python, and keep the Python app running alongside the new runtime, custom or just another one. Maybe mix PHP and Python or similar. I don't need it, but I want to learn and explore the possibilities. I'm also interested in learning Erlang and deploying Erlang code with App Engine. I see there are questions about it already:
erlang on google app engine?
And issue 125 in the tracker.
But how should we actually do it? Should we make our own runtime, provided that is allowed?
My app.yaml looks like this:
application: montaoproject
version: newsearch
runtime: python27
api_version: 1
threadsafe: true
module: default
instance_class: F1
automatic_scaling:
  min_idle_instances: 5
  max_idle_instances: automatic
  min_pending_latency: automatic
  max_pending_latency: 30ms
  max_concurrent_requests: 50
default_expiration: "14d 5h"

env_variables:
  GAE_USE_MONTAO: 'anyvalue'
  KOOL_VERSION: '17a'

includes:
- br.yaml # Brazil
- in.yaml # India
- us.yaml # USA
- pk.yaml
- search.yaml # search pages
- admin.yaml # admin pages
- providers.yaml # auth providers
- statics.yaml # static content

handlers:
- url: /(business|ai|newindia|insert-ad.html)
  script: montao.app
- url: /blobview.*
  script: kool_update.app
  login: admin
- url: /market.*
  script: main.app
- url: /
  script: montao.app
- url: /(index.html|sign-up.html|login.html)
  script: montao.app
- url: /(login.*|login|googlogin|googlogout|create/)
  script: login.app
- url: /(customer_service.htm|contactfileupload|support.html|faq.html)
  script: customer_service.app
- url: /stats.*
  script: google.appengine.ext.appstats.ui.app
# All other URLs use main.app
- url: /.*
  script: main.app

inbound_services:
- mail

builtins:
- remote_api: on
- deferred: on
#- appstats: on

error_handlers:
- file: default_error.html

libraries:
- name: webapp2
  version: latest
- name: jinja2
  version: latest
- name: setuptools
  version: latest
- name: markupsafe
  version: latest
- name: django
  version: latest
- name: PIL
  version: latest
- name: webob
  version: latest
- name: lxml
  version: latest
- name: ssl
  version: latest
Yes, your app.yaml file is a standard env one (it doesn't have vm:true or env:flex in it).
Yes, it's possible to mix and match services/modules in different languages and with different environments inside the same app. You can even switch the language and environment of the same module in a different version of that module. That's because modules offer complete code isolation, see Comparison of service isolation and project isolation. Related post: Upload a Java and node.js project to Google AppEngine at once
I always try to structure a multi-service/module app with each service in its own subdir, as described in Can a default service/module in a Google App Engine app be a sibling of a non-default one in terms of folder structure?
So first I'd create a default subdirectory of your app dir and move all your existing default module specific files into it, with the exception of the app-level config, which I'd keep at the top level and symlink inside the default dir as described in that post. Then I'd verify that the default module still works as expected.
Then I'd create a new subdirectory for every new module I need to add and add the code for it as needed.
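For illustration, a hypothetical sibling module in another language (matching the PHP idea from the question) could be as small as this; the module name, version, and script are made up:

# phpservice/app.yaml - hypothetical second module
application: montaoproject
module: phpservice
version: one
runtime: php55
api_version: 1
handlers:
- url: /.*
  script: index.php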
Side note: sharing code via symlinks as described in the post mentioned above works for standard env modules, but it probably doesn't work with flexible ones, see Sharing code between modules in a GAE project
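If requests then need to be routed between the modules, a dispatch file is the usual companion. A minimal hypothetical example, using the module-era syntax matching the app.yaml above (the URL patterns are made up):

# dispatch.yaml - hypothetical routing between the default module and the new one
dispatch:
- url: "*/php/*"
  module: phpservice
- url: "*/*"
  module: default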