Gcloud with Cloud Build and Django Postgres causes psycopg2 ImportError - django

I am building a Django-based application on App Engine. I have created a Postgres Cloud SQL instance. I created a cloudbuild.yaml file with a Cloud Build trigger.
django = v2.2
psycopg2 = v2.8.4
GAE runtime: python37
The cloudbuild.yaml:
steps:
- name: 'python:3.7'
  entrypoint: python3
  args: ['-m', 'pip', 'install', '-t', '.', '-r', 'requirements.txt']
- name: 'python:3.7'
  entrypoint: python3
  args: ['./manage.py', 'migrate', '--noinput']
- name: 'python:3.7'
  entrypoint: python3
  args: ['./manage.py', 'collectstatic', '--noinput']
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy"]
timeout: "3000s"
The deployment is going alright and the app can connect to the database. But when I try to load a page I get the following error:
"...import psycopg2 as Database
  File "/srv/psycopg2/__init__.py", line 50, in <module>
    from psycopg2._psycopg import (  # noqa
ImportError: libpython3.7m.so.1.0: cannot open shared object file: No such file or directory"
Another interesting thing: if I deploy my app with 'gcloud app deploy' (not through Cloud Build), everything is fine. I don't get the error above and my app can communicate with the database.
I am pretty new to gcloud, so maybe I missed something basic here.
But my questions are:
- What is missing from my cloudbuild.yaml to make it work?
- Do I pip install my dependencies to the correct place?
- From the perspective of this error, what is the difference between the Cloud Build based deployment and the manual one?

From what I see, you're using Cloud Build to run gcloud app deploy.
This command commits your code and configuration files to App Engine. As explained here, App Engine runs in a Google-managed environment that automatically handles the installation of the dependencies specified in the requirements.txt file and executes the entrypoint you defined in your app.yaml. This has the benefit of not having to manually trigger the installation of dependencies. The first two steps of your cloudbuild.yaml do not affect the App Engine runtime, since its configuration is managed by the aforementioned files once they're deployed.
The purpose of Cloud Build is to import source code from a variety of repositories and build binaries or images according to your specifications. It can be used to build Docker images and push them to a repository, download a file to be included in a Docker build, or package a Go binary and upload it to Cloud Storage. Furthermore, the gcloud builder is aimed at running gcloud commands through a build pipeline, for example to create account permissions or configure firewall rules when these are required for another operation to succeed.
Since you're not automating a build pipeline but trying to deploy an App Engine application, Cloud Build is not the product you should be using. The way to go when deploying to App Engine is to simply run the gcloud app deploy command and let Google's environment take care of the rest for you.
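Put differently: since App Engine installs requirements.txt on its own, the pip install step in the cloudbuild.yaml above does not change what ends up in the runtime. If Cloud Build is kept purely as a deployment trigger, the deploy step alone is enough. A minimal sketch based on the cloudbuild.yaml above (it assumes the Cloud Build service account has been granted permission to deploy to App Engine):
steps:
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy"]
timeout: "3000s"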

Isn't this Quickstart describing exactly what the OP was trying to do?
https://cloud.google.com/source-repositories/docs/quickstart-triggering-builds-with-source-repositories
I myself was hoping to automate deployment of a Django webapp to an AppEngine "standard" instance.

Related

How to retrieve the docker image of a deployment on heroku via circleci

I have a Django application running locally and I've set up the project on CircleCI with Python and Postgres images.
If I understand correctly what is happening, CircleCI uses these images to build containers to test my application together with its database.
Then I'm using the job heroku/deploy-via-git to deploy it to Heroku once the tests have passed.
Now, I think Heroku also uses some images to run the application.
I would like to get the image used by Heroku so I can run my site locally on another machine.
So: pull the image, push it to Docker Hub, and finally pull it onto the other computer so that all I need is a docker compose up.
Here is my CircleCI configuration file:
version: 2.1
docker-auth: &docker-auth
  auth:
    username: $DOCKERHUB_USERNAME
    password: $DOCKERHUB_PASSWORD
orbs:
  python: circleci/python@1.5.0
  heroku: circleci/heroku@0.0.10
jobs:
  build-and-test:
    docker:
      - image: cimg/python:3.10.2
      - image: cimg/postgres:14.1
        environment:
          POSTGRES_USER: theophile
    steps:
      - checkout
      - run:
          command: pip install -r requirements.txt
          name: Install Deps
      - run:
          name: Run MIGRATE
          command: python manage.py migrate
      - run:
          name: Run loaddata from Json
          command: python manage.py loaddata datadump.json
      - run:
          name: Run tests
          command: pytest
workflows:
  heroku_deploy:
    jobs:
      - build-and-test
      - heroku/deploy-via-git:
          requires:
            - build-and-test
I don't know if this is possible; if not, what would be the best way to proceed? (I assume there are a lot of possibilities.)
I was considering building an image from my local directory with docker compose up, then using this image directly on CircleCI; then I would be able to use this image on another computer. But building images inside images with CircleCI seems really messy and I'm not sure how I should proceed.
I've tried to pull images from Heroku, but it seems I can only pull the code or get/modify the database; I can't get the image builds themselves.
I hope this question is relevant and clear, as the CircleCI and Heroku documentation is not clear to me, and it's my first post on Stack Overflow!
Thanks in advance
Heroku's platform is proprietary, so we can't be sure how it works internally.
We know that their stacks are based on Ubuntu LTS releases, and we know that they use open-source buildpacks to compile application slugs from source code, but details about the underlying infrastructure are murky. They certainly don't provide base images like heroku/python:3.11.0 for you to download.
If you want to use the same image locally, on CircleCI, and Heroku, a better option would be to start deploying with Heroku's Container Registry instead of Git. This allows you to build an image locally, push it into the container registry, and release it as the next version of your application.
I suggest you read the entire documentation page linked above, but the short version is:
Log into the container registry using the Heroku CLI:
heroku container:login
Assuming you already have a Dockerfile for your application, build and push an image:
heroku container:push web
In this case we are building from Dockerfile and pushing the resulting image to be used as a web process.
Release your application:
heroku container:release web
That's a basic Docker deployment from your local machine, and even if that's not your final plan I suggest you start by getting that working.
From there, you have options. One option would be to move this flow to CircleCI—continue to build images there, but have CircleCI push the resulting container to Heroku's Container Registry.
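A sketch of what that first option could look like as an extra CircleCI job; the job name and the HEROKU_API_KEY / HEROKU_APP_NAME environment variables (which you would set in the CircleCI project settings) are my assumptions, and it assumes a Dockerfile at the repository root:
build-and-push-image:
  docker:
    - image: cimg/python:3.10.2
  steps:
    - checkout
    # A remote Docker engine is needed to run docker builds from a Docker executor
    - setup_remote_docker
    - run:
        name: Install Heroku CLI
        command: curl https://cli-assets.heroku.com/install.sh | sh
    - run:
        name: Build, push and release the image
        command: |
          # The Heroku CLI reads HEROKU_API_KEY for non-interactive authentication
          heroku container:login
          heroku container:push web --app "$HEROKU_APP_NAME"
          heroku container:release web --app "$HEROKU_APP_NAME"
You would then wire this job into the heroku_deploy workflow in place of (or alongside) heroku/deploy-via-git.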
Another option might be as you suggest in your question: to build images locally and use them with both CircleCI and Heroku.

How to access API secrets from Next.js in AWS Amplify

I am very confused regarding how to set and access API secrets in a Next.js app within an AWS Amplify project.
The scenario is: I have a private API key that fetches data from an API. Obviously, this is a secret key and I don't want to share it in my GitHub repo or expose it in the browser. I create a .env.local file and place my secret there:
API_KEY="qwerty123"
I am able to access this key in my code by using process.env.API_KEY
Here is an example fetch request with that API Key: https://developer.nps.gov/api/v1/parks?${parkCode}&api_key=${process.env.API_KEY}
This works perfectly when I run yarn dev and yarn build -> yarn start
This is the message I get when I run yarn start
next start
ready - started server on 0.0.0.0:3000, url: http://localhost:3000
info - Loaded env from /Users/tmo/Desktop/Code/projects/visit-national-parks/.env.local
The env is loaded and able to be called on my local machine.
However,
When I push this code to GitHub and start the build process in AWS Amplify, the app builds, but the API fetch calls do not work. I get a 500 Server Error.
This is what I have done to try and solve this issue:
1. Added my API_KEY in the Environment variables tab in Amplify
2. Updated my Build settings:
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        - API_KEY=${API_KEY} # Added my API_KEY from the environment variables tab in Amplify
        - yarn run build
I am not sure what else to do. After building the app again, I still get a 500 server error.
Here is the live Amplify app with the server error.
We're working on something similar right now. Our dev designed it so it reads an .env file.
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        - echo API_KEY=$API_KEY > .env
        - echo OTHERKEY=$OTHER_KEY >> .env
        - yarn run build
We were able to pick it up and pass it to AWS' DynamoDB Client SDK.
Not sure if it's your call or not, but yarn can be fickle in our Amplify projects sometimes, so we usually resort to using npm if it starts acting up.
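For reference, a fuller version of that build spec might look like the following; the version key and the artifacts/cache sections are assumptions based on a typical Next.js build on Amplify, not something taken from the project above:
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        # Write the secrets from Amplify's environment variables into .env at build time
        - echo API_KEY=$API_KEY > .env
        - echo OTHERKEY=$OTHER_KEY >> .env
        - yarn run build
  artifacts:
    baseDirectory: .next
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*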

Deploying Django Web App using Devops CI/CD onto Azure App Service

I'm trying to deploy a simple Django web app to Azure App Service using a CI/CD pipeline (the most basic one offered by Microsoft for app deployment, with no changes from me). However, I'm getting the following error:
2021-03-08T16:55:51.172914117Z   File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
2021-03-08T16:55:51.172918317Z   File "/home/site/wwwroot/deytabank_auth/wsgi.py", line 13, in <module>
2021-03-08T16:55:51.172923117Z     from django.core.wsgi import get_wsgi_application
2021-03-08T16:55:51.172927017Z ModuleNotFoundError: No module named 'django'
I checked other threads and tried doing all the things mentioned but it did not help, or I am missing something:
In wsgi.py I added:
import os
import sys
sys.path.append(os.path.dirname(os.path.abspath(__file__)) + '/..' )
sys.path.append(os.path.dirname(os.path.abspath(__file__)) + '/../licenses_api')
sys.path.append(os.path.dirname(os.path.abspath(__file__)) + '/../deytabank_auth')
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'deytabank_auth.settings')
application = get_wsgi_application()
But I'm still getting the same error, where Django is not recognized. I can see that requirements.txt is being installed successfully and it has all the necessary libraries (including Django).
My CI/CD YAML file looks like this:
# Python to Linux Web App on Azure
# Build your Python project and deploy it to Azure as a Linux Web App.
# Change python version to one that's appropriate for your application.
# https://learn.microsoft.com/azure/devops/pipelines/languages/python
trigger:
- develop
variables:
  # Azure Resource Manager connection created during pipeline creation
  azureServiceConnectionId: '***'
  # Web app name
  webAppName: 'DeytabankAuth'
  # Agent VM image name
  vmImageName: 'ubuntu-latest'
  # Environment name
  environmentName: 'DeytabankAuth'
  # Project root folder. Point to the folder containing manage.py file.
  projectRoot: $(System.DefaultWorkingDirectory)
  # Python version: 3.7
  pythonVersion: '3.7'
stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: BuildJob
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: UsePythonVersion@0
      inputs:
        versionSpec: '$(pythonVersion)'
      displayName: 'Use Python $(pythonVersion)'
    - script: |
        python -m venv antenv
        source antenv/bin/activate
        python -m pip install --upgrade pip
        pip install setup
        pip install -r requirements.txt
      workingDirectory: $(projectRoot)
      displayName: "Install requirements"
    - task: ArchiveFiles@2
      displayName: 'Archive files'
      inputs:
        rootFolderOrFile: '$(projectRoot)'
        includeRootFolder: false
        archiveType: zip
        archiveFile: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
        replaceExistingArchive: true
    - upload: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
      displayName: 'Upload package'
      artifact: drop
- stage: Deploy
  displayName: 'Deploy Web App'
  dependsOn: Build
  condition: succeeded()
  jobs:
  - deployment: DeploymentJob
    pool:
      vmImage: $(vmImageName)
    environment: $(environmentName)
    strategy:
      runOnce:
        deploy:
          steps:
          - task: UsePythonVersion@0
            inputs:
              versionSpec: '$(pythonVersion)'
            displayName: 'Use Python version'
          - task: AzureWebApp@1
            displayName: 'Deploy Azure Web App : DeytabankAuth'
            inputs:
              azureSubscription: $(azureServiceConnectionId)
              appName: $(webAppName)
              package: $(Pipeline.Workspace)/drop/$(Build.BuildId).zip
Maybe I need to configure something in the Azure App Service? But I am not sure exactly what.
I have met this issue before, and the problem might be your deployment method. I'm not sure which one you use, but the classic Deployment Center is being deprecated; try using the new Deployment Center.
I checked your workflow against the one that worked on my side, and there is nothing different, so I will post the steps that worked for me for you to refer to:
1. Check your project locally to make sure it runs successfully.
2. Create a new web app (this is to make sure your existing web app is not damaged) and navigate to the Deployment Center page.
3. Go to your GitHub repository and navigate to the GitHub Actions page to see the log.
4. Test your web app and check the file structure on the Kudu site: https://{yourappname}.scm.azurewebsites.net/wwwroot/
You can test by clicking the browse button like I did.
If you want to run commands, go to this site: https://{yourappname}.scm.azurewebsites.net/DebugConsole
By the way, I'm posting this link in case you need to deploy using DevOps.
The possible reason for this issue is that you don't have Django installed.
On the Microsoft-hosted agent ubuntu-latest, Django is not pre-installed; you need to install it manually:
pip install Django==3.1.7
See this document for detailed information about installing Django.
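If you follow that suggestion, the install can go into the existing "Install requirements" script step of the Build stage. A sketch based on the pipeline above (the pinned version is only an example):
    - script: |
        python -m venv antenv
        source antenv/bin/activate
        python -m pip install --upgrade pip
        pip install Django==3.1.7         # install Django explicitly, as suggested above
        pip install -r requirements.txt
      workingDirectory: $(projectRoot)
      displayName: "Install requirements"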

How to serve a Java application as Docker container and .war file?

Currently our company is creating individual software for B2B customers.
Some applications can be used for multiple customers.
Usually we can host the application in the cloud and deploy everything with Docker.
Running a GitLab pipeline and deploying etc. is fine for that.
Now we got some customers who rely on an external installation.
Since some of them still use Windows Server (2008, though), I cannot install a proper Docker environment there, so we need to install an Apache Tomcat and run the application inside Tomcat.
Question: How do we deal with that? I would need a pipeline to create a Docker image and a .war file.
Simply create two completely independent pipelines?
Handle everything in a single pipeline?
Our current gitlab-ci.yml file for the .war:
image: maven:latest
variables:
  MAVEN_CLI_OPTS: "-s settings.xml -q -B"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"
cache:
  paths:
    - .m2/repository/
    - target/
stages:
  - build
  - test
  - deploy
build:
  stage: build
  script:
    - mvn $MAVEN_CLI_OPTS compile
test:
  stage: test
  script:
    - mvn $MAVEN_CLI_OPTS test
install:
  stage: deploy
  script:
    - mvn $MAVEN_CLI_OPTS install
  artifacts:
    name: "datahub-$CI_COMMIT_REF_SLUG"
    paths:
      - target/*.war
Using two separate delivery pipelines is preferable: you are dealing with two very different installation processes, and you need to be sure which one is running for a given client.
Having two separate GitLab pipelines lets you choose the right one for each client.
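If both delivery paths do end up in the same repository, one way to keep them cleanly separated is to guard the Docker build behind a pipeline variable, so each client's pipeline only runs the path it needs. A sketch, not taken from the question: the docker-image job name and the DELIVERY_TARGET variable are made up, and it assumes a Dockerfile in the repository plus GitLab's built-in container registry variables:
docker-image:
  stage: deploy
  image: docker:latest
  services:
    - docker:dind
  script:
    # Authenticate against the project's container registry using GitLab's predefined CI variables
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
  rules:
    - if: '$DELIVERY_TARGET == "docker"'
The existing Maven jobs keep producing the .war artifact, and the Docker path only runs when DELIVERY_TARGET is set to docker.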

Gitlab - Google compute engine Continuous delivery

What I am trying to do is enable continuous delivery from GitLab to my Compute Engine instance on Google Cloud. I have Ubuntu 16.04 LTS running there. I installed all the components needed to run my project, such as Swift, Vapor, and nginx.
I have managed to install GitLab Runner as well and created a runner which is accessible from my GitLab repo. Every time I push to master, the runner triggers. What happens is a failure due to:
could not create leading directories of '/home/gitlab-runner/builds/2bbbbbd/0/Server/Packages/vapor.git': Permission denied
If I change the permissions with chmod -R 777, it hangs on "running" for the build stage visible in the GitLab pipeline.
I did something like:
sudo chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/builds
sudo chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/cache
but this hasn't helped; the error is the same Permission denied.
Below is my .gitlab-ci.yml:
before_script:
  - swift --version
stages:
  - build
  - deploy
job_build:
  stage: build
  before_script:
    - vapor clean
  script:
    - vapor build --release
  only:
    - master
job_run_app:
  stage: deploy
  script:
    - echo "Deploy a API"
    - vapor run --name=App --env=production
  environment:
    name: production
job_run_frontend:
  stage: deploy
  script:
    - echo "Deploy a Frontend"
    - vapor run --name=Frontend --env=production
  environment:
    name: production
But that hasn't passed to the next stage, i.e. deploy. I waited more than 14 hours for that, but without result.
And... I have a few more questions:
GitLab Runner creates builds under /home/gitlab-runner/builds/; in this location every new job has its own folder, e.g. /home/gitlab-runner/builds/2bbbbbd/, in which my project sits and the commands are executed. So what happens when the first one is running and I deploy a new version? Are the ports blocked by the first instance, and so on?
If I want to enable supervisor, how do I do that when the folder is different every time I deploy?
Can anyone explain, show me, or point me to a tutorial on how to do continuous deployment without Docker?
How do I start a service using GitLab Runner?
Thanks to a long, deep search I finally found an answer! The full article can be found above.
Briefly: the GitLab CI documentation recommends using dpl for deployment. The GitLab Runner runs the tests and then the process should end; the runner is designed to kill all created processes after finishing each build, and it is unable to perform operations outside its own working directory.
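A workaround consistent with that limitation is to let something outside the runner own the long-running process and have the deploy job only restart it. A sketch, assuming a hypothetical systemd unit vapor-app.service that wraps vapor run --name=App --env=production, and that the gitlab-runner user has passwordless sudo for systemctl:
job_run_app:
  stage: deploy
  script:
    - echo "Deploy a API"
    # Hand the long-running process to systemd; the runner kills any process it started itself.
    - sudo systemctl restart vapor-app.service
  environment:
    name: production
  only:
    - master
With this setup, a new deployment restarts the same service rather than starting a second instance that would compete for the ports.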