Working directory when initializing the pre-commit envs - pre-commit.com

I'm using pre-commit to manage my pre-commit and pre-push hooks.
I have two hooks (mypy and pylint), and I need to install my requirements into their virtualenvs.
My directory structure:
- project
  - .pre-commit-config.yaml
  - path
    - to
      - my
        - requirements.txt
- repo: https://github.com/pre-commit/mirrors-mypy
  rev: v0.812
  hooks:
    - id: mypy
      stages: [ "push" ]
      args: [ "--config-file", "mypy.ini" ]
      additional_dependencies: [ "-rpath/to/my/requirements.txt" ]
- repo: https://github.com/PyCQA/pylint
  rev: v2.8.3
  hooks:
    - id: pylint
      stages: [ "push" ]
      args: [ "--rcfile=.pylintrc" ]
      additional_dependencies: [ "-rpath/to/my/requirements.txt" ]
When I try this (note the additional_dependencies entries), pre-commit can't find the requirements file.
How can I fix this using a relative path?
Thanks :)
Update:
I've since found another solution to my question: use the system Python interpreter by setting the language attribute to system.
- repo: https://github.com/pre-commit/mirrors-mypy
  rev: v0.812
  hooks:
    - id: mypy
      language: system
      stages: [ "push" ]
      args: [ "--config-file", "mypy.ini" ]
- repo: https://github.com/PyCQA/pylint
  rev: v2.8.3
  hooks:
    - id: pylint
      language: system
      stages: [ "push" ]
      args: [ "--rcfile=.pylintrc" ]

pre-commit never installs from the repository under test, only from the configuration (otherwise caching is intractable).
The working directory during installation is an implementation detail and is not customizable; it is the root of the hook repository itself inside the pre-commit cache.
For tools like pylint, which need dynamic analysis and direct access to your codebase and its dependencies, an unmanaged repo: local hook is suggested instead (or enumerate your dependencies in additional_dependencies).
disclaimer: I created pre-commit
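For reference, a minimal sketch of such a repo: local hook, assuming pylint is already installed in the environment you run git from (the id, stage, and args simply mirror the question's config):
- repo: local
  hooks:
    - id: pylint
      name: pylint
      # runs the pylint from your own environment against matched files
      entry: pylint
      language: system
      types: [ python ]
      stages: [ "push" ]
      args: [ "--rcfile=.pylintrc" ]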

Related

Why is my environment variable not recognized after Cloud Run deploy?

My node application needs an environment variable to make a POST request. Everything works fine on my local machine by accessing the .env file content directly. I also made sure to set ${_TM_API_KEY} in GCP so I can trigger it, as suggested by the docs; however, the variable doesn't seem to be recognized after the application is deployed. What am I doing wrong? Any further suggestions would be deeply appreciated. My cloudbuild.yaml looks like this:
steps:
  - id: build
    name: gcr.io/cloud-builders/docker
    args:
      [
        "build",
        "-t",
        "gcr.io/${PROJECT_ID}/suweb:${SHORT_SHA}",
        "--build-arg=ENV=${_TM_API_KEY}",
        ".",
      ]
    env:
      - "TM_API_KEY=${_TM_API_KEY}"
  - id: push
    name: "gcr.io/cloud-builders/docker"
    args: ["push", "gcr.io/${PROJECT_ID}/suweb:${SHORT_SHA}"]
  - id: deploy
    name: "gcr.io/cloud-builders/gcloud"
    args:
      - "run"
      - "deploy"
      - "suweb"
      - "--set-env-vars=TM_API_KEY=${_TM_API_KEY}"
      - "--image"
      - "gcr.io/${PROJECT_ID}/suweb:${SHORT_SHA}"
      - "--region"
      - "us-central1"
      - "--platform"
      - "managed"
      - "--allow-unauthenticated"
images:
  - "gcr.io/${PROJECT_ID}/suweb:${SHORT_SHA}"
More details: the code where the API_KEY is being referenced is in the following header:
const headers = {
  TM_API_KEY: process.env.TM_API_KEY,
  "Content-Type": "multipart/form-data",
  "Access-Control-Allow-Origin": "*",
};
And my .env file looks like this:
TM_API_KEY=_TM_API_KEY
It is a bit unclear to me whether I should reference the trigger variable here as well (_TM_API_KEY) or write the actual key value.
After that, when doing a form POST request, the server responds with a CORS policy error saying that the endpoint couldn't be reached. I tried hard-coding the API key in the headers and everything works fine, no errors whatsoever.
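One thing worth checking on the build-time path (a hypothetical sketch, not from the question): a --build-arg is only visible during the image build if the Dockerfile declares it with ARG, and it only survives into the running container if it is re-exported with ENV. The --set-env-vars flag on the deploy step sets the runtime variable independently of this.
# Hypothetical Dockerfile lines matching the build arg named ENV above
ARG ENV
# Re-export the build arg so it exists as an env var at runtime
ENV TM_API_KEY=$ENV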

Django/React - Azure App Service can't find static files

I've previously deployed my Django/React app on a dedicated server, and now I'm trying to achieve the same with an Azure Web App so I can use CI/CD more easily. I've configured my project as below, but only my Django app appears to deploy, as I get a '404 main.js and index.css not found'.
This makes me think there is an issue with my static file configuration but I'm unsure.
.yml file:
name: Build and deploy Python app to Azure Web App - test123
on:
  push:
    branches:
      - main
  workflow_dispatch:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: npm install, build, and test
        run: |
          npm install
          npm run build --if-present
        working-directory: ./frontend
      - name: Set up Python version
        uses: actions/setup-python@v1
        with:
          python-version: '3.8'
      - name: Create and start virtual environment
        run: |
          python -m venv venv
          source venv/bin/activate
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          python manage.py collectstatic --noinput
      - name: Zip artifact for deployment
        run: zip pythonrelease.zip ./* -r
      - name: Upload artifact for deployment job
        uses: actions/upload-artifact@v2
        with:
          name: python-app
          path: pythonrelease.zip
      # Optional: Add step to run tests here (PyTest, Django test suites, etc.)
  deploy:
    runs-on: ubuntu-latest
    needs: build
    environment:
      name: 'Production'
      url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}
    steps:
      - name: Download artifact from build job
        uses: actions/download-artifact@v2
        with:
          name: python-app
          path: .
      - name: unzip artifact for deployment
        run: unzip pythonrelease.zip
      - name: 'Deploy to Azure Web App'
        uses: azure/webapps-deploy@v2
        id: deploy-to-webapp
        with:
          app-name: 'test123'
          slot-name: 'Production'
          publish-profile: ${{ secrets.secret }}
settings.py
STATIC_URL = '/static/'
STATIC_ROOT = 'staticfiles'
STATICFILES_DIRS = (
    os.path.join(BASE_DIR, "static"),
)
Repo Structure:
Any advice would be greatly appreciated.
Cheers
To host static files in your web app, add the whitenoise package to requirements.txt and the configuration for it to settings.py, as mentioned here: Django Tips
requirements.txt:
whitenoise==4.1.2
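For reference, a minimal sketch of the settings.py side of that configuration (the middleware position is what whitenoise's docs call for; the storage line is optional):
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    # WhiteNoise should come directly after SecurityMiddleware
    'whitenoise.middleware.WhiteNoiseMiddleware',
    # ... the rest of your middleware ...
]

# Optional: serve compressed, cache-busted files collected by collectstatic
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'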

AWS CodePipeline BuildAction not detecting buildspec.yml secondary artifacts

I'm trying to use secondary artifacts to separate the web page files from the CDK-generated stack files, but the pipeline's BuildAction is not detecting the secondary artifacts.
I've tried following the recommendations in the AWS docs on buildspec.yml, as well as multiple sources and multiple outputs, but can't get it to work.
Here's my CDK code for the build action:
const buildStage = pipeline.addStage({ stageName: 'Build' });
const buildOutputWeb = new Artifact("webapp");
const buildOutputTemplates = new Artifact("template");
const project = new PipelineProject(this, 'Wavelength_build', {
  environment: {
    buildImage: LinuxBuildImage.STANDARD_3_0
  },
  projectName: 'WebBuild'
});
buildStage.addAction(new CodeBuildAction({
  actionName: 'Build',
  project,
  input: sourceOutput,
  outputs: [buildOutputWeb, buildOutputTemplates]
}));
Here's the section relating to the Build action in the generated stack file:
{
  "Actions": [
    {
      "ActionTypeId": {
        "Category": "Build",
        "Owner": "AWS",
        "Provider": "CodeBuild",
        "Version": "1"
      },
      "Configuration": {
        "ProjectName": {
          "Ref": "Wavelengthbuild7D63C781"
        }
      },
      "InputArtifacts": [
        {
          "Name": "SourceOutput"
        }
      ],
      "Name": "Build",
      "OutputArtifacts": [
        {
          "Name": "webapp"
        },
        {
          "Name": "template"
        }
      ],
      "RoleArn": {
        "Fn::GetAtt": [
          "WavelengthPipelineBuildCodePipelineActionRoleC08CF8E2",
          "Arn"
        ]
      },
      "RunOrder": 1
    }
  ],
  "Name": "Build"
},
And here is my buildspec.yml
version: 0.2
env:
  variables:
    S3_BUCKET: "wavelenght-web.ronin-ddd-dev-web.net"
phases:
  install:
    runtime-versions:
      nodejs: 10
  pre_build:
    commands:
      - echo Installing source NPM dependencies...
      - npm install -g @angular/cli
      - npm install typescript -g
      - npm install -D lerna
  build:
    commands:
      - echo Build started on `date`
      - npm run release
      - cd $CODEBUILD_SRC_DIR
  post_build:
    commands:
      - echo Build completed on `date`
artifacts:
  files:
    - '**/*'
  secondary-artifacts:
    artifact1:
      base-directory: $CODEBUILD_SRC_DIR
      files:
        - 'packages/website/dist/**/*'
      name: webapp
      discard-paths: yes
    artifact2:
      base-directory: $CODEBUILD_SRC_DIR
      files:
        - '*/WavelengthAppStack.template.json'
      name: template
      discard-paths: yes
I figured out the problem.
It turns out that the name attribute in the secondary artifacts doesn't change the identifier.
My buildspec.yml artifacts section now looks like this:
artifacts:
  secondary-artifacts:
    webapp:
      base-directory: packages/website/dist
      files:
        - '**/*'
      name: webapp
    template:
      base-directory: packages/infrastructure/cdk.out
      files:
        - 'WavelengthAppStack.template.json'
      name: template
Notice that now, instead of artifact1: followed by all the data for that artifact, it is webapp: followed by the data.
On the webapp and template secondary artifacts (from the docs):
Each artifact identifier in this block must match an artifact defined in the secondaryArtifacts attribute of your project.
In what you've posted in the question, I don't see any evidence of the secondary outputs being defined in your build project, which probably explains why you get errors about "no definition".

Google Cloud Build sub builds

Is it possible to have multiple cloudbuild.yaml files, one per subdirectory?
For example:
my-app:
  - service1
    - cloudbuild.yaml
  - service2
    - cloudbuild.yaml
  - cloudbuild.yaml
The answer below is almost correct. It will not work as written because it doesn't include ".", which tells gcloud to upload and build the current directory. The correct way to invoke a sub/child cloudbuild.yaml is then:
# Include cloudbuild sub step
- name: 'gcr.io/cloud-builders/gcloud'
  args:
    - 'builds'
    - 'submit'
    - '.'
    - '--config'
    - 'cloudbuild.yaml'
Yes, definitely! Are you trying to initialize the builds of service1 and service2 from my-app/cloudbuild.yaml?
Example of using a meta config to initialize other builds: https://github.com/GoogleCloudPlatform/cloudbuild-integration-testing/blob/master/cloudbuild.meta.yaml
Here is a cloudbuild.meta.yaml building off of your example:
steps:
  - id: 'build service1'
    name: 'gcr.io/cloud-builders/gcloud'
    args: ['builds', 'submit', '--config', 'service1/cloudbuild.yaml']
    waitFor: ['-'] # start in parallel
  - id: 'build service2'
    name: 'gcr.io/cloud-builders/gcloud'
    args: ['builds', 'submit', '--config', 'service2/cloudbuild.yaml']
    waitFor: ['-'] # start in parallel

Why is my Container Builder build failing with "failed to find one or more images after execution of build steps"

I don't understand what this error message means. It happens at the end of my build, when the build is complete and the image is being tagged. Here's the tail end of the log:
Step 17/18 : WORKDIR /var/www
---> 0cb8de2acd8f
Removing intermediate container 7e7838eac6fb
Step 18/18 : CMD bundle exec puma -C config/puma.rb
---> Running in 9089eb79192b
---> 890a53af5964
Removing intermediate container 9089eb79192b
Successfully built 890a53af5964
Successfully tagged us.gcr.io/foo-staging/foobar:latest
ERROR
ERROR: failed to find one or more images after execution of build steps: ["us.gcr.io/foo-staging/foobar:a2122696c92f430529197dea8213c96b3eee8ee4"]
Here's my cloudbuild.yaml:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'build', '-t', 'us.gcr.io/$PROJECT_ID/foobar', '.' ]
images:
  - 'us.gcr.io/$PROJECT_ID/foobar:$COMMIT_SHA'
  - 'us.gcr.io/$PROJECT_ID/foobar:latest'
timeout: 3600s
I thought maybe it was a transient failure, but I retried the build and it happened again.
Ah, I needed to apply the tags in the build step itself; the tags listed under images: must actually be produced by the build steps before they can be pushed:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'build', '-t', 'us.gcr.io/$PROJECT_ID/foobar:$COMMIT_SHA', '-t', 'us.gcr.io/$PROJECT_ID/foobar:latest', '.' ]
images:
  - 'us.gcr.io/$PROJECT_ID/foobar:$COMMIT_SHA'
  - 'us.gcr.io/$PROJECT_ID/foobar:latest'
timeout: 3600s