commitizen: No commit found with range: 'origin/HEAD..HEAD' - pre-commit.com

I've recently started using commitizen in my day-to-day development; however, I don't understand why I get the following error on the first commit to a new branch, i.e.:
...on current main branch...
git checkout -b fix/my-new-branch
...make some changes...
git commit -am "fix: did the thing"
commitizen check.........................................................Passed
commitizen check branch..................................................Failed
- hook id: commitizen-branch
- exit code: 3
No commit found with range: 'origin/HEAD..HEAD'
My pre-commit file looks like this:
---
repos:
  - repo: https://github.com/commitizen-tools/commitizen
    rev: v2.37.1
    hooks:
      - id: commitizen
      - id: commitizen-branch
        stages: [commit-msg]
Is there something I'm missing here?
My git remote is set up:
origin git@github.com:myuser/my_repo.git (fetch)
origin git@github.com:myuser/my_repo.git (push)
Git branch shows:
fix/my-new-branch
* main

the commitizen-branch hook is intended for after-the-fact usage and not for the commit-msg stage -- you probably don't need it / don't want it and can remove it
notably, stages: [commit-msg] is incorrect to set for that hook, since it is not designed to run during commit-msg (where no commits exist between origin/HEAD and HEAD)
personally I'd probably set that one as stages: [manual] so that it never runs automatically but can be run on demand
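as a sketch of that suggestion (the repo URL, rev, and hook ids are copied from the question's config, not verified against it), the entry could look like:

repos:
  - repo: https://github.com/commitizen-tools/commitizen
    rev: v2.37.1
    hooks:
      - id: commitizen
      - id: commitizen-branch
        # manual stage: never runs on commit, only when invoked explicitly
        stages: [manual]

it can then be run on demand with something like pre-commit run --hook-stage manual commitizen-branch --all-files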
disclaimer: I wrote pre-commit

Related

AWS CodeBuild buildspec - are changes to variables in phases section available in artifacts section?

I define some variables in the env/variables, then make changes to the value in phases/pre_build. I want to use the variable down in artifacts, but it looks like the changes are not persisted.
This is a legacy Windows .NET Framework 4.7.2 application getting deployed to IIS.
My buildspec.yml file:
version: 0.2
env:
  variables:
    APPNAME: DummyApp
    BRANCH: manual
phases:
  pre_build:
    commands:
      - echo "start BRANCH = ${BRANCH}"
      - echo "CODEBUILD_WEBHOOK_HEAD_REF = ${env:CODEBUILD_WEBHOOK_HEAD_REF}"
      # CODEBUILD_WEBHOOK_HEAD_REF is null when the build is triggered from the console as opposed to a webhook
      - if (${CODEBUILD_WEBHOOK_HEAD_REF}) { ${BRANCH} = ($CODEBUILD_WEBHOOK_HEAD_REF.replace('refs/heads/', '')) }
      - echo "after BRANCH = ${env:BRANCH}"
  build:
    commands:
      - echo "build commands happen here"
artifacts:
  files:
    - .\Dummy\bin\Debug\*
  # not sure why this doesn't work down here; are changes in the phases section above not propagated?
  name: ${env:APPNAME}/${env:APPNAME}-${env:BRANCH}.zip
  discard-paths: yes
The value of $CODEBUILD_WEBHOOK_HEAD_REF = "refs/heads/develop".
The value of $BRANCH after the replace statement = "develop".
The value of my artifact in S3 is "DummyApp/DummyApp-manual.zip".
I want the artifact named "DummyApp/DummyApp-develop.zip".
Some sort of scoping issue?
Saw various indications that this is not possible.
https://blog.shikisoft.com/define-environment-vars-aws-codebuild-buildspec/
The crucial thing you should note here is that you can only assign literal values to the environment variables declared this way. You cannot assign dynamic values at runtime. If you would like to change the value of the <...> variable above, you have to change your buildspec file and push your changes to your repository again. So it is like hardcoding parameter values. But it is better than typing them in all the commands needed in the phases section.
In addition to trying to simply set the local variable in pre_build, I tried a number of approaches, including:
running a custom PowerShell script to parse the branch name as the first step in pre_build
running the command in the variable declaration itself
calling the PowerShell SetEnvironmentVariable method
The thing that seems to work is using the replace command down in the artifact/name itself:
artifacts:
  files:
    - .\Dummy\bin\Debug\*
  name: ${env:APPNAME}/${env:APPNAME}-$($CODEBUILD_WEBHOOK_HEAD_REF.replace('refs/heads/', '')).zip
  discard-paths: yes
This created the artifact: DummyApp\DummyApp-develop.zip

How can I fix: Terraform error refreshing state: state snapshot was created by Terraform v0.14.5, which is newer than current v0.13.0

I am trying to upgrade my Terraform version from 0.12 to 0.13, but I had previously run init and plan with a globally installed Terraform 0.14.5.
I'm struggling to understand how this affects the snapshot and/or how I can remove this error; the remote state hasn't changed, so where is it getting this from? I have removed any .terraform directory in the project.
Terraform holds its state either in a remote backend or in a local one.
If you have no block that looks like the following in your configuration files (the backend type, and thus the name in "...", may vary depending on the backend used):
terraform {
  backend "..." {
  }
}
Then it would be safe to assume you have a local JSON state file named terraform.tfstate and, since your project existed before the upgrade, a file terraform.tfstate.backup.
If you peek into those files, you will see, at the beginning of the file, the version of Terraform used to create that state.
For example:
{
  "version": 4,
  "terraform_version": "0.14.5",
}
From there, and with all the caution in the world, ensuring you indeed didn't change anything in the remote state, you have some options:
if your file terraform.tfstate.backup still has "terraform_version": "0.13.0", you could simply roll back by removing terraform.tfstate and renaming terraform.tfstate.backup to terraform.tfstate
you can try to "hack" into the actual terraform.tfstate and change the version there by adapting the line "terraform_version": "0.14.5"
as advised in the link below, you could create a state version using the API, overriding the state by manually specifying the expected terraform_version
My advice would still be to diff terraform.tfstate against terraform.tfstate.backup to see what may have changed, or to use a versioning tool if your terraform.tfstate is under version control; a minimal sketch of the diff-and-rollback follows.
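Assuming a local state file in the working directory (filenames only; nothing here touches a remote backend), the rollback option could look like:

# see what changed between the two snapshots before touching anything
diff terraform.tfstate.backup terraform.tfstate

# keep a copy of the newer state, then restore the 0.13-compatible backup
cp terraform.tfstate terraform.tfstate.v0.14.5
cp terraform.tfstate.backup terraform.tfstate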
Useful read: https://support.hashicorp.com/hc/en-us/articles/360001147287-Downgrading-Terraform-Version-in-the-State

AWS CodePipeline & CodeBuild - How to add environment variables to the Docker image?

Thanks for any help.
I'm using AWS CodePipeline with AWS CodeBuild (to build my Dockerfile and save the image in ECR). So far it is working, but I don't understand how to get my environment variables into the project. I connected my GitHub account with CodePipeline, and for security I didn't push my env values to GitHub. So on GitHub I have an env file like:
config/prod.env
ACCESS_TOKEN_SECRET=
CSRF_TOKEN_SECRET=
ACCESS_TOKEN_PASSWORD=
REFRESH_TOKEN_SECRET=
CLUDINARY_API=
CLUDINARY_API_SECRET=
CLUDINARY_API_NAME=
GOOGLE_AUDIENCE=
ORIGIN=
GOOGLE_TOKEN=
DATABASE_URL=
NODE_ENV=
FORGOTTEN_PASSWORD=
YAHOO_PASSWORD=
Now, AWS CodeBuild has a section for environment variables (image from the AWS docs).
I have the feeling this is not the right place for the env values, because if I put all my variables into those fields I get the error:
ValidationException
1 validation error detected: Value at 'pipeline.stages.2.member.actions.1.member.configuration' failed to satisfy constraint: Map value must satisfy constraint: [Member must have length less than or equal to 1000, Member must have length greater than or equal to 1]
For example:
Name: ACCESS_TOKEN_SECRET
Value: My_SUPER_PASSWORD
If I use just a few variables I don't get an error, but with all variables I get the error (regardless of which combination of env values I use).
What am I doing wrong? How can I get my env variables into my Docker image in ECR with CodeBuild & CodePipeline?
To pass variables from the CodeBuild project, you need to set the env: section in the buildspec.yml file, for example:
env:
  variables:
    Execution_ID: $Execution_ID
    Commit_ID: $Commit_ID
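Going a step beyond the original answer (the parameter names, SSM paths, and image tag below are illustrative assumptions): secret values can be pulled from SSM Parameter Store via the env: parameter-store: mapping instead of being typed into the pipeline configuration, and then forwarded into the image at build time with Docker build arguments:

# illustrative buildspec fragment -- names and paths are assumptions
env:
  parameter-store:
    ACCESS_TOKEN_SECRET: /myapp/prod/ACCESS_TOKEN_SECRET
phases:
  build:
    commands:
      # forward the value into the image; the Dockerfile needs a matching "ARG ACCESS_TOKEN_SECRET"
      - docker build --build-arg ACCESS_TOKEN_SECRET=$ACCESS_TOKEN_SECRET -t my-image .

This keeps the secrets out of both GitHub and the CodePipeline action configuration, which is where the 1000-character constraint in the validation error is being hit.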

react-router-redux - app doesn't re-render after dispatching a push action

I'm trying to integrate the newest versions of react-router & react-router-redux into a sample application from a React book, but I can't get the example working when routing via the react-router-redux push/replace functions.
I'm not certain whether it's a bug or I'm using the functions wrong.
actions/index.js
...
export function dismissVote(history) {
  // this changes the history location and the application re-renders
  //return () => history.push('/votes');
  // this changes the history location but the application doesn't re-render
  return dispatch => dispatch(push('/votes'))
}
...
Version used (package.json)
"react-router-dom": "4.1.1"
"react-router-redux": "5.0.0-alpha.6"
App Repository (clone from github)
https://github.com/ap0yuv/voteapp.git
How to reproduce my issue:
clone the repo
cd ./myapp
npm run start:hot && npm start
Launch the application in the browser at localhost:3000/votes/vote_1
Click the "Vote later" button
Expected Behavior
Browser Location is changed to localhost:3000/votes/
AND
The List of all votes (localhost:3000/votes) is visible (VotePage)
Actual Behavior
Browser Location is changed to localhost:3000/votes/
AND
Page of Current Vote (vote_1) is still shown
If I pass the history object to the function and route with it, the proper page is displayed.
More Information about the code:
src/common/containers/SingleVotePage is the entry point for /votes/vote_1
and passes dismissVote() from src/actions to
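For context, and as an assumption about the setup rather than code from the repo: react-router-redux's push only causes route components to re-render when the store is created with routerMiddleware and the tree is rendered inside ConnectedRouter (from react-router-redux 5.x) rather than BrowserRouter. A minimal sketch of that wiring, with placeholder reducer and route names:

// illustrative wiring only -- reducer, route, and file names are assumptions
import React from 'react'
import ReactDOM from 'react-dom'
import { createStore, combineReducers, applyMiddleware } from 'redux'
import { Provider } from 'react-redux'
import { Route } from 'react-router-dom'
import { ConnectedRouter, routerReducer, routerMiddleware } from 'react-router-redux'
import createHistory from 'history/createBrowserHistory'

const history = createHistory()

const store = createStore(
  // the router state must live under the 'router' key for push() to update it
  combineReducers({ router: routerReducer /*, ...app reducers */ }),
  // routerMiddleware turns dispatched push()/replace() actions into history changes
  applyMiddleware(routerMiddleware(history))
)

ReactDOM.render(
  <Provider store={store}>
    {/* ConnectedRouter re-renders routes when the store location changes */}
    <ConnectedRouter history={history}>
      <Route path="/votes" render={() => <div>votes</div>} />
    </ConnectedRouter>
  </Provider>,
  document.getElementById('root')
)

If the sample app wires history differently, dispatching push('/votes') would update the URL without notifying the route components, which matches the behaviour described above.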

YAML parsing error in Travis CI, but not locally

I'm using Travis CI for a gem I'm developing and ran into a strange error (links are at the end of the question).
The gem stores some information serialized with YAML; the YAML isn't built manually but produced with YAML.dump and later loaded again with YAML.load.
The following lines are used to dump and load a hash to/from YAML:
headers[:ar_mailer_settings] = YAML.dump(settings)
...
ar_settings = YAML.load(mail['ar_mailer_settings'].value)
The latter line seems to produce an error on Travis CI, but when I run the tests locally using the same binary versions, everything runs perfectly fine:
Psych::SyntaxError: (<unknown>): mapping values are not allowed in this context at line 1 column 22
/home/travis/.rvm/rubies/ruby-2.0.0-p576/lib/ruby/2.0.0/psych.rb:205:in `parse'
...
/home/travis/build/Stex/ar_mailer_revised/lib/action_mailer/ar_mailer.rb:84:in `deliver!'
I put a simple puts into the deliver! method to see if there was a difference in the stored values, and it seems that Travis CI is losing the newlines in the generated YAML, which then causes the parse error:
Travis:
"--- smtp_settings: :address: localhost :port: 25 :domain: localhost.localdomain :user_name: some.user :password: some.password :authentication: :plain :enable_starttls_auto: true "
Locally:
"---\nsmtp_settings:\n :address: localhost\n :port: 25\n :domain: localhost.localdomain\n :user_name: some.user\n :password: some.password\n :authentication: :plain\n :enable_starttls_auto: true\n"
Interestingly, I didn't change anything regarding these methods before Travis CI started failing, so I'm not sure whether I'm simply overlooking something here or whether it's some kind of incompatibility issue.
Can I do something to preserve the newline characters?
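For illustration only (not from the original post): once the newlines are gone, the document collapses into a single line containing several key: value pairs, which Psych rejects with exactly this error. A minimal reproduction, using a shortened version of the Travis string:

require 'yaml'

# newline-stripped variant of the dumped settings, as seen on Travis
flattened = "--- smtp_settings: :address: localhost :port: 25"
begin
  YAML.load(flattened)
rescue Psych::SyntaxError => e
  # reports "mapping values are not allowed in this context"
  puts e.message
end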
Edit: Additional Information
The gem allows setting custom SMTP settings and attributes for single email records.
These can be set directly when generating the email in an ActionMailer::Base instance (see here for a dummy mailer).
To transport these custom settings to ActionMailer's deliver! method, which actually creates a new email record, I serialize them via YAML, save them temporarily in the email header, and restore them later (see ar_mailer_setting and deliver! here).
The source code which raises the error: here
The complete Travis CI output: here
If more information is needed, please let me know and I'll add it to the question.
Thanks in advance!