Best way to auto update Poetry TOML versions

I'm managing a repo of common utilities. It contains the utils themselves and a Jenkinsfile, and all the dependencies are managed by Poetry, so I have a TOML file to maintain. We keep adding new utils as we go.
.
├── Jenkinsfile
├── README.md
├── common_utils
│   ├── __init__.py
│   ├── aws.py
│   └── sftp.py
├── pre-commit.yaml
├── poetry.lock
└── pyproject.toml
When I push my code to git, my Jenkinsfile packages it and publishes it to AWS CodeArtifact. But every time I push, I first have to bump the TOML version locally before pushing to the dev branch, then cut a new branch from dev and bump the version again before pushing to master; otherwise I get a version conflict in CodeArtifact.
Modifying the Jenkinsfile alone doesn't work: even though bumping the version there avoids the conflict, the version in the source code is never updated, so I still have to bump everything manually. I'm not sure pre-commit will help either, because I don't want to bump the version every time I push to a feature branch.
Is there a known way to handle this, or is a release branch the only option?
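For what it's worth, one common pattern (a sketch, not an established answer from this thread) is to let the Jenkins job bump the version itself and commit the result back, so the source and the published artifact never diverge. Poetry can do the bump with poetry version patch; if the build agent doesn't have Poetry, the same edit is a few lines of shell. Everything below is illustrative, and the pyproject.toml is a throwaway stand-in:

```shell
# Throwaway pyproject.toml so the bump can be demonstrated end to end.
tmpdir=$(mktemp -d)
cat > "$tmpdir/pyproject.toml" <<'EOF'
[tool.poetry]
name = "common-utils"
version = "1.2.3"
EOF

# Extract the current version, bump the patch component, write it back.
current=$(sed -n 's/^version = "\(.*\)"$/\1/p' "$tmpdir/pyproject.toml")
major=${current%%.*}
rest=${current#*.}
minor=${rest%%.*}
patch=${rest#*.}
next="$major.$minor.$((patch + 1))"
sed -i "s/^version = \".*\"/version = \"$next\"/" "$tmpdir/pyproject.toml"
echo "$next"
```

In a Jenkinsfile this would run before poetry build, followed by a git commit and push of the bumped pyproject.toml (guarded so CI's own commits don't retrigger the pipeline).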

Related

terraform state replace-provider doesn't update to new provider

So I ran this:
terraform state replace-provider terraform-mars/credstash granular-oss/credstash
and this was the output
Terraform will perform the following actions:
~ Updating provider:
- registry.terraform.io/terraform-mars/credstash
+ registry.terraform.io/granular-oss/credstash
Changing 1 resources:
module.operations.data.credstash_secret.key_name
Do you want to make these changes?
Only 'yes' will be accepted to continue.
Enter a value: yes
Successfully replaced provider for 1 resources.
then I checked it with
terraform providers
Providers required by configuration:
.
├── provider[registry.terraform.io/hashicorp/archive] ~> 2.2.0
├── provider[registry.terraform.io/vancluever/acme] ~> 2.5.3
├── provider[registry.terraform.io/hashicorp/aws] ~> 4.13.0
├── provider[registry.terraform.io/hashicorp/dns] ~> 3.2.3
├── provider[registry.terraform.io/hashicorp/local] ~> 2.2.3
├── provider[registry.terraform.io/hashicorp/cloudinit] ~> 2.2.0
├── provider[registry.terraform.io/granular-oss/credstash] ~> 0.6.1
├── provider[registry.terraform.io/hashicorp/external] ~> 2.2.2
├── provider[registry.terraform.io/hashicorp/null] ~> 3.1.1
├── provider[registry.terraform.io/hashicorp/tls] ~> 3.4.0
├── module.account
│   ├── provider[registry.terraform.io/hashicorp/aws]
│   └── module.static
└── module.operations
    ├── provider[registry.terraform.io/hashicorp/local]
    ├── provider[registry.terraform.io/hashicorp/aws]
    └── provider[registry.terraform.io/terraform-mars/credstash]
It still uses the old provider for some reason, and I don't understand why.
I also ran terraform init, but the old provider still shows up in the terraform providers output.
When I run terraform plan, it gives me this error:
Error: NoCredentialProviders: no valid providers in chain. Deprecated.
│ For verbose messaging see aws.Config.CredentialsChainVerboseErrors
│
│ with module.operations.data.credstash_secret.key_name,
│ on ../modules/stacks/operations/bastion.tf line 1, in data "credstash_secret" "bastion_pubkey":
│ 1: data "credstash_secret" "key_name" {
The part of the terraform providers output included in the question describes the provider requirements declared in the configuration. This includes both explicit provider requirements and some automatically-detected requirements that Terraform infers for backward compatibility with modules written for Terraform v0.12 and earlier.
The terraform state replace-provider command instead replaces references to providers inside the current Terraform state. The Terraform state remembers which provider most recently managed each resource so that e.g. Terraform knows which provider to use to destroy the object if you subsequently remove it from the configuration.
When you use terraform state replace-provider you'll typically need to first update the configuration of each of your modules to refer to the new provider instead of the old and to make sure each of your resources is associated (either implicitly or explicitly) with the intended provider. You can then use terraform state replace-provider to force the state to change to match, and thereby avoid the need to install the old provider in terraform init.
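As a concrete sketch of the "update the configuration first" step: each module that uses the provider needs its required_providers entry switched to the new source. The file name versions.tf is an assumption; the source and version constraint come from the output in the question, and the scratch directory is only there so the snippet is self-contained:

```shell
# Write the updated required_providers block a module should now declare,
# then confirm the new source is what the file actually contains.
work=$(mktemp -d)
cat > "$work/versions.tf" <<'EOF'
terraform {
  required_providers {
    credstash = {
      source  = "granular-oss/credstash"
      version = "~> 0.6.1"
    }
  }
}
EOF
grep 'source' "$work/versions.tf"
```

Once every module declares the new source, terraform init will install only granular-oss/credstash, and terraform state replace-provider brings the state in line with the configuration.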

Amplify functions deployment not working for Golang runtime

I'm quite new to the Amplify function world. I've been struggling to deploy my Golang function, which is connected to a DynamoDB stream. I can run my lambda successfully by manually uploading a .zip that I create myself after building the binary with GOARCH=amd64 GOOS=linux go build src/index.go (I develop on a Mac), but when I use the Amplify CLI tools I am not able to deploy my function.
This is the folder structure of my function, myfunction:
+ myfunction
├── amplify.state
├── custom-policies.json
├── dist
│   └── latest-build.zip
├── function-parameters.json
├── go.mod
├── go.sum
├── parameters.json
├── src
│   ├── event.json
│   └── index.go
└── tinkusercreate-cloudformation-template.json
The problem is that I can't use the amplify function build command: it looks like it creates a .zip containing my source file index.go (not the binary), so the lambda, regardless of the handler I set, can't run from that source. I see errors like
fork/exec /var/task/index.go: exec format error: PathError null or
fork/exec /var/task/index: no such file or directory: PathError null
depending on the handler I set.
Is there a way to make the Amplify build function work for a Golang lambda? I would like amplify function build myfunction to succeed, so that I can deliver a working deployment with amplify push to my target environment.

Yii2 deployment for contributing

I am a beginner Yii2 contributor. When I contribute to the yiisoft/yii2 project, it is quite clear how to deploy the project and run its phpunit tests. But I have some questions about working with extensions:
First I add an extension with composer require. Then I git clone the same extension into my home directory. After that I replace the first directory with a symlink pointing to the second one. This is quite convenient, because I can see my changes on the site, but I can't use Composer anymore.
How do I run the extension's tests? They often depend on the Yii2 app class, but:
$ vendor/bin/phpunit vendor/yiisoft/yii2-elasticsearch/tests/
PHP Fatal error: Class 'yiiunit\extensions\elasticsearch\TestCase' not found in /var/www/yii2.test/vendor/yiisoft/yii2-elasticsearch/tests/ActiveDataProviderTest.php on line 11
$ vendor/bin/phpunit vendor/yiisoft/yii2-queue/tests/
PHP Fatal error: Class 'tests\TestCase' not found in /var/www/yii2.test/vendor/yiisoft/yii2-queue/tests/JobEventTest.php on line 22
Should I specify a config file? Or should I run these tests independently of the framework?
So, would you please share with me the best practices about this situation?
You should run these tests outside of the framework. From the extension's perspective, yiisoft/yii2 is a dependency, and it should be installed in the vendor directory inside the extension directory. So in short: go to the extension directory and run composer install. After this you should get a directory structure similar to this:
extension/
├── src/
│   └── ...
├── vendor/
│   ├── yiisoft/
│   │   ├── yii2/
│   │   └── ...
│   └── ...
├── composer.json
└── ...
Then you can run the tests directly from the extension directory (probably with the vendor/bin/phpunit command).
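The two steps above can be wrapped in a tiny helper so they're repeatable per extension. This is only a sketch: the clone directory is a stand-in, and the script is written but deliberately not executed here (running it for real requires Composer and the extension checkout):

```shell
# Stand-in for your extension clone, e.g. a checkout of yii2-elasticsearch.
clone=$(mktemp -d)

# The two steps from the answer: install the extension's own dependencies
# (which pulls yiisoft/yii2 into ./vendor), then run its test suite.
cat > "$clone/run-tests.sh" <<'EOF'
#!/bin/sh
set -e
composer install --prefer-dist
vendor/bin/phpunit
EOF
chmod +x "$clone/run-tests.sh"
cat "$clone/run-tests.sh"
```

Dropping a script like this into each extension clone keeps the symlinked framework checkout and the extension's own vendor directory cleanly separated.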

Integrating jquery-csv into Rails app, ES15 syntax causing issues

I have already implemented a CSV import feature in my app using this plugin, and it works great! But recently I had to reinstall some of my assets, and it appears the plugin has some recent additions that include ES2015 syntax. My Rails 4 app isn't ready to digest ES2015, so I'm looking for a way to exclude the offending files if I can.
The plugin's directory structure looks like this (with some items omitted for brevity).
├── src
│   ├── jquery.csv.js
│   └── jquery.csv.min.js
└── test
    ├── csv.from_array.js
    ├── csv.from_arrays.js
    ├── csv.parsers.js
    ├── csv.to_array.js
    └── etc ...
The ES2015 code only appears in the test/ files. In my assets pipeline I include jquery.csv.js, which apparently pulls in the test/ files, as it's choking on the ES2015 when I precompile assets. (If I don't require jquery.csv.js, assets precompile fine.)
This illustrates the errors I'm seeing when I precompile.
It seems like I should be able to do without the test files, but looking in jquery.csv.js it's not obvious to me how they're being included.
I know I should probably focus on getting Rails upgraded, or use webpack/babel/whatever to integrate ES2015, but I'm hoping for a short-term fix so I can move forward.
Thanks for any tips!

How to deploy a Go web application in Beanstalk with custom project folder structure

I'm new to Go.
I am trying to deploy a simple web project to EB without success.
I would like to deploy a project with the following local structure to Amazon EB:
$GOPATH
├── bin
├── pkg
└── src
    └── github.com
        └── AstralinkIO
            └── api-server   <-- project/repository root
                ├── bin
                ├── cmd      <-- main package
                ├── pkg
                ├── static
                └── vendor
But I'm not sure how to do that: when building, Amazon treats api-server as the $GOPATH, and of course the import paths break.
I read that most of the time it's best to keep all repos under the same workspace, but that makes deployment harder.
I'm using a Procfile and a Buildfile to customize the output path, but I can't find a solution for the dependencies.
What is the best way to deploy such project to EB?
A long time has passed since I used Beanstalk, so I'm a bit rusty on the details, but the basic idea is as follows. AWS Beanstalk's support for Go is a bit odd by design. It basically extracts your source files into a folder on the server, declares that folder as GOPATH, and tries to build your application assuming that your main package is at the root of your GOPATH, which is not a standard layout for Go projects. So your options are:
1) Package your whole GOPATH as the "source bundle" for Beanstalk. Then you should be able to write a build.sh script that changes GOPATH and builds it your way, and call build.sh from your Buildfile.
2) Change your main package to be a regular package (e.g. github.com/AstralinkIO/api-server/cmd). Then create an application.go file at the root of your GOPATH (yes, outside of src, while all actual packages stay in src as they should). Your application.go becomes your package main and contains only a main function (which calls your current Main function from github.com/AstralinkIO/api-server/cmd). That should do the trick, though your mileage may vary.
3) A somewhat easier option is to use the Docker-based Go platform instead. It still builds your Go application on the server, with mostly the same issues as above, but it's better documented, and being able to test it locally helps a lot with getting the configuration and build right. It will also give you some insight into how Beanstalk builds Go applications, which helps with options 1 and 2. I used this option myself until I moved to plain EC2 instances, and I still use the skills gained from it to build my current app releases with Docker.
4) Your best option, though (in my humble opinion), is to build your app yourself and package it as a ready-to-run binary file. See the second bullet point paragraph here.
Well, whichever option you choose - good luck!
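Option 1 above might look something like this. The layout and package path mirror the question, but the build.sh contents are a sketch, not tested against a real Beanstalk environment, and nothing here actually invokes go:

```shell
# Assemble a throwaway source bundle that mimics the GOPATH layout.
bundle=$(mktemp -d)
mkdir -p "$bundle/src/github.com/AstralinkIO/api-server/cmd"

# build.sh points GOPATH at the extracted bundle root and builds the
# real main package instead of whatever Beanstalk would guess.
cat > "$bundle/build.sh" <<'EOF'
#!/bin/sh
set -e
export GOPATH="$(pwd)"
go build -o bin/application github.com/AstralinkIO/api-server/cmd
EOF
chmod +x "$bundle/build.sh"

# The Buildfile tells Beanstalk to run our script instead of its default build.
printf 'make: ./build.sh\n' > "$bundle/Buildfile"
ls "$bundle"
```

Zipping the contents of that bundle directory (build.sh and Buildfile at the root, sources under src/) gives the "source bundle" Beanstalk expects to extract and build.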