Sync Postman with Azure DevOps

I can successfully run Postman tests in Azure DevOps. Is there a way to:
Create a repo "MyCollection"
Run a scheduled ("cron") pipeline that takes the collection and saves it to the repo?
Any suggestions?

Check the extension Get Postman Scripts. By default it downloads your Postman collection JSON and saves it to $(Build.ArtifactStagingDirectory); we can then push the files to the Azure DevOps repo via the git command line:
cd $(Build.ArtifactStagingDirectory)
# set the committer identity before committing on the build agent
git config --global user.email "{email}"
git config --global user.name "{name}"
git checkout master
git add .
git commit -m "Sync postman json file to Azure DevOps Repo"
# authenticate the push with a personal access token (PAT)
git push https://{PAT}@dev.azure.com/{org name}/{project name}/_git/{repo name}
You could also check this doc for more details.
Update1
YAML
- task: oneLuckiGetPostmanScripts@1
  displayName: 'Get Postman Script'
  inputs:
    fileLocation: '$(Build.ArtifactStagingDirectory)\Postman'
    apiKey: '{key}'
Result: (screenshot of the pipeline run omitted)
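To cover the "cron" part of the question, here is a minimal sketch of a scheduled pipeline that runs the task and then pushes the exported collection to the repo. The cron expression, branch name, the $(PAT) secret variable and the {name}/{email} placeholders are assumptions to adapt to your project.
# azure-pipelines.yml (sketch) - export the collection nightly and commit it to the MyCollection repo
schedules:
  - cron: '0 2 * * *'            # every night at 02:00 UTC (assumed schedule)
    displayName: Nightly Postman sync
    branches:
      include:
        - master
    always: true                 # run even when nothing was pushed

pool:
  vmImage: 'windows-latest'

steps:
  - task: oneLuckiGetPostmanScripts@1
    displayName: 'Get Postman Script'
    inputs:
      fileLocation: '$(Build.ArtifactStagingDirectory)\Postman'
      apiKey: '{key}'

  - script: |
      git clone https://$(PAT)@dev.azure.com/{org name}/{project name}/_git/MyCollection
      xcopy /s /y "$(Build.ArtifactStagingDirectory)\Postman" MyCollection\
      cd MyCollection
      git config user.email "{email}"
      git config user.name "{name}"
      git add .
      git diff --cached --quiet || git commit -m "Sync postman json file to Azure DevOps Repo"
      git push origin master
    displayName: 'Push collection to MyCollection repo'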

Related

GCP ci/cd: skaffold cannot access private git repository using google cloud build

I'm trying to configure an automated CI/CD process with Google Cloud Platform.
I went through this instruction https://davelms.medium.com/automate-gke-deployments-using-cloud-build-and-cloud-deploy-2c15909ddf22 and everything works: I have a trigger in Cloud Build that points to a Cloud Build file that uses skaffold for manifest rendering. It builds an image and deploys the app.
But as we have a lot of apps, we want to keep the deploy configs in a separate repo. For that case I see from the skaffold docs https://skaffold.dev/docs/references/yaml/?version=v2beta29#build-artifacts-docker-ssh that you can require configs from another repo:
requires:
  - configs: []
    git:
      repo: https://github.com/GoogleContainerTools/skaffold.git
      path: skaffold.yaml
      ref: main
      sync: true
This config works for a public repo, but for a private repo I get the error:
error parsing skaffold configuration file: caching remote dependency https://github.com/your_repo.git: failed to clone repo: running [/usr/bin/git clone https://github.com/your_repo.git ./P7akUPb6jdsgjfgTnOedB92BH8UE7 --branch main --depth 1]
" - stderr: "Cloning into './P7akUPb6jdsgjfgTnOedB92BH8UE7'...\nfatal: could not read Username for 'https://github.com': No such device or address\n""
Where or how can I add credentials for accessing the private repo?
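One direction to look (a sketch, not a verified fix): skaffold shells out to git when it resolves remote config dependencies, so the git client inside the Cloud Build step that runs skaffold needs credentials for github.com before skaffold starts. Assuming a GitHub token stored in Secret Manager and exposed to the step as GITHUB_TOKEN, the step could add an insteadOf rule first; the image name, secret name and flags below are assumptions.
# cloudbuild.yaml (sketch) - give git credentials to the step that runs skaffold
steps:
  - name: gcr.io/k8s-skaffold/skaffold
    entrypoint: bash
    args:
      - -c
      - |
        # token auth for plain https clones of the private config repo
        git config --global url."https://$$GITHUB_TOKEN@github.com/".insteadOf "https://github.com/"
        skaffold render --output=manifests.yaml
    secretEnv: ['GITHUB_TOKEN']
availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_ID/secrets/github-token/versions/latest
      env: 'GITHUB_TOKEN'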

Perform an SVN checkout using PowerShell

I want to check out a file from an SVN repo using AWS CodeBuild, and I'm looking for the relevant command to copy the file from the repo.
The key I'm using to test is able to check out when I run it from an EC2 instance.
This is the command I'm trying, but it does not create any folder or check out the repo:
- '& C:"Program Files"\TortoiseSVN\bin\svn.exe checkout svn+ssh://xxxx.xxxx.com/xxx-ia-70/trunk/demo xxx-ia-70-el'
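For reference, a sketch of how that line might sit in a Windows CodeBuild buildspec, with the executable path quoted so PowerShell's call operator receives one path (the file layout and phase name are assumptions, and the SSH key for svn+ssh still has to be available to the build):
# buildspec.yml (sketch) - Windows CodeBuild image with TortoiseSVN installed
version: 0.2
phases:
  build:
    commands:
      # quote the whole exe path so PowerShell treats "Program Files" as part of a single path
      - '& "C:\Program Files\TortoiseSVN\bin\svn.exe" checkout svn+ssh://xxxx.xxxx.com/xxx-ia-70/trunk/demo xxx-ia-70-el'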

How do I continue working with Amplify on a new machine?

I'm using React Native for my project. On my old machine, when I ran amplify status, I had Auth, Api and Storage services listed.
I moved to my new machine, installed node, watchman, brew etc., navigated to my React Native project and ran react-native run-ios, and voila, my app is running. All the calls to my AWS Api, Auth and Storage work perfectly.
Now I can run some amplify commands, such as amplify status. I tried amplify env add; here's what I got:
Users-MBP-2:projectname username$ amplify env add
Note: It is recommended to run this command from the root of your app directory
? Do you want to use an existing environment? Yes
? Choose the environment you would like to use: dev
Using default provider awscloudformation
✖ There was an error initializing your environment.
init failed
Error: ENOENT: no such file or directory, open '/Users/username/.aws/credentials'
at Object.openSync (fs.js:462:3)
at Proxy.readFileSync (fs.js:364:35)
at Object.readFileSync (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/util.js:95:26)
at IniLoader.parseFile (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/shared-ini/ini-loader.js:6:47)
at IniLoader.loadFrom (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/shared-ini/ini-loader.js:56:30)
at Config.region (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/node_loader.js:100:36)
at Config.set (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/config.js:507:39)
at Config.<anonymous> (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/config.js:342:12)
at Config.each (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/util.js:507:32)
at new Config (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/config.js:341:19) {
errno: -2,
syscall: 'open',
code: 'ENOENT',
path: '/Users/username/.aws/credentials'
}
Do you think the credentials info needs to be brought over/configured on my new machine?
When I run amplify configure project it's like doing an amplify init and building a project from scratch. I'm asked:
? Enter a name for the project: ProjectName
? Choose your default editor: Visual Studio Code
? Choose the type of app that you're building javascript
Please tell us about your project
? What javascript framework are you using (Use arrow keys)
angular
ember
ionic
react
❯ react-native
vue
none
etc....
I also already have a region, username, accessKey, secretAccessKey, etc.
I do not want to replace or ruin anything in my current backend or project! What's going on?
Ensure amplify-cli is installed and you're logged in with your AWS details.
npm install -g @aws-amplify/cli
amplify configure
Running amplify configure is mainly to give the CLI knowledge of your AWS account so subsequent commands have access to your resources.
If you get 'amplify: command not found' errors, try restarting your terminal. If there's still no luck, check that amplify has been added to your PATH variable.
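The ENOENT error above simply means that ~/.aws/credentials does not exist on the new machine yet; amplify configure (or aws configure) creates it. For reference, a minimal sketch of that file with placeholder values:
# ~/.aws/credentials (placeholder values)
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY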
Run amplify env add, but choose an existing environment. This will let you choose the environment you created on your other machine so you can pull those settings down to your new machine.
amplify env add
? Do you want to use an existing environment? Yes
Production
Follow up with:
amplify pull
You don't need to run amplify add auth again or anything. All of that will pull down automatically after you've done the above.
You DO NOT need to redo all of the configuration, but some of it for sure:
Install the Amplify CLI: npm install -g @aws-amplify/cli
Use amplify pull
https://docs.amplify.aws/cli/start#amplify-pull
Follow the rest of the steps:
-- provide the accessKeyId and secretAccessKey
-- region
-- select the Amplify project
and then the rest of the app-related things like IDE, directory, etc.
I tried every solution, then I found this (on a MacBook):
% sudo -i
Password:
~ root# npm install -g @aws-amplify/cli
-- Ctrl+D to exit from the root user
% amplify pull --appId xxxx --envName yyyy
Note: to get --appId xxxx --envName yyyy:
Log in to the AWS console. Choose AWS Amplify. Click your app. Go to Backend environments. Find the backend environment you wish to pull. Click Edit backend, then at the top right click 'Local setup instructions' (amplify pull --appId YOUR_APP_ID --envName YOUR_ENV_NAME).
Wait until it asks to verify your Amplify login.
✔ Successfully received Amplify Studio tokens.
? Choose your default editor: Visual Studio Code
? Choose the type of app that you're building javascript
Please tell us about your project
? What javascript framework are you using react
? Source Directory Path: src
? Distribution Directory Path: build
? Build Command: npm run-script build
? Start Command: npm run-script start
✔ Synced UI components.
? Do you plan on modifying this backend? Yes
⠴ Building resource api/xxxx✅ GraphQL schema compiled successfully.
Edit your schema at ....
✔ Successfully pulled backend environment yyyy from the cloud.
✅
Successfully pulled backend environment staging from the cloud.
Run 'amplify pull' to sync future upstream changes.
% amplify pull
% npm install
% npm start
Hope this helps everyone!
Happy coding :)

Fetching Tags in Google Cloud Builder

In the newly created Google Container Builder I am unable to fetch git tags during a build: the default cloning does not seem to fetch them. I added a custom build step which calls git fetch --tags, but this results in the error:
Fetching origin
git: 'credential-gcloud.sh' is not a git command. See 'git --help'.
fatal: could not read Username for 'https://source.developers.google.com': No such device or address
# cloudbuild.yaml
#!/bin/bash
openssl aes-256-cbc -k "$ENC_TOKEN" -in gcr_env_vars.sh.enc -out gcr_env_vars.sh -
source gcr_env_vars.sh
env
git config --global url.https://${CI_USER_TOKEN}@github.com/.insteadOf git@github.com:
pushd vendor
git submodule update --init --recursive
popd
docker build -t gcr.io/project-compute/continuous-deploy/project-ui:$COMMIT_SHA -f /workspace/installer/docker/ui/Dockerfile .
docker build -t gcr.io/project-compute/continuous-deploy/project-auth:$COMMIT_SHA -f /workspace/installer/docker/auth/Dockerfile .
This worked for me, as the first build step:
- name: gcr.io/cloud-builders/git
  args: [fetch, --depth=100]
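For context, a sketch of how such a step could sit at the top of cloudbuild.yaml so the tags are present for later steps; adding --tags and the follow-on docker step are assumptions for illustration, not part of the answer above.
# cloudbuild.yaml (sketch) - deepen the clone and fetch tags before building
steps:
  - name: gcr.io/cloud-builders/git
    args: ['fetch', '--tags', '--depth=100']
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/project-compute/continuous-deploy/project-ui:$COMMIT_SHA', '-f', '/workspace/installer/docker/ui/Dockerfile', '.']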
To be clear, you want all tags to be available in the Git repo, not just to trigger on tag changes? In the latter case, the triggering tag should be available, IIUC.
I'll defer to someone on the Container Builder team for a more detailed explanation, but that error tells me that they used gcloud to clone the Google Cloud Source Repository (GCSR), which configures a Git credential helper named as such. They likely did this in another container before invoking yours, or on the host. Since gcloud and/or the gcloud credential helper aren't available in your container, you can't authenticate properly with GCSR.
You can learn a bit more about the credential helper here.

GitLab - Google Compute Engine continuous delivery

What I am trying to do is enable continuous delivery from GitLab to my Compute Engine instance on Google Cloud. I have Ubuntu 16.04 LTS running there, and I installed all the components needed to run my project: Swift, Vapor, nginx.
I have managed to install the GitLab runner as well and created a runner which is accessible from my GitLab repo. Every time I push to master the runner triggers. What happens is a failure due to:
could not create leading directories of '/home/gitlab-runner/builds/2bbbbbd/0/Server/Packages/vapor.git': Permission denied
If I change the permissions with chmod -R 777, it hangs on "running" for the build stage visible in the GitLab pipeline.
I did something like:
sudo chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/builds
sudo chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/cache
but this hasn't helped; the error is the same Permission denied.
Below is my .gitlab-ci.yml:
before_script:
  - swift --version

stages:
  - build
  - deploy

job_build:
  stage: build
  before_script:
    - vapor clean
  script:
    - vapor build --release
  only:
    - master

job_run_app:
  stage: deploy
  script:
    - echo "Deploy a API"
    - vapor run --name=App --env=production
  environment:
    name: production

job_run_frontend:
  stage: deploy
  script:
    - echo "Deploy a Frontend"
    - vapor run --name=Frontend --env=production
  environment:
    name: production
But it never passes to the next stage, e.g. deploy. I waited more than 14 hours for it, without result.
And... I have a few more questions:
The GitLab runner creates builds under /home/gitlab-runner/builds/, and in this location every new job has its own folder, e.g. /home/gitlab-runner/builds/2bbbbbd/, in which my project sits and the commands are executed. So what happens when the first deployment is still running and I deploy a new version? Are the ports blocked by the first instance, and so on?
If I want to enable supervisor, how do I do that when the folder is different every time I deploy?
Can anyone explain, show me, or point me to a tutorial on how to do continuous deployment without Docker?
How do I start a service using the GitLab runner?
Thanks to a long, deep search I finally found an answer! The full article can be found above.
Briefly, the GitLab CI documentation recommends using dpl for deployment. The GitLab runner runs the tests and then the process should end: the runner is designed to kill all processes it created after finishing each build, and it cannot perform operations outside its working directory.
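A minimal sketch of what a dpl-based deploy job might look like in .gitlab-ci.yml; the Heroku provider, app name and API-key variable are placeholders chosen only for illustration (dpl supports many providers):
job_deploy:
  stage: deploy
  script:
    - gem install dpl                  # dpl is a Ruby gem, so Ruby must be available on the runner
    - dpl --provider=heroku --app=my-app --api-key=$HEROKU_API_KEY
  only:
    - master
The idea is that the deploy job hands the long-running work to an external provider or process manager and then exits, instead of leaving a vapor run process behind in the runner's build directory.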