Azure DevOps S3 React/MERN stack - amazon-web-services

Does anyone have any experience using Azure DevOps to deploy a React build package to AWS with their extension?
I'm stuck on uploading only the build output produced by npm run build.
Here is my script so far:
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: NodeTool@0
  inputs:
    versionSpec: '10.x'
  displayName: 'Install Node.js'

- script: |
    npm install
    npm test
    npm run build

- task: S3Upload@1
  inputs:
    awsCredentials: 'AWS Deploy User'
    regionName: 'us-east-1'
    bucketName: 'test'
    globExpressions: '**'
    createBucket: true
  displayName: 'npm install and build'
The only option on the S3Upload task that stands out is sourceFolder. Examples use something like "$(Build.ArtifactStagingDirectory)", but since I've never used that before it doesn't make a lot of sense to me. Would it just be as simple as $(Build.ArtifactStagingDirectory)/build?

The predefined variable $(Build.ArtifactStagingDirectory) is mapped to a local path on the agent (e.g. c:\agent\_work\1\a on a Windows agent) where artifacts are copied before being pushed to their destination.
In your YAML pipeline, your source code is checked out into the folder $(Build.SourcesDirectory) (e.g. c:\agent\_work\1\s), and the npm commands in the script task all run in that folder. So the npm build output ends up in $(Build.SourcesDirectory)/build (e.g. c:\agent\_work\1\s\build).
The S3Upload task uploads files from $(Build.ArtifactStagingDirectory) by default. You can point the sourceFolder attribute of the S3Upload task (default: $(Build.ArtifactStagingDirectory)) at the folder $(Build.SourcesDirectory)/build instead. See below:
- task: S3Upload@1
  inputs:
    awsCredentials: 'AWS Deploy User'
    regionName: 'us-east-1'
    bucketName: 'test'
    globExpressions: '**'
    createBucket: true
    sourceFolder: '$(Build.SourcesDirectory)/build'
Another workaround is to use the Copy Files task to copy the build results from $(Build.SourcesDirectory)/build to the folder $(Build.ArtifactStagingDirectory). See the example below.
- task: CopyFiles@2
  inputs:
    Contents: 'build/**' # Copy the build directory (React)
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
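If you take the CopyFiles route, the build output then sits under the staging directory, so a follow-up S3Upload can stay close to its defaults. A minimal sketch, reusing the same bucket and credential names from the question (pointing sourceFolder at the copied build folder keeps the build/ prefix out of the S3 object keys):

- task: CopyFiles@2
  inputs:
    Contents: 'build/**'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'

- task: S3Upload@1
  inputs:
    awsCredentials: 'AWS Deploy User'   # same service connection as above
    regionName: 'us-east-1'
    bucketName: 'test'
    sourceFolder: '$(Build.ArtifactStagingDirectory)/build'
    globExpressions: '**'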

Related

Terraform init is failing to locate a module when providing a relative path

I've started building an infrastructure using Terraform. Within that TF configuration I call a module using a relative path. This works in a classic release, but I have been tasked with converting the pipeline to YAML. When I run the terraform init step, the agent finds the TF config files but can't find the modules folder, even though the artifact was downloaded in a previous task.
YAML file:
trigger:
- master

resources:
  pipelines:
  - pipeline: Dashboard-infra
    project: Infrastructure
    source: IT Dashboard
  - pipeline: Infra-modules
    project: Infrastructure
    source: AWS Modules
    trigger: true

stages:
- stage: Test
  displayName: Test
  variables:
  - group: "Non-Prod Keys"
  jobs:
  - deployment:
    displayName: string
    variables:
      region: us-east-1
      app_name: it-dashboard
      environment: test
      tf.path: 'IT Dashboard'
    pool:
      vmImage: 'ubuntu-latest'
    environment: test
    strategy:
      runOnce:
        deploy:
          steps:
          - task: DownloadBuildArtifacts@1
            inputs:
              buildType: 'specific'
              project: '23e9505e-a627-4681-9598-2bd8b6c1204c'
              pipeline: '547'
              buildVersionToDownload: 'latest'
              downloadType: 'single'
              artifactName: 'drop'
              downloadPath: '$(Agent.BuildDirectory)/s'
          - task: DownloadBuildArtifacts@1
            inputs:
              buildType: 'specific'
              project: '23e9505e-a627-4681-9598-2bd8b6c1204c'
              pipeline: '88'
              buildVersionToDownload: 'latest'
              downloadType: 'single'
              artifactName: 'Modules'
              downloadPath: '$(agent.builddirectory)/s'
          - task: ExtractFiles@1
            inputs:
              archiveFilePatterns: 'drop/infrastructure.zip'
              destinationFolder: '$(System.DefaultWorkingDirectory)'
              cleanDestinationFolder: false
              overwriteExistingFiles: false
          - task: ExtractFiles@1
            inputs:
              archiveFilePatterns: 'Modules/drop.zip'
              destinationFolder: '$(System.DefaultWorkingDirectory)'
              cleanDestinationFolder: false
              overwriteExistingFiles: false
          - task: TerraformInstaller@0
            inputs:
              terraformVersion: '0.12.3'
          - task: TerraformTaskV2@2
            inputs:
              provider: 'aws'
              command: 'init'
              workingDirectory: '$(System.DefaultWorkingDirectory)/$(tf.path)'
              commandOptions: '-var "region=$(region)" -var "app_name=$(app.name)" -var "environment=$(environment)"'
              backendServiceAWS: 'tf_nonprod'
              backendAWSBucketName: 'wdrx-deployments'
              backendAWSKey: '$(environment)/$(app.name)/infrastructure/$(region).tfstate'
Raw error log:
2021-10-29T12:30:16.5973748Z ##[section]Starting: TerraformTaskV2
2021-10-29T12:30:16.5981535Z ==============================================================================
2021-10-29T12:30:16.5981842Z Task         : Terraform
2021-10-29T12:30:16.5982217Z Description  : Execute terraform commands to manage resources on AzureRM, Amazon Web Services(AWS) and Google Cloud Platform(GCP)
2021-10-29T12:30:16.5982555Z Version      : 2.188.1
2021-10-29T12:30:16.5982791Z Author       : Microsoft Corporation
2021-10-29T12:30:16.5983122Z Help         : [Learn more about this task](https://aka.ms/AA5j5pf)
2021-10-29T12:30:16.5983461Z ==============================================================================
2021-10-29T12:30:16.7253372Z [command]/opt/hostedtoolcache/terraform/0.12.3/x64/terraform init -var region=*** -var app_name=$(app.name) -var environment=test -backend-config=bucket=wdrx-deployments -backend-config=key=test/$(app.name)/infrastructure/***.tfstate -backend-config=region=*** -backend-config=access_key=*** -backend-config=secret_key=***
2021-10-29T12:30:16.7532941Z Initializing modules...
2021-10-29T12:30:16.7558115Z - S3-env in ../Modules/S3
2021-10-29T12:30:16.7578267Z - S3-env.Global-Vars in ../Modules/Global-Vars
2021-10-29T12:30:16.7585434Z - global-vars in
2021-10-29T12:30:16.7597321Z Error: Unreadable module directory
2021-10-29T12:30:16.7599087Z Unable to evaluate directory symlink: lstat ../Modules/global-vars: no such file or directory
2021-10-29T12:30:16.7600779Z Error: Failed to read module directory
2021-10-29T12:30:16.7601405Z Module directory does not exist or cannot be read.
2021-10-29T12:30:16.7602573Z Error: Unreadable module directory
2021-10-29T12:30:16.7603271Z Unable to evaluate directory symlink: lstat ../Modules/global-vars: no such file or directory
2021-10-29T12:30:16.7604749Z Error: Failed to read module directory
2021-10-29T12:30:16.7605370Z Module directory does not exist or cannot be read.
2021-10-29T12:30:16.7743995Z ##[error]Error: The process '/opt/hostedtoolcache/terraform/0.12.3/x64/terraform' failed with exit code 1
2021-10-29T12:30:16.7756780Z ##[section]Finishing: TerraformTaskV2
I have even attempted to move the modules folder inside the tf.path so it sits in the same folder as the tf config files, and changed the source from "../" to "./". No matter which location I extract the modules folder to (after downloading it as an artifact from another build pipeline), it cannot be found when the tf config files reference it. I am fairly new to DevOps and would appreciate any help or just being pointed in the right direction.
Define a system.debug: true variable at the global level to enable debug logs; maybe something there will give you a hint:
variables:
  system.debug: true
Apart from the downloaded artifacts, do you expect to have files checked out from the repo the pipeline is defined in? A deployment job doesn't check out git files by default, so you may want to add checkout: self to the steps there, for example:
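A minimal sketch of where that line would go in the deployment job from the question (only the first steps are shown; the download and extract tasks stay exactly as they are):

strategy:
  runOnce:
    deploy:
      steps:
      - checkout: self   # deployment jobs skip the implicit checkout unless you ask for it
      - task: DownloadBuildArtifacts@1
        inputs:
          # ... same inputs as in the question ...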
Unable to evaluate directory symlink: lstat ../Modules/global-vars - this is suspicious; I wouldn't expect any symlinks in there. But maybe the error message is just misleading.
A useful trick is to log the whole directory structure.
You can do this with a bash script step (might need to apt install tree first):
- script: tree
Or with PowerShell (this works on an MS-hosted Linux agent):
- pwsh: Get-ChildItem -Path '$(agent.builddirectory)' -recurse

How do I create the build and deploy path for GitLab CI/CD?

My project folder:
api
frontend
The build and deploy succeed, but there is no effect on the website, so I need to set the frontend path in my yml file. I don't know how to do that. Can anyone help me?
stages:
  - build
  - deploy

variables:
  ARTIFACT_NAME: my-cookbook.tgz
  DEV_BUCKET: dev-account-devops
  PROD_BUCKET: prod-account-devops
  S3_PATH: elk/${ARTIFACT_NAME}-${CI_BUILD_ID}-${CI_BUILD_REF}

package:
  stage: build
  script: git archive --format tgz HEAD > $ARTIFACT_NAME
  artifacts:
    untracked: true
    expire_in: 1 week

deploy_development:
  stage: deploy
  script:
    - export AWS_ACCESS_KEY=$DEV_AWS_ACCESS_KEY
    - export AWS_SECRET_ACCESS_KEY=$DEV_SECRET_ACCESS_KEY
    - aws s3 cp $ARTIFACT_NAME s3://$DEV_BUCKET/$S3_PATH
  environment: development

deploy_production:
  stage: deploy
  script:
    - export AWS_ACCESS_KEY=$PROD_AWS_ACCESS_KEY
    - export AWS_SECRET_ACCESS_KEY=$PROD_SECRET_ACCESS_KEY
    - aws s3 cp $ARTIFACT_NAME s3://$PROD_BUCKET/$S3_PATH
  environment: production
  when: manual
  only:
    - master
[The OP answered their own question in the linked forum post that was replaced by the .yml file in the question. The answer is copied here so that this question has an answer.]
The export for the AWS access key had a typo and was missing the last part. It should be AWS_ACCESS_KEY_ID, not AWS_ACCESS_KEY.
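A minimal sketch of the corrected deploy job, based on the snippet in the question (only the first export line changes; the bucket and variable names are the question's own):

deploy_development:
  stage: deploy
  script:
    - export AWS_ACCESS_KEY_ID=$DEV_AWS_ACCESS_KEY      # was AWS_ACCESS_KEY
    - export AWS_SECRET_ACCESS_KEY=$DEV_SECRET_ACCESS_KEY
    - aws s3 cp $ARTIFACT_NAME s3://$DEV_BUCKET/$S3_PATH
  environment: development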

Errors when trying to run an AWSPowerShellModuleScript@1 task in an Azure DevOps pipeline

I currently have an Azure DevOps pipeline to build and deploy a Next.js application via the Serverless Framework.
Upon reaching the AWSPowerShellModuleScript@1 task I get these errors:
[warning]MSG:UnableToDownload «https:...» «»
[warning]Unable to download the list of available providers. Check
your internet connection.
[warning]Unable to download from URI 'https:...' to ''.
[error]No match was found for the specified search criteria for the
provider 'NuGet'. The package provider requires 'PackageManagement'
and 'Provider' tags. Please check if the specified package has the
tags.
[error]No match was found for the specified search criteria and
module name 'AWSPowerShell'. Try Get-PSRepository to see all
available registered module repositories.
[error]The specified module 'AWSPowerShell' was not loaded because no
valid module file was found in any module directory.
I do have the AWS Toolkit extension installed, and it's visible when I go to manage extensions within Azure DevOps.
My pipeline:
trigger: none

stages:
- stage: develop_build_deploy_stage
  pool:
    name: Default
    demands:
    - msbuild
    - visualstudio
  jobs:
  - job: develop_build_deploy_job
    steps:
    - checkout: self
      clean: true
    - task: NodeTool@0
      displayName: Install Node
      inputs:
        versionSpec: '12.x'
    - script: |
        npm install
        npx next build
      displayName: Install Dependencies and Build
    - task: CopyFiles@2
      inputs:
        Contents: 'build/**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    - task: PublishBuildArtifacts@1
      displayName: Publish Artifact
      inputs:
        pathtoPublish: $(Build.ArtifactStagingDirectory)
        artifactName: dev_artifacts
    - task: AWSPowerShellModuleScript@1
      displayName: Deploy to Lambda@Edge
      inputs:
        awsCredentials: '###'
        regionName: '###'
        scriptType: 'inline'
        inlineScript: 'npx serverless --package dev_artifacts'
I know I could use the ubuntu vmImage and then make use of the AWSShellScript task instead, but the build agent available to me doesn't support bash.
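For reference only, here is a hedged sketch of that Bash-based alternative: the AWSShellScript@1 task from the same AWS Toolkit, reusing the question's placeholder values. It only applies if an agent that can run bash is available, which is not the case for the asker:

# assumes a pool/vmImage that can run bash, e.g. vmImage: 'ubuntu-latest'
- task: AWSShellScript@1
  displayName: Deploy to Lambda@Edge
  inputs:
    awsCredentials: '###'   # same placeholders as in the question
    regionName: '###'
    scriptType: 'inline'
    inlineScript: 'npx serverless --package dev_artifacts'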

GitHub Pages issue when using GitHub Actions and github-pages-deploy-action?

I have a simple GitHub repo where I host the content of my CV. I use hackmyresume to generate the index.html. I'm using GitHub Actions to run the npm build, and it should publish the generated content to the gh-pages branch.
My workflow file has
on:
  push:
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Deploy with github-pages
        uses: JamesIves/github-pages-deploy-action@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          BASE_BRANCH: master # The branch the action should deploy from.
          BRANCH: gh-pages # The branch the action should deploy to.
          FOLDER: target # The folder the action should deploy.
          BUILD_SCRIPT: npm install && npm run-script build
And the build command is
"build": "hackmyresume BUILD ./src/main/resources/json/fresh/resume.json target/index.html -t compact",
I can see the generated html file getting committed to the gh-pages branch:
https://github.com/emeraldjava/emeraldjava/blob/gh-pages/index.html
but the GitHub Pages site doesn't pick this up; I get a 404 error when I hit
https://emeraldjava.github.io/emeraldjava/
I believe my repo settings and secrets are correct, but I must be missing something small. Any help would be appreciated.
This is happening because of your use of the GITHUB_TOKEN variable. There's an open issue with GitHub because the built-in token doesn't trigger the GitHub Pages deploy job. This means you'll see the files get committed correctly, but they won't be visible.
To get around this you can use a GitHub access token. You can learn how to generate one here. It needs to be correctly scoped so it has permission to push to a public repository. You'd store this token in your repository's Settings > Secrets menu (call it something like ACCESS_TOKEN), and then reference it in your configuration like so:
on:
  push:
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Deploy with github-pages
        uses: JamesIves/github-pages-deploy-action@master
        env:
          ACCESS_TOKEN: ${{ secrets.ACCESS_TOKEN }}
          BASE_BRANCH: master # The branch the action should deploy from.
          BRANCH: gh-pages # The branch the action should deploy to.
          FOLDER: target # The folder the action should deploy.
          BUILD_SCRIPT: npm install && npm run-script build
You can find an outline of these variables here. Using an access token will allow the GitHub Pages job to trigger when a new deployment is made. I hope that helps!

Integrating Selenium with Gitlab CI

I have created an automated Selenium test script which works perfectly fine.
My task now is to set up GitLab CI and automatically run this Selenium script whenever I push to git.
Is it possible to make the Selenium script execute automatically and inform the user whether it passes or fails?
Thank you
How do you automatically run automation tests on GitLab CI with Selenium and SpecFlow in a .NET project?
If this is what you are looking for, the core part is setting up the gitlab-ci.yml file.
Here is how a sample gitlab-ci.yml should look:
image: please give your own docker image which can download .net stuff

variables:
  DOCKER_DRIVER: overlay2
  SOURCE_CODE_DIRECTORY: 'src'
  BINARIES_DIRECTORY: 'bin'
  OBJECTS_DIRECTORY: 'obj'
  NUGET_PACKAGES_DIRECTORY: '.nuget'

stages:
  - Build
  - Test

before_script:
  - 'dotnet restore ${SOURCE_CODE_DIRECTORY}/TestProject.sln --packages ${NUGET_PACKAGES_DIRECTORY}'

Build:
  stage: Build
  script:
    - 'dotnet build $SOURCE_CODE_DIRECTORY/TestProject.sln --no-restore'
  except:
    - tags
  artifacts:
    paths:
      - '${SOURCE_CODE_DIRECTORY}/*/${BINARIES_DIRECTORY}'
      - '${SOURCE_CODE_DIRECTORY}/*/${OBJECTS_DIRECTORY}'
      - '${NUGET_PACKAGES_DIRECTORY}'
    expire_in: 2 hr

Test:
  stage: Test
  services:
    - selenium/standalone-chrome:latest
  script:
    - 'export MSBUILDSINGLELOADCONTEXT=1'
    - 'export selenium_remote_url=http://selenium__standalone-chrome:4444/wd/hub/'
    - 'export PATH=$PATH:${SOURCE_CODE_DIRECTORY}/chromedriver.exe'
    - 'dotnet test $SOURCE_CODE_DIRECTORY/ExpressTestProject.sln --no-restore'
  artifacts:
    paths:
      - '${SOURCE_CODE_DIRECTORY}/chromedriver.exe'
      - '${SOURCE_CODE_DIRECTORY}/*/${BINARIES_DIRECTORY}'
      - '${SOURCE_CODE_DIRECTORY}/*/${OBJECTS_DIRECTORY}'
      - '${NUGET_PACKAGES_DIRECTORY}'
That's it. When you set up your project with this .gitlab-ci.yml, 90% of your job is done.
The tests will run automatically in GitLab whenever you commit something to your source tree or TFS.
Thanks