AWS CodeBuild batch build - access artifacts

I've set up a build job that uses batch builds.
Two of the batch builds build something, upload it to S3, and write the location to a JSON file.
The last build is supposed to pick up the two JSON files and use them for some further steps.
My problem: I can't find the artifacts in the last job.
When I run ls in the first two jobs, the files are there - but not in the last one.
Here is my buildspec, with unimportant parts removed.
version: 0.2
batch:
  fast-fail: true
  build-graph:
    - identifier: template_examplehook
    - identifier: s3_checkbucketencryptionhook
    - identifier: stackset
      buildspec: automation/assemble-template.yaml
      depend-on:
        - template_examplehook
        - s3_checkbucketencryptionhook
phases:
  install:
    runtime-versions:
      python: 3.7
  pre_build:
    commands:
      - echo "Starting ..."
      - ...
  build:
    commands:
      - echo "Building with $(python --version)"
      - cd $CODEBUILD_BATCH_BUILD_IDENTIFIER
      - ---
      - echo $S3_URI_PACKAGE > hash.json
      - ---
  post_build:
    commands:
      - echo Build completed on $(date)
artifacts:
  files:
    - '*/hash.json'
I expected to find the hash.json files in their respective folders, but they don't exist in the last batch job.

Update after talking to our AWS tech support:
Unexpected behavior: it should work the way we thought it would, but it doesn't.
We ended up rewriting it and went with two separate build steps.
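For reference, here is a minimal sketch of the kind of hand-off we mean: passing the JSON files through S3 explicitly instead of relying on batch artifacts. The bucket name and key layout are placeholders, not part of our original setup:

# In each producer build (e.g. template_examplehook), push the file to a well-known key
- echo $S3_URI_PACKAGE > hash.json
- aws s3 cp hash.json s3://my-artifact-bucket/$CODEBUILD_BATCH_BUILD_IDENTIFIER/hash.json

# In the final build (stackset), pull both files back down before using them
- aws s3 cp s3://my-artifact-bucket/template_examplehook/hash.json template_examplehook/hash.json
- aws s3 cp s3://my-artifact-bucket/s3_checkbucketencryptionhook/hash.json s3_checkbucketencryptionhook/hash.json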

Related

AWS CodeBuild batch build-list not running phases for each build identifier

I'm new to AWS CodeBuild and have been trying to work out how to run the parts of the build in parallel (or even just use the same buildspec.yml for each project in my solution).
I thought the batch -> build-list was the way to go. From my understanding of the documentation, this will run the phases in the buildspec for each item in the build list.
Unfortunately, that does not appear to be the case - the batch section appears to be ignored, and the buildspec runs the phases once, with the default environment variables held at project level.
My buildspec is:
version: 0.2
batch:
  fast-fail: false
  build-list:
    - identifier: getPrintJobNote
      env:
        variables:
          IMAGE_REPO_NAME: getprintjobnote
          FOLDER_NAME: getPrintJobNote
      ignore-failure: false
    - identifier: GetPrintJobFilters
      env:
        variables:
          IMAGE_REPO_NAME: getprintjobfilters
          FOLDER_NAME: GetPrintJobFilters
      ignore-failure: false
phases:
  pre_build:
    commands:
      - echo Logging into Amazon ECR
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      - echo Building lambda docker container
      - echo Build path $CODEBUILD_SRC_DIR
      - cd $CODEBUILD_SRC_DIR/src/$FOLDER_NAME
      - docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
      - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
  post_build:
    commands:
      - echo Pushing to Amazon ECR
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
Is there something wrong in my buildspec, does build-list not do what I think it does, or is there something else that needs to be configured somewhere to enable this?
In the project configuration I found a setting for "enable concurrent build limit - optional". I tried changing this but got an error:
Project-level concurrent build limit cannot exceed the account-level concurrent build limit of 1.
This may not be related and could just be because my account is new... I think the default should be 60 anyway.
I had a similar problem; it turned out that batch builds are a separate build type. Go to the project -> Start build with overrides, then select Batch build.
I also split the buildspec file: the first spec has the batch config, the second one has the "actual" phases, referenced via the buildspec: directive. Not sure if this is required, though.
Also: if builds are webhook-triggered, the webhook also has to be configured to run a batch build.
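To illustrate the split-buildspec idea, here is a rough sketch (the file names are made up; the buildspec: key on each build-list entry points the batch build at the spec that contains the phases):

# buildspec.yml - batch configuration only (hypothetical layout)
version: 0.2
batch:
  fast-fail: false
  build-list:
    - identifier: getPrintJobNote
      buildspec: buildspec-phases.yml
      env:
        variables:
          IMAGE_REPO_NAME: getprintjobnote
          FOLDER_NAME: getPrintJobNote
    - identifier: GetPrintJobFilters
      buildspec: buildspec-phases.yml
      env:
        variables:
          IMAGE_REPO_NAME: getprintjobfilters
          FOLDER_NAME: GetPrintJobFilters

# buildspec-phases.yml - the "actual" pre_build/build/post_build phases from the question

A batch build can also be started outside the console with aws codebuild start-build-batch --project-name <project>, which is the CLI way of kicking off the batch build type.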

Is it possible to set environment variables per branch in the Amplify.yml file (AWS Amplify)?

I'm currently using AWS Amplify to manage my front end. I've been manually injecting the environment variables through the console.
While I have seen that (at least in this case) the environment variables are correctly protected, as mentioned in the AWS docs, I wanted to know if it is possible to set per-branch variables in the amplify.yml file that do not necessarily need protection.
Something like this:
version: 0.1
env:
  variables:
    myvarOne:
      branch: master
      value: ad
      branch: dev
      value: otherval
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
        - yarn lint
        - yarn test
    build:
      commands:
        - yarn build
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
So far, it seems there is no ideal solution for your problem.
However, it is possible to use a workaround to get something like that working.
You cannot have per-branch environment variables, but you can have per-branch commands.
So you can define different variables for different branches and run the appropriate command as you wish:
version: 0.1
env:
  variables:
    myvarOne:
      value_master: val
      value_dev: otherval
frontend:
  phases:
    preBuild:
      commands:
        - if [ "${AWS_BRANCH}" = "master" ]; then export VALUE=${value_master}; fi
        - if [ "${AWS_BRANCH}" = "dev" ]; then export VALUE=${value_dev}; fi
        - yarn install
        - yarn lint
        - yarn test
    build:
      commands:
        - yarn build
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
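Depending on the framework, the exported VALUE still has to reach the build itself; one common pattern (an assumption on my part, not something from the snippet above) is to write it into an .env file during preBuild so the yarn build step can pick it up:

preBuild:
  commands:
    - if [ "${AWS_BRANCH}" = "master" ]; then export VALUE=${value_master}; fi
    - if [ "${AWS_BRANCH}" = "dev" ]; then export VALUE=${value_dev}; fi
    # REACT_APP_MYVAR is a hypothetical variable name; use whatever your framework expects
    - echo "REACT_APP_MYVAR=${VALUE}" >> .env
    - yarn install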
I might be pretty late with the answer, but as of the time of writing, there seems to be an out-of-the-box solution for your case.
The documentation, almost verbatim, asks you to do the following:
Sign in to the AWS Management Console.
Navigate to App Settings > Environment variables > Manage variables.
In the Manage variables section, under Variable, enter your key. For Value, enter your value.
Choose Actions and then choose Add variable override. You can override the environment variable for the key based on branches.
You now have a set of environment variables specific to your branch.
I don't have enough Stack Overflow reputation to add an image, so please refer to the 5th point in the documentation, which describes how to add a variable override for specific branches.
https://docs.aws.amazon.com/amplify/latest/userguide/environment-variables.html
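If you prefer not to click through the console, the same per-branch override can be set programmatically. A sketch using the AWS CLI (the app id, branch name, and variable are placeholders; check the CLI reference for the exact parameter shape):

# UpdateBranch carries environment variables that apply only to that branch
aws amplify update-branch --app-id d1a2b3c4example --branch-name dev --environment-variables myvarOne=otherval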

Why is file.txt not being deployed to my server?

In my existing AWS pipeline I have the following buildspec.yml:
version: 0.2
phases:
  build:
    commands:
      - cd media/web/front_dev
      - echo "Hello" > ../web/txt/hello.txt
artifacts:
  files:
    - ./media/web/hello.txt
And the appspec.yml has the following:
version: 0.0
os: linux
files:
  - source: /
    destination: /webserver/src/public
But the file hello.txt is not being deployed to the server in the deploy phase. Once I SSH into the machine, I look for the file at the following path:
/webserver/src/public/media/web/hello.txt
But the file is not there. Do you have any idea why?
My pipeline initially had only a source and a deployment step; then I edited it to add a CodeBuild step as well.
Check your pipeline. You may have added the build step, but the deploy step might still be fetching the code from version control instead of from the build output. To solve that, follow these steps (a sketch of the corresponding pipeline definition follows the list):
Specify a name for the output artifact of the build step.
In the deploy step, select as input artifact the artifact you named as the output artifact of the build step.
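For example, if the pipeline is defined in CloudFormation, the fix amounts to making the deploy action's input artifact name match the build action's output artifact name. An excerpt with placeholder names (Configuration blocks omitted):

- Name: Build
  Actions:
    - Name: CodeBuild
      ActionTypeId: { Category: Build, Owner: AWS, Provider: CodeBuild, Version: "1" }
      InputArtifacts:
        - Name: SourceOutput
      OutputArtifacts:
        - Name: BuildOutput        # step 1: name the build output
- Name: Deploy
  Actions:
    - Name: CodeDeploy
      ActionTypeId: { Category: Deploy, Owner: AWS, Provider: CodeDeploy, Version: "1" }
      InputArtifacts:
        - Name: BuildOutput        # step 2: deploy the build output, not the source artifact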

AWS Lambda CodePipeline can't find my deps.json file?

I'm trying to build & deploy a simple sample project to learn AWS. C# / .NET Core.
My buildspec looks like this:
version: 0.2
phases:
  install:
    runtime-versions:
      dotnet: 2.2
  pre_build:
    commands:
      - echo Restore started on `date`
      - dotnet restore AWSServerless1.csproj
  build:
    commands:
      - echo Build started on `date`
      - dotnet publish -c release -o ./build_output AWSServerless1.csproj
artifacts:
  files:
    - ./build_output/**/*
    - scripts/**/*
    - appspec.yml
  discard-paths: yes
My appspec looks like this:
version: 0.0
Resources:
  - myStack-AspNetCoreFunction-1HPKUEU7I6GFW:
      Type: AWS::Lambda::Function
      Properties:
        Name: "myStack-AspNetCoreFunction-1HPKUEU7I6GFW"
        Alias: "AWSServerless1"
        CurrentVersion: "1"
        TargetVersion: "2"
The pipeline completes successfully, but when I try to run the lambda, I get a 502. I checked the logs and it says:
Could not find the required 'AWSServerless1.deps.json'. This file should be present at the root of the deployment package.: LambdaException
When I download the package from S3, it looks to me like everything is there. It's a zip file with no paths anywhere; everything is in the root of the zip, including AWSServerless1.deps.json.
Any ideas?
Use dotnet lambda package instead of dotnet publish.
See https://github.com/aws/aws-extensions-for-dotnet-cli
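A rough sketch of how that change might look in the buildspec; the tool name and flags come from the linked aws-extensions-for-dotnet-cli repo, while the output path mirrors the question and may need adjusting for the rest of the pipeline:

phases:
  install:
    runtime-versions:
      dotnet: 2.2
    commands:
      # install the Amazon.Lambda.Tools global tool that provides "dotnet lambda"
      - dotnet tool install -g Amazon.Lambda.Tools
      - export PATH="$PATH:$HOME/.dotnet/tools"
  build:
    commands:
      # package zips the published output the way the Lambda runtime expects it
      - dotnet lambda package --configuration Release --output-package ./build_output/AWSServerless1.zip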

In my AWS CodeBuild build, I get an error 'No artifact files specified' when I try to specify my endpoint in my yml file.

I have googled and searched within Stack Overflow and found some suggestions, but still no success. My build process in AWS CodeBuild runs and gives me a success output, but the log shows 'No artifact files specified', and as a result no files are being copied to my S3. Could anybody help me figure this out? Here I share my yml setting:
version: 0.1
phases:
  build:
    commands:
      - echo Nothing to do yet
addons:
  artifacts:
    s3_region: "eu-central-1"
    files:
      - '**/*'
I suggest you refer to the Build Specification Reference.
Specifically, you should remove addons: as well as s3_region:, as neither of these is a valid CodeBuild setting. You should also be using version: 0.2, as version: 0.1 has been deprecated.
Here is what your buildspec.yml should look like:
version: 0.2
phases:
  build:
    commands:
      - echo Nothing to do yet
artifacts:
  files:
    - '**/*'
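If you also need control over where the files end up, the artifacts section accepts a few optional keys (the values below are placeholders); the destination S3 bucket itself is configured in the CodeBuild project's artifact settings, not in the buildspec:

artifacts:
  files:
    - '**/*'
  name: my-build-$(date +%Y-%m-%d)   # optional artifact name
  base-directory: dist               # optional; only package files under dist/
  discard-paths: no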