CodeBuild local build artifacts only readable by root user - amazon-web-services

I'm using AWS codebuild to build locally (see https://aws.amazon.com/blogs/devops/announcing-local-build-support-for-aws-codebuild/).
I run the build with the following command:
./codebuild_build.sh -i aws/codebuild/standard:4.0 -a artifacts -s .
When the build is done, here's the content of my "artifacts" directory:
total 114612
-rw-r--r-- 1 root root 117360014 mai 18 15:51 artifacts.zip
Is there any way to make CodeBuild apply different permissions to this artifacts file?

The script doesn't change the permissions by default. Since you have the script locally, why not add a chmod (or chown) line to change them after the build?
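A minimal sketch of that fix, run after the build finishes (the file name comes from the question's -a flag; adjust as needed, and note that changing a root-owned file requires root privileges, e.g. via sudo):

```shell
# Make the root-owned artifact readable/writable by the invoking user.
ARTIFACT=artifacts/artifacts.zip
if [ -f "$ARTIFACT" ]; then
    chown "$(id -u):$(id -g)" "$ARTIFACT" 2>/dev/null || true  # needs root
    chmod 644 "$ARTIFACT"
fi
```

The same lines could also be appended to codebuild_build.sh itself so every local build cleans up after itself.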

invalid build flag -rw-r--r-- when using esbuild in aws amplify

I am trying to host a Next.js project on AWS Amplify. As my app size is more than the Amplify limit, I had to use the following commands to reduce the size of my app during the build:
- allfiles=$(ls -al ./.next/standalone/**/*.js)
- npx esbuild $allfiles --minify --outdir=.next/standalone --platform=node --target=node16 --format=cjs --allow-overwrite
but I got the following error
Invalid build flag: "-rw-r--r--"
It seems there is some permission problem, but I'm not sure how to fix it.
Next.js version: 12
Node version: 16
Amplify CLI version: 10.6.2
I'm new to AWS; thank you for your help.
Although those two commands are given in the Amplify docs, this part of the first command:
ls -al ./.next/standalone/**/*.js
returns a long listing of all the .js files in the directory, including their file permissions (e.g. -rw-r--r--), owner, size, and date. The npx esbuild command expects file paths as input, but it's receiving the permission columns as well.
Try instead:
allfiles=$(ls -1 ./.next/standalone/**/*.js)
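Alternatively (an assumption on my part, not from the Amplify docs), the paths can be collected with find, which recurses reliably without depending on the shell's globstar behaviour:

```shell
# Emit plain file paths, one per line -- no permission column for
# esbuild to misread as a build flag.
allfiles=$(find ./.next/standalone -type f -name '*.js')
```

Then pass $allfiles to npx esbuild exactly as in the original build command.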

cifuzz/jazzer docker image missing a jar?

First time trying out cifuzz/jazzer, but I am unable to get the source code from GitHub without circumventing my organisation's restrictions on downloading external code (and risking their wrath). Fortunately, I can download and use the cifuzz/jazzer Docker image. However, all roads lead to this error:
ERROR: Could not find jazzer_standalone.jar. Please provide the pathname via the --agent_path flag.
Obviously, I'm no Jazzer expert, nor am I too seasoned with Docker beyond the (very) basics. However, by overriding the entrypoint of the image with:
docker run -it --entrypoint /bin/sh cifuzz/jazzer
and navigating to the /app directory where these files exist:
/fuzzing # cd /app/
/app # ls -alrt
total 10192
-r-xr-xr-x 1 root root 9764956 Oct 24 21:09 jazzer.jar
-r-xr-xr-x 1 root root 658288 Oct 24 21:09 jazzer
drwxr-xr-x 2 root root 4096 Oct 24 21:09 .
drwxr-xr-x 1 root root 4096 Nov 6 16:54 ..
Running ./jazzer results in the same error seen when trying to start the app through the instructions on the github page.
ERROR: Could not find jazzer_standalone.jar. Please provide the pathname via the --agent_path flag.
Searching the GitHub repo online for "jazzer_standalone.jar" finds this code in the BUILD.bazel file on line 34:
remap_paths = {
"driver/src/main/java/com/code_intelligence/jazzer/jazzer_standalone_deploy.jar": "jazzer_standalone.jar",
"launcher/jazzer": "jazzer",
},
It seems that jazzer_standalone_deploy.jar isn't remapped and/or included in the image?
The GitHub instructions at https://github.com/CodeIntelligenceTesting/jazzer say:
The "distroless" Docker image cifuzz/jazzer includes Jazzer together with OpenJDK 11. Just mount a directory containing your compiled fuzz target into the container under /fuzzing by running:
docker run -v path/containing/the/application:/fuzzing cifuzz/jazzer <arguments>
I tried:
docker run -v path-to-my-application-jar:/fuzzing cifuzz/jazzer
I left out the arguments just to get some error output and see whether I had the volume path correct, etc.
The result is:
ERROR: Could not find jazzer_standalone.jar. Please provide the pathname via the --agent_path flag.
This has been fixed in Jazzer and new images have been pushed: https://github.com/CodeIntelligenceTesting/jazzer/issues/524

gcloud builds submit with a fatal: not a git repository

I have a Go Dockerfile from https://cloud.google.com/run/docs/quickstarts/build-and-deploy with a one line change so that I can tell what version I'm running:
RUN go build -ldflags "-X main.Version=$(git describe --always)" -mod=readonly -v -o server
When I build locally via docker build . and test, there is no problem with git describe. However, if I submit the image to be built via gcloud builds submit, it fails with:
fatal: not a git repository (or any of the parent directories): .git
How do I build my Cloud Run docker image so it has this Git version reference?
When you run gcloud builds submit, not all of your project files are sent to Cloud Build. The command takes your .gitignore file and the .gcloudignore file into account. If you don't have a .gcloudignore, a default behavior is enforced in addition to the .gitignore directives. More detail here.
So, to fix this, create a .gcloudignore file containing only the files to exclude from your build. Leave the .git/ directory out of it (don't add it to the file) and git describe will work.
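A minimal .gcloudignore along these lines (the entries are illustrative, not required) keeps .git/ in the upload so that git describe works inside Cloud Build:

```
# .gcloudignore -- note that .git/ is deliberately NOT listed,
# so the repository metadata is uploaded with the sources.
.gcloudignore
cloudbuild.yaml
*.md
```

As soon as this file exists, gcloud stops applying its default ignore behavior and uses exactly these rules.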

AWS codepipeline, dockerfile is not able to access env variable CODEBUILD_SRC_DIR and CODEBUILD_SRC_DIR_SecondarySource

I am using AWS CodePipeline.
I have 2 CodeCommit repos, say source1 and source2.
I am using CodePipeline for CI/CD.
The pipeline I created uses both CodeCommit repos, i.e. source1 and source2, in its source stage.
CodeBuild also uses both sources, source1 and source2, in its input artifacts:
source1 is the primary and source2 the secondary input artifact.
I have a buildspec.yml file which uses a Dockerfile stored in the root directory of source1 to build the code.
Now the issue is that the Dockerfile is not able to copy the source2 code into the container.
For example, say source1 contains a folder abc and source2 contains a folder xyz.
I am doing the following in the Dockerfile:
COPY ./abc /source1/abc/ --working
COPY ./xyz /source2/xyz/ --not working, fails with the error below
COPY failed: stat /var/lib/docker/tmp/docker-builder297252497/xyz: no such file or directory.
Then I tried the following in the Dockerfile:
COPY ./abc /source1/abc/ --working
COPY $CODEBUILD_SRC_DIR_source2/xyz /source2/xyz/ --not working, same error
I also tried to cd into $CODEBUILD_SRC_DIR_source2 and then run the COPY command, but got the same error.
Afterwards, I tried printing PWD, CODEBUILD_SRC_DIR and CODEBUILD_SRC_DIR_source2 in both the yaml file and the Dockerfile, which yields the output below.
In the yaml file:
echo $CODEBUILD_SRC_DIR --> /codebuild/output/src886/src/s3/00
echo $CODEBUILD_SRC_DIR_source2 --> /codebuild/output/src886/src/s3/01
echo $PWD --> /codebuild/output/src886/src/s3/00
In the Dockerfile:
echo $CODEBUILD_SRC_DIR --> prints nothing
echo $CODEBUILD_SRC_DIR_source2 --> prints nothing
echo $PWD --> prints '/'
It seems the Dockerfile doesn't have access to the CODEBUILD_SRC_DIR and CODEBUILD_SRC_DIR_source2 environment variables.
Does anyone have any idea how I can access CODEBUILD_SRC_DIR_source2 (or source2 in general) in the Dockerfile, so that I can copy its contents into the container and make the CodeBuild run succeed?
Thanks in advance!
Adding an answer for anyone else who is facing the same issue. Hope this helps someone!
The issue was the build context passed to Docker.
When there is only one repo as input source, CodeBuild uses that directory as the working directory: CODEBUILD_SRC_DIR=/codebuild/output/src894561443/src.
The source of the first (and only) repo is present in that same directory, and in the buildspec.yml file we had the following command to build the image:
docker build -t tag . (uses the Dockerfile present in the root directory of the first source)
But when there are multiple sources, CodeBuild stores the input artifacts like this:
CODEBUILD_SRC_DIR=/codebuild/output/src886/src/s3/00
CODEBUILD_SRC_DIR_source2=/codebuild/output/src886/src/s3/01
instead of CODEBUILD_SRC_DIR=/codebuild/output/src886/src/,
where CODEBUILD_SRC_DIR is the first input artifact (1st CodeCommit repo)
and CODEBUILD_SRC_DIR_source2 is the second input artifact (2nd CodeCommit repo).
In this case CodeBuild was using CODEBUILD_SRC_DIR=/codebuild/output/src886/src/s3/00 as the working directory, and the command below passed the context as the dot '.' (pwd):
docker build -t tag .
As a result, only the first source was passed to Docker, and Docker failed to find the second source.
To fix this, we passed the parent directory of CODEBUILD_SRC_DIR=/codebuild/output/src886/src/s3/00, i.e. /codebuild/output/src886/src/s3/, as the context in the docker build command, like this:
docker build -t tag -f $CODEBUILD_SRC_DIR/Dockerfile /codebuild/output/src886/src/s3/
and in the Dockerfile referred to source1 and source2 as below:
source1=./00
source2=./01
And it worked!
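The same idea without hard-coding the s3/... path: the parent directory can be derived from CODEBUILD_SRC_DIR at build time. A sketch of the buildspec build phase under that assumption ("tag" is a placeholder):

```yaml
phases:
  build:
    commands:
      # Context = parent of the primary source dir, so both
      # .../s3/00 (source1) and .../s3/01 (source2) are inside it.
      - docker build -t tag -f "$CODEBUILD_SRC_DIR/Dockerfile" "$(dirname "$CODEBUILD_SRC_DIR")"
```

This keeps the buildspec working even if CodeBuild changes the exact output paths between runs.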

How can I make gcloud use a specific config directory?

I would like gcloud to use a specific .config directory that I know the path of. Is there a way to force it to use this directory?
You can set the environment variable
CLOUDSDK_CONFIG=/path/to/dir
to override the default value of ~/.config/gcloud.
While I'm not sure about using a different .config dir, you can use the --configuration flag.
You can see gcloud --help and gcloud topic configurations for more information.
First, I should mention that I second Zachary's and Kevin's answers.
But if you insist on using a specific .config file (on Linux it's actually a directory), or on switching between multiple such files/dirs, one way to do it is to temporarily copy or symlink them into the place where gcloud expects them. At least on Linux that is the ~/.config/gcloud directory.
Personally I prefer symlinking in such scenarios, this works for me:
/home/username/.config> rm -f gcloud; ln -s gcloud.v1 gcloud
/home/username/.config> ls -l gcloud
lrwxrwxrwx 1 username at 9 Jan 19 09:19 gcloud -> gcloud.v1
/home/username/.config> gcloud auth list
No credentialed accounts.
To login, run:
$ gcloud auth login `ACCOUNT`
/home/username/.config> rm -f gcloud ; ln -s gcloud.v2 gcloud
/home/username/.config> ls -l gcloud
lrwxrwxrwx 1 username at 9 Jan 19 09:19 gcloud -> gcloud.v2
/home/username/.config> gcloud auth list
Credentialed Accounts
ACTIVE ACCOUNT
* username@gmail.com
To set the active account, run:
$ gcloud config set account `ACCOUNT`
/home/username/.config>
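The rm/ln pair above can be wrapped in a small helper function (a sketch; the gcloud.v1/gcloud.v2 directory names follow the convention above and are otherwise an assumption):

```shell
# Point ~/.config/gcloud at one of several saved config dirs,
# e.g. ~/.config/gcloud.v1, ~/.config/gcloud.v2, ...
switch_gcloud_config() {
  target="$HOME/.config/gcloud.$1"
  if [ ! -d "$target" ]; then
    echo "no such config dir: $target" >&2
    return 1
  fi
  # Safety: never clobber a real directory, only a previous symlink.
  if [ -e "$HOME/.config/gcloud" ] && [ ! -L "$HOME/.config/gcloud" ]; then
    echo "refusing to replace non-symlink ~/.config/gcloud" >&2
    return 1
  fi
  rm -f "$HOME/.config/gcloud"
  ln -s "$target" "$HOME/.config/gcloud"
}
```

After switch_gcloud_config v2, gcloud auth list shows the accounts saved under gcloud.v2.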
I've had this problem for years and was thinking of coding some great rubygem to make it happen, then thought this might be a good compromise.
Create a simple script called contextual-gcloud. Note the \gcloud: the backslash is fundamental for the aliasing below, as it bypasses the alias and calls the real gcloud.
🐧$ cat > contextual-gcloud
#!/bin/bash
if [ -d .gcloudconfig/ ]; then
  echo "[$0] .gcloudconfig/ directory detected: using that dir for configs instead of default."
  CLOUDSDK_CONFIG=./.gcloudconfig/ \gcloud "$@"
else
  \gcloud "$@"
fi
Add this alias to your .bashrc and reload (or start a new bash):
alias gcloud=contextual-gcloud
That's it! If you have a directory with that name, the script will use it instead of the default, which means you can keep your configuration in source control etc.; just remember to git-ignore things like logs and private material (keys, certificates, ...).
Note: auto-completion is fixed by the alias ;)
Code: https://github.com/palladius/sakura/blob/master/bin/contextual-gcloud