I am injecting a secret from the GitHub build pipeline. I can see the secret (a dummy value for now) being echoed in the run of the Dockerfile. I want to use the exec form for accessing the secret. Currently I have the following so far, but it errors out:
RUN --mount=type=secret,id=PERSONAL_ACCESS_TOKEN \
export PERSONAL_ACCESS_TOKEN=$(cat /run/secrets/PERSONAL_ACCESS_TOKEN) && \
["mvn", "-s", "settings.xml", "clean", "install"]
When my build runs with the above instruction in the Dockerfile, I get the following error:
buildx failed with: ERROR: failed to solve: executor failed running
[/bin/sh -c export PERSONAL_ACCESS_TOKEN=$(cat /run/secrets/PERSONAL_ACCESS_TOKEN)
&& ["mvn", "-s", "settings.xml", "clean", "install"]]: exit code: 127
It seems like I can't use the exec form with the RUN instruction for secrets. Is there a way to use the bracket (exec) form with the RUN instruction?
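For comparison, here is a minimal sketch of the shell form that is typically used for this case (same secret id and settings.xml path as above). The exit code 127 happens because /bin/sh tries to run the literal token ["mvn", as a command; keeping the whole body as plain shell avoids that:

# a sketch of the shell form: read the mounted secret and run Maven in one RUN
RUN --mount=type=secret,id=PERSONAL_ACCESS_TOKEN \
    export PERSONAL_ACCESS_TOKEN=$(cat /run/secrets/PERSONAL_ACCESS_TOKEN) && \
    mvn -s settings.xml clean install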
Trying to run a simple bash script on Google Cloud Build. When it runs, the build says it cannot find the script, even though ls shows it is there
I've set up a build trigger on Google Cloud to run a simple test repository on pushes to the main branch
The test repository has just two files: the cloudbuild.yaml and a simple testfile.sh bash script
cloudbuild.yaml tells it to run this testfile.sh file, but the build says it cannot find the file even though a simple ls step shows it
I've tried like every combination of ways to run a bash file:
with/without '-c' argument
with/without '.' argument
with/without file shebang
cloudbuild.yaml:
steps:
- name: 'ubuntu'
entrypoint: 'bash'
args: ['-c', 'testfile.sh']
testfile.sh:
echo "Go suck it, world!"
gcloud builds log <log-id>:
starting build "640c5ba5-5906-4296-a80c-9adc54ee84bb"
FETCHSOURCE
hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint:
hint: git config --global init.defaultBranch <name>
hint:
hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
hint: 'development'. The just-created branch can be renamed via this command:
hint:
hint: git branch -m <name>
Initialized empty Git repository in /workspace/.git/
From https://source.developers.google.com/p/test-wtf-2734586432/r/test-files
* branch 1d6fc0b27c09cb3421a242764dfe28bc115bf8f5 -> FETCH_HEAD
HEAD is now at 1d6fc0b Fix typo in entrypoint
BUILD
Pulling image: ubuntu
Using default tag: latest
latest: Pulling from library/ubuntu
Digest: sha256:adf73ca014822ad8237623d388cedf4d5346aa72c270c5acc01431cc93e18e2d
Status: Downloaded newer image for ubuntu:latest
docker.io/library/ubuntu:latest
bash: testfile.sh: command not found
ERROR
ERROR: build step 0 "ubuntu" failed: step exited with non-zero status: 127
I fixed it
Had to get rid of the '-c' from the args list
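For reference, a corrected cloudbuild.yaml along those lines might look like this (a sketch based on the files in the question). Without -c, bash treats testfile.sh as a script path in the working directory rather than a command name to look up on PATH:

steps:
- name: 'ubuntu'
  entrypoint: 'bash'
  args: ['testfile.sh']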
I'm getting this error from TravisCI when it tries to run a pull request:
coveralls.exception.CoverallsException: Not on TravisCI. You have to provide either repo_token in .coveralls.yml or set the COVERALLS_REPO_TOKEN env var.
The command "docker-compose -f docker-compose.yml -f docker-compose.override.yml run -e COVERALLS_REPO_TOKEN web sh -c "coverage run ./src/manage.py test src && flake8 src && coveralls"" exited with 1.
However, I do have both COVERALLS_REPO_TOKEN and repo_token set as environment variables in my TravisCI, and I know they're correct because TravisCI passes my develop branch and successfully sends the results to coveralls.io:
OK
Destroying test database for alias 'default'...
Submitting coverage to coveralls.io...
Coverage submitted!
Job ##40.1
https://coveralls.io/jobs/61852774
The command "docker-compose -f docker-compose.yml -f docker-compose.override.yml run -e COVERALLS_REPO_TOKEN web sh -c "coverage run ./src/manage.py test src && flake8 src && coveralls"" exited with 0.
How do I get TravisCI to recognize my COVERALLS_REPO_TOKEN for the pull requests it runs?
Found the answer: You can't! At least not while keeping your coveralls.io token secret, because:
Defining encrypted variables in .travis.yml
Encrypted environment variables are not available to pull requests
from forks due to the security risk of exposing such information to
unknown code.
Defining Variables in Repository Settings
Similarly, we do not provide these values to untrusted builds,
triggered by pull requests from another repository.
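If the goal is just to keep fork pull request builds from failing, one possible workaround (a sketch, not something the Travis docs prescribe) is to skip the coveralls upload whenever the token was not injected:

# run tests and linting as before, but only upload coverage when the token exists
coverage run ./src/manage.py test src && flake8 src
if [ -n "$COVERALLS_REPO_TOKEN" ]; then
  coveralls
else
  echo "COVERALLS_REPO_TOKEN not set (fork pull request); skipping coveralls upload"
fi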
I have a Container Builder step:
steps:
- id: dockerbuild
name: gcr.io/cloud-builders/docker
entrypoint: 'bash'
args:
- -c
- |
docker build . -t test
images: ['gcr.io/project/test']
The Dockerfile used to create this test image has gsutil-specific commands, like:
FROM gcr.io/cloud-builders/gcloud
RUN gsutil ls
When I submit a docker build to the Container Builder service using
gcloud container builds submit --config cloudbuild.yml
I see the following error
You are attempting to perform an operation that requires a project id, with none configured. Please re-run gsutil config and make sure to follow the instructions for finding and entering your default project id.
The command '/bin/sh -c gsutil ls' returned a non-zero code: 1
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: exit status 1
My question is: how do I use gcloud/gsutil commands inside the Dockerfile so that it can run inside a docker build step?
To invoke "gcloud commands." using the tool builder, you need Container Builder service account, because it executes your builds on your behalf.
Here in this GitHub there is an example for cloud-builders using the gcloud command:
Note : you have to specify $PROJECT_ID it's mandatory for your builder to work.
To do this, your Dockerfile either needs to start from a base image that already has the Cloud SDK installed (like FROM gcr.io/cloud-builders/gcloud), or you need to install it yourself. Here's a Dockerfile that installs it: https://github.com/GoogleCloudPlatform/cloud-builders/blob/master/gcloud/Dockerfile.slim
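As a rough sketch of that build-arg approach (the ARG name is illustrative, and whether gsutil also has credentials available inside docker build depends on your setup), the build's project id can be forwarded into the Dockerfile like this:

# cloudbuild.yml: pass the built-in $PROJECT_ID substitution into the docker build
steps:
- id: dockerbuild
  name: gcr.io/cloud-builders/docker
  entrypoint: 'bash'
  args:
  - -c
  - |
    docker build --build-arg PROJECT_ID=$PROJECT_ID -t test .

# Dockerfile: configure gcloud/gsutil with that project before using them
FROM gcr.io/cloud-builders/gcloud
ARG PROJECT_ID
RUN gcloud config set project ${PROJECT_ID} && gsutil ls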
I was trying to set up a build trigger for a Kotlin app that is built using Gradle. For that I put together the following Dockerfile:
FROM gradle:jdk8 as builder
WORKDIR /home/gradle/project
COPY . .
WORKDIR ./Kuroji-Eventrouter-Server
RUN gradle shadowJar
FROM openjdk:8-jre-alpine
WORKDIR /app
COPY --from=builder /home/gradle/project/Kuroji-Eventrouter-Server/build/libs/kuroji-eventrouter-server-*-all.jar kuroji-eventrouter-server.jar
ENTRYPOINT ["java", "-jar", "kuroji-eventrouter-server.jar"]
That file works on my machine with docker build, and the resulting image starts normally; on Google Container Registry, however, the RUN gradle shadowJar step crashes with a Gradle error:
Step 5/9 : RUN gradle shadowJar
 ---> Running in ddd190fc2323
Starting a Gradle Daemon (subsequent builds will be faster)

FAILURE: Build failed with an exception.

* What went wrong:
Could not create service of type ScriptPluginFactory using BuildScopeServices.createScriptPluginFactory().
> Could not create service of type CrossBuildFileHashCache using BuildSessionScopeServices.createCrossBuildFileHashCache().

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

BUILD FAILED in 3s
The command '/bin/sh -c gradle shadowJar' returned a non-zero code: 1
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: exit status 1
I tried building the image on Docker Hub and the same thing happened: https://hub.docker.com/r/usbpc/kuroji-eventrouter-server/builds/bnknnpqowwabdy82ydxiypc/
This is very confusing to me, as I thought containers should be able to run anywhere and not depend on the environment. What can I do to make Google build my container?
The problem was a file permission problem. Using the --stacktrace option I found that the Gradle process didn't have permission to create a folder inside the sources.
The solution I would like to use is the --chown=gradle:gradle option on the COPY instruction; unfortunately this is not supported on Google Cloud yet.
So the solution is to add USER root before executing the Gradle build.
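A sketch of what that change looks like in the builder stage of the Dockerfile from the question (only the USER root line is new):

FROM gradle:jdk8 as builder
# the gradle image runs as the gradle user by default; build as root instead
USER root
WORKDIR /home/gradle/project
COPY . .
WORKDIR ./Kuroji-Eventrouter-Server
RUN gradle shadowJar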
When running chef-zero via AWS user data, the run always fails. However, if I SSH onto the machine and manually execute the same commands, it works as expected. This is the output that I get:
Chef: 11.12.8
[2014-06-11T12:40:34+00:00] INFO: Auto-discovered chef repository at /opt/chef-zero
[2014-06-11T12:40:34+00:00] INFO: Starting chef-zero on port 8889 with repository at /opt/chef-zero
One version per cookbook
[2014-06-11T12:40:34+00:00] INFO: Forking chef instance to converge...
[2014-06-11T12:40:35+00:00] DEBUG: Fork successful. Waiting for new chef pid: 1530
[2014-06-11T12:40:35+00:00] DEBUG: Forked instance now converging
[2014-06-11T12:40:35+00:00] ERROR: undefined method `[]' for nil:NilClass
[2014-06-11T12:40:35+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
The userdata that I set when launching the EC2 instance in AWS includes the following:
curl -L https://www.opscode.com/chef/install.sh | bash
mkdir /opt/chef-zero
cd /opt/chef-zero
wget http://myserver/chef-repo.tar.gz
tar zxf chef-repo.tar.gz
INSTANCE_ID=`curl http://169.254.169.254/latest/meta-data/instance-id`
cat <<EOF > /opt/chef-zero/solo.rb
ssl_verify_mode :verify_peer
node_name "$INSTANCE_ID"
EOF
/opt/chef/bin/chef-client -v >chef-zero.log 2>&1
/opt/chef/bin/chef-client -z -l debug -c solo.rb -o 'role[someRole]' -E BUILD >> chef-zero.log 2>&1
The AMI that I'm using is a custom one that was initially provisioned using knife + knife-ec2 (which bootstrapped chef 11.6.0 from an Ubuntu 13.04 public AMI). The omnibus installer from user data (curl ... | bash) is upgrading chef to 11.12.8. The original knife run included chef-client::service in its run, and the host is initially configured for use with chef-client + chef-server (i.e. there's a "validation.pem" and "client.rb" in /etc/chef - not sure if that makes a difference).
I am able to log onto the machine and execute chef-client -z -c solo.rb -o 'role[someRole]' -E BUILD as soon as the machine comes up (after waiting for files to be retrieved and the user-data chef-client to fail) and the chef run executes normally.
I have no idea why the userdata chef-client run fails with undefined method, any ideas what's causing it?
After some further investigation, and thanks to a bit of chatting with the #chef guys on freenode, the problem was narrowed down to the environment.
When executing the script via user data, the "HOME" variable is not set. shell.rb from the chef gem is littered with references to ENV["HOME"].
SSH:
# unset HOME
# chef-client -z -o 'role[test]'
ERROR: undefined method `[]' for nil:NilClass
# export HOME=/root
# chef-client -z -o 'role[test]'
Starting Chef Client, version ....
...
Chef Client finished, ...
If you need to execute chef-client via user data, you should manually export HOME before trying to execute chef.
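In the user-data script from the question, that means adding one line before the chef-client calls (a sketch; /root is assumed to be the right home directory for the user running the user data):

# user data does not run through a login shell, so HOME is never set; set it explicitly
export HOME=/root
/opt/chef/bin/chef-client -v >chef-zero.log 2>&1
/opt/chef/bin/chef-client -z -l debug -c solo.rb -o 'role[someRole]' -E BUILD >> chef-zero.log 2>&1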
A bug has been reported at https://tickets.opscode.com/browse/CHEF-5365
edit
Submitted a pull request which has since been merged into master. https://github.com/opscode/chef/pull/1494
This likely has nothing to do with chef-zero but indicates a problem in your recipe code (whatever's inside that chef-repo.tar.gz, or is driven by role[someRole]). It indicates an attempt to access a sub-element of a hash like
node['foo']['bar']
when node['foo'] is nil (undefined).
Check the stack trace generated by the chef-client run to narrow it down.