I am trying to create a Docker image for Gerrit, and I need to add the admin public key within the image so that there is no need to add it again from the GUI. To skip adding it from the GUI for all admin operations (creating projects and templates), I added the variables below to the Dockerfile, but it still asks for a public key when I try to create a project. Is anybody aware of this?
ENV GERRIT_PUBLIC_KEYS_PATH /var/gerrit/ssh-keys
ENV GERRIT_ADMIN_USER ca_zuul_qa
RUN mkdir -p /var/gerrit/ssh-keys
COPY key /var/gerrit/ssh-keys/id-admin-rsa.pub
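One thing worth checking (an assumption about the image, not something from upstream Gerrit's documentation): GERRIT_PUBLIC_KEYS_PATH and GERRIT_ADMIN_USER are not built-in Gerrit settings, so they only take effect if the image's entrypoint script actually reads them and imports the key at startup. A quick sanity check, sketched with hypothetical names and a guessed script path:
docker build -t gerrit-test .
# find out which script the image runs at startup
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' gerrit-test
# then check whether that script ever reads the variable ('/entrypoint.sh' is a guess)
docker run --rm --entrypoint sh gerrit-test -c 'grep -n GERRIT_PUBLIC_KEYS_PATH /entrypoint.sh'
If nothing matches, the key is copied into the image but never registered with Gerrit, which would explain why the GUI still asks for one.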
I'm trying to create an application from a Dockerfile in my private Bitbucket repo. When I go through the web console to create it, I've tried using both the SSH and HTTP URLs and the SSL/user+pass secrets, but it's not detecting the Dockerfile in my repo.
I've also tried using the CLI to create it both on Windows and Ubuntu...
oc new-app git@bitbucket.org:mitch-user/myapp.git --context-dir src --source-secret bitbucketssl --strategy docker --name myapp
but I get the following:
warning: Cannot check if git requires authentication.
W1126 11:17:56.680242 31748 dockerimagelookup.go:300] container image remote registry lookup failed: you may not have access to the container image "docker.io/library/base:latest"
error: only a partial match was found for "base": "openshift/jenkins-agent-base:latest"
The argument "base" only partially matched the following container image, OpenShift image stream, or template:
* Image stream "jenkins-agent-base" (tag "latest") in project "openshift"
Use --image-stream="openshift/jenkins-agent-base:latest" to specify this image or template
Has anyone seen this issue before? Any help would be appreciated.
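Going by the output above, oc new-app appears to be resolving the Dockerfile's FROM image ("base") against Docker Hub and the cluster's image streams, and it only finds a partial match. A sketch of the workaround the CLI itself suggests, pinning the resolution to that image stream (untested here; all other flags as in the original command):
oc new-app git@bitbucket.org:mitch-user/myapp.git \
    --context-dir src \
    --source-secret bitbucketssl \
    --strategy docker \
    --name myapp \
    --image-stream="openshift/jenkins-agent-base:latest"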
I am working on VM instances on the Google Cloud Platform and I am using Docker for the first time, so please bear with me. I am trying to follow the steps to build a container because the project requires it to be done a certain way. I am stuck here:
Create the directory named ~/keto (~/ refers to your home directory)
Create a file ~/keto/Dockerfile
Add the following content to ~/keto/Dockerfile and save
#Pull the keto/ssh image from Docker hub
FROM keto/ssh:latest
# Create a user and password with environment variables
ENV SSH_USERNAME spock
ENV SSH_PASSWORD Vulcan
#Copy a ssh public key from ~/keto/id_rsa.pub to spock .ssh/authorized_keys
COPY ./id_rsa.pub /home/spock/.ssh/authorized_keys
I was able to pull the keto/ssh image from Docker Hub with no issues, but my problem is that I am unable to create the directory, and I am also stuck when it comes to creating the environment variables. Can anyone guide me on the correct approach to: A) create the directory, and B) once the directory is done, create the environment variables? I would really appreciate it a lot. Thank you.
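For the host-side part (a minimal sketch, assuming a Linux shell on the VM and that no key pair exists yet; file names follow the assignment text):
# create the directory in your home directory
mkdir -p ~/keto
# generate a key pair so ~/keto/id_rsa.pub exists to copy into the image
ssh-keygen -t rsa -f ~/keto/id_rsa -N ""
# create the Dockerfile with any editor, e.g.
nano ~/keto/Dockerfile
The environment variables from the assignment are set inside the Dockerfile with ENV, as in the answer below; nothing needs to be exported in the host shell.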
# Pull the keto/ssh image from Docker Hub
FROM keto/ssh:latest
# Create a user and password with environment variables
ENV SSH_USERNAME=spock
ENV SSH_PASSWORD=Vulcan
# Create a keto directory inside the image (the assignment's ~/keto lives on the host and is created before building)
RUN mkdir ~/keto
# Copy the SSH public key from the build context to spock's .ssh/authorized_keys
COPY ./id_rsa.pub /home/spock/.ssh/authorized_keys
You may find Docker's official documentation useful for how to create a Dockerfile and how ENV variables have to be set.
I also recommend having a look at the image's Docker Hub page, in this case keto/ssh, because it usually contains some guidance about the image we are going to build.
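To try it out afterwards (a sketch with assumed names; the published port and the image's SSH port are assumptions, since the keto/ssh README isn't quoted here):
docker build -t keto-ssh-demo ~/keto
# assumes the image's sshd listens on port 22
docker run -d -p 2222:22 --name keto-ssh-demo keto-ssh-demo
ssh -i ~/keto/id_rsa -p 2222 spock@localhost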
FYI, I am relatively new to Docker but experienced in Go and AWS.
I am using Docker containers to build my Go app (for Elastic Beanstalk) with golang:1.12.7 as my base image. I use a multi-stage Docker build, compiling in a builder image and then copying the Go binary into a scratch image, which reduces my final image from 1 GB to 11 MB.
Everything compiles properly and I am able to run the Docker image; however, when I use the multi-stage build, my IAM roles don't work and the Docker image cannot connect to or retrieve data from the AWS services defined in my IAM role.
When I build without the scratch stage, the IAM roles work fine and the app can retrieve data from AWS, but I'm left with a 1 GB Docker image.
I haven't changed any other AWS configuration, networking, security groups, IAM roles, etc.; the only difference is between the two Dockerfiles below.
# Dockerfile produces an 11 MB image, but IAM roles don't work:
# golang version
FROM golang:1.12.7 as builder
# set new gopath
ENV GOPATH="/app"
# setup initial container
RUN mkdir /app
WORKDIR /app/src/appDirectory
COPY ./appDirectory/ /app/src/appDirectory
# get go dependencies
RUN go get -u github.com/aws/aws-sdk-go
# compile to binary
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o appDirectory
# create new container from scratch to reduce size of image
FROM scratch
COPY --from=builder /app/src/appDirectory /app/
ENTRYPOINT ["/app/appDirectory"]
# Dockerfile produces a 1 GB image, and IAM roles work:
# golang version
FROM golang:1.12.7 as builder
# set new gopath
ENV GOPATH="/app"
# setup initial container
RUN mkdir /app
WORKDIR /app/src/appDirectory
COPY ./appDirectory/ /app/src/appDirectory
# get go dependencies
RUN go get -u github.com/aws/aws-sdk-go
# compile to binary
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o appDirectory
ENTRYPOINT ["./appDirectory"]
My assumption is that something is not being copied over from the builder image into the scratch image, and that is what keeps the IAM roles from working, but I haven't figured out what it is.
Also, I would prefer to use IAM roles over programmatic access keys for several reasons.
Thank you in advance for any help provided :)
When I copy over the /etc folder from my builder image in my Dockerfile with COPY --from=builder /etc /etc, the IAM roles work properly, and the final image only grows to 11.6 MB. However, I'm not sure why this works. Could someone please explain?
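A likely explanation (an inference, not confirmed in the thread): a scratch image is completely empty, so it has no CA certificate bundle, and the Go AWS SDK then cannot verify the TLS certificates of the AWS service endpoints; copying /etc happens to bring along /etc/ssl/certs/ca-certificates.crt (plus files like nsswitch.conf). A narrower sketch that copies only the certificate bundle, with the rest of the Dockerfile as in the question:
FROM scratch
# CA bundle from the Debian-based golang builder image, so TLS to AWS endpoints can be verified
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /app/src/appDirectory /app/
ENTRYPOINT ["/app/appDirectory"]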
Below is the setup of an application that runs a Docker container on Elastic Beanstalk.
step 1:
Created the parent folder, say apptest, and inside it placed the Dockerfile, package.json, and a small hello-world server.js Node app.
step 2:
Inside the parent folder apptest, ran the command eb init, which created a hidden folder .elasticbeanstalk with a config.yml comprising the default settings.
step 3:
Added .ebextensions with a config file 01_run.config, comprising the configuration below to update the instance type.
option_settings:
  aws:autoscaling:launchconfiguration:
    InstanceType: "m4.xlarge"
Note that no environment has been created yet. And since I have the .ebextensions in place, it should override the defaults with the instance type set to m4.xlarge.
step 4:
Now ran the command eb create apptest-dev (for example) and created the environment.
Problem:
When the environment was created, it did not use m4.xlarge; it was created with the default instance type, t2.micro. But when I uploaded a zipped version of the folder contents to the environment from the console (excluding the .elasticbeanstalk folder), the .ebextensions configuration was picked up. It is only the option_settings that are not being applied; other resources like files and commands are reflected both from the command line and from the file upload.
I feel it is some very small thing I am missing that I have not been able to figure out from blogs and documentation. Thanks for the help in advance.
During eb create, the EBCLI passes its own defaults for many of the option settings, among which is the instance type. Since the EBCLI does not parse .ebextensions, and the Beanstalk service prefers the defaults passed by the EBCLI, the instance type specified in your .ebextensions is disregarded.
There are two ways to get around this:
* Call eb config after eb create. In the interactive mode, change the instance type, then save and exit.
* Call eb create as eb create -i m4.xlarge.
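For the environment from the question, that second option would look like this (a sketch; the environment name is reused from step 4):
eb create apptest-dev -i m4.xlarge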
I am looking for help creating a cloudbuild.yaml file for my application. My application has a base image derived from python:2-onbuild and then two additional images that derive from that base image. I have three separate Dockerfiles in my project, one for each. Is there an example of doing this that someone could point me towards?
For example, my base image Dockerfile looks like this:
FROM python:2-onbuild
# This will execute pip install -r requirements.txt and get all required python packages installed
# It will also ensure the current directory, minus anything in .dockerignore, is copied over to the image
# The web server image and the worker image can then simply inherit from this.
Subsequently, I create a web server image and a worker image. The worker is intended to run as a CronJob.
My web server Dockerfile is like so:
FROM myapp-base
RUN chmod +x ./main_runner.sh
RUN chmod +w static/login.html
RUN chmod +w static/index.html
CMD ./main_runner.sh
and my worker Dockerfile is:
FROM myapp-base
RUN chmod +x worker_runner.sh
CMD python ./run_worker.py
Currently, my local docker-compose.yaml ties it all together by building the myapp-base image and making it available under that name so the other images can derive from it. What is the equivalent in Cloud Build?
This example, from our open-sourced "docker" build-step, creates 3 different images from 3 different Dockerfiles. Is that what you're looking for?
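A minimal sketch of what that can look like (the Dockerfile names here are hypothetical; this relies on Cloud Build steps running sequentially on the same worker, so an image tagged in an earlier step can be used in a later step's FROM):
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'myapp-base', '-f', 'Dockerfile.base', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/myapp-web', '-f', 'Dockerfile.web', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/myapp-worker', '-f', 'Dockerfile.worker', '.']
images:
- 'gcr.io/$PROJECT_ID/myapp-web'
- 'gcr.io/$PROJECT_ID/myapp-worker'
Because the web and worker Dockerfiles start with FROM myapp-base, the first step tags the base image under exactly that name; only the two derived images are pushed at the end.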