Getting ssh-keygen in Alpine docker - dockerfile

For Node-RED's new Projects functionality - where one can sync with a git repo - I need ssh-keygen in my Alpine Docker image. According to the Alpine Linux package list for v3.6, it is in the openssh-keygen package.
Thus, I added the RUN command as follows in the Dockerfile, with no luck.
......
RUN apk update && \
    apk add --no-cache \
    openssh-keygen
......
I then test whether it gets into the image by creating a container from the image, doing docker exec -it containername sh and then typing ssh-keygen - but it is not found.
It also does not work if I replace openssh-keygen with openssh in the RUN command in the Dockerfile.
Can someone please point me in the right direction?

Thanks to @PrasadK, who nudged me along. To use the new Projects feature in Node-RED (available since version 0.18.3) with a remote repo, the underlying Docker image requires ssh-keygen. Install it in the Dockerfile with:
......
RUN apk update && \
    apk add --no-cache \
    openssh-keygen
......
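To confirm the package actually made it into the image, a quick check like this works (the image tag here is just a placeholder):
docker build -t nodered-alpine-test .
docker run --rm nodered-alpine-test sh -c "which ssh-keygen && ssh-keygen -t rsa -b 2048 -N '' -f /tmp/testkey"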

Related

Docker Stuck at building Golang inside AWS EC2

I'm going crazy here... I'm trying to create a docker container with this file:
#Docker
FROM golang:alpine as builder
RUN apk update && apk add --no-cache git make gcc libc-dev
# download, cache and install deps
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
# copy and compile the app
COPY . .
RUN make ditto
# start a new stage from scratch
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
# copy the prebuilt binary from the builder stage
COPY --from=builder /app/_build/ditto .
COPY --from=builder /app/send-email-report.sh /usr/bin/
ENTRYPOINT ["./ditto"]
Running: docker build .
On my PC it works perfectly.
But on my AWS EC2 instance, the same command:
docker build .
Sending build context to Docker daemon 108kB
Step 1/13 : FROM golang:1.18-alpine as builder
---> 155ead2e66ca
Step 2/13 : RUN apk update && apk add --no-cache git make gcc libc-dev
---> Running in 1d3adab601f3
fetch https://dl-cdn.alpinelinux.org/alpine/v3.16/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.16/community/x86_64/APKINDEX.tar.gz
v3.16.0-99-g5b6c75ce95 [https://dl-cdn.alpinelinux.org/alpine/v3.16/main]
v3.16.0-108-ge392af4f2e [https://dl-cdn.alpinelinux.org/alpine/v3.16/community]
OK: 17022 distinct packages available
And it gets stuck there...
It was working fine in the past, and I don't think anyone has changed that Dockerfile or folder...
Can somebody help me, please?
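Since the log shows the hang right after the index fetch, a quick way to check whether the problem is in the instance's network/Docker setup rather than in the Dockerfile itself is to run the same fetch in a throwaway container directly on the EC2 host (a diagnostic sketch only, not a fix):
docker run --rm alpine:3.16 sh -c "apk update && apk add --no-cache git"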

How to upgrade alpine docker base images for security patches

I have the following Dockerfile:
FROM alpine:3.6 as base
WORKDIR /code
RUN apk update && \
    apk --update --no-cache add nodejs openssl
EXPOSE 8080
After running a security scan, I got the following critical/high warnings:
CVE               library                   status
CVE-2019-2201     libjpeg-turbo:1.5.3-r4    CRITICAL
CVE-2019-5482     curl:7.61.1-r2            HIGH
CVE-2019-5481     curl:7.61.1-r2            HIGH
CVE-2018-20843    expat:2.2.5-r0            HIGH
CVE-2018-1000654  libtasn1:4.13-r0          HIGH
CVE-2019-14697    musl:1.1.19-r10           HIGH
I tried bumping the Alpine version up to 3.9, and I have also tried specifying the libraries to be upgraded:
FROM alpine:3.9 as base
WORKDIR /code
RUN apk update && \
    apk --update --no-cache add nodejs npm openssl && \
    apk upgrade libjpeg-turbo curl expat libtasn1 musl
EXPOSE 8080
The image gets built but the security problems still stand.
Any idea on how to resolve this?
I had a similar problem with volbrene/redoc, which builds from nginx:alpine.
In my Dockerfile I added the line below, and all vulnerabilities went away afterward.
RUN apk update && apk upgrade
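Applied to the Dockerfile from the question, that looks roughly like this (a sketch; whether every finding clears still depends on fixed package versions being available in the chosen Alpine release):
FROM alpine:3.9 as base
WORKDIR /code
RUN apk update && \
    apk upgrade --no-cache && \
    apk add --no-cache nodejs npm openssl
EXPOSE 8080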

How to deploy to AWS Beanstalk with GitLab CI

How do I deploy a Node app to AWS Elastic Beanstalk using Docker and GitLab CI?
I've created a simple Node application and Dockerized it.
What I'm trying to do is deploy the application using GitLab CI.
This is what I have so far:
image: docker:git
services:
  - docker:dind
stages:
  - build
  - release
  - release-prod
variables:
  CI_REGISTRY: registry.gitlab.com
  CONTAINER_TEST_IMAGE: registry.gitlab.com/testapp/routing:$CI_COMMIT_REF_NAME
  CONTAINER_RELEASE_IMAGE: registry.gitlab.com/testapp/routing:latest
before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
build:
  stage: build
  script:
    - docker build -t $CONTAINER_TEST_IMAGE -f Dockerfile.prod .
    - docker push $CONTAINER_TEST_IMAGE
release-image:
  stage: release
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker tag $CONTAINER_TEST_IMAGE $CONTAINER_RELEASE_IMAGE
    - docker push $CONTAINER_RELEASE_IMAGE
  only:
    - master
release-prod:
  stage: release-prod
  script:
  when: manual
I'm stuck on the release-prod stage. I'm just not sure how I can deploy the app to AWS Beanstalk.
The Docker images have already been created and stored in the GitLab registry; all I want to do is instruct AWS Beanstalk to download the Docker images from the GitLab registry and start the application.
I also have a Dockerrun.aws.json which defines the services.
Your Dockerrun.aws.json file is what Beanstalk uses as the final say in what is deployed.
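For a single-container setup, that file essentially points Beanstalk at the registry image and, for a private registry such as GitLab's, at an S3 object holding the registry credentials. A sketch (bucket, key and port are placeholders; the image name matches CONTAINER_RELEASE_IMAGE above):
{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "my-eb-config-bucket",
    "Key": "gitlab-registry.dockercfg"
  },
  "Image": {
    "Name": "registry.gitlab.com/testapp/routing:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": 8080 }
  ]
}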
The option I found to work for us was to make a custom Docker image with the EB CLI installed so we can run eb deploy... from the .gitlab-ci.yml file.
This requires AWS permissions so the runner can access the Beanstalk service, so an IAM user or role comes into play - but it would in any setup.
In the GitLab project's CI/CD settings we store the AWS user keys (ideally it would be set up to use an IAM role instead, but a user with access keys works; temporary credentials might be the best option here, but I'm not familiar with how that works).
We use a custom EC2 instance as our runner, so I'm not sure about shared runners - we were concerned about passing AWS user credentials to a shared runner pipeline...
build stage: build and push the Docker image to our ECR repository (or whatever registry fits your use case).
deploy stage: use a custom image stored in GitLab that has the EB CLI preinstalled, then run eb deploy env-name.
This is the Dockerfile we use for our PHP project. Some of the installs aren't necessary for your case, and it could be improved by adding a USER and pinning package versions, but it will create a Docker image that has the EB CLI installed.
FROM node:12
RUN apt-get update && apt-get -y --allow-unauthenticated install apt-transport-https ca-certificates curl gnupg2 software-properties-common ruby-full \
    && add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
RUN apt-get update && apt-get -y --allow-unauthenticated install docker-ce \
    && apt-get -y install build-essential zlib1g-dev libssl-dev libncurses-dev libffi-dev libsqlite3-dev libreadline-dev libbz2-dev python-pip python3-pip
RUN git clone https://github.com/aws/aws-elastic-beanstalk-cli-setup.git \
    && ./aws-elastic-beanstalk-cli-setup/scripts/bundled_installer
RUN python3 --version && apt-get update && apt-get -y install python3-pip \
    && pip3 install awscli boto3 botocore && pip3 install boto3 botocore --upgrade
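To make this image usable as the image: of the deploy job below, build and push it to your project's registry once (names are illustrative):
docker login registry.gitlab.com
docker build -t registry.gitlab.com/your-acct/project/custom-image .
docker push registry.gitlab.com/your-acct/project/custom-image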
Example gitlab-ci.yml setup
release-prod:
  image: registry.gitlab.com/your-acct/project/custom-image
  stage: release-prod
  script:
    - service docker start
    - echo 'export PATH="/root/.ebcli-virtual-env/executables:$PATH"' >> ~/.bash_profile && source ~/.bash_profile
    - echo 'export PATH=/root/.pyenv/versions/3.7.2/bin:$PATH' >> /root/.bash_profile && source /root/.bash_profile
    - eb deploy your-environment
  when: manual
You could also bake the echo commands into the custom GitLab image so that all you need to run is eb deploy...
Hope this helps a little
Although there are a couple of different ways to achieve this, I finally found a proper solution for my use case, and I have documented it here: https://medium.com/voices-of-plusdental/gitlab-ci-deployment-for-php-applications-to-aws-elastic-beanstalk-automated-qa-test-environments-253ca4932d5b. Using eb deploy was the easiest and shortest option, and it also lets me customize the instances in any way I want.

Gitlab CI pipeline takes too long to build every time

I am using Docker and GitLab CI to deploy my app on AWS, and I would like to improve my pipeline build time. The problem is that it takes a lot of time to download the libraries every time I build a new image. Here is my before_script job:
before_script:
  - which apk
  - apk add --no-cache curl jq python python-dev python3-dev gcc py-pip docker openrc git libc-dev libffi-dev openssl-dev nodejs yarn make
  - pip install awscli
  - pip install 'docker-compose<=1.23.2'
I think it should be possible to store these libraries in a cache for future reuse, but I can't figure out how that works. Thanks!
Yes, it is possible to use the cache in some cases.
But in this scenario I think it is better to build a Docker image with all your dependencies built in, and then use that new image (which already has all the dependencies) for deploying.
In the GitLab CI pipeline, you can set the image for each stage; you would point it at the new one.
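A minimal sketch of that approach, assuming the jobs need the same tools as the before_script above and that those package names are still available in the base image's Alpine release (all image names are placeholders):
# Dockerfile for a reusable CI image; the docker base image already ships the Docker client
FROM docker:stable
RUN apk add --no-cache curl jq python python-dev python3-dev gcc py-pip openrc git libc-dev libffi-dev openssl-dev nodejs yarn make \
    && pip install awscli 'docker-compose<=1.23.2'
Build and push it once (for example as registry.gitlab.com/yourgroup/ci-base:latest), then set image: registry.gitlab.com/yourgroup/ci-base:latest on the jobs and drop the apk/pip lines from before_script.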

Can't modify files created in docker container

I've got a container with a Django application running in it, and I sometimes go into the container's shell and run ./manage.py makemigrations to create migrations for my app.
The files are created successfully and synchronized between host and container.
However, on my host machine I am not able to modify any file created in the container.
This is my Dockerfile
FROM python:3.8-alpine3.10
LABEL maintainer="Marek Czaplicki <marek.czaplicki>"
WORKDIR /app
COPY ./requirements.txt ./requirements.txt
RUN set -ex; \
    apk update; \
    apk upgrade; \
    apk add libpq libc-dev gcc g++ libffi-dev linux-headers python3-dev musl-dev pcre-dev postgresql-dev postgresql-client swig tzdata; \
    apk add --virtual .build-deps build-base linux-headers; \
    apk del .build-deps; \
    pip install pip -U; \
    pip --no-cache-dir install -r requirements.txt; \
    rm -rf /var/cache/apk/*; \
    adduser -h /app -D -u 1000 -H uwsgi_user
ENV PYTHONUNBUFFERED=TRUE
COPY . .
ENTRYPOINT ["sh", "./entrypoint.sh"]
CMD ["sh", "./run_backend.sh"]
and run_backend.sh
./manage.py collectstatic --noinput
./manage.py migrate && exec uwsgi --strict uwsgi.ini
What can I do to be able to modify these files on my host machine? I don't want to chmod every file or directory every time I create one.
For some reason there is one project in which files created in the container are editable from the host machine, but I cannot find any difference between the two projects.
By default, Docker containers run as root. This has two issues:
In development, as you can see, the files are owned by root, which is often not what you want.
In production this is a security risk (https://pythonspeed.com/articles/root-capabilities-docker-security/).
For development purposes, docker run --user $(id -u) yourimage or the Compose example given in the other answer will match the user to your host user.
For production, you'll want to create a user inside the image; see the page linked above for details.
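For the workflow in the question, that looks roughly like this (the image name, bind mount and entrypoint override are assumptions about your setup):
# run the one-off management command as your host uid/gid so the generated
# migration files are owned by you on the host
docker run --rm -it \
  --user "$(id -u):$(id -g)" \
  -v "$PWD":/app \
  --entrypoint ./manage.py \
  your-django-image makemigrations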
Usually files created inside a docker container are owned by the container's root user.
You could try this inside your container:
chown 1000:1000 file-you-want-to-edit-outside
You could add this as the last layer of your Dockerfile in a RUN instruction.
Edit:
If you are using docker-compose, you can set a user on your container:
services:
  container:
    user: ${CURRENT_HOST_USER}
And have CURRENT_HOST_USER be equal to $(id -u):$(id -g)
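One way to pass that value in (service name and file layout as above) is to export it right before bringing the stack up:
export CURRENT_HOST_USER="$(id -u):$(id -g)"
docker-compose up -d
Putting CURRENT_HOST_USER=1000:1000 into a .env file next to docker-compose.yml works as well.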
The solution was to add
USER uwsgi_user
to the Dockerfile and then simply run docker exec -it container-name sh
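For reference, a sketch of where that line sits in the Dockerfile above (uid 1000 from the adduser line matches the default first user on most Linux hosts, which is why files written by uwsgi_user become editable on the host):
# ...same build steps as above...
ENV PYTHONUNBUFFERED=TRUE
COPY . .
# switch to the non-root user created with adduser -u 1000 so files
# written at runtime are owned by uid 1000 on the host
USER uwsgi_user
ENTRYPOINT ["sh", "./entrypoint.sh"]
CMD ["sh", "./run_backend.sh"]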