I'm trying to use an environment variable set in Travis CI inside my Dockerfile. However, when I echo the variable in the Dockerfile it comes out empty (it does not pick up the value from the Travis environment variable). Could someone please help me with this?
You first need to declare an ARG instruction in your Dockerfile for each environment variable defined in Travis CI.
FROM some.registry/image:latest
ARG USERNAME
ARG PASSWORD
...
Then, during the Travis CI build, pass the --build-arg flag for each of those variables as well.
docker build --build-arg USERNAME --build-arg PASSWORD .
When --build-arg is given without an explicit value, Docker takes the value from the environment of the shell running the build, so the variables are passed through from Travis CI properly this way.
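For example, a minimal .travis.yml sketch along these lines would wire it up (the image tag is a placeholder, and USERNAME and PASSWORD are assumed to be defined in the repository's Travis CI settings):

# .travis.yml (sketch) - USERNAME and PASSWORD come from the Travis CI repository settings
services:
  - docker
script:
  - docker build --build-arg USERNAME --build-arg PASSWORD -t my-image .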
I am having a problem running a Docker container after upgrading from NodeJS 8.2 to 9.1. This is the message I am getting.
I used the Dockerfile I found on Docker Hub but got an error saying package.json could not be found. So I commented that out and used the one I found on the NodeJS website.
Below is the Dockerfile:
FROM node:9.1.0
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ONBUILD ARG NODE_ENV
ONBUILD ENV NODE_ENV $NODE_ENV
ONBUILD COPY package*.json ./
ONBUILD RUN npm install && npm cache clean --force
ONBUILD COPY . /usr/src/app
CMD [ "npm", "start" ]
I would appreciate help from more experienced users.
Your docker run command syntax is wrong. Everything after the image name is used to override the command run in your container. So docker run myimage -d will try to run -d inside the container, while docker run -d myimage will run your container with the -d option to docker run (detached mode).
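For illustration, with a placeholder image name:

# wrong: -d is interpreted as the command to run inside the container
docker run myimage -d

# right: -d is an option to docker run itself (detached mode)
docker run -d myimage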
The Dockerfile you referenced is meant to be used as a parent image for easy dockerization of your application.
So to dockerize your Node.js application, you'd need to create a Dockerfile in your project that uses the image built from that Dockerfile as its base.
The ONBUILD instructions are recorded as triggers and get executed whenever a new image is built with this particular image as its parent (via the FROM instruction).
I've never used an image like this, but from the looks of it, it should be enough to reference the image with the FROM instruction and then provide NODE_ENV via build args.
The Dockerfile to add to your project:
FROM this_image:9.1
How to build your application image:
docker build -t IMAGE_NAME:TAG --build-arg NODE_ENV=production .
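Because of the ONBUILD triggers in the parent, that one-line Dockerfile behaves roughly as if your project contained the following (a sketch of the effective result, not literal output):

FROM node:9.1.0
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ARG NODE_ENV
ENV NODE_ENV $NODE_ENV
COPY package*.json ./
RUN npm install && npm cache clean --force
COPY . /usr/src/app
CMD [ "npm", "start" ]

The resulting image can then be started with docker run -d IMAGE_NAME:TAG, with -d placed before the image name as explained above.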
I have a GitLab CI stage for building a Docker image of a Python app. It uses a dynamic S3 bucket variable which is defined at the stage level like this:
Build Container:
  stage: package
  variables:
    S3_BUCKET: <value>
I tried using it inside the Dockerfile like this, as I want to use it during build time:
FROM python:3.8
EXPOSE 8080
WORKDIR /app
COPY requirements.txt ./requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
RUN SOMETHING
ARG S3_BUCKET_NAME=${S3_BUCKET}
RUN echo "$S3_BUCKET_NAME"
RUN python3 some_Script.py
CMD streamlit run app.py
But the variable does not pick up the value. Is there some other way of using it inside the Dockerfile?
How do you build the image? You will need to ensure that S3_BUCKET_NAME is passed in with the --build-arg flag.
If you wish to persist the value within the image, so that it is also available at runtime, you will need to copy it into an environment variable with an ENV instruction (an export in a RUN command will not outlive that layer). For example:
# expect a build-time variable
ARG A_VARIABLE
# use the value to set the ENV var default
ENV an_env_var=$A_VARIABLE
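For example, the GitLab CI job could forward the stage-level variable to the build like this (the image name and the rest of the script are assumptions; only the --build-arg part matters):

Build Container:
  stage: package
  variables:
    S3_BUCKET: <value>
  script:
    - docker build --build-arg S3_BUCKET_NAME="$S3_BUCKET" -t my-app .

In the Dockerfile itself, declare a plain ARG S3_BUCKET_NAME rather than ARG S3_BUCKET_NAME=${S3_BUCKET}: the ${S3_BUCKET} default can only expand a value already declared earlier in the Dockerfile with ARG or ENV, not one from the CI job's environment, so only the value passed via --build-arg reaches the build.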
My Dockerfile looks like this:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
EXPOSE 5000
ADD target/*.jar app.jar
ENV JAVA_OPTS=""
ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/.urandom -jar /app.jar"]
I would like to pass a couple of environment variables like RDS_HOSTNAME to the docker container. How should I modify this file to do that?
You can pass an environment variable at:
Build time
Run time
To set an ENV value during build time, you will need to modify the Dockerfile:
ARG RDS_HOSTNAME
ENV RDS_HOSTNAME="${RDS_HOSTNAME}"
Then pass the RDS_HOSTNAME value during build time:
docker build --build-arg RDS_HOSTNAME=$RDS_HOSTNAME -t my_image .
Run Time:
As mentioned in the comments, you can just pass it with -e:
docker run -ti -e RDS_HOSTNAME=$RDS_HOSTNAME yourimage:latest
With the second approach, the image itself will not contain the value if someone gets access to it, but you will need to pass it every time you run the container; with the first approach you only need to pass it once, at build time.
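If there are several variables, -e can simply be repeated, or they can be collected in a file and passed with --env-file (RDS_PORT and the file name here are just examples):

# pass several variables individually
docker run -d -e RDS_HOSTNAME=$RDS_HOSTNAME -e RDS_PORT=$RDS_PORT yourimage:latest

# or keep them in a file with one VAR=value per line and pass it in one go
docker run -d --env-file rds.env yourimage:latest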
I have recently started using Jenkins and I am wanting to use Multibranch Pipelines so I can test the various feature branches in my project.
The project is using Django 1.8. So far my Jenkinsfile looks like this and fails in the testing stage because Django cannot find my settings file, even though it is there:
node {
    // Mark the code checkout 'stage'....
    stage 'Checkout'
    // Get the code from a GitHub repository
    git credentialsId: 'mycredentials', url: 'https://github.com/<user>/<project>/'
    // Mark the code build 'stage'....
    stage 'Build'
    env.WORKSPACE = pwd()
    sh 'virtualenv --python=python34 venv'
    sh 'source venv/bin/activate'
    sh 'pip install -r requirements.txt'
    env.DJANGO_SETTINGS_MODULE = "<appname>.settings.jenkins"
    // Start the tests
    stage 'Test'
    sh 'python34 manage.py test --keepdb'
}
venv/bin/activate does little more than set up the proper environment paths.
You can do this yourself by adding the following at the beginning, assuming that env.WORKSPACE is your project directory:
env.PATH="${env.WORKSPACE}/venv/bin:/usr/bin:${env.PATH}"
Later, if you want to call the virtualenv's python, you just need to prefix it with that path, like here:
stage 'Test'
sh "${env.WORKSPACE}/venv/bin/python34 manage.py test --keepdb'
Or, to call pip:
sh "${env.WORKSPACE}/venv/bin/pip install -r requirements.txt"
I want to run my Django app locally with Heroku using, for example, heroku local -e .env.test (see https://devcenter.heroku.com/articles/heroku-local). I am using virtualenvwrapper, so my envs (test, dev) are not in my Django project directory but located in my $WORKON_HOME directory. I don't know what to specify for the last part of the command because I can't find the .env files in $WORKON_HOME.
I've tried heroku local -e $WORKON_HOME/dev and heroku local -e $VIRTUAL_ENV and get the same error: ▸ EISDIR: EISDIR: illegal operation on a directory, read
For me, the problem was that I had created a virtualenv directory called .env, conflicting with Heroku's ENV system which uses the same filename. Deleting the virtualenv and recreating it as .venv solved my problem:
deactivate
rm -rf .env
virtualenv .venv
source .venv/bin/activate
NB. You can't just rename the .env directory without having to manually edit the virtualenv configuration too; better just to destroy and recreate.
I think the confusion comes from what you think .env does: that file is read by Foreman (https://ddollar.github.io/foreman/), which heroku local uses to run your Procfile, and it sets environment variables like:
A=b
C=d
The .env there is not your virtualenv (which you should not track in git anyway).
Just use your virtualenv as usual before calling heroku local and track a requirements.txt (and possibly runtime.txt for the Python version, see https://devcenter.heroku.com/articles/deploying-python )
Heroku will automatically set up a virtualenv and install your requirements when you push.
As usual, look for minimal working examples to get started: https://github.com/heroku/heroku-django-template
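For example, a minimal layout under this setup could look like the following (the Procfile command and the variable values are placeholders; the virtualenv itself lives wherever virtualenvwrapper created it and is not part of the project):

Procfile:
web: gunicorn myproject.wsgi

.env (read by heroku local / Foreman; not a virtualenv):
DEBUG=True
DATABASE_URL=postgres://localhost/mydb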
Answering my own question:
I was able to solve my issue by issuing the following command from the base directory of my Django project.
echo "source /home/your_username/.virtualenvs/venv_name_here/bin/activate" >> .env
[the command is referenced here: How to create/make your app LOCAL with Heroku/Virtualenv/Django? ]
Reiterating: this was needed because virtualenvwrapper doesn't automatically create a .env file. Thus, heroku local's apparent need for a .env requires manually creating that environment file. @ciro-santilli-巴拿馬文件-六四事件-法轮功, if you know a better way, such as some option of heroku local, I'd like to know.