"Permission denied" on file when running a docker container - dockerfile

I have a file that I can't edit but that needs to run in a docker container. Because the file doesn't have an extension, I have to use chmod to make it executable. But after I build the docker image from the Dockerfile I always get a "permission denied" error.
My docker file:
FROM alpine
COPY . /home/guestuser/bin/gateway
RUN apk add libressl-dev
RUN apk add libffi-dev
RUN pwd
WORKDIR /home/guestuser/bin/.
RUN ["chmod", "+x", "gateway"]
RUN pwd
CMD ["/home/guestuser/bin/gateway"]
EXPOSE 11878
I always get this error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"/home/guestuser/bin/gateway\": permission denied": unknown.
As I already mentioned, I am not able to edit the file I want to execute. What am I doing wrong?

You may try this simple one.
FROM alpine
COPY . /home/guestuser/bin/gateway
RUN apk add libressl-dev
RUN apk add libffi-dev
WORKDIR /home/guestuser/bin/
RUN chmod -R 755 /home/guestuser
CMD ["/bin/bash", "/home/guestuser/bin/gateway"]
Otherwise, run a sleep command, log in to the container, and check whether your commands work manually.

It looks like you are using the exec form of CMD, as shown here
There are two ways to use CMD. The first is the way you are already doing it, in exec form:
CMD ["/home/guestuser/bin/gateway"]
Or you could use shell form:
CMD /home/guestuser/bin/gateway
If you need a shell you could also explicitly call one in exec form, which is what Ganesh was trying to suggest.
CMD ["sh", "/home/guestuser/bin/gateway"]
But if that syntax is correct, why didn't it work?
Well, because this is assuming that gateway is a file. The issue is... it probably isn't.
When you run this command:
COPY . /home/guestuser/bin/gateway
From the reference:
Multiple resources may be specified but the paths of files and directories will be interpreted as relative to the source of the context of the build.
You are copying the entire contents of the build context into the directory /home/guestuser/bin/gateway. If you want to copy a specific file, you should name it explicitly rather than using a bare . (which means the whole context). The COPY command's syntax is source first, then destination, as shown here.
So when you try to execute gateway, you are actually "executing" a directory named gateway. Because the COPY source is the whole build context, gateway is created as a directory holding that context. The context includes the Dockerfile itself, so even if the build context is a folder containing only the Dockerfile and the script you want to run, both files get pulled in and gateway still ends up as a directory.
Tests you can try
As proof that your Dockerfile CMD syntax is correct, try changing that CMD to something like this:
CMD ["top"]
Similarly, you can remove the CMD and just run the container in interactive mode. It will drop you in your WORKDIR, which is empty except for the gateway directory, complete with the contents of whatever directory structure was pulled in during the build process.
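For example, a quick way to see this for yourself (gateway-test is just a placeholder tag):
docker build -t gateway-test .
docker run --rm -it gateway-test sh
# you land in the WORKDIR; listing shows that gateway is a directory holding the build context
ls -la /home/guestuser/bin/gateway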
So, to make this work, change your COPY line to name the script you want:
COPY somescript /home/guestuser/bin/gateway
Other notes:
your default user here is root, so you don't need to chmod gateway
RUN pwd will only show output the first time you build the image (cached layers don't re-run)

Related

How to use Dockerfile COPY command to copy files from parent directories [duplicate]

How can I include files from outside of Docker's build context using the "ADD" command in the Docker file?
From the Docker documentation:
The path must be inside the context of the build; you cannot ADD
../something/something, because the first step of a docker build is to
send the context directory (and subdirectories) to the docker daemon.
I do not want to restructure my whole project just to accommodate Docker in this matter. I want to keep all my Docker files in the same sub-directory.
Also, it appears Docker does not yet (and may not ever) support symlinks: Dockerfile ADD command does not follow symlinks on host #1676.
The only other thing I can think of is to include a pre-build step to copy the files into the Docker build context (and configure my version control to ignore those files). Is there a better workaround than that?
The best way to work around this is to specify the Dockerfile independently of the build context, using -f.
For instance, this command will give the ADD command access to anything in your current directory.
docker build -f docker-files/Dockerfile .
Update: Docker now allows having the Dockerfile outside the build context (fixed in 18.03.0-ce). So you can also do something like
docker build -f ../Dockerfile .
I often find myself utilizing the --build-arg option for this purpose. For example after putting the following in the Dockerfile:
ARG SSH_KEY
RUN echo "$SSH_KEY" > /root/.ssh/id_rsa
You can just do:
docker build -t some-app --build-arg SSH_KEY="$(cat ~/file/outside/build/context/id_rsa)" .
But note the following warning from the Docker documentation:
Warning: It is not recommended to use build-time variables for passing secrets like github keys, user credentials etc. Build-time variable values are visible to any user of the image with the docker history command.
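If the value really is a secret, a different technique, BuildKit secret mounts, keeps it out of the image history entirely. A rough sketch, assuming BuildKit is available; the id, paths, and repository are placeholders:
# In the Dockerfile (BuildKit syntax), the key exists only during this RUN and is not
# written into any image layer:
#   RUN --mount=type=secret,id=ssh_key \
#       GIT_SSH_COMMAND="ssh -i /run/secrets/ssh_key -o StrictHostKeyChecking=no" \
#       git clone git@github.com:example/private-repo.git /src
DOCKER_BUILDKIT=1 docker build --secret id=ssh_key,src="$HOME/.ssh/id_rsa" -t some-app .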
I spent a good time trying to figure out a good pattern and how to better explain what's going on with this feature support. I realized that the best way to explain it was as follows...
Dockerfile: Will only see files under its own relative path
Context: a place in "space" where the files you want to share and your Dockerfile will be copied to
So, with that said, here's an example of the Dockerfile that needs to reuse a file called start.sh
Dockerfile
It always loads from its relative path, with its own current directory as the local reference for the paths you specify.
COPY start.sh /runtime/start.sh
Files
Considering this idea, we can think of having multiple copies for the Dockerfiles building specific things, but they all need access to the start.sh.
./all-services/
   /start.sh
   /service-X/Dockerfile
   /service-Y/Dockerfile
   /service-Z/Dockerfile
./docker-compose.yaml
Considering this structure and the files above, here's a docker-compose.yml
docker-compose.yaml
In this example, your shared context directory is the runtime directory.
Same mental model here: think of all the files under this directory as being moved over to the so-called context.
Similarly, you just specify the Dockerfile that you want to use relative to that same directory, using the dockerfile key.
The directory where your main content is located is the actual context to be set.
The docker-compose.yml is as follows
version: "3.3"
services:
service-A
build:
context: ./all-service
dockerfile: ./service-A/Dockerfile
service-B
build:
context: ./all-service
dockerfile: ./service-B/Dockerfile
service-C
build:
context: ./all-service
dockerfile: ./service-C/Dockerfile
all-services is set as the context; the shared file start.sh is available there, along with the Dockerfile specified by each dockerfile entry.
Each service gets built its own way, sharing the start.sh file!
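For completeness, a per-service Dockerfile might look like the sketch below (the base image and CMD are assumptions); the important part is that start.sh resolves against the shared context (./all-services), not against the Dockerfile's own directory:
# all-services/service-X/Dockerfile
FROM alpine
# this source path is resolved against the build context (./all-services)
COPY start.sh /runtime/start.sh
CMD ["sh", "/runtime/start.sh"]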
On Linux you can mount other directories instead of symlinking them
mount --bind olddir newdir
See https://superuser.com/questions/842642 for more details.
I don't know if something similar is available for other OSes.
I also tried using Samba to share a folder and remount it into the Docker context which worked as well.
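A rough sketch of the bind-mount approach (directory names are placeholders); remember to unmount afterwards:
mkdir -p ./external                                 # placeholder directory inside the context
sudo mount --bind /path/to/dir/outside/context ./external
docker build -t myimage .
sudo umount ./external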
If you read the discussion in issue 2745, not only may Docker never support symlinks, it may never support adding files outside your context. It seems to be a design philosophy that files going into a docker build should explicitly be part of its context, or come from a URL where they are presumably deployed with a fixed version, so that the build is repeatable with well-known URLs or files shipped with the docker container.
I prefer to build from a version controlled source - ie docker build -t stuff http://my.git.org/repo - otherwise I'm building from some random place with random files.
fundamentally, no.... -- SvenDowideit, Docker Inc
Just my opinion but I think you should restructure to separate out the code and docker repositories. That way the containers can be generic and pull in any version of the code at run time rather than build time.
Alternatively, use docker as your fundamental code deployment artifact and put the Dockerfile in the root of the code repository. If you go this route, it probably makes sense to have a parent docker container for more general system-level details and a child container for setup specific to your code.
I believe the simpler workaround would be to change the 'context' itself.
So, for example, instead of giving:
docker build -t hello-demo-app .
which sets the current directory as the context, let's say you wanted the parent directory as the context, just use:
docker build -t hello-demo-app ..
You can also create a tarball of what the image needs first and use that as your context.
https://docs.docker.com/engine/reference/commandline/build/#/tarball-contexts
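A rough sketch of the tarball approach (paths and tag are placeholders); the Dockerfile must be included in the tarball:
tar -czf context.tar.gz -C /some/other/dir .        # package the files plus a Dockerfile
docker build -t myimage - < context.tar.gz          # the whole context is read from stdin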
This behavior is determined by the context directory that docker or podman uses to present the files to the build process.
A nice trick here is to change the context dir during the build instruction to the full path of the directory that you want to expose to the daemon.
e.g:
docker build -t imageName:tag -f /path/to/the/Dockerfile /mysrc/path
Using /mysrc/path instead of . (the current directory), you'll be using that directory as the context, so any files under it can be seen by the build process.
In this example you'll be exposing the entire /mysrc/path tree to the docker daemon.
When using this with docker, the user ID who triggered the build must have recursive read permission on every directory and file in the context dir.
This can be useful in cases where you have /home/user/myCoolProject/Dockerfile but want to bring into the container build context files that aren't in the same directory.
Here is an example of building using a context dir, but this time using podman instead of docker.
Let's take as an example a Dockerfile with a COPY or ADD instruction that copies files from a directory outside of your project, like:
FROM myImage:tag
...
...
COPY /opt/externalFile ./
ADD /home/user/AnotherProject/anotherExternalFile ./
...
In order to build this, with the container file located at /home/user/myCoolProject/Dockerfile, just do something like:
cd /home/user/myCoolProject
podman build -t imageName:tag -f Dockerfile /
Some known use cases for changing the context dir are when using a container as a toolchain for building your source code.
e.g:
podman build --platform linux/s390x -t myimage:mytag -f ./Dockerfile /tmp/mysrc
or it can be a relative path, like:
podman build --platform linux/s390x -t myimage:mytag -f ./Dockerfile ../../
Another example using this time a global path:
FROM myImage:tag
...
...
COPY externalFile ./
ADD AnotherProject ./
...
Notice that now the full global path for the COPY and ADD is omitted in the Dockerfile command layers.
In this case the context dir must be relative to where the files are: if both externalFile and AnotherProject are in the /opt directory, then the context dir for building it must be:
podman build -t imageName:tag -f ./Dockerfile /opt
Note when using COPY or ADD with context dir in docker:
The docker daemon will try to "stream" all the files visible in the context dir tree to the daemon, which can slow down the build, and it requires the user to have recursive read permission on the context dir.
This behavior can be more costly, especially when using the build through the API. With podman, however, the build starts right away without needing recursive permissions, because podman does not enumerate the entire context dir and does not use a client/server architecture.
For such cases it can be much more convenient to use podman instead of docker when you run into these issues with a different context dir.
Some references:
https://docs.docker.com/engine/reference/commandline/build/
https://docs.podman.io/en/latest/markdown/podman-build.1.html
As is described in this GitHub issue, the build actually happens in /tmp/docker-12345, so a relative path like ../relative-add/some-file is relative to /tmp/docker-12345. It would thus search for /tmp/relative-add/some-file, which is also shown in the error message.
It is not allowed to include files from outside the build directory, so this results in the "Forbidden path" message.
Using docker-compose, I accomplished this by creating a service that mounts the volumes that I need and committing the image of the container. Then, in the subsequent service, I rely on the previously committed image, which has all of the data stored at the mounted locations. You will then have to copy these files to their ultimate destination, as host-mounted directories do not get committed when running a docker commit command.
You don't have to use docker-compose to accomplish this, but it makes life a bit easier
# docker-compose.yml
version: '3'
services:
  stage:
    image: alpine
    volumes:
      - /host/machine/path:/tmp/container/path
    command: sh -c "cp -r /tmp/container/path /final/container/path"
  setup:
    image: stage
# setup.sh
# Start "stage" service
docker-compose up stage
# Commit changes to an image named "stage"
docker commit $(docker-compose ps -q stage) stage
# Start setup service off of stage image
docker-compose up setup
Create a wrapper docker build shell script that grabs the file, then calls docker build, then removes the file.
A simple solution not mentioned anywhere here, from my quick skim:
have a wrapper script called docker_build.sh
have it create tarballs, copy large files to the current working directory
call docker build
clean up the tarballs, large files, etc
This solution is good because (1) it doesn't have the security hole of copying in your SSH private key, and (2) another solution uses sudo bind mounts, which has its own security hole because it requires root permission to do the bind.
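A rough sketch of such a wrapper, with placeholder file names and tag:
#!/bin/bash
# docker_build.sh - copy what the build needs into the context, build, then clean up
set -e
cp ../outside-the-context/large-file.bin ./large-file.bin
trap 'rm -f ./large-file.bin' EXIT                  # clean up even if the build fails
docker build -t my-image:latest .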
I think as of earlier this year a feature was added in buildx to do just this.
If you have Dockerfile syntax 1.4+ and buildx 0.8+ you can do something like this:
docker buildx build --build-context othersource=../something/something .
Then in your Dockerfile you can reference that named context with the --from flag:
COPY --from=othersource . /stuff
See this related post https://www.docker.com/blog/dockerfiles-now-support-multiple-build-contexts/
Workaround with hard links (the build doesn't follow symlinks, so it must be a hard link):
ln path/to/file/outside/context/file_to_copy ./file_to_copy
Then in the Dockerfile, simply:
COPY file_to_copy /path/to/file
I was personally confused by some answers, so decided to explain it simply.
You should pass the context you have assumed in the Dockerfile to docker when you
want to create the image.
I always select the root of the project as the context.
So, for example, if you use a COPY command like COPY . .,
the first dot (.) is resolved against the context and the second dot (.) is the container working directory.
Assuming the context is the project root, dot (.), and the code structure is like this:
sample-project/
  docker/
    Dockerfile
If you want to build the image
and your path (the path you run the docker build command) is /full-path/sample-project/,
you should do this
docker build -f docker/Dockerfile .
and if your path is /full-path/sample-project/docker/,
you should do this
docker build -f Dockerfile ../
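To make the point concrete, a minimal sketch of what docker/Dockerfile could contain (the base image is an assumption); the COPY source path is resolved against the context (sample-project/), not against docker/:
FROM alpine
WORKDIR /app
# "." here means the build context passed on the command line, i.e. sample-project/
COPY . .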
An easy workaround might be to simply mount the volume (using the -v or --mount flag) to the container when you run it and access the files that way.
example:
docker run -v /path/to/file/on/host:/desired/path/to/file/in/container/ image_name
for more see: https://docs.docker.com/storage/volumes/
I had this same issue with a project and some data files that I wasn't able to move inside the repo context for HIPAA reasons. I ended up using 2 Dockerfiles. One builds the main application without the stuff I needed outside the container and publishes that to internal repo. Then a second dockerfile pulls that image and adds the data and creates a new image which is then deployed and never stored anywhere. Not ideal, but it worked for my purposes of keeping sensitive information out of the repo.
In my case, my Dockerfile is written like a template containing placeholders which I'm replacing with real value using my configuration file.
So I couldn't specify this file directly, but I pipe it into the docker build like this:
sed "s/%email_address%/$EMAIL_ADDRESS/;" ./Dockerfile | docker build -t katzda/bookings:latest . -f -;
But because of the pipe, the COPY command normally wouldn't work: piping the Dockerfile alone (docker build -) provides neither a Dockerfile path nor a build context. Passing -f - together with the . context solves this, because the Dockerfile is read from stdin while the current directory is still sent as the context.
How to share typescript code between two Dockerfiles
I had this same problem, but for sharing files between two typescript projects. Some of the other answers didn't work for me because I needed to preserve the relative import paths between the shared code. I solved it by organizing my code like this:
api/
  Dockerfile
  src/
    models/
      index.ts
frontend/
  Dockerfile
  src/
    models/
      index.ts
shared/
  model1.ts
  model2.ts
  index.ts
.dockerignore
Note: After extracting the shared code into that top-level folder, I avoided needing to update the import paths, because I updated api/src/models/index.ts and frontend/src/models/index.ts to re-export from shared (e.g. export * from '../../../shared').
Since the build context is now one directory higher, I had to make a few additional changes:
Update the build command to use the new context:
docker build -f Dockerfile .. (two dots instead of one)
Use a single .dockerignore at the top level to exclude all node_modules. (eg **/node_modules/**)
Prefix the Dockerfile COPY commands with api/ or frontend/
Copy shared (in addition to api/src or frontend/src)
WORKDIR /usr/src/app
COPY api/package*.json ./ <---- Prefix with api/
RUN npm ci
COPY api/src api/ts*.json ./ <---- Prefix with api/
COPY shared usr/src/shared <---- ADDED
RUN npm run build
This was the easiest way I could send everything to docker, while preserving the relative import paths in both projects. The tricky (annoying) part was all the changes/consequences caused by the build context being up one directory.
One quick and dirty way is to set the build context up as many levels as you need - but this can have consequences.
If you're working in a microservices architecture that looks like this:
./Code/Repo1
./Code/Repo2
...
You can set the build context to the parent Code directory and then access everything, but it turns out that with a large number of repositories, this can result in the build taking a long time.
An example situation could be that another team maintains a database schema in Repo1 and your team's code in Repo2 depends on this. You want to dockerise this dependency with some of your own seed data without worrying about schema changes or polluting the other team's repository (depending on what the changes are you may still have to change your seed data scripts of course)
The second approach is hacky but gets around the issue of long builds:
Create a sh (or ps1) script in ./Code/Repo2 to copy the files you need and invoke the docker commands you want, for example:
#!/bin/bash
rm -rf ./db/schema
mkdir -p ./db/schema
cp -r ../Repo1/db/schema/. ./db/schema/
docker-compose -f docker-compose.yml down
docker container prune -f
docker-compose -f docker-compose.yml up --build
In the docker-compose file, simply set the context as Repo2 root and use the content of the ./db/schema directory in your dockerfile without worrying about the path.
Bear in mind that you will run the risk of accidentally committing this directory to source control, but scripting cleanup actions should be easy enough.

AWS Elastic Beanstalk unable to deploy a working version

Elastic Beanstalk is infinitely copying a file to the /tmp folder; I created that file with a config file in .ebextensions. The name of this file is /tmp/mount-efs.sh. This file causes an issue on initialisation of an environment, so I'm trying to get rid of it, or at least change its contents.
I tried already:
Deploy an older version that does not have this file.
Result: the EC2 instance does not get deleted, so the file is still there.
Upload the zip instead of using the application version.
Result: the EC2 instance does not get deleted, so the file is still there.
Delete the file /tmp/mount-efs.sh.
Result: the file immediately reappears, along with its ".bak" file.
Remove the '.config' file from /var/app/staging/.ebextensions/.
Result: same error, and the file mount-efs.sh is still created in the /tmp folder.
I think Elastic Beanstalk is stuck on a version that it thinks works, but that version has an issue, and EB does not allow me to deploy a different version (older or newer).
The stranger thing is that the version EB falls back to every time did not have the file in .ebextensions.
I also tried to rebuild the environment.
Result: the fallback is loaded, the file is there, and the issue happens.
from eb-engine.log:
Running command /bin/sh -c /opt/aws/bin/cfn-init -s arn:aws:cloudformation:us-west-2:xxxxxxxxxxxx:stack/awseb-e-xxxxxxxxxxx-stack/nnnnnnnn-nnnn-nnnn-nnnn-xxxxxxxxxxxx -r AWSEBAutoScalingGroup --region us-west-2 --configsets Infra-EmbeddedPreBuild
2022/07/14 20:31:13.403626 [INFO] Error occurred during build: Command 01_mount failed
2022/07/14 20:31:13.403667 [ERROR] An error occurred during execution of command [self-startup] - [PreBuildEbExtension]. Stop running the command. Error: EbExtension build failed. Please refer to /var/log/cfn-init.log for more details.
This error happens every 5 sec. So EB is in an infinite loop here.
So I want to get rid of the /tmp/mount-efs.sh file, or change its contents. I want to do this directly via SSH on the EC2 instance itself.
My understanding is that EB runs the config files that I added in .ebextensions. These config files create files in the /tmp folder, and those files run on initialization.
So which file do I have to change so that the changes end up in the file that is created in the /tmp folder (without a deployment)?
Or can I stop the initialization loop somehow?
The infinite loop happened because of a command that calls a file in /var/www/html that did not exist. Why this file did not exist is a riddle to me; the whole /var/www/html folder was empty. Normally Elastic Beanstalk should do this work before running the commands (create the app folder and staging, unzip the source code into staging, copy it into the app/current folder, and create a symlink to the app/current folder), but that was not the case.
I was able to solve the infinite-loop issue by doing the following:
sudo mkdir -p /var/app/staging
cd $_
sudo unzip /opt/elasticbeanstalk/deployment/app_source_bundle
sudo cp -rpv /var/app/staging /var/app/current
sudo rm -rf /var/www/html
sudo ln -s /var/app/current /var/www/html
mkdir -p: creates the directories including parents, so if "app" does not exist it is created before "staging".
$_: expands to the last argument of the previous command; here that was /var/app/staging.
unzip: unzips the source bundle code into staging.
cp -rp: copies recursively (r) and keeps ownership and timestamps (p) from "staging" into "current".
rm -rf /var/www/html: deletes the existing html folder. Be careful what you delete with this command!
ln -s: creates a symbolic link at /var/www/html pointing to /var/app/current.

AKS Container failed to start

I updated my Dockerfile to upgrade Ubuntu, but it started failing and I'm unsure why...
dockerfile:
# using a digest for version 20.04, as there are multiple digests that use this tag
FROM ubuntu@sha256:82becede498899ec668628e7cb0ad87b6e1c371cb8a1e597d83a47fac21d6af3
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes
#install tools
#removed for clarity
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
CMD ["./start.sh"]
My events from the pod:
Successfully assigned se-agents/agent-se-linux-5c9f647768-25p7v to aks-linag-56790600-vmss000002
Pulling image "compregistrynp.azurecr.io/agent-se-linux:25319"
Successfully pulled image "comregistrynp.azurecr.io/agent-se-linux:25319"
Created container agent-se-linux
Started container agent-se-linux
Back-off restarting failed container
When I check the error in the pod, I see the following message:
standard_init_linux.go:228: exec user process caused: no such file or directory
Not even sure where to look anymore. The only difference in the dockerfile was the ubuntu tag and I added 1 tool to install. I tried to deploy what was in Prod to dev and it's failing with the same error. I'm convinced there's something in my AKS...
So the issue was that someone on my team had modified the shell script and didn't set the line endings to LF.
I will be running a script to convert the file to Linux (LF) line endings to ensure this doesn't happen again in my pipeline!
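For example, one way to guard against this in the image itself is to normalize the line endings during the build; a sketch (sed is present in the ubuntu base image):
COPY ./start.sh .
# strip any Windows carriage returns, then make the script executable
RUN sed -i 's/\r$//' start.sh && chmod +x start.sh
Another option is a .gitattributes rule such as *.sh text eol=lf so Git always checks shell scripts out with LF endings.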

Pass Django SECRET_KEY in Environment Variable to Dockerized gunicorn

Some Background
Recently I had a problem where my Django Application was using the base settings file despite DJANGO_SETTINGS_MODULE being set to a different one. It turned out the problem was that gunicorn wasn't inheriting the environment variable and the solution was to add -e DJANGO_SETTINGS_MODULE=sasite.settings.production to my Dockerfile CMD entry where I call gunicorn.
The Problem
I'm having trouble with how I should handle the SECRET_KEY in my application. I am setting it in an environment variable, though I previously had it stored in a JSON file; that seemed less secure (correct me if I'm wrong, please).
The other part of the problem is that when using gunicorn it doesn't inherit the environment variables that are set on the container normally. As I stated above I ran into this problem with DJANGO_SETTINGS_MODULE. I imagine that gunicorn would have an issue with SECRET_KEY as well. What would be the way around this?
My Current Approach
I set the SECRET_KEY in an environment variable and load it in the django settings file. I set the value in a file "app-env" which contains export SECRET_KEY=<secretkey>, the Dockerfile contains RUN source app-env in order to set the environment variable in the container.
Follow Up Questions
Would it be better to set the environment variable SECRET_KEY with the Dockerfile command ENV instead of sourcing a file? Is it acceptable practice to hard code a secret key in a Dockerfile like that (seems like it's not to me)?
Is there a "best practice" for handling secret keys in Dockerized applications?
I could always go back to JSON if it turns out to be just as secure as environment variables. But it would still be nice to figure out how people handle SECRET_KEY and gunicorn's issue with environment variables.
Code
Here's the Dockerfile:
FROM python:3.6
LABEL maintainer="x@x.com"
ARG requirements=requirements/production.txt
ENV DJANGO_SETTINGS_MODULE=sasite.settings.production_test
WORKDIR /app
COPY manage.py /app/
COPY requirements/ /app/requirements/
RUN pip install -r $requirements
COPY config config
COPY sasite sasite
COPY templates templates
COPY logs logs
COPY scripts scripts
RUN source app-env
EXPOSE 8001
CMD ["/usr/local/bin/gunicorn", "--config", "config/gunicorn.conf", "--log-config", "config/logging.conf", "-e", "DJANGO_SETTINGS_MODULE=sasite.settings.production_test", "-w", "4", "-b", "0.0.0.0:8001", "sasite.wsgi:application"]
I'll start with why it doesn't work as is, and then discuss the options you have to move forward:
During the build process of an image, each RUN instruction runs as its own standalone container. Only changes to the filesystem of that container's write layer are captured for subsequent layers. This means that your source app-env command runs and exits, and likely makes no changes on disk, making that RUN line a no-op.
Docker allows you to specify environment variables at build time using the ENV instruction, which you've done with the DJANGO_SETTINGS_MODULE variable. I don't necessarily agree that SECRET_KEY should be specified here, although it might be okay to put a value needed for development in the Dockerfile.
Since the SECRET_KEY variable may be different for different environments (staging and production) then it may make sense to set that variable at runtime. For example:
docker run -d -e SECRET_KEY=supersecretkey mydjangoproject
The -e option is short for --env. Additionally, there is --env-file and you can pass in a file of variables and values. If you aren't using the docker cli directly, then your docker client should have the ability to specify these there as well (for example docker-compose lets you specify both of these in the yaml)
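For example, a sketch using --env-file (the file name is a placeholder):
# variables.env contains lines like:  SECRET_KEY=supersecretkey
docker run -d --env-file ./variables.env mydjangoproject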
In this specific case, since you have something inside the container that knows what variables are needed, you can call that at runtime. There are two ways to accomplish this. The first is to change the CMD to this:
CMD source app-env && /usr/local/bin/gunicorn --config config/gunicorn.conf --log-config config/logging.conf -e DJANGO_SETTINGS_MODULE=sasite.settings.production_test -w 4 -b 0.0.0.0:8001 sasite.wsgi:application
This uses the shell encapsulation syntax of CMD rather than the exec syntax. This means that the entire argument to CMD will be run inside /bin/sh -c ""
The shell will handle running source app-env and then your gunicorn command.
If you ever needed to change the command at runtime, you'd need to remember to specify source app-env && where needed, which brings me to the other approach: using an ENTRYPOINT script.
The ENTRYPOINT feature in Docker allows you to handle any necessary startup steps inside the container when it is first started. Consider the following entrypoint script:
#!/bin/bash
cd /app && source app-env && cd - && exec "$@"
This will explicitly cd to the location where app-env is, source it, cd back to whatever the oldpwd was, and then execute the command. Now, it is possible for you to override both the command and working directory at runtime for this image and have any variables specified in the app-env file to be active. To use this script, you need to ADD it somewhere in your image and make sure it is executable, and then specify it in the Dockerfile with the ENTRYPOINT directive:
ADD entrypoint.sh /entrypoint.sh
RUN chmod a+x /entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
With the entrypoint strategy, you can leave your CMD as-is without changing it.

How to start odoo server automatically when system is ON

Hi everyone,
How do I start the Odoo server automatically when the system is turned on?
I searched on Google and found this link: http://www.serpentcs.com/serpentcs-odoo-auto-startup-script-322
I followed each and every step and started the odoo-server:
ps -ax | grep python
5202 ? Sl 0:01 python /home/tejaswini/Odoo_workspace/workspace_8/odoo8/openerp-server --config /etc/odoo-server.conf --logfile /var/log/odoo-server.log
It shows the server process,
but when I open 0.0.0.0:8069 or localhost:8069 in the browser while it is running,
it shows "This site can't be reached".
Please, can anyone help me?
Thanks in advance
To start a service automatically when the system turns on, you need to register that service as an init script. Try the command below:
sudo update-rc.d <service_name> defaults
In your case,
sudo update-rc.d odoo-server defaults
Hope it will help you.
For the final step we need to install a script which will be used to start up and shut down the server automatically, and also run the application as the correct user. There is a script you can use in /opt/odoo/debian/init, but it will need a few small modifications to work with the system installed the way I have described above; here is the link.
Similar to the configuration file, you need to either copy it or paste the contents of this script to a file in /etc/init.d/ and call it odoo-server. Once it is in the right place you will need to make it executable and owned by root:
sudo chmod 755 /etc/init.d/odoo-server
sudo chown root: /etc/init.d/odoo-server
In the configuration file there's an entry for the server's log file. We need to create that directory first so that the server has somewhere to log to, and we must make it writeable by the odoo user:
sudo mkdir /var/log/odoo
sudo chown odoo:root /var/log/odoo
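Finally, register the init script so it runs at boot (the same update-rc.d command as in the answer above) and start it once by hand:
sudo update-rc.d odoo-server defaults
sudo /etc/init.d/odoo-server start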
reference