I'm using azk and my system depends on extra packages. Since I'm using an Ubuntu-based image, I would be able to install them with:
apt-get -yq update && apt-get install -y libqtwebkit-dev qt4-qmake
Can I add these steps to provision? In the Azkfile.js, it would look like:
// ...
provision: [
  "apt-get -yq update",
  "apt-get install -y libqtwebkit-dev qt4-qmake",
  "bundle install --path /azk/bundler",
  "bundle exec rake db:create",
  "bundle exec rake db:migrate",
]
Or is it better to create a new Docker image?
Provision steps are run in a separate container, so any data generated inside it is lost after the provision step unless you persist it. That's why you probably have the bundle folders listed as persistent folders.
Because of that, you should use a Dockerfile in this case. It would look like this:
# or whichever image you were using previously
FROM azukiapp/ruby:2.2.2

RUN apt-get -yq update && \
    apt-get install -y libqtwebkit-dev qt4-qmake && \
    apt-get clean -qq && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*   # keeping the image as small as possible
After that, you should edit your Azkfile.js and change the image property of your main system so it uses the created Dockerfile (you can check the azk docs for details):
image: { dockerfile: './PATH_TO_DOCKERFILE' },
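For context, here is a sketch of how the whole system block might look after the change (the system name my-app and the Dockerfile path are illustrative, not taken from your project):

// Azkfile.js (sketch)
systems({
  'my-app': {
    image: { dockerfile: './Dockerfile' },
    provision: [
      "bundle install --path /azk/bundler",
      "bundle exec rake db:create",
      "bundle exec rake db:migrate",
    ],
    // ...other options (ports, mounts, etc.) stay as before
  },
});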
Finally, when you run azk start, azk will build this Dockerfile and use the resulting image, with all your dependencies installed.
Tip: If you want to force azk to rebuild your Dockerfile, just pass the -B flag to azk start.
As it looks like you're using a Debian-based Linux distribution, you could create (https://wiki.debian.org/Packaging) your own Debian virtual package (https://www.debian.org/doc/manuals/debian-faq/ch-pkg_basics.en.html#s-virtual) that lists all the packages it depends on. If you do just that one thing, you can dpkg -i your custom package (or apt-get install it, if you host a custom Debian repository yourself) and it will install all the dependencies you need via apt.
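One convenient way to build such a metapackage without maintaining a full debian/ source tree is the equivs tool. A minimal sketch, assuming equivs is installed and reusing the package name from the advantages list below (the dependency list is illustrative):

# sudo apt-get install equivs   (provides equivs-build)
cat > custom-dependencies-diego.cfg <<'EOF'
Section: misc
Priority: optional
Standards-Version: 3.9.2
Package: custom-dependencies-diego
Version: 1.0
Maintainer: Your Name <you@example.com>
Depends: libqtwebkit-dev, qt4-qmake
Description: Metapackage that pulls in this app's build dependencies
EOF

equivs-build custom-dependencies-diego.cfg
sudo dpkg -i custom-dependencies-diego_1.0_all.deb || true
sudo apt-get -f install -y   # resolves the Depends that dpkg could not install itself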
You can then move on to learning about postinst and prerm scripts in Debian packages (https://www.debian.org/doc/manuals/debian-faq/ch-pkg_basics.en.html#s-maintscripts). This will allow you to run commands like bundle and gem as the last step of the package installation and the first step of package removal.
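As a very rough sketch (the application path and commands are assumptions about your setup, not something prescribed by Debian), a postinst script could look like:

#!/bin/sh
# debian/postinst (illustrative): runs after the package is installed
set -e

case "$1" in
    configure)
        # Final install step: fetch the app's gems (path is hypothetical)
        cd /opt/myapp && bundle install --deployment
        ;;
esac

exit 0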
There are a few advantages to doing it this way:
1. If you host a package repository somewhere, you can use a pull method of dependency installation in a dynamic scaling environment by simply having the host run apt-get update && apt-get install custom-dependencies-diego
2. Versioning your dependency list - Using dpkg -l you can tell what version everything is on a given host, including the version of your dependency virtual package.
3. With prerm scripts, you can ensure that removing your virtual package will also have the effect of removing the changes your installation scripts made, so you can get a host back to a "clean" state.
The disadvantage of doing it this way is that it's Debian/apt specific. If you wanted to deploy to Slackware or RHEL you'd have to change things a bit. Changing to a new distro wouldn't be particularly hard, but it's definitely not as portable as using Bash, for example.
Related
When you have a complicated RUN apt-get install section that you reuse over multiple docker images, what is the best way to reuse it?
The options that I think we have are:
copy-paste the RUN command n times across your Dockerfiles (this is what I do today)
make a Docker image and use it as a build step + COPY --from=builder... (this is what I want, but I don't know how to do it)
I am thinking of something like this:
Dockerfile with reusable apt install command, tagged as my-builder-img:
FROM debian:buster
RUN ... apt-get install ...
Dockerfile that reuses that complicated install:
FROM my-builder-img as builder
#nothing here
FROM debian:buster
COPY --from=builder /usr/bin:/usr/bin # (...???)
TL;DR: how to reuse apt-get install from a previous image onto a new image.
You just use the image you put all the packages into directly.
Multi-stage builds shine when you are creating an artifact and copying that artifact to a new image. If you are just installing packages, those packages already exist in the image itself.
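For contrast, here is a minimal sketch of the case where multi-stage does help: compiling an artifact in one stage and copying only that artifact into a slim runtime image (assumes a hello.c in the build context; names are illustrative):

# Build stage: install the compiler and build the artifact
FROM debian:buster AS builder
RUN apt-get update && apt-get install -y gcc && rm -rf /var/lib/apt/lists/*
COPY hello.c /src/hello.c
RUN gcc -o /src/hello /src/hello.c

# Runtime stage: copy only the compiled binary, not the toolchain
FROM debian:buster
COPY --from=builder /src/hello /usr/local/bin/hello
CMD ["hello"]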
Dockerfile with packages you want:
FROM debian:buster
RUN ... apt-get install ...
Tag it as my-image.
Now, just use that image in other Dockerfiles and the packages installed will be available.
FROM my-image:latest
# other directives...
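For completeness, the command-line workflow might look like this (file and tag names are illustrative):

# Build and tag the base image once
docker build -t my-image:latest -f Dockerfile.base .

# Downstream images that start with FROM my-image:latest now inherit the packages
docker build -t my-app:latest .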
In the Dockerfiles I have seen, and in the best practices for writing a Dockerfile (https://docs.docker.com/engine/reference/builder/#copy), when apt-get is used to install some packages, apt-get update is always run first. This concerns me because the app we build in the corresponding Docker container depends on these installed packages: if there is some inconsistency in the newest versions of the installed packages, the software we build will no longer work correctly. Why do we not specify a version for the packages instead of just running apt-get update?
From the man page for apt-get:
update is used to resynchronize the package index files from their sources. The indexes of available packages are fetched from the location(s) specified in /etc/apt/sources.list. For example, when using a Debian archive, this command retrieves and scans the Packages.gz files, so that information about new and updated packages is available. An update should always be performed before an upgrade or dist-upgrade.
Please be aware that the overall progress meter will be incorrect as the size of the package files cannot be known in advance.
You can try running apt-get install without running update in a Docker image, but you'll probably find that a lot of things fail to install because the package indexes are out of date.
Once you have updated the package data, you can specify an exact version for packages when you run install, e.g.:
apt update && apt install -y \
    git=1:2.7.4-0ubuntu1.4
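If you are not sure which versions are available to pin, you can query the package index after updating it, for example:

apt-get update
apt-cache madison git   # list the versions available from the configured sources
apt-cache policy git    # show the installed and candidate versions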
Example with docker container:
> sudo docker run -it ubuntu:16.04 /bin/bash
root@513eb786d86d:/# apt install git
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package git
root@513eb786d86d:/# apt install git=1:2.7.4-0ubuntu1.4
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package git
root@513eb786d86d:/# apt update
...
root@513eb786d86d:/# apt install git=1:2.7.4-0ubuntu1.4
# works this time!
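This is also why the Dockerfile best practice is to chain update and install in a single RUN instruction, so a pinned install always sees a fresh index. A sketch (the pinned version is only illustrative and will go stale as the archive moves on):

FROM ubuntu:16.04
RUN apt-get update && apt-get install -y \
        git=1:2.7.4-0ubuntu1.4 \
    && rm -rf /var/lib/apt/lists/*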
I have a server and want to deploy my Yesod applications without installing GHC and Cabal on it. I am not sure if this is possible: a teacher told me that I must first compile Keter on my machine and, after that, put the keter executable on the server, though I am not sure how to do that.
To build Keter, first you'll need to clone the sources from its GitHub repository. Then you'll need to set up a Haskell build environment and use cabal build or cabal install to build the sources. Personally, I use a Docker container derived from an image based on the following Dockerfile:
FROM haskell:7.10.2

RUN apt-get update && apt-get install -y \
    git

RUN mkdir /src
RUN cd src && \
    git clone https://github.com/snoyberg/keter && \
    cd keter && \
    git checkout e8b5a3fd5e14dfca466f8acff2a02f0415fceeb0

WORKDIR /src/keter

RUN cabal update
RUN cabal install keter

ENTRYPOINT /bin/bash
This is an image containing the Keter sources checked out at a specific revision, with the minimum GHC toolchain required to build it all. The cabal command lines pull down all the project's dependencies and compile the whole thing. Once this has completed, you can grab the keter executable from ~/.cabal/bin/keter.
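One way to get the binary out of such a build container is docker cp. A sketch, assuming the image above is tagged keter-build and the build ran as root, so ~/.cabal resolves to /root/.cabal:

docker build -t keter-build .
docker create --name keter-tmp keter-build     # create (but don't start) a container
docker cp keter-tmp:/root/.cabal/bin/keter ./keter
docker rm keter-tmp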
Even if you choose not to use Docker, this file should give you a rough idea how to set up your environment.
Now that you have Keter compiled, you can run it inside another Docker container. Here's a rough idea of what the Dockerfile for the corresponding image might look like:
FROM debian

RUN apt-get update && apt-get install -y \
    libgmp-dev \
    nano \
    postgresql

COPY keter /opt/keter/bin/
COPY keter-config.yaml /opt/keter/etc/

EXPOSE 80

CMD ["/opt/keter/bin/keter", "/opt/keter/etc/keter-config.yaml"]
This will take a base Debian image and install a minimal set of packages on top of it. It then copies the keter executable and configuration file into the image. If you then run a container from the resulting image, it will start the keter executable.
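Building and running that image might then look like this (the image and container names are illustrative):

docker build -t keter-runtime .
docker run -d --name keter -p 80:80 keter-runtime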
Fair warning: This whole process is fairly involved. I'm still working on tweaking the exact details myself. Good luck!
I am trying to install Docker from the source code downloaded from github.com/docker/docker.
I am unable to install it from the source code.
The Makefile provided creates an image, but I want to install Docker on my system.
Can anyone suggest a solution?
I am using Ubuntu 14.04.
Well, I don't know if this works for your Linux distro (it looks like it is Ubuntu), but I run Kali Linux, and even though we have different commands to use, the process is much the same on every Linux distro.
First, before we jump in, we need to update our Linux repositories:
sudo apt update
and,
sudo apt-get update
then,
sudo apt install git
[This installs git]
Now we can start cloning git repositories onto our system.
Go to your desired folder/working directory and type:
sudo git clone "link of the git repo, without the quotes"
I would rather suggest you just run:
sudo apt install docker.io
[This installs Docker via apt]
It's better to install it via the docker package and update it to the latest version. This is the best way to install Docker.
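For reference, the full apt-based sequence with a quick verification step might look like the following (note that very old docker.io packages, such as the one originally shipped with Ubuntu 14.04, named the client binary docker.io rather than docker):

sudo apt update
sudo apt install -y docker.io
docker --version             # verify the client is installed
sudo docker run hello-world  # optional smoke test against the running daemon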
Docutils is a great package. If you are using Django, the admindocs package needs docutils. Instructions are available for installing with a web browser, but what if you are remote and logging in with a terminal over SSH? How do you install it in that case? What if you just want a quick recipe to do the job from the terminal?
I know I'm rather late to this question, but the accepted answer doesn't really reflect the common best practices from Python community members (and even less so from Django community members). While the outlined manual installation process does work, it is far more painstaking and error-prone than the following:
You really should be using pip. With pip, installing docutils system-wide is as simple as:
$ sudo pip install docutils
This works not only for docutils but for nearly any package on the 'Cheese Shop' (PyPI), as well as many other code repositories (GitHub, Bitbucket, etc.).
You may also want to look into other common Python best-practice tools like virtualenv and virtualenvwrapper so that you can avoid global package installation (a quick sketch follows below).
To install pip on Ubuntu/Debian I generally do the following:
$ sudo apt-get install python-pip
BTW: for virtualenv 'sudo apt-get install python-virtualenv' and for virtualenvwrapper 'sudo apt-get install virtualenvwrapper'.
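As a quick sketch of the virtualenv route (the environment path is illustrative):

virtualenv ~/envs/myproject            # create an isolated environment
source ~/envs/myproject/bin/activate   # activate it for this shell
pip install docutils                   # installs into the environment, not system-wide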
The key to the install is to use the curl utility. The following will install docutils:
mkdir docutilsetup
cd docutilsetup
curl -o docutils-docutils.tar.gz http://docutils.svn.sourceforge.net/viewvc/docutils/trunk/docutils/?view=tar
gunzip docutils-docutils.tar.gz
tar -xf docutils-docutils.tar
cd docutils
sudo python setup.py install
This performs the following steps: create a directory to download docutils into; cd into the directory just made and use curl to download the compressed version of docutils; decompress and extract the archive, which creates a subdirectory docutils; cd into that directory and install with root permissions.
If you are using Django you will have to restart Django for admindocs to start working.
Although it is an old thread, I want to share the answer I found. To install, type the command:
sudo apt install python-docutils
or
sudo apt install python3-docutils
This will install the dependencies too. Yesterday I installed docutils using this command for the Geany editor, and it is working fine.
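To confirm that the installation worked, you can check from the terminal that the module is importable:

python3 -c "import docutils; print(docutils.__version__)"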