How to compile C++ inside RapidsAI Docker Container

When inside the RapidsAI Docker image with examples, how does one recompile the C++ code after modifying it? I've tried running the build scripts from a terminal session inside Jupyter, but it cannot find CMake.

In order to be able to recompile the C++ code in a Docker container, you need to use the RAPIDS Docker + Dev Env container provided at https://rapids.ai/start.html.
The RAPIDS Docker + Examples container installs the RAPIDS libraries using conda install and does not contain the C++ source code or CMake.
If you would like to continue to use the RAPIDS Docker + Examples container then I would suggest:
First, uninstall the existing library that you want to modify from the container.
Then pull the source code of the desired library and make the required modifications.
Once the above steps are done, follow the steps provided in the library's GitHub repo to build it from source.
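As a rough sketch of those steps from a shell inside the Examples container (assuming the library you want to modify is cudf; package names, branches and build options may differ for your release):
# remove the conda-installed copy of the library you want to modify (package name assumed)
conda uninstall -y cudf
# the Examples image lacks build tools, so install them first
conda install -y -c conda-forge cmake
# pull the source, make your C++ changes, then build following the repo's own instructions
git clone https://github.com/rapidsai/cudf.git
cd cudf
# ... edit the sources under cpp/ ...
./build.sh libcudf cudf   # build.sh targets vary by release; see the repo README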

Related

How to add custom library to docker image

There is a remote server with a GitLab runner and Docker. I need to build a C++/Qt project on it, but I have to use custom Qt libraries (they were built with the deprecated WebKit). I have them on my local PC. How can I create a Docker image with these specific libraries? Is it OK to use the COPY command for this purpose?
Yes, you can certainly use COPY from your local machine.
However, I would make sure that the custom Qt libraries are also available online, for example on GitHub, so that the Docker image can be built correctly from anywhere without having to set up every local machine where the image is meant to be created.
That way, you can simply clone the repository and the respective branch in your Dockerfile instead of using COPY.
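For illustration, a minimal Dockerfile sketch covering both options (the base image, paths and repository URL are placeholders, not your actual project):
FROM ubuntu:20.04

# Option A: COPY the prebuilt Qt libraries from the build context on your local PC
COPY ./custom-qt/ /opt/custom-qt/

# Option B: clone them from a hosted repository so the image builds anywhere
# RUN apt-get update && apt-get install -y git && \
#     git clone --branch webkit-build https://example.com/you/custom-qt.git /opt/custom-qt

# make the custom libraries visible to the dynamic linker
ENV LD_LIBRARY_PATH=/opt/custom-qt/lib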

Can I load additional libraries in Gitpod without creating my own Docker image?

I have recently tried out Gitpod, which seems to be a quite cool tool.
For testing purposes, I have opened some C++ GitHub repository of mine that uses Boost (among other libraries). Unfortunately, Boost does not seem to be installed in the Docker image, so my code does not compile.
I know about the possibility of creating own Docker images, but I was wondering if there are alternative, easier options as well. Does Gitpod provide any Environment Modules-like option to dynamically load/unload certain "commonly used" libraries or do I always have to provide my own Docker instance in this case?
I work on Gitpod. Thank you for trying it and the compliment :)
We didn't want to invent yet another module system for Gitpod.
Instead, we decided to support Dockerfiles and build them on-demand, because Dockerfiles allow using all those amazing module systems that are already out there: Debian's packages, Alpine's packages, Node Version Manager (NVM), Ruby Version Manager (RVM), SDKman, etc. Basically any Linux-compatible package manager down to simple wget.
You can also use your own Docker images, but I find Dockerfiles more convenient because you can check them into Git and thereby version them together with your source code. It's dev-environment-as-code and should be shared across the team. Also, you don't need to bother with building them and pushing them to a registry (e.g. hub.docker.com).
What Gitpod does offer, however, is a selection of Docker images (src). The most prominent one is gitpod/workspace-full, which is Gitpod's default image.
To get back to your question about the easiest way to get the right modules into your Gitpod development environment:
1) Inheriting from gitpod/workspace-full is very convenient.
2) If you don't want (1), copying sections from gitpod/workspace-full's Dockerfile into your own is convenient.
3) Often, putting RUN apt-get update && apt-get install -y libboost-all-dev into your Dockerfile is enough. This uses APT to install the package libboost-all-dev (see the sketch after this list).
4) Most software projects have documentation on how to build them under Linux. These instructions usually work in Dockerfiles, too.
5) Search on hub.docker.com for useful Docker images. You can inherit from those images or find their Dockerfiles and copy sections from there.
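As a concrete illustration of (3), a minimal custom image might look like this (the Boost package comes from the answer above; the file names follow Gitpod's convention of a .gitpod.Dockerfile referenced from .gitpod.yml):
# .gitpod.Dockerfile
FROM gitpod/workspace-full

# install the Boost development package via APT
RUN sudo apt-get update \
 && sudo apt-get install -y libboost-all-dev \
 && sudo rm -rf /var/lib/apt/lists/*
You then point your .gitpod.yml at it:
# .gitpod.yml
image:
  file: .gitpod.Dockerfile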

Run docker from within toolbox

I would like to use Google Container OS as my cloud development environment. How would I run the docker command from the toolbox? Do I need to add docker.sock as a bind mount? I need to be able to run docker (and docker-compose) to run my development environment.
Google Container OS images come with docker already installed and configured, so you will be able to use the docker command from the command line without any prior configuration if you create a virtual machine from one of these images, and SSH into the machine.
As for docker-compose, this doesn't come pre-installed. However, you can install it, along with other tools/programs you require, by making use of the toolbox you mentioned, which provides a shell (including a package manager) in a Debian chroot-like environment where you automatically gain root privileges.
You can install docker-compose by following these steps:
1) If you haven't already, enter the toolbox environment by running /usr/bin/toolbox
2) Check the latest version of docker-compose here.
3) Run the following to retrieve and install docker-compose on the machine (substitute the docker-compose version number with the latest version you found in step 2):
curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
Then make the binary executable:
chmod +x /usr/local/bin/docker-compose
You've probably found at this point that although you can now run the freshly installed docker-compose command within the toolbox, you can't run the docker command. This is because, by default, the toolbox environment doesn't have access to all paths within the rootfs, and the filesystem layout doesn't correspond between the two environments.
It may be possible to remedy this by exiting the toolbox shell and then editing the /etc/default/toolbox file, which allows you to configure the toolbox settings. This would let you provide access to the docker binary from the standard environment by following these steps:
1) Ensure you are no longer in the toolbox shell, then run the command which docker. You will see something similar to /usr/bin/docker.
2) Open file /etc/default/toolbox
3) The TOOLBOX_BIND line specifies the paths from rootfs to be made available inside the toolbox environment. To ensure docker is available inside the toolbox environment, you could try adding an entry to the TOOLBOX_BIND section, for example --bind=/usr/bin/docker:/usr/bin/docker.
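As a hypothetical excerpt of what that edit might look like (the other bind shown is a placeholder; your image's defaults will differ):
# /etc/default/toolbox (excerpt)
TOOLBOX_BIND="--bind=/home --bind=/usr/bin/docker:/usr/bin/docker"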
However, I've found that even though it's possible to edit /etc/default/toolbox to make the docker binary available inside the toolbox environment, certain docker commands then fail with additional errors, because the docker version that comes pre-installed on the machine is configured to use particular configuration files and directories that aren't visible from within the toolbox. Although it may be possible to make all of the required locations accessible in the same way, it may be simpler to install docker within the toolbox by following the instructions for installing Docker on Debian found here.
You would then be able to issue both the docker and docker-compose commands from within the toolbox.
Alternatively, it's possible to simply install docker and docker-compose on a standard VM (i.e. without necessarily using a Google Container OS machine type) although the suitability of this depends on your use case.

how to use apt-buildpack from cloudfoundry repo

The apt-buildpack is experimental and not yet intended for production use. I guess that's also why there is no documentation.
Creating container
Successfully created container
Downloading app package...
Downloaded app package (862.7K)
Warning: this buildpack can only be run as a supply buildpack, it can not be run alone
Failed to compile droplet: Failed to compile droplet: exit status 1
Destroying container
Exit status 223
Stopping instance abdfc8d0-699e-4834-9f2d-2b8aec218423
Successfully destroyed container
Can you give me an example of how to push the cf-env sample app and install, for example, rtorrent and/or openvpn? Is it possible to install GNOME for testing purposes?
As far as usage goes, it's pretty simple: you just need to include an apt.yml in the root directory of your app. That should contain, among other things, the list of packages to install.
Ex:
---
packages:
- ascii
- libxml
- https://example.com/exciting.deb
The buildpack supports installing package names, deb files, custom APT repositories, and even PPAs.
Please see the README for further instructions.
This message:
Warning: this buildpack can only be run as a supply buildpack, it can not be run alone
Is telling you that the Apt buildpack only functions to supply binaries. It doesn't actually know how to run your app or any application. For more on the supply script, check out the docs here.
The trick to making it work is that you need to use multi-buildpack support. Instructions for doing that can be found here. This should work with most apps, and there's a simple example here.
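As a hedged sketch of that multi-buildpack setup, a manifest.yml along these lines should stage the app with Apt supplying packages first and a second buildpack actually running the app (the app name matches the cf ssh example below; swap binary_buildpack for whichever buildpack runs your app):
---
applications:
- name: apt-test
  buildpacks:
  - https://github.com/cloudfoundry/apt-buildpack
  - binary_buildpack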
Once your app stages and starts, you can confirm that your packages were installed by running cf ssh apt-test -t -c "/tmp/lifecycle/launcher /home/vcap/app bash ''". Anything that was installed should be on the path, but if you want to see where things are installed, look under /home/vcap/deps/<buildpack-number>/.
That should be about it. Hope that helps!

Building a compiled application with Docker

I am building a server, written in C++ and want to deploy it using Docker with docker-compose. What is the "right way" to do it? Should I invoke make from Dockerfile or build manually, upload to some server and then COPY binaries from Dockerfile?
I had difficulties automating our build with docker-compose, and I ended up using docker build for everything:
Three layers for building
Run → develop → build
Then I copy the build outputs into the 'deploy' image:
Run → deploy
Four layers to play with:
Run
Contains any packages required for the application to run - e.g. libsqlite3-0
Develop
FROM <projname>:run
Contains packages required for the build
e.g. g++, cmake, libsqlite3-dev
Dockerfile executes any external builds
e.g. steps to build boost-python3 (not in package manager repositories)
Build
FROM <projname>:develop
Contains source
Dockerfile executes internal build (code that changes often)
Built binaries are copied out of this image for use in deploy
Deploy
FROM <projname>:run
Output of build copied into image and installed
RUN or ENTRYPOINT used to launch the application
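To make the layering concrete, here is a minimal sketch of what the four Dockerfiles might contain (base image, package names, paths and the cmake invocation are illustrative, not taken from a real project):
# run/Dockerfile - runtime packages only
FROM debian:bullseye-slim
RUN apt-get update && apt-get install -y libsqlite3-0 && rm -rf /var/lib/apt/lists/*

# develop/Dockerfile - adds the toolchain on top of run
FROM <projname>:run
RUN apt-get update && apt-get install -y g++ cmake libsqlite3-dev && rm -rf /var/lib/apt/lists/*

# build/Dockerfile - compiles the frequently changing source
FROM <projname>:develop
COPY . /src
RUN cmake -S /src -B /src/build && cmake --build /src/build

# deploy/Dockerfile - run image plus the binaries copied out of the build image
FROM <projname>:run
COPY server /usr/local/bin/server
ENTRYPOINT ["/usr/local/bin/server"]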
The folder structure looks like this:
.
├── run
│   └── Dockerfile
├── develop
│   └── Dockerfile
├── build
│   ├── Dockerfile
│   └── removeOldImages.sh
└── deploy
    ├── Dockerfile
    └── pushImage.sh
Setting up the build server means executing:
docker build -f run/Dockerfile -t <projname>:run .
docker build -f develop/Dockerfile -t <projname>:develop .
Each time we make a build, this happens:
# Execute the build
docker build -f build/Dockerfile -t <projname>:build .
# Install build outputs
docker build -f deploy/Dockerfile -t <projname>:<version> .
# If successful, push the deploy image to Docker Hub
docker tag <projname>:<version> <projname>:latest
docker push <projname>:<version>
docker push <projname>:latest
I refer people to the Dockerfiles as documentation about how to build/run/install the project.
If a build fails and the output is insufficient for investigation, I can run /bin/bash in <projname>:build and poke around to see what went wrong.
I put together a GitHub repository around this idea. It works well for C++, but you could probably use it for anything.
I haven't explored the feature, but @TaylorEdmiston pointed out that my pattern here is quite similar to multi-stage builds, which I didn't know about when I came up with this. It looks like a more elegant (and better-documented) way to achieve the same thing.
My recommendation would be to develop, build and test entirely on the container itself. This is in keeping with the Docker philosophy that the developer's environment should be the same as the production environment; see The Modern Developer Workstation on MacOS with Docker.
This is especially true for C++ applications, where there are usually dependencies on shared libraries/object files.
I don't think a standardized process for developing, testing and deploying C++ applications on Docker exists yet.
To answer your question, the way we do it as of now is, to treat the container as your development environment and enforce a set of practices on the team like:
Our codebase (except config files) always lives on a shared volume on the local machine and is versioned in Git (see the example after this list).
Shared/dependent libraries, binaries, etc. always live in the container
Build & test in the container and before committing the image, clean unwanted object files, libraries, etc., and ensure docker diff changes are as expected.
Changes/updates to the environment, including shared libraries and dependencies, are always documented and communicated with the team.
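As an illustration of the shared-volume practice mentioned above (image name and paths are hypothetical):
# mount the local checkout into the development container and work inside it
docker run -it --rm -v "$(pwd)":/workspace -w /workspace myteam/cpp-dev:latest bash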
Update
For anyone visiting this question after 2017, please see the answer by fuglede about using multi-stage Docker builds, that is really a better solution than my answer (below) from 2015, well before that was available.
Old answer
The way I would do it is to run your build outside of your container and only copy the output of the build (your binary and any necessary libraries) into your container. You can then upload your container to a container registry (e.g., use a hosted one or run your own), and then pull from that registry onto your production machines. Thus, the flow could look like this:
build binary
test / sanity-check the binary itself
build container image with binary
test / sanity-check the container image with the binary
upload to container registry
deploy to staging/test/qa, pulling from the registry
deploy to prod, pulling from the registry
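In terms of commands, that flow might look roughly like this (registry host, image name and tag are placeholders):
# build the image containing the already-built binary, then push it to the registry
docker build -t registry.example.com/myserver:1.0.0 .
docker push registry.example.com/myserver:1.0.0
# on the staging/production host, pull and run the exact same image
docker pull registry.example.com/myserver:1.0.0
docker run -d registry.example.com/myserver:1.0.0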
Since it's important that you test before production deployment, you want to test exactly the same thing that you will deploy in production, so you don't want to extract or modify the Docker image in any way after building it.
I would not run the build inside the container you plan to deploy to prod, as then your container will contain all sorts of additional artifacts (such as temporary build outputs, tooling, etc.) that you don't need in production and that needlessly grow your container image.
While the solutions presented in the other answers -- and in particular the suggestion by Misha Brukman in the comments to this answer to use one Dockerfile for development and one for production -- would have been considered idiomatic at the time the question was written, it should be noted that the problems they try to solve -- in particular cleaning up the build environment to reduce image size while still being able to use the same container environment in development and production -- have effectively been solved by multi-stage builds, which were introduced in Docker 17.05.
The idea here is to split the Dockerfile into two parts: one based on your favorite development environment, such as a fully-fledged Debian base image, which is concerned with creating the binaries that you want to deploy at the end of the day; and another which simply runs the built binaries in a minimal environment, such as Alpine.
This way you avoid possible discrepancies between development and production environments as alluded to by blueskin in one of the comments, while still ensuring that your production image is not polluted with development tooling.
The documentation provides the following example of a multi-stage build of a Go application, which you would then adapt to a C++ development environment (one gotcha being that Alpine uses musl, so you have to be careful when linking in your development environment).
FROM golang:1.7.3
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
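For a C++ analogue, a multi-stage Dockerfile might look roughly like this (the base images, file names and single-file g++ invocation are assumptions; a real project would run its own build system in the first stage, and you need to make sure the two stages use compatible C runtimes, which is where the musl caveat above comes in if you pick Alpine):
# build stage: full toolchain
FROM gcc:12 AS build
WORKDIR /src
COPY . .
RUN g++ -O2 -o server main.cpp

# runtime stage: small image containing only the binary and its runtime libraries
FROM debian:bookworm-slim
COPY --from=build /src/server /usr/local/bin/server
CMD ["/usr/local/bin/server"]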